Description
Describe the bug
When Flowise is integrated with Langfuse, only OpenAI chat models (e.g., gpt-3.5, gpt-4) show accurate token usage, cost, and input/output trace data in the Langfuse dashboard.
Other LLM providers, such as Google Gemini, Mistral, or Anthropic Claude, display only latency, while the input, output, token usage, and cost fields remain empty.
This inconsistency significantly hampers observability and debugging in non-OpenAI workflows.
To Reproduce
1. Use a non-OpenAI chat node (e.g., Gemini Chat or DeepSeek) in Flowise
2. Send a message through the Flowise UI or API
3. Open Langfuse and inspect the resulting trace
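For step 2, the message can also be sent programmatically. A minimal sketch against the Flowise prediction endpoint (`/api/v1/prediction/<chatflow-id>`); the base URL and chatflow ID below are placeholders, not values from this report:

```python
import json
from urllib import request


def build_prediction_request(base_url: str, flow_id: str, question: str):
    """Build the URL and JSON payload for a Flowise prediction call."""
    url = f"{base_url}/api/v1/prediction/{flow_id}"
    payload = {"question": question}
    return url, payload


# Placeholder values -- substitute your own instance and chatflow ID.
url, payload = build_prediction_request("http://localhost:3000", "<chatflow-id>", "Hello")
print(url)

# Sending the request (uncomment against a running Flowise instance):
# req = request.Request(url, data=json.dumps(payload).encode(),
#                       headers={"Content-Type": "application/json"})
# print(request.urlopen(req).read().decode())
```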
Expected behavior
All LLM providers (Gemini, Claude, Mistral, etc.) integrated via Flowise and logged via Langfuse should:
Display prompt (input) and completion (output)
Show token usage (input/output/total)
Calculate and display costs
Reflect all traces consistently, regardless of the provider
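One plausible cause, sketched here as an assumption rather than a diagnosis of Flowise internals: each provider reports token usage under different keys, so an exporter that only reads OpenAI's field names would leave the other providers' usage empty. The field names below (OpenAI `prompt_tokens`/`completion_tokens`, Gemini `promptTokenCount`/`candidatesTokenCount`, Anthropic `input_tokens`/`output_tokens`) follow those providers' public APIs:

```python
def normalize_usage(raw: dict) -> dict:
    """Map provider-specific token-usage fields onto one common shape.

    Assumed key names per provider (from their public APIs, not Flowise):
      OpenAI:    prompt_tokens / completion_tokens / total_tokens
      Gemini:    promptTokenCount / candidatesTokenCount / totalTokenCount
      Anthropic: input_tokens / output_tokens
    """
    aliases = {
        "input": ["prompt_tokens", "promptTokenCount", "input_tokens"],
        "output": ["completion_tokens", "candidatesTokenCount", "output_tokens"],
    }
    usage = {}
    for field, names in aliases.items():
        # Take the first alias present in the raw payload, else None.
        usage[field] = next((raw[n] for n in names if n in raw), None)
    if usage["input"] is not None and usage["output"] is not None:
        usage["total"] = usage["input"] + usage["output"]
    else:
        usage["total"] = raw.get("total_tokens") or raw.get("totalTokenCount")
    return usage


# OpenAI-style payload
print(normalize_usage({"prompt_tokens": 10, "completion_tokens": 5}))
# -> {'input': 10, 'output': 5, 'total': 15}
# Gemini-style payload
print(normalize_usage({"promptTokenCount": 7, "candidatesTokenCount": 3}))
# -> {'input': 7, 'output': 3, 'total': 10}
```

With a mapping like this applied before emitting the trace, the Langfuse dashboard would receive usage for every provider, not just OpenAI.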
Screenshots
No response
Flow
No response
Use Method
None
Flowise Version
No response
Operating System
None
Browser
None
Additional context
No response