Description
Describe the bug
When using LangSmith or Langfuse analytics with Agentflow V2, traces are sent successfully but token usage data (prompt_tokens, completion_tokens, total_tokens) and model name are not captured.
All LLM generations show:
- total_tokens: 0
- prompt_tokens: 0
- completion_tokens: 0
- model: null
This affects both LangSmith and Langfuse integrations, suggesting the issue is in how Flowise extracts and passes OpenAI response usage data to analytics providers.
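For context on the likely code path (an assumption, not confirmed against Flowise's source): in LangChain JS, OpenAI token usage reaches callback handlers in `handleLLMEnd` under `result.llmOutput.tokenUsage`. A dependency-free sketch of the check an analytics callback would effectively perform:

```typescript
// Structural stand-in for the LLMResult fragment LangChain JS passes to
// handleLLMEnd; for OpenAI chat models, usage lives under llmOutput.tokenUsage.
interface LLMResultLike {
  llmOutput?: {
    tokenUsage?: {
      promptTokens?: number;
      completionTokens?: number;
      totalTokens?: number;
    };
  };
}

// Returns true only when usage actually reached the callback layer.
// If this is false, LangSmith/Langfuse have nothing to forward, which
// matches the zeros observed in this report.
function hasUsage(result: LLMResultLike): boolean {
  const t = result.llmOutput?.tokenUsage;
  return t !== undefined && (t.totalTokens ?? 0) > 0;
}

console.log(hasUsage({ llmOutput: { tokenUsage: { totalTokens: 1801 } } })); // true
console.log(hasUsage({ llmOutput: {} })); // false
```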
Environment:
- Flowise version: 3.0.13
- Node.js: v22 (Docker image)
- LLM Provider: OpenAI (direct, not Azure)
- Models: gpt-5.1, gpt-5-mini, gpt-5-nano
- Analytics: LangSmith (langsmith-js 0.1.6 bundled) and Langfuse
- Mode: Queue mode with workers
Expected behavior:
Token usage from OpenAI responses should be passed to analytics providers. OpenAI returns usage data in every response:
```json
{
  "usage": {
    "prompt_tokens": 1234,
    "completion_tokens": 567,
    "total_tokens": 1801
  }
}
```
This data should appear in LangSmith/Langfuse traces for cost tracking and optimization.
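A minimal sketch of the mapping an analytics callback needs (the camelCase output field names are illustrative, not Flowise's actual internals):

```typescript
// OpenAI-style usage as returned in the API response body.
interface OpenAIUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

// Illustrative normalized shape for an analytics generation record.
interface GenerationUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

// Hypothetical helper: if `usage` never reaches a function like this,
// everything falls back to zeros -- exactly the symptom reported above.
function extractUsage(usage?: OpenAIUsage): GenerationUsage {
  return {
    promptTokens: usage?.prompt_tokens ?? 0,
    completionTokens: usage?.completion_tokens ?? 0,
    totalTokens: usage?.total_tokens ?? 0,
  };
}

console.log(extractUsage({ prompt_tokens: 1234, completion_tokens: 567, total_tokens: 1801 }));
// { promptTokens: 1234, completionTokens: 567, totalTokens: 1801 }
```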
To Reproduce
- Create an Agentflow V2 with ChatOpenAI nodes
- Enable LangSmith or Langfuse in flow Settings > Configuration > Analyse Chatflow
- Run a prediction through the flow
- Check LangSmith/Langfuse dashboard
- Observe that traces appear, but token counts are 0 and the model is null
Expected behavior
Token usage from OpenAI API responses should be captured and passed to analytics providers (LangSmith, Langfuse).
When viewing traces in LangSmith or Langfuse, LLM generations should show:
- Model name (e.g., "gpt-5.1")
- Input/prompt tokens
- Output/completion tokens
- Cached tokens
- Total tokens
- Calculated cost (based on model pricing)
This data is essential for:
- Cost tracking and optimization
- Identifying expensive prompts
- Comparing model efficiency
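The cost calculation in question is straightforward once tokens and model name are present; a sketch with placeholder prices (the per-million-token figures below are made up, not real OpenAI pricing):

```typescript
// Placeholder USD prices per 1M tokens -- illustrative values only.
const PRICES: Record<string, { input: number; output: number }> = {
  "gpt-5-mini": { input: 0.25, output: 2.0 },
};

// Cost degrades to zero whenever model is null or tokens are zero, so
// the missing usage/model data reported above makes cost tracking impossible.
function estimateCost(
  model: string | null,
  promptTokens: number,
  completionTokens: number
): number {
  const p = model ? PRICES[model] : undefined;
  if (!p) return 0;
  return (promptTokens * p.input + completionTokens * p.output) / 1_000_000;
}

console.log(estimateCost("gpt-5-mini", 1234, 567)); // small nonzero cost
console.log(estimateCost(null, 1234, 567)); // 0 -- the current broken state
```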
Screenshots
No response
Flow
No response
Use Method
None
Flowise Version
No response
Operating System
None
Browser
None
Additional context
No response