Token Usage and Cost Missing for Non-OpenAI Providers (e.g., Gemini) in Flowise + Langfuse Integration #5015

@RockybhaiRakesh

Description

Describe the bug

When using Flowise integrated with Langfuse, only OpenAI Chat models (e.g., gpt-3.5, gpt-4) show accurate token usage, cost, and input/output trace data in the Langfuse dashboard.

However, other LLM providers such as Google Gemini, Mistral, or Anthropic Claude display only latency, while the input, output, token usage, and cost fields remain empty.

This inconsistency significantly affects observability and debugging in non-OpenAI workflows.

To Reproduce

Use a non-OpenAI chat model node such as Gemini Chat or DeepSeek in Flowise

Send a message through the Flowise UI or API

Open Langfuse and inspect the trace
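The second step can also be driven via Flowise's prediction API instead of the UI. A sketch of the request, assuming a local Flowise instance on the default port and a placeholder chatflow ID (both are assumptions to be replaced with your own values):

```python
import json

# Placeholder: substitute a real chatflow ID from your Flowise instance.
CHATFLOW_ID = "your-chatflow-id"
FLOWISE_URL = f"http://localhost:3000/api/v1/prediction/{CHATFLOW_ID}"

# Minimal prediction payload; send it with any HTTP client, e.g.:
#   curl -X POST "$FLOWISE_URL" -H "Content-Type: application/json" \
#        -d '{"question": "Hello, what model are you?"}'
payload = {"question": "Hello, what model are you?"}
body = json.dumps(payload)
```

After the request completes, the corresponding trace should appear in Langfuse for inspection.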

Expected behavior

All LLM providers (Gemini, Claude, Mistral, etc.) integrated via Flowise and logged via Langfuse should:

Display prompt (input) and completion (output)

Show token usage (input/output/total)

Calculate and display costs

Reflect all traces consistently, regardless of the provider

Screenshots

No response

Flow

No response

Use Method

None

Flowise Version

No response

Operating System

None

Browser

None

Additional context

No response

Metadata

Assignees: No one assigned
Labels: bug (Something isn't working)
