
[Bug]: Bailian provider usage tracking shows all zeros despite correct API response #46142

@dellzou

Description


Bug type

Regression (worked before, now fails)

Summary

When using the Bailian (阿里云百炼) provider with OpenAI-compatible mode, the usage statistics displayed in /status show all zeros despite the API returning valid token counts.

Steps to reproduce


Reproduction Steps

  1. Configure Bailian Provider

    Add to models.json:

    {
      "providers": {
        "bailian": {
          "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
          "apiKey": "sk-YOUR_API_KEY",
          "api": "openai-completions",
          "models": [{
            "id": "qwen3.5-plus",
            "cost": {
              "input": 4,
              "output": 12
            },
            "contextWindow": 1000000,
            "maxTokens": 65536
          }]
        }
      }
    }

  2. Start a New Session

    openclaw new

    or use /new in chat

  3. Send Any Message

    Hello

  4. Check Usage Display

    Run /status in chat, or check the session log: ~/.openclaw/agents/default/sessions/*.jsonl

  5. Observe the Bug

    UI shows: Avg Tokens / Msg: 0, Avg Cost / Msg: $0.0000
    Session log shows: "usage": {"input": 0, "output": 0, "totalTokens": 0}

  6. Verify the API Actually Returns Usage (Optional)

    curl -X POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
      -H "Authorization: Bearer sk-YOUR_KEY" \
      -H "Content-Type: application/json" \
      -d '{"model":"qwen3.5-plus","messages":[{"role":"user","content":"Hi"}]}'

    → The API returns correct prompt_tokens, completion_tokens, total_tokens
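The session-log check in step 4 can be scripted. A minimal sketch, assuming each line of the .jsonl file is a JSON object that may carry a `usage` field shaped as shown in this report:

```javascript
// Sum token usage across a session's .jsonl log to spot all-zero sessions.
// Assumes one JSON object per line with an optional `usage` object shaped
// as in this report ({input, output, totalTokens, ...}).
function summarizeUsage(jsonlText) {
  const totals = { input: 0, output: 0, totalTokens: 0, entries: 0 };
  for (const line of jsonlText.split("\n")) {
    if (!line.trim()) continue;
    let entry;
    try {
      entry = JSON.parse(line);
    } catch {
      continue; // skip any non-JSON lines
    }
    if (!entry.usage) continue;
    totals.input += entry.usage.input ?? 0;
    totals.output += entry.usage.output ?? 0;
    totals.totalTokens += entry.usage.totalTokens ?? 0;
    totals.entries += 1;
  }
  return totals;
}

// Usage (path pattern from this report):
//   const fs = require("fs");
//   const text = fs.readFileSync("session.jsonl", "utf8");
//   console.log(summarizeUsage(text)); // all-zero totals reproduce the bug
```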

Expected result

  • /status displays the correct token counts and cost
  • The usage field in the session log contains actual values

Actual result

  • All usage metrics are 0

  Reproduction frequency: every time (100% reproducible)
  Scope of impact: all users of the bailian provider

Expected behavior

The /status command should display:

  • Accurate token counts (input/output/total)
  • Estimated cost based on configured pricing
  • Throughput statistics

Actual behavior

All usage metrics show zero:

Avg Tokens / Msg: 0
Avg Cost / Msg: $0.0000
Throughput: 0 tok/min

Session logs show:

"usage": {
  "input": 0,
  "output": 0,
  "cacheRead": 0,
  "cacheWrite": 0,
  "totalTokens": 0,
  "cost": {
    "input": 0,
    "output": 0,
    "cacheRead": 0,
    "cacheWrite": 0,
    "total": 0
  }
}

OpenClaw version

2026.3.12 (6472949)

Operating system

Rocky Linux 4.18.0-553.89.1.el8_10.x86_64

Install method

No response

Model

qwen3.5-plus

Provider / routing chain

bailian → openai-completions → qwen3.5-plus

Config file / key location

~/.openclaw/agents/default/agent/models.json

Additional provider/model setup details

No response

Logs, screenshots, and evidence

Impact and severity

No response

Additional information

Issue: Bailian Provider Usage Tracking Shows All Zeros

Problem Summary

When using the Bailian (阿里云百炼) provider with OpenAI-compatible mode, the usage statistics displayed in /status show all zeros despite the API returning valid token counts.

Environment

Expected Behavior

The /status command should display:

  • Accurate token counts (input/output/total)
  • Estimated cost based on configured pricing
  • Throughput statistics

Actual Behavior

All usage metrics show zero:

Avg Tokens / Msg: 0
Avg Cost / Msg: $0.0000
Throughput: 0 tok/min

Session logs show:

"usage": {
  "input": 0,
  "output": 0,
  "cacheRead": 0,
  "cacheWrite": 0,
  "totalTokens": 0,
  "cost": {
    "input": 0,
    "output": 0,
    "cacheRead": 0,
    "cacheWrite": 0,
    "total": 0
  }
}

Investigation Results

1. API Response Format ✓ Correct

Tested with direct curl call:

"usage": {
  "prompt_tokens": 11,
  "completion_tokens": 325,
  "total_tokens": 336,
  "completion_tokens_details": {
    "reasoning_tokens": 311,
    "text_tokens": 325
  },
  "prompt_tokens_details": {
    "text_tokens": 11
  }
}

2. OpenClaw Usage Parser ✓ Supports Field Names

Found in /usr/lib/node_modules/openclaw/dist/plugin-sdk/audit-DtsdQ9jZ.js:

const rawInput = asFiniteNumber(
  raw.input ?? raw.inputTokens ?? raw.input_tokens ??
  raw.promptTokens ?? raw.prompt_tokens  // ✓ Supported
);
const output = asFiniteNumber(
  raw.output ?? raw.outputTokens ?? raw.output_tokens ??
  raw.completionTokens ?? raw.completion_tokens  // ✓ Supported
);
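To double-check the conclusion above, the quoted fallback chain can be exercised in isolation. A standalone sketch (field names taken from the quoted snippet; `extractUsage` is a hypothetical helper name, and the actual code may read `total_tokens` directly rather than summing):

```javascript
// Hypothetical re-creation of the fallback logic quoted from
// audit-DtsdQ9jZ.js, to confirm the Bailian response fields are picked up.
function asFiniteNumber(v) {
  return typeof v === "number" && Number.isFinite(v) ? v : undefined;
}

function extractUsage(raw) {
  const input = asFiniteNumber(
    raw.input ?? raw.inputTokens ?? raw.input_tokens ??
    raw.promptTokens ?? raw.prompt_tokens
  );
  const output = asFiniteNumber(
    raw.output ?? raw.outputTokens ?? raw.output_tokens ??
    raw.completionTokens ?? raw.completion_tokens
  );
  return {
    input: input ?? 0,
    output: output ?? 0,
    totalTokens: (input ?? 0) + (output ?? 0),
  };
}
```

Feeding it the `usage` object from the curl response in section 1 yields nonzero counts, which supports the conclusion that the parser itself is not where the data is lost.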

3. Session Logs ✗ Usage Zeroed

Despite the correct API response and parser support, usage data is written to the session log as all zeros.

Configuration

models.json provider config:

{
  "providers": {
    "bailian": {
      "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
      "apiKey": "sk-***",
      "api": "openai-completions",
      "models": [{
        "id": "qwen3.5-plus",
        "cost": {
          "input": 4,
          "output": 12,
          "cacheRead": 4,
          "cacheWrite": 12
        },
        "contextWindow": 1000000,
        "maxTokens": 65536
      }]
    }
  }
}

Hypothesis

The issue likely occurs in one of these places:

  1. Response transformation layer - usage data may be stripped or reset before it reaches the parser
  2. Provider-specific adapter - the openai-completions API mode for bailian may not pass the usage field through
  3. Session logging pipeline - usage may be correctly parsed but zeroed during session logging
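One further candidate worth ruling out (an assumption on my part, not confirmed from the logs): if OpenClaw streams completions, OpenAI-compatible endpoints typically omit `usage` from streamed chunks unless the request sets `stream_options: {"include_usage": true}`, and as I understand it DashScope's compatible mode follows the same convention. A sketch of the request-body difference (`buildChatRequest` is hypothetical; whether OpenClaw streams here is the thing to verify):

```javascript
// Sketch: building a streaming chat request that still returns usage.
// Model name and endpoint behavior are taken from this report; whether
// OpenClaw actually streams Bailian requests is an assumption to verify.
function buildChatRequest(messages, { stream = true } = {}) {
  const body = { model: "qwen3.5-plus", messages, stream };
  if (stream) {
    // Without this, OpenAI-compatible servers usually send no `usage`
    // in the stream, which would surface downstream as all-zero counts.
    body.stream_options = { include_usage: true };
  }
  return body;
}
```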

Suggested Fix Areas

  1. Check the bailian provider adapter in the OpenAI-compatible mode handler
  2. Verify usage extraction happens before any response transformation
  3. Add debug logging to trace usage data flow from API response → parser → session log
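For fix area 3, a minimal tracing sketch; the stage names are illustrative, not actual OpenClaw internals:

```javascript
// Sketch of debug tracing for the usage pipeline: call this at each
// hand-off point so the first stage that turns nonzero usage into zeros
// is easy to spot. Stage names are illustrative, not OpenClaw internals.
function traceUsage(stageName, usage, log = console.error) {
  const total =
    (usage?.input ?? usage?.prompt_tokens ?? 0) +
    (usage?.output ?? usage?.completion_tokens ?? 0);
  log(`[usage-trace] ${stageName}: total=${total}`, usage);
  return usage;
}

// Intended use at each hand-off point:
//   traceUsage("api-response", response.usage);
//   traceUsage("after-transform", transformed.usage);
//   traceUsage("before-session-write", entry.usage);
```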

Reproduction Steps

  1. Configure bailian provider with API key
  2. Send any chat message
  3. Run /status or check session logs
  4. Observe all usage metrics are zero

Report Date: 2026-03-14
Investigator: 小爱 (OpenClaw instance on Rocky Linux)

Metadata


    Labels

    bug (Something isn't working), regression (Behavior that previously worked and now fails)
