
fix: Fix transcription progress bar regression #955

Merged
WEIFENG2333 merged 1 commit into master from fix/progress-regression
Jan 11, 2026

Conversation

@WEIFENG2333
Owner

@WEIFENG2333 WEIFENG2333 commented Jan 11, 2026

Summary

Fixes #944, where the progress bar would move backwards during transcription.

Root cause: when long audio is split into chunks and processed in parallel, the progress callbacks from different chunks interleave, causing the overall progress to regress (e.g., after reaching 80%, it suddenly jumps back to 20%).

Fix

  • ChunkedASR: record each chunk's progress, compute the weighted average, and only update when the overall progress increases
  • FasterWhisperASR: add a monotonic-increase guard
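The chunk-aggregation idea can be sketched as follows. This is a minimal illustration, not the actual ChunkedASR code: `make_chunk_progress` and its signature are hypothetical names, but the mechanism (per-chunk array, lock, averaged sum, monotonic guard) matches the description above.

```python
import threading

def make_chunk_progress(total_chunks, callback):
    """Aggregate per-chunk progress into one monotonic overall value.

    `callback(overall, message)` fires only when the overall percentage
    increases, so interleaved chunk callbacks can no longer make the
    progress bar move backwards.
    """
    chunk_progress = [0] * total_chunks  # per-chunk percentage (0-100)
    last_overall = 0
    lock = threading.Lock()

    def on_chunk_progress(idx, progress, message=""):
        nonlocal last_overall
        with lock:
            chunk_progress[idx] = progress
            overall = sum(chunk_progress) // total_chunks
            if overall > last_overall:  # monotonic guard
                last_overall = overall
                callback(overall, f"{idx + 1}/{total_chunks}: {message}")

    return on_chunk_progress
```

With two chunks, a callback sequence of chunk 0 at 80%, chunk 1 at 20%, chunk 0 at 100% yields overall values 40, 50, 60: always increasing, even though the raw per-chunk values went 80 → 20 → 100.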

Test plan

  • Transcribe long audio (>10 minutes) via the Bcut / JianYing APIs and verify the progress bar advances smoothly
  • Transcribe with FasterWhisper and confirm progress never moves backwards

🤖 Generated with Claude Code


Note

Fixes progress rollback during transcription by enforcing monotonic progress updates.

  • ChunkedASR: Track per-chunk progress with a thread-safe lock and compute overall progress as the averaged sum; only emit callbacks when overall increases.
  • FasterWhisperASR: Maintain last_progress and only forward increasing mapped progress values; minor formatting change to progress message (e.g., 85%).

Written by Cursor Bugbot for commit e9a8bab. This will update automatically on new commits.

Fix issue #944 where progress bar would jump backwards during transcription.

Root cause: When audio is split into chunks for parallel processing,
progress callbacks from different chunks were interleaved, causing
the overall progress to regress (e.g., 80% -> 20%).

Changes:
- ChunkedASR: Track each chunk's progress separately, calculate weighted
  average, and only emit progress when it increases
- FasterWhisperASR: Add monotonic progress protection
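The FasterWhisperASR side of the fix is simpler and can be sketched as a small wrapper. This is an illustrative sketch, assuming a `callback(progress, message)` shape; `make_monotonic` is a hypothetical name, not the actual class method.

```python
def make_monotonic(callback):
    """Wrap a progress callback so it never reports a lower value."""
    last_progress = 0

    def guarded(progress, message=""):
        nonlocal last_progress
        if progress > last_progress:  # drop any regressing value
            last_progress = progress
            callback(progress, message)

    return guarded
```

Feeding the wrapper 10, 30, 20, 40 forwards only 10, 30, 40: the out-of-order 20 is silently dropped.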

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Copilot AI review requested due to automatic review settings January 11, 2026 09:08
@claude

claude bot commented Jan 11, 2026

Claude encountered an error — View job


I'll analyze this and get back to you.

@WEIFENG2333 WEIFENG2333 merged commit eae456a into master Jan 11, 2026
5 of 6 checks passed

Copilot AI left a comment


Pull request overview

This pull request fixes issue #944 where the transcription progress bar would regress (e.g., jump from 80% back to 20%) when processing long audio files that are split into chunks and processed in parallel.

Changes:

  • Added monotonic progress increase protection in FasterWhisperASR to prevent progress regression
  • Implemented weighted average progress calculation in ChunkedASR with thread-safe progress tracking
  • Changed progress message formatting to remove space before '%' sign for consistency

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

| File | Description |
| --- | --- |
| app/core/asr/faster_whisper.py | Added a `last_progress` tracking variable and a monotonic-increase check so the progress callback never reports lower values |
| app/core/asr/chunked_asr.py | Introduced thread-safe progress tracking with a per-chunk progress array, a progress lock, and monotonic-increase enforcement using a weighted average |


```python
    idx: int, chunk_bytes: bytes, offset_ms: int
) -> Tuple[int, ASRData]:
    """Transcribe a single audio chunk, creating an independent ASR instance per chunk."""
    nonlocal last_overall
```

Copilot AI Jan 11, 2026


The nonlocal declaration for last_overall on line 180 is redundant because it's already declared as nonlocal on line 184 within the nested chunk_callback function. The outer transcribe_single_chunk function doesn't need to declare it as nonlocal since it's not modifying it directly - only the inner callback does.

Suggested change (remove the redundant declaration):

```diff
-    nonlocal last_overall
```
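The scoping point can be illustrated with a minimal sketch (hypothetical names, not the PR's actual code): `nonlocal` is required only in the scope that actually rebinds the variable, and it resolves through any intermediate function that does not bind it.

```python
def run_chunks():
    last_overall = 0

    def transcribe_single_chunk(progress):
        # No `nonlocal last_overall` needed at this level: this function
        # never assigns to the variable itself.
        def chunk_callback(overall):
            nonlocal last_overall  # the only scope that rebinds it
            if overall > last_overall:
                last_overall = overall

        chunk_callback(progress)

    transcribe_single_chunk(42)
    return last_overall
```

Because `transcribe_single_chunk` never assigns to `last_overall`, Python treats it as a free variable there, and the inner `nonlocal` declaration alone is sufficient to rebind the outermost binding.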

Comment on lines +187 to +193
```python
with progress_lock:
    chunk_progress[idx] = progress
    overall = sum(chunk_progress) // total_chunks
    # Only allow progress to increase monotonically
    if overall > last_overall:
        last_overall = overall
        callback(overall, f"{idx+1}/{total_chunks}: {message}")
```

Copilot AI Jan 11, 2026


The new monotonic progress increase behavior is not covered by tests. Consider adding a test that verifies progress values are monotonically increasing when multiple chunks are processed concurrently. The test could track all callback invocations and verify that each progress value is greater than or equal to the previous one.
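Such a test might look like the following sketch. The aggregator is inlined with hypothetical names so the test is self-contained; it hammers the lock-protected update from several threads and asserts the emitted sequence is strictly increasing.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def test_progress_is_monotonic():
    total_chunks = 4
    chunk_progress = [0] * total_chunks
    last_overall = 0
    lock = threading.Lock()
    emitted = []  # every overall value forwarded to the callback

    def report(idx, progress):
        nonlocal last_overall
        with lock:
            chunk_progress[idx] = progress
            overall = sum(chunk_progress) // total_chunks
            if overall > last_overall:
                last_overall = overall
                emitted.append(overall)

    def worker(idx):
        for p in range(0, 101, 5):  # each chunk reports 0..100
            report(idx, p)

    with ThreadPoolExecutor(max_workers=total_chunks) as pool:
        for i in range(total_chunks):
            pool.submit(worker, i)

    # every emitted value must be strictly greater than the previous one
    assert all(b > a for a, b in zip(emitted, emitted[1:]))
    assert emitted[-1] == 100  # all chunks finished -> overall reaches 100
```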

