> ⚠️ Caution: Review failed. The pull request is closed.

📝 Walkthrough

Refactors the STT model surface to a backend-agnostic `SupportedSttModel`, adds a `hypr-download-interface` crate, reworks the local-stt plugin APIs/servers (model-aware start/download/connection), removes the connector STT APIs, updates AM/`AmModel` handling, and migrates the desktop UI/onboarding to the new model APIs. Additional dependency, error, streaming, and editor/stylesheet adjustments are included.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Desktop App
    participant LSTT as Local STT Plugin
    participant Server as STT Server
    participant AM as AM Service
    App->>LSTT: set_current_model(SupportedSttModel)
    App->>LSTT: start_server(Some(model))
    alt model is Whisper
        LSTT->>LSTT: validate downloaded (file + size_bytes)
        LSTT->>Server: start internal server with Whisper model
        Server-->>LSTT: base_url
    else model is Am
        LSTT->>Server: run external binary (provide am_key)
        Server->>AM: init(AmModel, models_dir)
        AM-->>Server: InitResponse
        Server-->>LSTT: base_url
    end
    LSTT-->>App: base_url
```
```mermaid
sequenceDiagram
    participant Listener as Listener Plugin
    participant LSTT as Local STT Plugin
    participant WS as Streaming Service
    Listener->>LSTT: get_connection()
    LSTT-->>Listener: { base_url }
    Listener->>WS: connect(base_url, redemption_time_ms)
    WS-->>Listener: stream partial/final words
    Listener->>Listener: update partial/final (overwrite, sorted)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~90 minutes
cubic analysis

No issues found across 15 files.
Actionable comments posted: 13
🔭 Outside diff range comments (8)
plugins/connector/permissions/schemas/schema.json (1)
478-488: Remove all stale STT permission references across schema and autogenerated artifacts

The `get_stt_connection` command has been removed, but its permission variants still linger in the JSON schema, generated reference docs, and TOML definitions. Please remove them to avoid validating against a non-existent command:

• plugins/connector/permissions/schemas/schema.json
Remove these entries around lines 478–488:

```diff
-    {
-      "description": "Enables the get_stt_connection command without any pre-configured scope.",
-      "type": "string",
-      "const": "allow-get-stt-connection",
-      "markdownDescription": "Enables the get_stt_connection command without any pre-configured scope."
-    },
-    {
-      "description": "Denies the get_stt_connection command without any pre-configured scope.",
-      "type": "string",
-      "const": "deny-get-stt-connection",
-      "markdownDescription": "Denies the get_stt_connection command without any pre-configured scope."
-    },
```

• plugins/connector/permissions/autogenerated/reference.md
Remove the sections referencing `connector:allow-get-stt-connection` (Enables the get_stt_connection command without any pre-configured scope) and `connector:deny-get-stt-connection` (Denies the get_stt_connection command without any pre-configured scope).

• plugins/connector/permissions/autogenerated/commands/get_stt_connection.toml
Either delete this file entirely or remove these blocks:

```toml
identifier = "allow-get-stt-connection"
description = "Enables the get_stt_connection command without any pre-configured scope."
commands.allow = ["get_stt_connection"]

identifier = "deny-get-stt-connection"
description = "Denies the get_stt_connection command without any pre-configured scope."
commands.deny = ["get_stt_connection"]
```

After applying these changes, rerun your search script:
```shell
rg -n 'get_stt_connection|allow-get-stt-connection|deny-get-stt-connection'
```

to confirm no STT tokens remain.
plugins/connector/src/ext.rs (2)
94-101: Avoid panic: replace unwrap() on api_base with explicit error
`guard.api_base.clone().unwrap()` will panic if the local LLM server is reported as running but the `api_base` hasn't been set. Return a meaningful error instead of panicking. Apply this diff:
```diff
-        let api_base = if self.is_server_running().await {
-            let state = self.state::<SharedState>();
-            let guard = state.lock().await;
-            guard.api_base.clone().unwrap()
-        } else {
+        let api_base = if self.is_server_running().await {
+            let state = self.state::<SharedState>();
+            let guard = state.lock().await;
+            guard
+                .api_base
+                .clone()
+                .ok_or_else(|| crate::Error::UnknownError(
+                    "local_llm_server_running_but_api_base_missing".into()
+                ))?
+        } else {
             self.start_server().await?
         };
```
113-124: Validate custom LLM config: error if enabled but API base is missing/blank

When `CustomEnabled` is true, we currently default `api_base` to an empty string. That defers failure to later HTTP calls with ambiguous errors. Fail fast with a clear error. Apply this diff:
```diff
-    if custom_enabled {
-        let api_base = store
-            .get::<Option<String>>(StoreKey::CustomApiBase)?
-            .flatten()
-            .unwrap_or_default();
-        let api_key = store
-            .get::<Option<String>>(StoreKey::CustomApiKey)?
-            .flatten();
-
-        let conn = ConnectionLLM::Custom(Connection { api_base, api_key });
-        Ok(conn)
-    } else {
+    if custom_enabled {
+        let api_base = store
+            .get::<Option<String>>(StoreKey::CustomApiBase)?
+            .flatten()
+            .filter(|s| !s.trim().is_empty())
+            .ok_or_else(|| crate::Error::UnknownError(
+                "custom_llm_enabled_but_api_base_missing".into()
+            ))?;
+        let api_key = store
+            .get::<Option<String>>(StoreKey::CustomApiKey)?
+            .flatten();
+
+        Ok(ConnectionLLM::Custom(Connection { api_base, api_key }))
+    } else {
```

plugins/local-llm/src/ext.rs (1)
42-45: Suspicious hardcoded models directory name "ttt"

"ttt" looks like a leftover placeholder; this will change the on-disk layout and likely break existing installs or confuse users.
Apply this diff to use a stable, self-explanatory directory name:
```diff
-    fn models_dir(&self) -> PathBuf {
-        self.path().app_data_dir().unwrap().join("ttt")
-    }
+    fn models_dir(&self) -> PathBuf {
+        // Store local LLM models under app data
+        self.path().app_data_dir().unwrap().join("models")
+    }
```

crates/whisper-local-model/src/lib.rs (1)
82-92: Update all `.model_size()` calls to `.model_size_bytes()`

Renaming `model_size` to `model_size_bytes` improves clarity but leaves several stale references. Please update the following call sites (and any related proxy methods) to use `.model_size_bytes()`:
- owhisper/owhisper-server/src/server.rs:185

```diff
-        .model_size(config.size.clone())
+        .model_size_bytes(config.size.clone())
```

- plugins/local-llm/src/commands.rs:19, 25, 31
- plugins/local-llm/src/ext.rs:75
- plugins/local-llm/src/model.rs:39 (`pub fn model_size(&self) -> u64`) – consider renaming this proxy to `model_size_bytes` or delegating internally.
5-13: Re-export DownloadProgress to avoid breaking callers and fix test pattern matches

By switching to `hypr_download_interface::DownloadProgress` without re-exporting it, pattern matches like `DownloadProgress::Started` in this crate's tests (and any consumers using the crate's previous symbol path) will no longer resolve. Re-exporting preserves the prior public API surface and keeps tests working.
Apply this diff:
```diff
 pub use local::*;
 pub use remote::*;
 pub use types::*;
+
+// Why: preserve public API and keep tests/consumers using `crate::DownloadProgress` working.
+pub use hypr_download_interface::DownloadProgress;

 use {
     crate::Error::OtherError,
     futures_util::{stream::FuturesUnordered, StreamExt, TryStreamExt},
-    hypr_download_interface::DownloadProgress,
     reqwest::StatusCode,
     std::{
         cmp::min,
         fs::File,
         fs::OpenOptions,
```

apps/desktop/src/components/toast/shared.tsx (1)
65-68: Fix query invalidation key: mismatch prevents dependent UI from refreshing

Listeners use ["check-stt-model-downloaded"] and ["check-any-stt-model-downloaded"], but this function invalidates ["check-model-downloaded"]. Align the keys to ensure invalidation works.
Apply this diff:
```diff
   if (queryClient) {
-    queryClient.invalidateQueries({ queryKey: ["stt-model-downloading"] });
-    queryClient.invalidateQueries({ queryKey: ["check-model-downloaded"] });
+    queryClient.invalidateQueries({ queryKey: ["stt-model-downloading"] });
+    // Why: keep STT model status in sync across UI after download completes.
+    queryClient.invalidateQueries({ queryKey: ["check-stt-model-downloaded"] });
+    queryClient.invalidateQueries({ queryKey: ["check-any-stt-model-downloaded"] });
   }
```

plugins/listener/src/fsm.rs (1)
565-594: Don’t log secrets: remove api_key from tracing

`api_key` is included in the `listen_client` log line. This leaks credentials to logs.
Apply this diff to remove api_key from logs:
```diff
-    tracing::info!(api_base = ?api_base, api_key = ?api_key, languages = ?languages, "listen_client");
+    tracing::info!(api_base = ?api_base, languages = ?languages, "listen_client");
```
🧹 Nitpick comments (22)
plugins/connector/src/ext.rs (1)
166-172: Optional: reuse a single reqwest Client to reduce overhead
`reqwest::Client::new()` is cheap but not free; reusing a single client is the recommended pattern and enables connection pooling and keep-alives. Suggested pattern (outside this hunk):
```rust
use once_cell::sync::Lazy;

static HTTP: Lazy<reqwest::Client> = Lazy::new(reqwest::Client::new);
```

Then here:
```diff
-    let mut req = reqwest::Client::new().get(url);
+    let mut req = HTTP.get(url);
```

crates/transcribe-whisper-local/src/service/streaming.rs (1)
123-126: Nit: simplify with clamp for readability

Same behavior, more idiomatic and readable.
Apply this diff:
```diff
-    let redemption_time = Duration::from_millis(std::cmp::min(
-        std::cmp::max(params.redemption_time_ms.unwrap_or(500), 300),
-        1200,
-    ));
+    let redemption_time_ms = params.redemption_time_ms.unwrap_or(500).clamp(300, 1200);
+    let redemption_time = Duration::from_millis(redemption_time_ms);
```
crates/download-interface/src/lib.rs (1)

1-6: Add Clone/PartialEq/Eq and mark the enum non_exhaustive for forward compatibility

Deriving these traits eases comparisons in tests and state machines. `#[non_exhaustive]` lets you add variants later without breaking downstream exhaustive matches.
Apply this diff:
```diff
-#[derive(Debug, serde::Serialize, serde::Deserialize)]
+#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]
+#[non_exhaustive]
 pub enum DownloadProgress {
     Started,
     Progress(u64, u64),
     Finished,
 }
```

crates/transcribe-moonshine/src/service/streaming.rs (1)
152-155: Prefer clamp() for readability and to avoid nested min/max

This keeps the intent obvious and avoids nesting. No behavior change.
Apply this diff:
```diff
-    let redemption_time = Duration::from_millis(std::cmp::min(
-        std::cmp::max(params.redemption_time_ms.unwrap_or(500), 300),
-        1200,
-    ));
+    let redemption_ms = params.redemption_time_ms.unwrap_or(500).clamp(300, 1200);
+    let redemption_time = Duration::from_millis(redemption_ms);
```

plugins/local-llm/src/ext.rs (1)
98-116: Clamp in-flight progress to 0..=99 to avoid flicker and cast issues

During Progress events, rounding can emit 100 before Finished triggers. Clamp to 99 for in-flight updates and keep 100 only on Finished for consistent UX.
Apply this diff:
```diff
-            DownloadProgress::Progress(downloaded, total_size) => {
-                let percent = (downloaded as f64 / total_size as f64) * 100.0;
-                let _ = channel.send(percent as i8);
-            }
+            DownloadProgress::Progress(downloaded, total_size) => {
+                let percent = (downloaded as f64 / total_size as f64) * 100.0;
+                // Keep 100 reserved for Finished to avoid flicker
+                let percent = percent.clamp(0.0, 99.0) as i8;
+                let _ = channel.send(percent);
+            }
```

plugins/local-stt/src/server/external.rs (4)
33-36: Return the server status instead of discarding it

Expose the status payload to callers; it's more useful than just Ok(()).
Apply this diff:
```diff
-    pub async fn status(&self) -> Result<(), crate::Error> {
-        self.client.status().await?;
-        Ok(())
-    }
+    pub async fn status(&self) -> Result<hypr_am::ServerStatus, crate::Error> {
+        self.client.status().await
+    }
```
43-47: Avoid hardcoding the port; prefer dynamic binding or a parameter

A fixed port (6942) risks conflicts. Consider accepting a port parameter or discovering a free port at runtime and propagating it to the child process.
Would you like a follow-up patch that negotiates an available port and passes it to the external command?
1-7: Avoid persisting API keys in long-lived structs

Storing `api_key` on ServerHandle increases exposure risk. Prefer passing the key to init() and not keeping it afterward.
If you must keep it, ensure it’s never logged and consider zeroizing after init if possible.
47-56: Shutdown channel is unused in this external-process flow

You create a watch channel but never hold the receiver or propagate it to anything. terminate() kills the child directly, so the channel adds no value.
Options:
- Remove the shutdown field and the channel entirely in this external server variant, or
- Hook the receiver up to some shutdown mechanism if the child can listen for it (e.g., via IPC or a signal).
crates/am/src/model.rs (1)
24-64: Accessor methods are straightforward; consider centralizing metadata

The repeated match statements are fine, but a static table or macro could reduce duplication and ease future additions.
If desired, I can provide a follow-up refactor using a single metadata table and derive helpers.
apps/desktop/src/components/toast/shared.tsx (2)
60-63: Type the Tauri Channel for stronger type-safety

Make the message type explicit to match DownloadProgress's number semantics and reduce implicit any.
Apply this diff:
```diff
-    const sttChannel = new Channel();
+    const sttChannel = new Channel<number>();
```
102-106: Nit: type the LLM download channel as well

Mirror the STT channel typing for consistency and type-safety.
Apply this diff:
```diff
-    const llmChannel = new Channel();
+    const llmChannel = new Channel<number>();
```

apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (1)
170-176: Avoid hard-coding pro model keys; derive from the source of truth

Hard-coded whitelists drift easily. If possible, filter by a property (e.g., a pro/tier flag) returned by listSupportedModels(), or centralize the whitelist in one shared place to keep the UI consistent.
Example (if the API exposes a boolean flag):
```ts
const models = await localSttCommands.listSupportedModels();
return models.filter((m) => m.pro === true);
```

If not, at least extract the whitelist to a constant exported from a shared module to avoid duplication across the UI.
apps/desktop/src/components/welcome-modal/model-selection-view.tsx (3)
16-16: Default selection should adapt to available models

If QuantizedSmall isn't present, the default will be a non-existent option. Prefer deferring selection until data loads, then choose the first allowed model.
You can add this effect to set a sane default when data arrives:
```ts
// Place near other hooks
const allowed = new Set<SupportedSttModel>(["QuantizedTiny", "QuantizedSmall", "QuantizedLargeTurbo"]);

useEffect(() => {
  const first = supportedSTTModels.data?.find((mi) => allowed.has(mi.model.key))?.model.key;
  if (first) setSelectedModel(first);
}, [supportedSTTModels.data]);
```

If you adopt this, consider initializing selectedModel to a placeholder state (e.g., undefined) and adjusting the button disabled logic accordingly.
65-66: Remove redundant type assertion

model.key is already a SupportedSttModel per generated bindings. The cast is unnecessary.
```diff
-  onClick={() => setSelectedModel(model.key as SupportedSttModel)}
+  onClick={() => setSelectedModel(model.key)}
```
90-93: Disable Continue while loading or when there is no valid selection

selectedModel is always truthy with the current default, so the disabled guard is ineffective. Consider disabling while the query is loading or until a valid selection is set.
```diff
-  disabled={!selectedModel}
+  disabled={supportedSTTModels.isLoading || !selectedModel}
```

If you keep a non-null default, you can skip the second clause.
plugins/local-stt/src/lib.rs (1)
30-48: Consider explicitly exporting the SupportedSttModel and SttModelInfo types

Specta should pull these through list_supported_models, but adding explicit `.typ::<SupportedSttModel>()` and `.typ::<SttModelInfo>()` calls can make the export intent clearer and resilient to future changes.
```diff
 tauri_specta::Builder::<R>::new()
     .plugin_name(PLUGIN_NAME)
     .commands(tauri_specta::collect_commands![
         commands::models_dir::<Wry>,
         // ...
         commands::list_supported_models,
     ])
-    .typ::<hypr_whisper_local_model::WhisperModel>()
+    .typ::<hypr_whisper_local_model::WhisperModel>()
+    .typ::<crate::SupportedSttModel>()
+    .typ::<crate::SttModelInfo>()
     .error_handling(tauri_specta::ErrorHandlingMode::Throw)
```

crates/am/src/hf.rs (4)
89-107: Relax callback trait bounds; Sync isn't needed

The callback is protected by a Mutex; requiring Sync unnecessarily constrains F and callers.
```diff
 where
-    F: FnMut(DownloadProgress) + Send + Sync + Clone,
+    F: FnMut(DownloadProgress) + Send + Clone,
```

And align the Progress impl bound similarly:
```diff
 where
-    F: FnMut(DownloadProgress) + Send + Sync + Clone,
+    F: FnMut(DownloadProgress) + Send + Clone,
```
217-223: Pre-calculation HEAD requests run sequentially; consider parallelizing

HEAD-ing each file in a loop can slow large repos. Fire them concurrently to reduce latency.
```diff
-    let mut total_bytes = 0u64;
-    for file_path in &files_to_download {
-        let size = get_file_size(&api, &repo, file_path).await?;
-        total_bytes += size;
-    }
+    let size_futs = files_to_download
+        .iter()
+        .map(|fp| get_file_size(&api, &repo, fp));
+    let sizes = futures_util::future::try_join_all(size_futs).await?;
+    let total_bytes = sizes.into_iter().sum();
```

Note: this adds a `futures_util::future` import if not already present.
50-87: Unused field: file_total_size

`file_total_size` is set but never read. Drop it to reduce noise.
```diff
 struct CallbackProgress<F> {
     callback: Arc<Mutex<F>>,
     tracker: ProgressTracker,
     current_file: String,
     current_size: usize,
-    file_total_size: usize,
 }
 @@
     async fn init(&mut self, size: usize, _filename: &str) {
-        self.file_total_size = size;
         self.current_size = 0;
     }
 @@
     fn clone(&self) -> Self {
         Self {
             callback: self.callback.clone(),
             tracker: self.tracker.clone(),
             current_file: self.current_file.clone(),
             current_size: self.current_size,
-            file_total_size: self.file_total_size,
         }
     }
```
268-304: DRY opportunity: share file selection logic between functions

download and is_folder_downloaded reimplement the same dir filtering. Factor it into a small helper to reduce divergence risk.
I can extract a small helper (`should_include(dir_path_str: &str, file_path: &str) -> bool`) and reuse it in both places. Want me to push a patch?
plugins/local-stt/src/commands.rs (1)
24-26: Streamline the error handling for `list_supported_models`

The function returns `Result<Vec<SttModelInfo>, String>` but the operation cannot fail. Consider removing the `Result` wrapper since `SUPPORTED_MODELS.iter().map(|m| m.info()).collect()` is infallible.

```diff
-pub async fn list_supported_models() -> Result<Vec<SttModelInfo>, String> {
-    Ok(SUPPORTED_MODELS.iter().map(|m| m.info()).collect())
+pub async fn list_supported_models() -> Vec<SttModelInfo> {
+    SUPPORTED_MODELS.iter().map(|m| m.info()).collect()
 }
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (55)
- Cargo.toml (1 hunks)
- apps/desktop/src-tauri/src/ext.rs (0 hunks)
- apps/desktop/src-tauri/src/lib.rs (0 hunks)
- apps/desktop/src/components/editor-area/note-header/listen-button.tsx (2 hunks)
- apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (3 hunks)
- apps/desktop/src/components/toast/model-select.tsx (1 hunks)
- apps/desktop/src/components/toast/shared.tsx (2 hunks)
- apps/desktop/src/components/welcome-modal/download-progress-view.tsx (1 hunks)
- apps/desktop/src/components/welcome-modal/model-selection-view.tsx (3 hunks)
- crates/am/Cargo.toml (1 hunks)
- crates/am/src/error.rs (1 hunks)
- crates/am/src/hf.rs (1 hunks)
- crates/am/src/lib.rs (1 hunks)
- crates/am/src/model.rs (1 hunks)
- crates/download-interface/Cargo.toml (1 hunks)
- crates/download-interface/src/lib.rs (1 hunks)
- crates/file/Cargo.toml (1 hunks)
- crates/file/src/lib.rs (1 hunks)
- crates/transcribe-moonshine/src/service/streaming.rs (1 hunks)
- crates/transcribe-whisper-local/src/service/streaming.rs (1 hunks)
- crates/whisper-local-model/src/lib.rs (3 hunks)
- owhisper/owhisper-client/src/lib.rs (1 hunks)
- owhisper/owhisper-interface/src/lib.rs (2 hunks)
- owhisper/owhisper-model/src/lib.rs (1 hunks)
- owhisper/owhisper-server/Cargo.toml (1 hunks)
- owhisper/owhisper-server/src/commands/pull.rs (1 hunks)
- plugins/connector/build.rs (0 hunks)
- plugins/connector/js/bindings.gen.ts (0 hunks)
- plugins/connector/permissions/autogenerated/reference.md (0 hunks)
- plugins/connector/permissions/default.toml (0 hunks)
- plugins/connector/permissions/schemas/schema.json (1 hunks)
- plugins/connector/src/commands.rs (1 hunks)
- plugins/connector/src/ext.rs (1 hunks)
- plugins/connector/src/lib.rs (0 hunks)
- plugins/connector/src/types.rs (0 hunks)
- plugins/listener/Cargo.toml (1 hunks)
- plugins/listener/src/error.rs (1 hunks)
- plugins/listener/src/fsm.rs (5 hunks)
- plugins/local-llm/Cargo.toml (1 hunks)
- plugins/local-llm/src/ext.rs (1 hunks)
- plugins/local-stt/Cargo.toml (1 hunks)
- plugins/local-stt/build.rs (0 hunks)
- plugins/local-stt/js/bindings.gen.ts (2 hunks)
- plugins/local-stt/permissions/autogenerated/commands/list_supported_models_info.toml (1 hunks)
- plugins/local-stt/permissions/autogenerated/reference.md (1 hunks)
- plugins/local-stt/permissions/default.toml (0 hunks)
- plugins/local-stt/permissions/schemas/schema.json (2 hunks)
- plugins/local-stt/src/commands.rs (5 hunks)
- plugins/local-stt/src/events.rs (0 hunks)
- plugins/local-stt/src/ext.rs (9 hunks)
- plugins/local-stt/src/lib.rs (4 hunks)
- plugins/local-stt/src/model.rs (1 hunks)
- plugins/local-stt/src/server/external.rs (2 hunks)
- plugins/local-stt/src/server/internal.rs (2 hunks)
- plugins/local-stt/src/types.rs (1 hunks)
💤 Files with no reviewable changes (11)
- plugins/local-stt/build.rs
- apps/desktop/src-tauri/src/lib.rs
- plugins/connector/build.rs
- apps/desktop/src-tauri/src/ext.rs
- plugins/connector/src/lib.rs
- plugins/connector/permissions/autogenerated/reference.md
- plugins/local-stt/permissions/default.toml
- plugins/connector/js/bindings.gen.ts
- plugins/connector/permissions/default.toml
- plugins/connector/src/types.rs
- plugins/local-stt/src/events.rs
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit Configuration File
**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- plugins/local-stt/src/types.rs
- plugins/local-stt/src/server/internal.rs
- crates/transcribe-whisper-local/src/service/streaming.rs
- owhisper/owhisper-server/src/commands/pull.rs
- crates/transcribe-moonshine/src/service/streaming.rs
- crates/download-interface/src/lib.rs
- plugins/local-stt/src/model.rs
- owhisper/owhisper-interface/src/lib.rs
- owhisper/owhisper-model/src/lib.rs
- plugins/local-llm/src/ext.rs
- crates/am/src/error.rs
- owhisper/owhisper-client/src/lib.rs
- crates/am/src/lib.rs
- apps/desktop/src/components/welcome-modal/download-progress-view.tsx
- crates/am/src/hf.rs
- plugins/connector/src/ext.rs
- apps/desktop/src/components/editor-area/note-header/listen-button.tsx
- plugins/connector/src/commands.rs
- crates/whisper-local-model/src/lib.rs
- apps/desktop/src/components/toast/shared.tsx
- apps/desktop/src/components/toast/model-select.tsx
- plugins/local-stt/src/server/external.rs
- crates/file/src/lib.rs
- apps/desktop/src/components/welcome-modal/model-selection-view.tsx
- apps/desktop/src/components/settings/components/ai/stt-view-local.tsx
- plugins/listener/src/fsm.rs
- plugins/listener/src/error.rs
- plugins/local-stt/src/commands.rs
- plugins/local-stt/js/bindings.gen.ts
- crates/am/src/model.rs
- plugins/local-stt/src/ext.rs
- plugins/local-stt/src/lib.rs
🧬 Code Graph Analysis (25)
plugins/local-stt/src/types.rs (1)
- plugins/connector/js/bindings.gen.ts (1): Connection (109-109)

crates/transcribe-whisper-local/src/service/streaming.rs (1)
- owhisper/owhisper-client/src/lib.rs (1): params (47-50)

owhisper/owhisper-server/src/commands/pull.rs (2)
- apps/desktop/src/components/toast/shared.tsx (1): DownloadProgress (12-53)
- packages/ui/src/components/ui/progress.tsx (1): Progress (26-26)

crates/transcribe-moonshine/src/service/streaming.rs (1)
- owhisper/owhisper-client/src/lib.rs (1): params (47-50)

crates/download-interface/src/lib.rs (2)
- apps/desktop/src/components/toast/shared.tsx (1): DownloadProgress (12-53)
- packages/ui/src/components/ui/progress.tsx (1): Progress (26-26)

plugins/local-stt/src/model.rs (1)
- plugins/local-stt/js/bindings.gen.ts (4): AmModel (55-55), WhisperModel (61-61), SupportedSttModel (59-59), SttModelInfo (58-58)

owhisper/owhisper-model/src/lib.rs (2)
- crates/am/src/model.rs (1): model_size_bytes (57-63)
- crates/whisper-local-model/src/lib.rs (1): model_size_bytes (82-92)

plugins/local-llm/src/ext.rs (1)
- crates/file/src/lib.rs (1): download_file_parallel (135-254)

crates/am/src/hf.rs (3)
- packages/ui/src/components/ui/progress.tsx (1): Progress (26-26)
- crates/kyutai/src/assets.rs (1): url (19-21)
- crates/am/src/model.rs (1): repo_name (25-31)

plugins/connector/src/ext.rs (3)
- plugins/connector/js/bindings.gen.ts (2): Connection (109-109), ConnectionLLM (110-110)
- plugins/flags/js/bindings.gen.ts (1): StoreKey (31-31)
- plugins/auth/js/bindings.gen.ts (1): StoreKey (54-54)

apps/desktop/src/components/editor-area/note-header/listen-button.tsx (1)
- plugins/local-stt/js/bindings.gen.ts (1): SupportedSttModel (59-59)

plugins/connector/src/commands.rs (1)
- plugins/connector/js/bindings.gen.ts (2): Connection (109-109), ConnectionLLM (110-110)

crates/whisper-local-model/src/lib.rs (2)
- plugins/local-stt/js/bindings.gen.ts (1): WhisperModel (61-61)
- crates/am/src/model.rs (2): display_name (49-55), model_size_bytes (57-63)

apps/desktop/src/components/toast/shared.tsx (1)
- plugins/local-stt/js/bindings.gen.ts (1): SupportedSttModel (59-59)

apps/desktop/src/components/toast/model-select.tsx (1)
- plugins/local-stt/js/bindings.gen.ts (1): SupportedSttModel (59-59)

plugins/local-stt/src/server/external.rs (2)
- crates/am/src/client.rs (4): init (65-80), new (13-18), new (128-143), status (27-36)
- plugins/local-stt/src/server/internal.rs (1): run_server (61-88)

crates/file/src/lib.rs (1)
- apps/desktop/src/components/toast/shared.tsx (1): DownloadProgress (12-53)

apps/desktop/src/components/welcome-modal/model-selection-view.tsx (2)
- plugins/local-stt/js/bindings.gen.ts (1): SupportedSttModel (59-59)
- packages/ui/src/components/ui/card.tsx (1): CardContent (80-93)

apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (1)
- plugins/connector/src/ext.rs (2): models (151-151), models (155-196)

plugins/listener/src/fsm.rs (3)
- crates/whisper-local/src/model.rs (1): languages (29-32)
- crates/db-user/src/sessions_ops.rs (1): onboarding_session_id (8-10)
- crates/db-core/src/lib.rs (1): conn (17-22)

plugins/local-stt/src/commands.rs (3)
- plugins/local-stt/js/bindings.gen.ts (3): ServerType (57-57), SttModelInfo (58-58), SupportedSttModel (59-59)
- plugins/local-llm/src/ext.rs (4): is_model_downloaded (31-34), is_model_downloaded (64-80), set_current_model (23-23), set_current_model (232-236)
- plugins/local-stt/src/ext.rs (4): is_model_downloaded (46-49), is_model_downloaded (75-100), set_current_model (37-37), set_current_model (347-351)

plugins/local-stt/js/bindings.gen.ts (2)
- plugins/listener/src/manager.rs (1): channel (44-56)
- plugins/local-llm/js/bindings.gen.ts (1): TAURI_CHANNEL (60-60)

crates/am/src/model.rs (1)
- plugins/local-stt/js/bindings.gen.ts (1): AmModel (55-55)

plugins/local-stt/src/ext.rs (7)
- crates/file/src/lib.rs (3): download_file_parallel (135-254), file_size (256-259), s (268-268)
- plugins/local-stt/js/bindings.gen.ts (3): WhisperModel (61-61), SupportedSttModel (59-59), ServerType (57-57)
- plugins/connector/js/bindings.gen.ts (1): Connection (109-109)
- crates/am/src/model.rs (2): repo_name (25-31), model_dir (33-39)
- crates/am/src/hf.rs (2): is_folder_downloaded (268-304), download (137-266)
- plugins/local-stt/src/server/external.rs (1): run_server (39-57)
- plugins/local-stt/src/server/internal.rs (1): run_server (61-88)

plugins/local-stt/src/lib.rs (2)
- plugins/local-stt/js/bindings.gen.ts (2): SupportedSttModel (59-59), commands (9-43)
- plugins/local-stt/src/commands.rs (1): list_supported_models (24-26)
🪛 Gitleaks (8.27.2)
apps/desktop/src/components/settings/components/ai/stt-view-local.tsx
173-173: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: cubic · AI code reviewer
- GitHub Check: ci (windows, windows-latest)
- GitHub Check: ci (macos, macos-latest)
🔇 Additional comments (43)
plugins/local-stt/permissions/schemas/schema.json (2)
417-428: Permissions entries verified

The `list_supported_models_info` permission identifiers and descriptions are implemented consistently across the schema, TOML definitions, and reference documentation. No typos or mismatches were found.

• plugins/local-stt/permissions/schemas/schema.json (lines 418–427)
• plugins/local-stt/permissions/autogenerated/commands/list_supported_models_info.toml
• plugins/local-stt/permissions/autogenerated/reference.md (around lines 293 & 306)

All checks pass—approving the changes.
490-494: Permissions schema and default.toml are aligned
Verified that `allow-list-pro-models` is absent and `allow-list-supported-models` is present in plugins/local-stt/permissions/default.toml. No further updates needed.

plugins/local-stt/permissions/autogenerated/commands/list_supported_models_info.toml (1)
1-14: Autogenerated Command Permissions Verified
- The `$schema` path (plugins/local-stt/permissions/schemas/schema.json) exists.
- Both `allow-list-supported-models-info` and `deny-list-supported-models-info` identifiers are present in plugins/local-stt/permissions/autogenerated/reference.md.

No further changes required.
plugins/local-stt/permissions/autogenerated/reference.md (1)
290-311: Docs reflect the new permissions entries appropriately

The table includes both allow/deny rows for list_supported_models_info; placement is consistent with adjacent list_* entries.
plugins/connector/src/commands.rs (1)
1-1: LGTM: STT import cleanup

The import list correctly excludes STT types/symbols. Matches the removal of the STT command and types elsewhere.
plugins/local-stt/src/server/internal.rs (1)
50-50: No URL concatenations found; exposing `base_url` is safe

- Searched across the repo for any manual joins of `base_url` with routes—no occurrences beyond the literal `"/v1/listen"` registrations in your Axum routers.
- All routes are defined directly (`.route_service("/v1/listen", …)`), so there's no risk of a double or missing `"/"` right now.

Suggestion: if you ever expose `base_url` to third-party consumers, you might add a helper method to centralize safe URL construction, e.g.:

```rust
impl Config {
    pub fn join_path(&self, path: &str) -> String {
        format!("{}{}", self.base_url.trim_end_matches('/'), path)
    }
}
```
plugins/local-llm/Cargo.toml (1)
27-27: Confirmed hypr-download-interface usage in local-llm

The `hypr-download-interface` dependency is indeed used in `plugins/local-llm`:

- plugins/local-llm/src/ext.rs:6 – `use hypr_download_interface::DownloadProgress;`
- Lines 99–107 in `ext.rs` match on `DownloadProgress::{Started, Progress, Finished}` for the download callback.

No changes needed—the dependency is in active use.
owhisper/owhisper-model/src/lib.rs (1)
118-118: LGTM: uses the byte-accurate size API

Switching to model_size_bytes() aligns with the new per-variant byte sizing API across the workspace.
crates/download-interface/src/lib.rs (1)
1-6: Shared progress type looks good and minimal.
The enum cleanly captures start/progress/finish states and is ready for serde transport.
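For orientation, here is a stdlib-only sketch of how consumers might fold such start/progress/finish states into a percentage. The variant shapes below are assumptions for illustration, not the actual `hypr-download-interface` definition:

```rust
// Hypothetical progress enum; the real hypr-download-interface type
// may use different variant names and payloads.
pub enum DownloadProgress {
    Started,
    Progress { current: u64, total: u64 },
    Finished,
}

// Percent complete, when it can be determined.
pub fn percent(p: &DownloadProgress) -> Option<u8> {
    match p {
        DownloadProgress::Started => Some(0),
        DownloadProgress::Finished => Some(100),
        DownloadProgress::Progress { current, total } if *total > 0 => {
            Some(((current.min(total) * 100) / total) as u8)
        }
        _ => None, // Progress with an unknown (zero) total
    }
}

fn main() {
    let halfway = DownloadProgress::Progress { current: 512, total: 1024 };
    assert_eq!(percent(&halfway), Some(50));
    assert_eq!(percent(&DownloadProgress::Finished), Some(100));
}
```

Keeping the enum free of behavior, as this crate does, lets each consumer layer its own helpers like the one above.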
crates/transcribe-moonshine/src/service/streaming.rs (1)
152-155: Clamped redemption_time is correct and prevents pathological values.
Bounds [300, 1200] with a sane default will stabilize streaming cadence.
crates/download-interface/Cargo.toml (1)
1-8: Verify no transitive or hidden usage of `dirs` before removal.
I ran `rg -n --glob 'crates/download-interface/**' '\bdirs::'` and found no direct references. Please manually confirm there are no remaining usages in other parts of this crate before dropping the dependency:
• crates/download-interface/src/** – look for any calls to `dirs` functions
• crates/download-interface/build.rs (if present)
• crates/download-interface/tests/** and examples/doc comments

plugins/local-stt/Cargo.toml (1)
45-45: LGTM: Add hypr-download-interface as a dependency.
Matches the workspace shift to a shared progress type.
plugins/listener/Cargo.toml (1)
37-37: LGTM: Promote tauri-plugin-local-stt to a runtime dependency.
Consistent with integrating local STT into the listener.
plugins/local-llm/src/ext.rs (1)
6-7: Good migration to shared DownloadProgress interface.
Import separation looks correct: progress type from hypr_download_interface and downloader from hypr_file. Aligns with the PR-wide standardization.
crates/whisper-local-model/src/lib.rs (2)
13-28: Serde renames preserve external compatibility.
Keeping the serialized variant names stable is the right call for existing users.

44-54: display_name addition is clear and consistent with AM models.
Good, user-facing names match the new UI expectations.
crates/am/src/model.rs (1)
1-22: Enum rename and serialization tags look correct.
Serde and strum tags align with the new "am-..." identifiers and generated TS bindings.
owhisper/owhisper-server/Cargo.toml (1)
33-33: Dependency alignment with shared DownloadProgress.
Adding hypr-download-interface keeps progress reporting consistent across the workspace.
crates/file/Cargo.toml (1)
7-8: LGTM: Workspace dependency added correctly
`hypr-download-interface` is declared as a workspace dependency; aligns with the PR-wide shift to shared download progress. No issues found.

apps/desktop/src/components/toast/model-select.tsx (1)
3-3: LGTM: Type import aligned with new unified model type
Importing `SupportedSttModel` from `@hypr/plugin-local-stt` matches the broader change to support both Whisper and AM model keys in UI code. No unused imports detected.

crates/am/src/lib.rs (1)
3-3: LGTM: New module introduced
`mod hf;` integrates the new HF downloader module into the crate. No issues here.

Cargo.toml (1)
43-43: LGTM: Workspace dependency registered
`hypr-download-interface` added to `[workspace.dependencies]` points to `crates/download-interface` with the correct package name. This enables uniform progress typing across crates.
10-11: LGTM: Shared progress interface dependency
Adding `hypr-download-interface` aligns `am` with the new shared download progress type used across the workspace.

15-15: LGTM: Adds dirs for cache/home-dir handling
`dirs` is appropriate for cache/home directory resolution used by the HF downloader.

19-19: LGTM: strum derive addition
Matches enum derivations likely used in AM model definitions. No issues spotted.

12-12: Tokio feature already enabled by default
Confirmed that `hf-hub = "0.4.3"` includes the `tokio` feature in its default feature set, so async/Tokio APIs in `crates/am/src/hf.rs` will compile without any changes to `Cargo.toml`.
111-112: ListenParams: new redemption_time_ms field looks good
Optional field with a sensible Default (None) maintains backward compatibility and aligns with the client change.
Also applies to: 121-122
owhisper/owhisper-server/src/commands/pull.rs (1)
74-86: Switched to shared hypr_download_interface::DownloadProgress — LGTM
Unified progress type improves cross-crate consistency without changing behavior.
crates/am/src/error.rs (2)
3-5: Add IO error variant — LGTM
Transparent `From<std::io::Error>` is appropriate for the new downloader paths.

9-11: Add NoHomeDir variant — LGTM
Clear, actionable error for home-dir resolution failures.
plugins/listener/src/error.rs (1)
16-18: New LocalSttError variant looks correct and consistent.
The transparent From conversion is appropriate and keeps error bubbling simple through the listener layer.
apps/desktop/src/components/editor-area/note-header/listen-button.tsx (1)
76-79: Argument type alignment for isModelDownloaded looks correct.
Casting model.key to SupportedSttModel aligns with the plugin’s key-based API for model existence checks.
apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (2)
187-188: Copy update LGTM.
Clearer messaging that it’s for pro users and optimized for latency/resources.

197-202: Using display_name and computed size reads cleanly.
Mapping to ModelEntry with display_name and MB conversion is straightforward and user-friendly.
plugins/local-stt/src/model.rs (2)
4-15: Model surface looks consistent and complete.
SUPPORTED_MODELS spans both Whisper and AM variants and aligns with the generated TS types. info() returns the expected display_name and size_bytes.

24-29: Consider future-proofing the serialization.
#[serde(untagged)] works here since both inner enums serialize to distinct strings. If you ever introduce overlapping discriminants, untagged will become ambiguous. Keep this in mind when extending AmModel/WhisperModel.
Would you like me to scan the hypr_am and hypr_whisper_local_model variant strings across the repo to verify there’s no overlap?
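To make the untagged caution concrete, here is a stdlib-only sketch of the resolution order. The model strings below are invented for illustration and are not the real WhisperModel/AmModel discriminants; untagged deserialization tries variants in declaration order, so an overlapping string silently binds to the first match:

```rust
// Sketch of serde(untagged)-style resolution, stdlib only.
#[derive(Debug, PartialEq)]
enum SttModel {
    Whisper(String),
    Am(String),
}

fn parse_untagged(s: &str) -> Option<SttModel> {
    // Untagged deserialization attempts each variant in declaration order.
    let whisper = ["whisper-small", "whisper-large"];
    let am = ["am-parakeet", "whisper-small"]; // deliberate overlap
    if whisper.contains(&s) {
        return Some(SttModel::Whisper(s.to_string()));
    }
    if am.contains(&s) {
        return Some(SttModel::Am(s.to_string()));
    }
    None
}

fn main() {
    // The overlapping string always resolves to the first variant;
    // the Am reading becomes silently unreachable.
    assert_eq!(
        parse_untagged("whisper-small"),
        Some(SttModel::Whisper("whisper-small".to_string()))
    );
    assert_eq!(
        parse_untagged("am-parakeet"),
        Some(SttModel::Am("am-parakeet".to_string()))
    );
}
```

A tagged representation (or a prefix convention like "am-...") keeps the mapping unambiguous as the variant sets grow.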
plugins/listener/src/fsm.rs (2)
244-246: Signature change is correctly propagated.
setup_listen_client now takes (app, languages, is_onboarding). The call site matches and correctly uses session_id == onboarding_session_id.

210-223: Jargons removal is consistent.
The setup now derives only (record, languages) from config, aligning with the broader removal of jargons in the flow.
plugins/local-stt/src/lib.rs (2)
7-17: Public re-exports and module wiring look good.
Re-exports expose the new SupportedSttModel and related types. State’s download_task key type is updated accordingly.

94-101: AM API key management is fine; keep comments minimal.
Pulling AM_API_KEY via build-time envs is consistent with the rest of the plugin setup.
crates/am/src/hf.rs (2)
137-166: Overall download flow and progress aggregation looks solid.
The tracker avoids double-counting per-file progress and emits a single Started/Finished pair around the whole operation. API usage with hf-hub is correct.

146-153: Double-check with_max_files(1).
`with_max_files(1)` may serialize downloads (or limit caches) depending on hf-hub semantics. If throughput matters, verify whether this is intended.
Would you like me to check hf-hub’s docs and confirm the behavior of with_max_files?
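To illustrate the double-counting hazard the tracker avoids, a stdlib-only sketch (all names invented, not the actual hf.rs implementation): each callback reports absolute bytes for one file, and the aggregator overwrites rather than sums, so repeated reports stay idempotent.

```rust
use std::collections::HashMap;

// Aggregate multi-file download progress without double counting.
struct Tracker {
    per_file: HashMap<&'static str, u64>,
    total: u64,
}

impl Tracker {
    fn new(total: u64) -> Self {
        Self { per_file: HashMap::new(), total }
    }

    // Overwrite, don't add: each file reports absolute bytes downloaded,
    // so re-reports for the same file are idempotent. Returns percent done.
    fn update(&mut self, file: &'static str, downloaded: u64) -> u64 {
        self.per_file.insert(file, downloaded);
        let done: u64 = self.per_file.values().sum();
        (done * 100) / self.total
    }
}

fn main() {
    let mut t = Tracker::new(200);
    assert_eq!(t.update("a.bin", 50), 25);
    assert_eq!(t.update("a.bin", 100), 50); // re-report does not double count
    assert_eq!(t.update("b.bin", 100), 100);
}
```

Summing callback deltas instead of storing latest absolute values is the usual way such trackers over-report.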
plugins/local-stt/src/ext.rs (1)
65-73: Dynamic `base_url` from SharedState & remove unused variables
- The lock on `SharedState` (`let _guard = state.lock().await`) is acquired but never used to protect any data here – either use it to read the server URL or remove it.
- The retrieved `model` isn’t used; drop the call to `self.get_current_model()?` unless you plan to select a port or path based on the model.
- Avoid hard-coding `"http://localhost:1240"`; pull the actual URL from `SharedState` instead.

Suggested diff:

 async fn get_connection(&self) -> Result<Connection, crate::Error> {
-    let state = self.state::<crate::SharedState>();
-    let _guard = state.lock().await;
-    let model = self.get_current_model()?;
+    // Acquire the shared server state and read its current URL
+    let state = self.state::<crate::SharedState>();
+    let _guard = state.lock().await;
+    let base_url = state.base_url();
     Ok(Connection {
-        base_url: "http://localhost:1240".to_string(),
+        base_url,
     })
 }

Please verify that `SharedState` exposes a `base_url()` (or equivalent) method and confirm whether the lock is still needed here.
| icon={MicIcon} | ||
| download={sttDownload} | ||
| size={sttMetadata?.size || "250MB"} | ||
| size={"??? MB"} |
🛠️ Refactor suggestion
Stop shipping "??? MB"; wire in real model size or hide the size
Replace the placeholder with a computed size (from the model) or a neutral placeholder. If the model type exposes size_bytes (as introduced elsewhere in this PR), surface it here; otherwise, hide the size until available.
Apply this minimal diff to pass a computed size string (fallback to an em-dash if unavailable):
- size={"??? MB"}
+ size={selectedSttModel && (selectedSttModel as any).size_bytes
+ ? formatBytes((selectedSttModel as any).size_bytes as number)
+ : "—"}

Add this helper in the file (outside the selected range), near the top-level of the component file:
function formatBytes(bytes: number): string {
if (!Number.isFinite(bytes) || bytes < 0) return "—";
if (bytes < 1024) return `${bytes} B`;
const units = ["KB", "MB", "GB", "TB"];
let i = -1;
do {
bytes /= 1024;
i++;
} while (bytes >= 1024 && i < units.length - 1);
const precision = bytes >= 100 ? 0 : bytes >= 10 ? 1 : 2;
return `${bytes.toFixed(precision)} ${units[i]}`;
}If WhisperModel doesn’t expose size_bytes here, either:
- Update selectedSttModel’s type to the new model type that includes size_bytes, or
- Pass a pre-formatted size down via props from the selection step, or
- Temporarily use "—" to avoid confusing "???" in production.
🤖 Prompt for AI Agents
In apps/desktop/src/components/welcome-modal/download-progress-view.tsx around
line 238, the UI currently renders a literal "??? MB"; replace this with a real
formatted model size or a neutral em-dash when size is unavailable. Add the
provided formatBytes(bytes:number) helper at the top-level of the file, then
read the model size (e.g., selectedSttModel.size_bytes) and pass
formatBytes(size_bytes) into the size prop; if the model type in this file
doesn't expose size_bytes, either update the selectedSttModel type to the new
model that includes size_bytes, accept a pre-formatted size from props, or use
"—" as the fallback to avoid showing "???" in production. Ensure
null/undefined/invalid sizes fall back to "—" and remove the hard-coded "???
MB".
Actionable comments posted: 12
🔭 Outside diff range comments (2)
apps/desktop/src/locales/ko/messages.po (2)
1-15: Set correct Plural-Forms for Korean to avoid wrong plural handling at runtime
Plural-Forms is empty. For ko, it should be nplurals=1; plural=0;. Without this, plural selection can be incorrect.
Apply this diff:
 "Language-Team: \n"
-"Plural-Forms: \n"
+"Plural-Forms: nplurals=1; plural=0;\n"
603-611: Translate “Continue”
This new, widely used CTA should have a Korean translation.
Apply this diff:
 msgid "Continue"
-msgstr ""
+msgstr "계속"
♻️ Duplicate comments (2)
plugins/local-stt/src/types.rs (1)
1-4: Consider adding `Clone` derive for async boundaries
The `Connection` struct should also derive `Clone` for ergonomic usage across async boundaries and method chaining, as suggested in the past review comment.

Apply this diff:

-#[derive(Debug)]
+#[derive(Debug, Clone)]
 pub struct Connection {
     pub base_url: String,
 }

plugins/local-stt/src/ext.rs (1)
316-320: Add error handling for AM model download failures
The AM model download silently ignores errors, unlike the Whisper download which logs errors and sends `-1` on failure. This inconsistency was noted in a past review.

 SupportedSttModel::Am(m) => {
     let task = tokio::spawn(async move {
         let callback = create_progress_callback(channel.clone());
-        let _ = hypr_am::download(m.repo_name(), m.model_dir(), callback).await;
+        if let Err(e) = hypr_am::download(m.repo_name(), m.model_dir(), callback).await {
+            tracing::error!("model_download_error: {}", e);
+            let _ = channel.send(-1);
+        }
     });
🧹 Nitpick comments (10)
plugins/windows/src/ext.rs (2)
314-317: Settings window now resizable — add a min size to protect layout; verify borderless-resize UX on Windows
Resizing Settings makes sense given the expanded UI. Recommend enforcing a floor to prevent layout breakage and keep consistency with Finder and other windows that set min_inner_size.
Apply:
 Self::Settings => self
     .window_builder(app, "/app/settings")
     .resizable(true)
     .inner_size(800.0, 600.0)
+    .min_inner_size(800.0, 600.0)
     .build()?,

Also, on Windows we use decorations(false) globally for windows. Please verify that borderless edge-resize works as expected for Settings, or add your existing resize hit-test solution if needed.
5-5: Remove unused import tauri_specta::Event
This import isn’t referenced in this file; drop it to satisfy the “no unused imports” guideline and keep clippy quiet.
-use tauri_specta::Event;owhisper/owhisper-client/Cargo.toml (1)
18-20: Consider gating tracing behind an optional feature to minimize downstream dependency/overhead.
If this crate is a library consumed by others, making `tracing` optional lets dependents opt-in to telemetry and reduces compile-time footprint by default. Only proceed if usage sites can be feature-gated.

Proposed manifest changes (requires gating usage in code with `#[cfg(feature = "telemetry")]` or similar):

 [dependencies]
 futures-util = { workspace = true }
-tracing = { workspace = true }
+tracing = { workspace = true, optional = true }

Add a features section:

 [features]
 telemetry = ["dep:tracing"]

Note: Any direct uses of `tracing::*` macros/attributes must be conditionally compiled or no-op wrapped when the feature is disabled.

crates/whisper-local/src/model.rs (3)
179-179: Reduce log noise and use structured fields for language selection logs.
These are internal decision logs; debug-level with structured fields makes them more useful and less noisy at info-level.
Apply this diff:
-    tracing::info!("no_language_specified");
+    tracing::debug!("no_language_specified");
-    tracing::info!("single_language_specified: {}", lang);
+    tracing::debug!(?lang, "single_language_specified");
-    tracing::info!("predicted: {:#?}, from: {:#?}", best_lang, self.languages);
+    tracing::debug!(?best_lang, languages = ?self.languages, "language_predicted");

Optionally, prefer idiomatic emptiness check:

-    if self.languages.len() == 0 {
+    if self.languages.is_empty() {
346-347: Tests: guard against missing test model to avoid spurious panics.
If model.bin isn't present in CI/dev environments, this will panic. Skip gracefully when absent.
Apply this diff:
-    let mut whisper = Whisper::builder()
-        .model_path(concat!(env!("CARGO_MANIFEST_DIR"), "/model.bin"))
+    let model_path = concat!(env!("CARGO_MANIFEST_DIR"), "/model.bin");
+    if !std::path::Path::new(model_path).exists() {
+        eprintln!("skipping test_whisper: model.bin not found at {}", model_path);
+        return;
+    }
+    let mut whisper = Whisper::builder()
+        .model_path(model_path)
         .build()
         .unwrap();
369-371: Tests: guard llama test Whisper build similarly.
Mirror the skip in the llama test to avoid intermittent failures in environments without model.bin.
Apply this diff:
-    let mut whisper = Whisper::builder()
-        .model_path(concat!(env!("CARGO_MANIFEST_DIR"), "/model.bin"))
-        .build()
-        .unwrap();
+    let model_path = concat!(env!("CARGO_MANIFEST_DIR"), "/model.bin");
+    if !std::path::Path::new(model_path).exists() {
+        eprintln!("skipping test_whisper_with_llama: model.bin not found at {}", model_path);
+        return;
+    }
+    let mut whisper = Whisper::builder()
+        .model_path(model_path)
+        .build()
+        .unwrap();
129-129: Confirm ownership semantics of Whisper
Passing Whisper by value into the handler implies it’s moved/consumed downstream. If Whisper is heavy and not trivially movable, consider `Arc<Whisper>` if you plan to fan out or reuse across tasks; otherwise, this is fine for per-connection ownership.

Would you like me to refactor this to `Arc<Whisper>` to future-proof for multi-stream use?
134-137: Clamp looks good; minor readability nit
The min(max()) pattern is correct. Consider using clamp if the type supports it for readability.
For example: params.redemption_time_ms.unwrap_or(500).clamp(300, 1200)
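A stdlib sketch of the equivalence (the field name mirrors the diff; the bounds are the ones quoted above):

```rust
// clamp vs min(max(..)): identical bounds behavior for the redemption window.
fn redemption_ms(raw: Option<u64>) -> u64 {
    raw.unwrap_or(500).clamp(300, 1200)
}

fn redemption_ms_min_max(raw: Option<u64>) -> u64 {
    raw.unwrap_or(500).max(300).min(1200)
}

fn main() {
    for raw in [None, Some(100), Some(500), Some(5000)] {
        assert_eq!(redemption_ms(raw), redemption_ms_min_max(raw));
    }
    assert_eq!(redemption_ms(None), 500);      // default
    assert_eq!(redemption_ms(Some(100)), 300); // floor
    assert_eq!(redemption_ms(Some(5000)), 1200); // ceiling
}
```

Note that `clamp` panics if the lower bound exceeds the upper bound, which cannot happen with these constants.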
plugins/local-stt/src/lib.rs (1)
137-138: Remove debug print statements from test
The test contains debug print statements that should be removed or replaced with proper assertions.
-    let model = app.get_current_model();
-    println!("model: {:#?}", model);
+    let model = app.get_current_model();
+    assert!(model.is_ok(), "Failed to get current model");
113-138: Inefficient loop for single iteration
The `is_model_downloaded` method for Whisper models uses a for loop with a single iteration, which is unnecessarily complex.

 SupportedSttModel::Whisper(model) => {
     let model_path = self.models_dir().join(model.file_name());
-    for (path, expected) in [(model_path, model.model_size_bytes())] {
-        if !path.exists() {
-            return Ok(false);
-        }
-
-        let actual = hypr_file::file_size(path)?;
-        if actual != expected {
-            return Ok(false);
-        }
+    if !model_path.exists() {
+        return Ok(false);
     }
-    Ok(true)
+    let actual = hypr_file::file_size(model_path)?;
+    Ok(actual == model.model_size_bytes())
 }
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (15)
- apps/desktop/src-tauri/src/lib.rs (1 hunks)
- apps/desktop/src/locales/en/messages.po (7 hunks)
- apps/desktop/src/locales/ko/messages.po (7 hunks)
- crates/transcribe-whisper-local/src/service/recorded.rs (1 hunks)
- crates/transcribe-whisper-local/src/service/streaming.rs (2 hunks)
- crates/whisper-local/src/error.rs (1 hunks)
- crates/whisper-local/src/model.rs (6 hunks)
- crates/ws/src/client.rs (1 hunks)
- owhisper/owhisper-client/Cargo.toml (1 hunks)
- owhisper/owhisper-client/src/lib.rs (2 hunks)
- plugins/local-stt/src/error.rs (1 hunks)
- plugins/local-stt/src/ext.rs (10 hunks)
- plugins/local-stt/src/lib.rs (4 hunks)
- plugins/local-stt/src/types.rs (1 hunks)
- plugins/windows/src/ext.rs (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- apps/desktop/src/locales/en/messages.po
🚧 Files skipped from review as they are similar to previous changes (2)
- apps/desktop/src-tauri/src/lib.rs
- owhisper/owhisper-client/src/lib.rs
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit Configuration File
**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- plugins/windows/src/ext.rs
- plugins/local-stt/src/error.rs
- crates/ws/src/client.rs
- crates/whisper-local/src/error.rs
- crates/transcribe-whisper-local/src/service/recorded.rs
- crates/whisper-local/src/model.rs
- plugins/local-stt/src/types.rs
- crates/transcribe-whisper-local/src/service/streaming.rs
- plugins/local-stt/src/ext.rs
- plugins/local-stt/src/lib.rs
🧬 Code Graph Analysis (4)
plugins/local-stt/src/types.rs (1)
plugins/connector/js/bindings.gen.ts (1)
Connection(109-109)
crates/transcribe-whisper-local/src/service/streaming.rs (2)
crates/whisper-local/src/model.rs (2)
- builder (85-87)
- model_path (24-27)

crates/transcribe-moonshine/src/service/streaming.rs (2)
- builder (35-37)
- handle_websocket_connection (133-165)
plugins/local-stt/src/ext.rs (8)
crates/file/src/lib.rs (3)
- download_file_parallel (135-254)
- s (268-268)
- file_size (256-259)

plugins/connector/js/bindings.gen.ts (1)
- Connection (109-109)

crates/whisper-local/src/lib.rs (1)
- list_ggml_backends (22-67)

plugins/local-stt/src/model.rs (1)
- info (32-45)

crates/am/src/model.rs (2)
- repo_name (25-31)
- model_dir (33-39)

crates/am/src/hf.rs (2)
- is_folder_downloaded (268-304)
- download (137-266)

plugins/local-stt/src/server/internal.rs (2)
- builder (43-45)
- run_server (61-88)

plugins/local-stt/src/server/external.rs (1)
- run_server (39-57)
plugins/local-stt/src/lib.rs (2)
plugins/local-stt/js/bindings.gen.ts (2)
- SupportedSttModel (59-59)
- commands (9-43)

plugins/local-stt/src/commands.rs (1)
list_supported_models(24-26)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (windows, windows-latest)
- GitHub Check: ci (macos, macos-latest)
🔇 Additional comments (11)
owhisper/owhisper-client/Cargo.toml (1)
18-20: LGTM: tracing and futures-util additions align with workspace-wide instrumentation.
This matches the PR’s goal to add progress/tracing and async ergonomics. No issues spotted in the manifest itself.
crates/whisper-local/src/error.rs (1)
5-6: Good addition of a concrete, serializable error code.
Adding ModelNotFound with a stable display string improves error surfaces and downstream handling/logging consistency.
crates/whisper-local/src/model.rs (3)
51-52: Error propagation via `?` is correct and consistent.
Mapping whisper_rs errors into crate::Error via #[from] is the right approach.

56-62: LGTM: Ok-wrapping constructed Whisper and defaulting languages.
Returning Ok(...) aligns with the new Result API and preserves previous behavior for unset languages.
34-34: All `Whisper::builder().build()` calls now handle the returned `Result`
We scoped only the `hypr_whisper_local::Whisper` builder and verified every call site:

• crates/whisper-local/src/model.rs (tests at lines 344 & 367) use `.unwrap()`
• crates/transcribe-whisper-local/src/service/recorded.rs (line 25) uses `.unwrap()`
• crates/transcribe-whisper-local/src/service/streaming.rs (lines 102–104) matches on `Ok(model)` / `Err`

No unhandled invocations remain.
crates/transcribe-whisper-local/src/service/streaming.rs (4)
118-121: Passing a prebuilt model into the upgrade handler is a solid simplification
This removes per-connection reconstruction and aligns with the new prebuild-on-request design.

139-146: Branch dispatch with model owned per channel is correct
Owning the model in the selected branch is appropriate; no double-move issues here.

161-161: Transcribe with provided model aligns with the new API
Passing the model into TranscribeMetadataAudioStreamExt::transcribe is consistent with the refactor.

191-191: Merged dual-channel transcription using provided model looks correct
Wiring matches the single-channel path; no concerns.
plugins/local-stt/src/error.rs (1)
27-28: LGTM! Error variant aligns with multi-model support
The new `UnsupportedModelType` error variant is appropriate for handling non-Whisper models in the internal server flow, which only supports Whisper models as documented.
300-313: LGTM! Good refactor extracting common progress callback logic
The extraction of the progress callback creation into a closure eliminates code duplication between AM and Whisper download paths, improving maintainability.
| #: src/components/welcome-modal/download-progress-view.tsx:221 | ||
| msgid "All models ready!" | ||
| msgstr "" | ||
🛠️ Refactor suggestion
Translate “All models ready!”
Missing ko translation will show English in the UI.
Apply this diff:
msgid "All models ready!"
-msgstr ""
+msgstr "모든 모델 준비 완료!"

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| #: src/components/welcome-modal/download-progress-view.tsx:221 | |
| msgid "All models ready!" | |
| msgstr "" | |
| #: src/components/welcome-modal/download-progress-view.tsx:221 | |
| msgid "All models ready!" | |
| msgstr "모든 모델 준비 완료!" |
🤖 Prompt for AI Agents
In apps/desktop/src/locales/ko/messages.po around lines 379 to 382, the msgstr
for msgid "All models ready!" is empty; add the Korean translation by setting
msgstr to "모든 모델 준비 완료!" so the UI shows the translated string instead of
English.
| #: src/components/welcome-modal/download-progress-view.tsx:201 | ||
| msgid "Downloading AI Models" | ||
| msgstr "" | ||
🛠️ Refactor suggestion
Translate “Downloading AI Models”
Ensure onboarding/download flows are localized.
Apply this diff:
msgid "Downloading AI Models"
-msgstr ""
+msgstr "AI 모델 다운로드 중"

📝 Committable suggestion
| #: src/components/welcome-modal/download-progress-view.tsx:201 | |
| msgid "Downloading AI Models" | |
| msgstr "" | |
| #: src/components/welcome-modal/download-progress-view.tsx:201 | |
| msgid "Downloading AI Models" | |
| msgstr "AI 모델 다운로드 중" |
🤖 Prompt for AI Agents
In apps/desktop/src/locales/ko/messages.po around lines 708 to 711, the msgid
"Downloading AI Models" has an empty msgstr; add the Korean translation for this
string by setting msgstr to the appropriate Korean text (e.g., "AI 모델 다운로드 중")
so the onboarding/download flows are localized.
| #: src/components/welcome-modal/download-progress-view.tsx:259 | ||
| msgid "It's ok to move on, downloads will continue in the background" | ||
| msgstr "" | ||
🛠️ Refactor suggestion
Translate background-download hint
Critical UX string during onboarding; provide readable, natural Korean.
Apply this diff:
msgid "It's ok to move on, downloads will continue in the background"
-msgstr ""
+msgstr "다음 단계로 진행해도 됩니다. 다운로드는 백그라운드에서 계속됩니다"

📝 Committable suggestion
| #: src/components/welcome-modal/download-progress-view.tsx:259 | |
| msgid "It's ok to move on, downloads will continue in the background" | |
| msgstr "" | |
| #: src/components/welcome-modal/download-progress-view.tsx:259 | |
| msgid "It's ok to move on, downloads will continue in the background" | |
| msgstr "다음 단계로 진행해도 됩니다. 다운로드는 백그라운드에서 계속됩니다" |
🤖 Prompt for AI Agents
In apps/desktop/src/locales/ko/messages.po around lines 914 to 917, the msgstr
for the onboarding hint is empty; set msgstr to a natural Korean translation,
e.g. "계속 진행하셔도 됩니다 — 다운로드는 백그라운드에서 계속 진행됩니다.", replacing the empty string so the
message reads naturally in Korean.
| #: src/components/welcome-modal/download-progress-view.tsx:66 | ||
| msgid "Ready" | ||
| msgstr "" | ||
🛠️ Refactor suggestion
Translate “Ready”
Short status label used in progress views.
Apply this diff:
msgid "Ready"
-msgstr ""
+msgstr "준비 완료"

📝 Committable suggestion
| #: src/components/welcome-modal/download-progress-view.tsx:66 | |
| msgid "Ready" | |
| msgstr "" | |
| #: src/components/welcome-modal/download-progress-view.tsx:66 | |
| msgid "Ready" | |
| msgstr "준비 완료" |
🤖 Prompt for AI Agents
In apps/desktop/src/locales/ko/messages.po around lines 1236 to 1239, the msgid
"Ready" is missing a Korean translation; set msgstr to a concise status label
such as "준비됨" (or "준비 완료" if preferred) so the entry becomes translated and
saved with proper encoding and quotation.
| #: src/components/welcome-modal/model-selection-view.tsx:40 | ||
| msgid "Select a transcribing model (STT)" | ||
| msgstr "" | ||
🛠️ Refactor suggestion
Translate “Select a transcribing model (STT)”
Use common Korean terminology for STT.
Apply this diff:
msgid "Select a transcribing model (STT)"
-msgstr ""
+msgstr "음성 인식(STT) 모델 선택"

📝 Committable suggestion
| #: src/components/welcome-modal/model-selection-view.tsx:40 | |
| msgid "Select a transcribing model (STT)" | |
| msgstr "" | |
| #: src/components/welcome-modal/model-selection-view.tsx:40 | |
| msgid "Select a transcribing model (STT)" | |
| msgstr "음성 인식(STT) 모델 선택" |
🤖 Prompt for AI Agents
In apps/desktop/src/locales/ko/messages.po around lines 1353 to 1356, the msgid
"Select a transcribing model (STT)" is untranslated; set msgstr to the Korean
translation using common STT terminology, e.g. "음성 전사 모델 선택 (STT)" — update the
msgstr line to that string.
| let model = match hypr_whisper_local::Whisper::builder() | ||
| .model_path(model_path.to_str().unwrap()) | ||
| .languages( | ||
| params | ||
| .languages | ||
| .iter() | ||
| .filter_map(|lang| lang.clone().try_into().ok()) | ||
| .collect::<Vec<hypr_whisper::Language>>(), | ||
| ) | ||
| .build() | ||
| { | ||
| Ok(model) => model, | ||
| Err(e) => { | ||
| let res = ( | ||
| StatusCode::INTERNAL_SERVER_ERROR, | ||
| format!("failed_to_build_whisper: {}", e), | ||
| ) | ||
| .into_response(); | ||
| return Ok(res); | ||
| } | ||
| }; | ||
🛠️ Refactor suggestion
Avoid potential panic on non-UTF8 path and stop leaking internal error details
- model_path.to_str().unwrap() can panic on non-UTF8 paths.
- Returning e.to_string() in a 500 response leaks internal details.
Use lossy conversion for the path and log the error server-side while returning a generic message.
Apply:
- let model = match hypr_whisper_local::Whisper::builder()
- .model_path(model_path.to_str().unwrap())
+ let model = match hypr_whisper_local::Whisper::builder()
+ .model_path(model_path.to_string_lossy().into_owned())
.languages(
params
.languages
.iter()
.filter_map(|lang| lang.clone().try_into().ok())
.collect::<Vec<hypr_whisper::Language>>(),
)
.build()
{
Ok(model) => model,
Err(e) => {
- let res = (
- StatusCode::INTERNAL_SERVER_ERROR,
- format!("failed_to_build_whisper: {}", e),
- )
- .into_response();
+ tracing::error!("failed_to_build_whisper: {}", e);
+ let res = (StatusCode::INTERNAL_SERVER_ERROR, "failed_to_build_whisper")
+ .into_response();
return Ok(res);
}
};I can propagate this pattern to other call sites if you want.
📝 Committable suggestion
| let model = match hypr_whisper_local::Whisper::builder() | |
| .model_path(model_path.to_str().unwrap()) | |
| .languages( | |
| params | |
| .languages | |
| .iter() | |
| .filter_map(|lang| lang.clone().try_into().ok()) | |
| .collect::<Vec<hypr_whisper::Language>>(), | |
| ) | |
| .build() | |
| { | |
| Ok(model) => model, | |
| Err(e) => { | |
| let res = ( | |
| StatusCode::INTERNAL_SERVER_ERROR, | |
| format!("failed_to_build_whisper: {}", e), | |
| ) | |
| .into_response(); | |
| return Ok(res); | |
| } | |
| }; | |
| let model = match hypr_whisper_local::Whisper::builder() | |
| - .model_path(model_path.to_str().unwrap()) | |
| + .model_path(model_path.to_string_lossy().into_owned()) | |
| .languages( | |
| params | |
| .languages | |
| .iter() | |
| .filter_map(|lang| lang.clone().try_into().ok()) | |
| .collect::<Vec<hypr_whisper::Language>>(), | |
| ) | |
| .build() | |
| { | |
| Ok(model) => model, | |
| Err(e) => { | |
| - let res = ( | |
| - StatusCode::INTERNAL_SERVER_ERROR, | |
| - format!("failed_to_build_whisper: {}", e), | |
| - ) | |
| - .into_response(); | |
| + tracing::error!("failed_to_build_whisper: {}", e); | |
| + let res = (StatusCode::INTERNAL_SERVER_ERROR, "failed_to_build_whisper") | |
| + .into_response(); | |
| return Ok(res); | |
| } | |
| }; |
crates/ws/src/client.rs
Outdated
if let crate::Error::Connection(tokio_tungstenite::tungstenite::Error::Http(
    response,
)) = e
{
    if response.status().as_u16() >= 500 && response.status().as_u16() < 600 {
        tracing::warn!("not_retrying_status_code: {}", response.status());
        return false;
    }
}
Retry condition inverted: currently not retrying on 5xx and retrying on 4xx
5xx errors are typically transient and should be retried; 4xx are client errors and generally should not be retried. As written, the closure returns false (no retry) for 5xx and returns true (retry) for 4xx, which is the opposite of common and expected behavior. This can cause unnecessary retry loops on 4xx (e.g., 401/403/404) and reduce resilience on 5xx.
Switch the logic to avoid retrying on client errors (except 429), and allow retries on server errors.
Apply this diff:
- if let crate::Error::Connection(tokio_tungstenite::tungstenite::Error::Http(
- response,
- )) = e
- {
- if response.status().as_u16() >= 500 && response.status().as_u16() < 600 {
- tracing::warn!("not_retrying_status_code: {}", response.status());
- return false;
- }
- }
+ if let crate::Error::Connection(tokio_tungstenite::tungstenite::Error::Http(response)) = e {
+ let status = response.status();
+ // Do not retry on client errors except 429 (Too Many Requests)
+ if status.is_client_error()
+ && status != tokio_tungstenite::tungstenite::http::StatusCode::TOO_MANY_REQUESTS
+ {
+ tracing::warn!("not_retrying_client_status_code: {}", status);
+ return false;
+ }
+ // Retry on server errors (5xx)
+ if status.is_server_error() {
+ tracing::warn!("retrying_server_status_code: {}", status);
+ return true;
+ }
+ }
🤖 Prompt for AI Agents
In crates/ws/src/client.rs around lines 41 to 49, the retry condition is
inverted: it currently returns false for 5xx and allows retries for 4xx; change
the logic to not retry for client errors (4xx) except 429 and to allow retries
for server errors (5xx). Concretely, when matching
tokio_tungstenite::tungstenite::Error::Http(response) inspect response.status():
if status.is_client_error() and status.as_u16() != 429 then log a message and
return false; else if status.is_server_error() then log a message and return
true (allow retry). Keep the same match structure for the error variant and
ensure other cases fall through to the existing default retry behavior.
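As a quick illustration of the suggested policy, the status classification can be isolated into a pure function. This is a sketch only; `should_retry` is a hypothetical helper operating on a bare `u16`, not code from this PR:

```rust
// Hypothetical helper mirroring the suggested policy: retry 5xx and 429,
// never retry other 4xx; everything else falls back to "no retry".
fn should_retry(status: u16) -> bool {
    match status {
        429 => true,        // Too Many Requests: transient, retry after backoff
        400..=499 => false, // other client errors will not succeed on retry
        500..=599 => true,  // server errors are usually transient
        _ => false,         // informational/redirects: no retry here
    }
}

fn main() {
    for s in [401u16, 404, 429, 500, 503] {
        println!("{} -> retry: {}", s, should_retry(s));
    }
}
```

A table-driven unit test over the boundary codes (399, 400, 499, 500, 599, 600) would lock the policy in place.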
futures-util = { workspace = true }
tracing = { workspace = true }
💡 Verification agent
🧩 Analysis chain
Right-size futures-util usage to reduce compile time.
If only a subset of futures-util is used (e.g., StreamExt, TryStreamExt, select!), confirm that workspace-level features are constrained appropriately. If not, consider replacing with smaller crates (futures-core, futures-sink, futures-io) or turning off unnecessary default features at the workspace root.
Run this script to inspect usage in this crate and the workspace-level feature config:
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Usage of futures_util in owhisper-client =="
rg -n --hidden --glob '!target' -S 'futures_util' owhisper/owhisper-client || true
rg -n --hidden --glob '!target' -S 'StreamExt|TryStreamExt|FutureExt|select!' owhisper/owhisper-client || true
echo
echo "== Workspace futures-util entries and features =="
rg -n --hidden --glob '!target' -S 'futures-util' --type toml
echo
# Show the nearest workspace Cargo.toml settings around futures-util
fd Cargo.toml -uu | while read -r f; do
if rg -q 'futures-util' "$f"; then
echo "--- $f"
rg -n -A 3 -B 3 'futures-util' "$f"
fi
done

Length of output: 22515
Limit futures-util to only the features you need
You’re only using StreamExt/Stream and the pin_mut! macro in owhisper-client, so you can trim away all of futures-util’s default features (IO, sink, etc.) to shave down compile times.
• In owhisper/owhisper-client/src/lib.rs you import only futures_util::{Stream, StreamExt} and futures_util::pin_mut!.
• Update your workspace-level dependency in Cargo.toml (root, around line 125) from:
- futures-util = "0.3.31"
+ futures-util = { version = "0.3.31", default-features = false, features = ["stream", "macros"] }

• After that, verify no other crate in the workspace needs additional futures-util features (e.g. IO or Sink); if they do, either add those features at the workspace root or override per-crate.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In owhisper/owhisper-client/Cargo.toml around lines 18 to 20, the workspace
dependency on futures-util currently brings in all default features; restrict it
to only the features you actually use (stream-related and the pin macro) by
changing the workspace-level futures-util entry to explicitly list the minimal
features (e.g. "stream" and the std/alloc and macros features required for
pin_mut!), then run a workspace build to see if any other crate needs additional
features and add those at the workspace root or as per-crate overrides if
necessary.
Actionable comments posted: 3
🔭 Outside diff range comments (1)
crates/am/src/client.rs (1)
86-95: Preserve server error context when JSON parsing fails

Falling back to UnexpectedResponse on non-JSON error bodies loses useful diagnostics. Prefer capturing HTTP status and raw body when the error isn't JSON.
Apply this diff to improve error propagation:
- async fn handle_error_response(&self, response: Response) -> Error {
-     if let Ok(error_response) = response.json::<ErrorResponse>().await {
-         Error::ServerError {
-             status: error_response.status,
-             message: error_response.message,
-         }
-     } else {
-         Error::UnexpectedResponse
-     }
- }
+ async fn handle_error_response(&self, response: Response) -> Error {
+     let status = response.status().as_u16();
+     let body = response.text().await.unwrap_or_default();
+     if let Ok(error_response) = serde_json::from_str::<ErrorResponse>(&body) {
+         Error::ServerError {
+             status: error_response.status,
+             message: error_response.message,
+         }
+     } else {
+         Error::ServerError {
+             status,
+             message: body,
+         }
+     }
+ }

Note: This uses serde_json::from_str; ensure serde_json is available in the crate.
♻️ Duplicate comments (1)
plugins/local-stt/src/server/external.rs (1)
16-26: Previous issue resolved: redundant init with hardcoded API key removed

The earlier hardcoded "ax_123" init call is no longer present. Thanks for addressing this.
🧹 Nitpick comments (5)
plugins/windows/src/ext.rs (1)
313-325: Add a brief "why" comment if keeping the workaround.

If the two-step resize is retained due to a known platform quirk, add a minimal comment explaining the rationale (why it's needed and on which platform). Helps future maintainers avoid reverting it accidentally.
plugins/local-stt/src/server/external.rs (4)
2-7: Good encapsulation with api_key and client; consider restricting base_url visibility

Storing the hypr_am::Client and api_key on the handle is clean. If external mutation isn't required, consider making base_url non-pub to keep invariants tighter.
16-26: Make init model-flexible instead of hardcoding WhisperSmallEn

Allow callers to pass the desired AmModel to support the unified selection flow and future models without touching this layer.
Proposed change:
- pub async fn init(&self) -> Result<(), crate::Error> {
-     self.client
-         .init(
-             hypr_am::InitRequest::new(self.api_key.clone())
-                 .with_model(hypr_am::AmModel::WhisperSmallEn),
-         )
-         .await?;
-     Ok(())
- }
+ pub async fn init(&self, model: hypr_am::AmModel) -> Result<(), crate::Error> {
+     self.client
+         .init(
+             hypr_am::InitRequest::new(self.api_key.clone())
+                 .with_model(model),
+         )
+         .await?;
+     Ok(())
+ }
28-31: Return the ServerStatus instead of discarding it

Avoid a roundtrip that drops the value; return the status for callers to act on.
Apply:
- pub async fn status(&self) -> Result<(), crate::Error> {
-     self.client.status().await?;
-     Ok(())
- }
+ pub async fn status(&self) -> Result<hypr_am::ServerStatus, crate::Error> {
+     let status = self.client.status().await?;
+     Ok(status)
+ }
38-39: Avoid binding an unused receiver from spawn()

Use the underscore pattern to avoid creating an unused variable.
- let (_rx, child) = cmd.args(["--port", &port.to_string()]).spawn()?;
+ let (_, child) = cmd.args(["--port", &port.to_string()]).spawn()?;
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- crates/am/src/client.rs (2 hunks)
- crates/am/src/types.rs (0 hunks)
- plugins/local-stt/src/server/external.rs (2 hunks)
- plugins/windows/src/ext.rs (1 hunks)
💤 Files with no reviewable changes (1)
- crates/am/src/types.rs
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit Configuration File
**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- plugins/windows/src/ext.rs
- plugins/local-stt/src/server/external.rs
- crates/am/src/client.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (windows, windows-latest)
- GitHub Check: ci (macos, macos-latest)
🔇 Additional comments (5)
plugins/windows/src/ext.rs (1)
316-318: Confirm intentional UX change: Settings window is now resizable.

Previously, Settings was non-resizable; now it's resizable with a min size. Verify this matches product expectations, especially given onboarding/settings layout changes.
crates/am/src/client.rs (3)
1-1: Imports look correct and minimal

All imported items are used; no unused imports. Good.
109-112: Solid consolidation: InitRequest::with_model now sources both key and repo from AmModel

This aligns with the API simplification and removes the need for separate with_model_repo calls.
36-51: Confirm the “ax_” API key prefix assumption
Our scan shows the strict starts_with("ax_") check only in crates/am/src/client.rs (lines 36–40), and no code in this repo generates or otherwise references "ax_" tokens. Please verify with the hypr_am service specification or team that all AM API keys indeed begin with "ax_" and that this convention won't change. If the format may evolve, consider deferring validation to the server rather than enforcing a local prefix check.

plugins/local-stt/src/server/external.rs (1)
41-44: Shutdown signaling appears unwired

You create a watch channel but never hand the receiver to any listener. Sending in terminate() will be a no-op without consumers. Either wire the receiver to a task that drives shutdown or remove the channel.
If the receiver is expected elsewhere, verify its path and add a TODO explaining why it’s created here. Otherwise, remove it:
- let (shutdown_tx, _shutdown_rx) = tokio::sync::watch::channel(()); + let (shutdown_tx, _shutdown_rx) = tokio::sync::watch::channel(()); + // TODO: Wire `_shutdown_rx` to the component that should react to shutdown, or remove the channel.
  am_key: String,
) -> Result<ServerHandle, crate::Error> {
- let (_rx, child) = cmd.args(["--port", "6942"]).spawn()?;
+ let port = 6942;
+ let (_rx, child) = cmd.args(["--port", &port.to_string()]).spawn()?;

- let api_base = "http://localhost:6942";
+ let base_url = format!("http://localhost:{}", port);
+ let (shutdown_tx, _shutdown_rx) = tokio::sync::watch::channel(());
+ let client = hypr_am::Client::new(&base_url);
💡 Verification agent
🧩 Analysis chain
Hardcoded port risks collisions; consider configurability or ephemeral allocation
Using a fixed 6942 port can fail if already in use. Prefer passing the port in, an env-configured port, or letting the server choose an ephemeral port and reporting it back.
If you choose configurability, a minimal change is:
-pub async fn run_server(
- cmd: tauri_plugin_shell::process::Command,
- am_key: String,
-) -> Result<ServerHandle, crate::Error> {
- let port = 6942;
+pub async fn run_server(
+ cmd: tauri_plugin_shell::process::Command,
+ am_key: String,
+ port: u16,
+) -> Result<ServerHandle, crate::Error> {

Search for call sites to update:
🏁 Script executed:
#!/bin/bash
rg -n -A2 -B2 'run_server\('

Length of output: 2136
Hardcoded port 6942 risks collisions; make it configurable or use an ephemeral port allocation
To avoid “address already in use” errors, refactor external::run_server to accept a port: u16 (or use 0 for OS‐assigned) and update its sole call site:
• In plugins/local-stt/src/server/external.rs
-pub async fn run_server(
- cmd: tauri_plugin_shell::process::Command,
- am_key: String,
-) -> Result<ServerHandle, crate::Error> {
- let port = 6942;
+pub async fn run_server(
+ cmd: tauri_plugin_shell::process::Command,
+ am_key: String,
+ port: u16,
+) -> Result<ServerHandle, crate::Error> {
+ // If you want an ephemeral port, call with port = 0 and read the assigned port:
+ let port = if port == 0 {
+ // parse child output or inspect the bound socket…
+ /* assigned_port */
+ } else {
+ port
+ };
let (_rx, child) = cmd
.args(["--port", &port.to_string()])
    .spawn()?;

• In plugins/local-stt/src/ext.rs (at the external::run_server call, ~line 227):
- let server = external::run_server(cmd, am_key).await?;
+ let server = external::run_server(cmd, am_key, DESIRED_PORT).await?;

This change ensures you can pass in a well-known port or let the OS pick one.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In plugins/local-stt/src/server/external.rs around lines 36 to 44, the function
currently hardcodes port 6942 which can collide; change the function signature
to accept a port: u16 (or allow callers to pass 0 for OS-assigned), use
port.to_string() when building cmd args and base_url, and return or propagate
any dynamically assigned port if needed; also update the single call site in
plugins/local-stt/src/ext.rs (around the external::run_server call ~line 227) to
pass the desired port value (or 0) so the port is configurable and no longer
hardcoded.
Actionable comments posted: 8
🔭 Outside diff range comments (2)
crates/file/src/lib.rs (1)
192-193: Avoid std::sync::Mutex in async hot path; use AtomicU64 for progress tracking

Locking a std::sync::Mutex inside async tasks can cause unnecessary contention. An AtomicU64 is simpler, faster, and avoids potential poisoning issues.
@@
- let downloaded = Arc::new(Mutex::new(existing_size));
+ let downloaded = Arc::new(std::sync::atomic::AtomicU64::new(existing_size));
@@
- let mut downloaded_guard = downloaded_clone.lock().unwrap();
- *downloaded_guard += chunk.len() as u64;
- let current_downloaded = *downloaded_guard;
- drop(downloaded_guard);
-
- progress_callback_clone(DownloadProgress::Progress(current_downloaded, total_size));
+ let current_downloaded = downloaded_clone
+     .fetch_add(chunk.len() as u64, std::sync::atomic::Ordering::Relaxed)
+     + (chunk.len() as u64);
+ progress_callback_clone(DownloadProgress::Progress(
+     current_downloaded,
+     total_size,
+ ));

If you prefer explicit imports instead of fully qualified paths, add:
@@
  use {
@@
-     sync::{Arc, Mutex, OnceLock},
+     sync::{Arc, Mutex, OnceLock, atomic::{AtomicU64, Ordering}},

…and then use AtomicU64/Ordering directly.
Also applies to: 227-233, 20-21
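As a self-contained illustration of the lock-free counter (with `download_total` as a made-up stand-in for the parallel download loop): `fetch_add` returns the value *before* the addition, so `previous + chunk_len` is the running total.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Stand-in for the parallel download loop: each worker adds its chunk sizes
// to a shared AtomicU64 instead of locking a Mutex around a u64.
fn download_total(workers: usize, chunks_per_worker: u64, chunk_len: u64) -> u64 {
    let downloaded = Arc::new(AtomicU64::new(0));
    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let d = Arc::clone(&downloaded);
            thread::spawn(move || {
                for _ in 0..chunks_per_worker {
                    // fetch_add returns the previous value; add chunk_len for the new total
                    let current = d.fetch_add(chunk_len, Ordering::Relaxed) + chunk_len;
                    let _ = current; // would be fed to the progress callback
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    downloaded.load(Ordering::Relaxed)
}

fn main() {
    println!("total bytes: {}", download_total(4, 1_000, 10));
}
```

Relaxed ordering is enough here because only the counter value itself matters; nothing else is synchronized through it.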
apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (1)
264-270: Add error handling for model switching operations

The onClick handler performs multiple async operations without error handling. If any operation fails, the UI state might become inconsistent.
- onClick={() => {
+ onClick={async () => {
    if (model.downloaded) {
-     setSelectedSTTModel(model.key as WhisperModel);
-     localSttCommands.setCurrentModel(model.key as WhisperModel);
-     localSttCommands.stopServer(null);
-     localSttCommands.startServer(null);
+     try {
+       await localSttCommands.stopServer(null);
+       await localSttCommands.setCurrentModel(model.key as WhisperModel);
+       setSelectedSTTModel(model.key as WhisperModel);
+       await localSttCommands.startServer(null);
+     } catch (error) {
+       console.error("Failed to switch model:", error);
+       // Consider showing a toast or error message to the user
+     }
    }
  }}
♻️ Duplicate comments (3)
plugins/local-stt/src/server/external.rs (1)
39-39: Hardcoded port 6942 risks collisionsplugins/local-stt/src/ext.rs (1)
302-315: Extract duplicated progress callback logicplugins/local-stt/js/bindings.gen.ts (1)
60-60: Remove duplicate TAURI_CHANNEL type definition
🧹 Nitpick comments (13)
plugins/local-stt/permissions/schemas/schema.json (1)
502-506: Avoid drift between default list here and default.toml (add an explicit sync note).

The enumerated list is easy to get out of sync with plugins/local-stt/permissions/default.toml. Add a short note to prompt maintainers to keep them aligned.
Apply this diff to append a sync note to both description and markdownDescription:
- "description": "Default permissions for the plugin\n#### This default permission set includes:\n\n- `allow-models-dir`\n- `allow-is-model-downloaded`\n- `allow-is-model-downloading`\n- `allow-download-model`\n- `allow-start-server`\n- `allow-stop-server`\n- `allow-get-servers`\n- `allow-get-current-model`\n- `allow-set-current-model`\n- `allow-list-supported-models`",
+ "description": "Default permissions for the plugin\n#### This default permission set includes:\n\n- `allow-models-dir`\n- `allow-is-model-downloaded`\n- `allow-is-model-downloading`\n- `allow-download-model`\n- `allow-start-server`\n- `allow-stop-server`\n- `allow-get-servers`\n- `allow-get-current-model`\n- `allow-set-current-model`\n- `allow-list-supported-models`\n\nNote: Keep this list in sync with plugins/local-stt/permissions/default.toml.",
@@
- "markdownDescription": "Default permissions for the plugin\n#### This default permission set includes:\n\n- `allow-models-dir`\n- `allow-is-model-downloaded`\n- `allow-is-model-downloading`\n- `allow-download-model`\n- `allow-start-server`\n- `allow-stop-server`\n- `allow-get-servers`\n- `allow-get-current-model`\n- `allow-set-current-model`\n- `allow-list-supported-models`"
+ "markdownDescription": "Default permissions for the plugin\n#### This default permission set includes:\n\n- `allow-models-dir`\n- `allow-is-model-downloaded`\n- `allow-is-model-downloading`\n- `allow-download-model`\n- `allow-start-server`\n- `allow-stop-server`\n- `allow-get-servers`\n- `allow-get-current-model`\n- `allow-set-current-model`\n- `allow-list-supported-models`\n\nNote: Keep this list in sync with plugins/local-stt/permissions/default.toml."

crates/file/src/lib.rs (4)
217-219: Make the non-206 range response error actionable (include range and expectation)

The new message is too generic and removes vital context for debugging. Surface the requested range and expected status.
- return Err(OtherError(format!(
-     "Something went wrong. Please try again. (status: {})",
-     response.status()
- )));
+ return Err(OtherError(format!(
+     "Unexpected status for range GET {}-{} (expected 206): {}",
+     start, end, response.status()
+ )));

Also applies to: 215-216
41-42: Use typed header constants for correctness and clarity

Typed constants avoid case/typo mistakes and make intent explicit.
@@
- request = request.header("Range", format!("bytes={}-", start));
+ request = request.header(reqwest::header::RANGE, format!("bytes={}-", start));
@@
- let supports_ranges = head_response
-     .headers()
-     .get("accept-ranges")
+ let supports_ranges = head_response
+     .headers()
+     .get(reqwest::header::ACCEPT_RANGES)
      .map(|v| v.to_str().unwrap_or(""))
      .unwrap_or("")
      == "bytes";
@@
- .header("Range", range_header)
+ .header(reqwest::header::RANGE, range_header)

Also applies to: 151-156, 209-213
221-223: Pre-allocate per-chunk buffer capacity to reduce reallocations

A minor perf win: we know the chunk size, so we can reserve capacity.
- let mut bytes = Vec::new();
+ let mut bytes = Vec::with_capacity((end - start + 1) as usize);
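The capacity math follows from HTTP range semantics: a `bytes=start-end` request is inclusive on both ends, so it covers `end - start + 1` bytes. A tiny check (with `range_len` as an illustrative helper):

```rust
// Inclusive byte range "bytes=start-end" covers end - start + 1 bytes.
fn range_len(start: u64, end: u64) -> usize {
    (end - start + 1) as usize
}

fn main() {
    // A 1 KiB chunk requested as bytes=0-1023.
    let mut bytes: Vec<u8> = Vec::with_capacity(range_len(0, 1023));
    bytes.resize(range_len(0, 1023), 0);
    println!("len = {}", bytes.len());
}
```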
99-116: Optional: flush file before signaling Finished for consistency

Parallel path flushes file; serial path doesn't. Flushing before Finished keeps semantics consistent.
@@
  }
- progress_callback(DownloadProgress::Finished);
+ file.flush()?;
+ progress_callback(DownloadProgress::Finished);
  Ok(())

plugins/local-stt/src/server/external.rs (3)
2-2: base_url field should be private for proper encapsulation

The base_url field is public but is only used internally within the module. Making it private improves encapsulation.

- pub base_url: String,
+ base_url: String,
29-32: Consider simplifying the status method

The status result is wrapped in Ok() unnecessarily since it already returns a Result.
- pub async fn status(&self) -> Result<hypr_am::ServerStatus, crate::Error> {
-     let status = self.client.status().await?;
-     Ok(status)
- }
+ pub async fn status(&self) -> Result<hypr_am::ServerStatus, crate::Error> {
+     self.client.status().await.map_err(Into::into)
+ }
46-77: Consider extracting log processing logic

The nested loop handling stdout/stderr could be extracted into a helper function for better readability.
+ fn process_command_event(event: tauri_plugin_shell::process::CommandEvent) {
+     match event {
+         tauri_plugin_shell::process::CommandEvent::Stdout(bytes)
+         | tauri_plugin_shell::process::CommandEvent::Stderr(bytes) => {
+             if let Ok(text) = String::from_utf8(bytes) {
+                 let text = text.trim();
+                 if !text.is_empty() {
+                     tracing::info!("{}", text);
+                 }
+             }
+         }
+         _ => {}
+     }
+ }

  tokio::spawn(async move {
      loop {
          tokio::select! {
              _ = shutdown_rx.changed() => {
                  tracing::info!("external_server_shutdown");
                  break;
              }
              event = rx.recv() => {
                  if event.is_none() { break; }
-                 match event.unwrap() {
-                     tauri_plugin_shell::process::CommandEvent::Stdout(bytes) => {
-                         if let Ok(text) = String::from_utf8(bytes) {
-                             let text = text.trim();
-                             tracing::info!("{}", text);
-                         }
-                     }
-                     tauri_plugin_shell::process::CommandEvent::Stderr(bytes) => {
-                         if let Ok(text) = String::from_utf8(bytes) {
-                             let text = text.trim();
-                             tracing::info!("{}", text);
-                         }
-                     }
-                     _ => {}
-                 }
+                 process_command_event(event.unwrap());
              }
          }
      }
  });
317-375: Add consistent error handling for both model types

The AM model download includes proper error handling with tar unpacking (lines 331-334), but consider adding a success log for consistency with error logging.
  if let Err(e) = m.tar_verify_and_unpack(&tar_path, &final_path) {
      tracing::error!("model_unpack_error: {}", e);
      let _ = channel.send(-1);
+ } else {
+     tracing::info!("model_unpacked: {}", m.model_dir());
  }
22-22: Consider pinning the tar dependency to a workspace version

The tar dependency is pinned to a specific version "0.4.44" while other dependencies use workspace versions. For consistency and easier dependency management across the workspace, consider moving this to the workspace dependencies.
12-13: Consider using the dirs crate's error type instead of a custom variant

The NoHomeDir error could potentially be replaced by propagating the actual error from the dirs crate operations, providing more context about why the home directory couldn't be found.
50-67: Directory existence check has a potential TOCTOU issue

The is_downloaded method checks directory existence and then reads it in separate operations. While not critical for this use case, consider combining operations to be more robust.

  pub fn is_downloaded(
      &self,
      base_dir: impl AsRef<std::path::Path>,
  ) -> Result<bool, crate::Error> {
      let model_path = base_dir.as_ref().join(self.model_dir());
-     if !model_path.exists() {
-         return Ok(false);
-     }
-
-     if !model_path.is_dir() {
-         return Ok(false);
-     }
-
-     let entries = std::fs::read_dir(&model_path)?;
-     let has_files = entries.count() > 0;
-
-     Ok(has_files)
+     match std::fs::read_dir(&model_path) {
+         Ok(entries) => Ok(entries.count() > 0),
+         Err(e) if e.kind() == std::io::ErrorKind::NotFound => Ok(false),
+         Err(e) => Err(e.into()),
+     }
  }
78-83: Checksums should be documented or externalized

The hardcoded CRC32 checksums (1906983049, 1964673816) are critical for security but lack documentation about their source. Consider adding comments or moving them to a configuration file.
  pub fn tar_checksum(&self) -> u32 {
      match self {
-         AmModel::ParakeetV2 => 1906983049,
-         AmModel::WhisperLargeV3 => 1964673816,
+         // CRC32 checksum of nvidia_parakeet-v2_476MB.tar
+         AmModel::ParakeetV2 => 1906983049,
+         // CRC32 checksum of openai_whisper-large-v3-v20240930_626MB.tar
+         AmModel::WhisperLargeV3 => 1964673816,
      }
  }
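For documenting their provenance, the check values can be regenerated with any IEEE CRC-32 implementation; here is a table-less, dependency-free sketch (assuming the hardcoded values are plain IEEE CRC-32 of the tar bytes, which is worth confirming against the actual download code):

```rust
// Bitwise IEEE CRC-32 (reflected, polynomial 0xEDB88320): slow but std-only,
// enough to spot-check a downloaded tar against the hardcoded values.
fn crc32(data: &[u8]) -> u32 {
    let mut crc: u32 = 0xFFFF_FFFF;
    for &byte in data {
        crc ^= byte as u32;
        for _ in 0..8 {
            crc = if crc & 1 != 0 {
                (crc >> 1) ^ 0xEDB8_8320
            } else {
                crc >> 1
            };
        }
    }
    !crc
}

fn main() {
    // The standard CRC-32 check value for "123456789" is 0xCBF43926.
    println!("{:#010x}", crc32(b"123456789"));
}
```

Running this over the model tar and pasting the result next to each variant would make the magic numbers reproducible.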
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lockis excluded by!**/*.lock
📒 Files selected for processing (16)
- apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (4 hunks)
- crates/am/Cargo.toml (1 hunks)
- crates/am/src/client.rs (2 hunks)
- crates/am/src/error.rs (2 hunks)
- crates/am/src/model.rs (1 hunks)
- crates/am/src/types.rs (0 hunks)
- crates/file/src/lib.rs (2 hunks)
- plugins/listener/src/fsm.rs (6 hunks)
- plugins/local-stt/js/bindings.gen.ts (2 hunks)
- plugins/local-stt/permissions/autogenerated/commands/get_external_server_status.toml (1 hunks)
- plugins/local-stt/permissions/autogenerated/reference.md (2 hunks)
- plugins/local-stt/permissions/schemas/schema.json (3 hunks)
- plugins/local-stt/src/commands.rs (5 hunks)
- plugins/local-stt/src/ext.rs (12 hunks)
- plugins/local-stt/src/model.rs (1 hunks)
- plugins/local-stt/src/server/external.rs (2 hunks)
💤 Files with no reviewable changes (1)
- crates/am/src/types.rs
✅ Files skipped from review due to trivial changes (1)
- plugins/local-stt/permissions/autogenerated/commands/get_external_server_status.toml
🚧 Files skipped from review as they are similar to previous changes (2)
- plugins/listener/src/fsm.rs
- plugins/local-stt/src/model.rs
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit Configuration File
**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- crates/am/src/error.rs
- plugins/local-stt/src/ext.rs
- crates/am/src/model.rs
- crates/am/src/client.rs
- crates/file/src/lib.rs
- plugins/local-stt/js/bindings.gen.ts
- plugins/local-stt/src/server/external.rs
- apps/desktop/src/components/settings/components/ai/stt-view-local.tsx
- plugins/local-stt/src/commands.rs
🧬 Code Graph Analysis (6)
plugins/local-stt/src/ext.rs (7)
crates/file/src/lib.rs (3)
download_file_parallel (135-254), s (268-268), file_size (256-259)

plugins/local-stt/js/bindings.gen.ts (4)
WhisperModel (61-61), SupportedSttModel (59-59), ServerType (57-57), GgmlBackend (56-56)

plugins/connector/js/bindings.gen.ts (1)
Connection (109-109)

crates/whisper-local/src/lib.rs (1)
list_ggml_backends (22-67)

plugins/local-stt/src/server/internal.rs (1)
run_server (61-88)

plugins/local-stt/src/server/external.rs (1)
run_server (35-86)

crates/am/src/model.rs (1)
model_dir (29-34)

crates/am/src/model.rs (3)

plugins/local-stt/js/bindings.gen.ts (1)
AmModel (55-55)

crates/whisper-local-model/src/lib.rs (2)
display_name (44-54), model_size_bytes (82-92)

crates/file/src/lib.rs (2)
calculate_file_checksum (272-289), download_file_parallel (135-254)

crates/am/src/client.rs (1)

plugins/local-stt/js/bindings.gen.ts (1)
AmModel (55-55)

crates/file/src/lib.rs (1)

apps/desktop/src/components/toast/shared.tsx (1)
DownloadProgress (12-53)

plugins/local-stt/js/bindings.gen.ts (1)

plugins/listener/src/manager.rs (1)
channel (44-56)

plugins/local-stt/src/server/external.rs (5)

crates/am/src/client.rs (4)
init (36-51), new (11-16), new (99-106), status (25-34)

plugins/local-stt/js/bindings.gen.ts (1)
AmModel (55-55)

plugins/local-stt/src/commands.rs (1)
models_dir (10-12)

plugins/local-stt/src/ext.rs (2)
models_dir (19-19), models_dir (60-62)

plugins/local-stt/src/server/internal.rs (1)
run_server (61-88)
🪛 Gitleaks (8.27.2)
apps/desktop/src/components/settings/components/ai/stt-view-local.tsx
188-188: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (windows, windows-latest)
- GitHub Check: ci (macos, macos-latest)
🔇 Additional comments (24)
plugins/local-stt/permissions/schemas/schema.json (1)
429-440: Confirm migration to the new list_supported_models_info command
- Autogenerated TOML (plugins/local-stt/permissions/autogenerated/commands/list_supported_models_info.toml) correctly references the schema and includes both allow-list-supported-models-info and deny-list-supported-models-info permissions.
- Default permissions no longer include the legacy allow-list-pro-models entry.
- Please manually verify that every consumer (desktop client, plugin code, UI, etc.) has been updated to call list_supported_models_info instead of any legacy "pro-model" listing endpoint.

plugins/local-stt/permissions/autogenerated/reference.md (3)
339-341: Good to explicitly document allow-models-dir in the table.

This addition keeps the table aligned with the default set and reduces confusion.
82-104: Confirmed: get_external_server_status isn't in the default permission set

The new allow-get-external-server-status/deny-get-external-server-status entries match the schema and do not appear in plugins/local-stt/permissions/default.toml, so there's no unintended default access.
316-337: Verify settings/onboarding flows use list_supported_models_info and drop list_pro_models

I only found the new and legacy identifiers in autogenerated docs, schemas, and permission TOML files, with no actual callers in UI or flow code:

- Definitions for both commands live in plugins/local-stt/permissions/{autogenerated,…}.
- No occurrences of either command string in settings or onboarding modules.
Please manually confirm:
- All settings/onboarding code invoking model-list APIs now calls list_supported_models_info.
- No UI or flow still references the legacy list_pro_models command.

plugins/local-stt/src/server/external.rs (1)
16-27: LGTM! Clean implementation of init method

The init method correctly creates an InitRequest with the API key and model, delegating to the Hypr client.
plugins/local-stt/src/ext.rs (4)
68-105: LGTM! Clean implementation of get_connection

The implementation correctly handles both AM and Whisper models, reusing existing servers when available.
107-127: LGTM! Proper delegation to model-specific download checkThe is_model_downloaded method correctly delegates to model-specific implementations and validates file sizes for Whisper models.
153-158: LGTM! Proper model type validationThe code correctly validates that only Whisper models are used with the internal server.
188-193: LGTM! Proper model type validation for external serverThe code correctly validates that only AM models are used with the external server.
plugins/local-stt/src/commands.rs (3)
24-26: LGTM! Clean implementation using the `SUPPORTED_MODELS` constant. The function properly leverages the centralized model metadata.

30-37: LGTM! Proper async delegation with error handling. All model-related commands correctly handle the async operations and map errors to strings for the Tauri command interface.

70-77: LGTM! Correctly updated to async. The `set_current_model` command is properly updated to handle the async operation.
plugins/local-stt/js/bindings.gen.ts (2)
16-42: LGTM! API properly updated to use `SupportedSttModel`. All STT model-related commands correctly use the new `SupportedSttModel` type, and the rename from `listProModels` to `listSupportedModels` better reflects its purpose.

55-61: LGTM! Well-structured type definitions. The new type hierarchy with `SupportedSttModel` as a union of `WhisperModel | AmModel` provides good extensibility for future model types.

crates/am/src/error.rs (3)

3-4: LGTM! Error variant addition is appropriate. The `IO` error variant with transparent forwarding is correctly implemented for handling I/O operations in the new tar extraction and file operations.

9-10: LGTM! HyprFile error integration is well-structured. The `HyprFile` error variant properly integrates with the workspace's file handling module, maintaining consistency with the error propagation pattern.

24-25: LGTM! ChecksumMismatch error supports integrity verification. The `ChecksumMismatch` error variant is essential for the tar file integrity verification implemented in the model download flow.

crates/am/src/client.rs (3)

1-1: LGTM! Import cleanup aligns with simplified API. The removal of `ComputeUnits` from imports correctly reflects the simplified initialization flow without compute unit configuration.

98-106: LGTM! Simplified InitRequest constructor. The simplified `new` method correctly initializes only the essential fields, with optional model configuration deferred to the builder method.

108-123: Add validation for the base directory path. The `with_model` method should validate that the base directory exists or can be created to prevent runtime errors during model operations.

```diff
 pub fn with_model(
     mut self,
     model: crate::AmModel,
     base_dir: impl AsRef<std::path::Path>,
 ) -> Self {
+    let base_path = base_dir.as_ref();
+    if !base_path.exists() {
+        tracing::warn!("Base directory does not exist: {:?}", base_path);
+    }
     self.model = Some(model.model_dir().to_string());
     self.model_repo = Some(model.repo_name().to_string());
     self.model_folder = Some(
-        base_dir
-            .as_ref()
+        base_path
             .join(model.model_dir())
             .to_string_lossy()
             .to_string(),
     );
     self
 }
```

Likely an incorrect or invalid review comment.
crates/am/src/model.rs (4)
1-19: LGTM! Well-structured enum with proper derives. The `AmModel` enum is properly configured with all necessary derives for serialization, display, and type safety. The serde and strum annotations ensure consistent string representations.

22-27: LGTM! Repository mapping is correct. The `repo_name` method provides the correct Argmax repository names for each model variant.

99-106: LGTM! Clean async download implementation. The download method properly delegates to the workspace's parallel download functionality with progress callbacks.

43-48: Confirm `.tar` file sizes against the `model_size_bytes` constants. Please double-check that the hardcoded values in `crates/am/src/model.rs` (lines 43–48) actually match the size of the hosted `.tar` archives:

- ParakeetV2: 476,134,400 bytes
- WhisperLargeV3: 625,990,656 bytes

You can verify each with an HTTP HEAD request, for example:

```shell
curl -sI <MODEL_URL> | grep -i Content-Length
```

If these values ever drift, progress-bar reporting will be wrong. Consider adding an integration test that retrieves the Content-Length header and asserts it matches your constant to catch any future changes.
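A minimal TypeScript sketch of such an integration test, assuming Node 18+ `fetch`; the map keys and the URL parameter are illustrative placeholders, and only the byte counts come from the constants quoted above.

```typescript
// Hypothetical size-drift check; the keys are placeholders, and only
// the expected byte counts mirror model_size_bytes in model.rs.
const EXPECTED_SIZES: Record<string, number> = {
  "parakeet-v2": 476_134_400,
  "whisper-large-v3": 625_990_656,
};

async function assertModelSize(name: string, url: string): Promise<void> {
  // HEAD avoids downloading the archive; only the headers are fetched.
  const res = await fetch(url, { method: "HEAD" });
  const actual = Number(res.headers.get("content-length"));
  const expected = EXPECTED_SIZES[name];
  if (actual !== expected) {
    throw new Error(`${name}: expected ${expected} bytes, got ${actual}`);
  }
}
```

Run against the real archive URLs in CI so a re-uploaded `.tar` fails fast instead of silently skewing the progress bar.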
```tsx
const handleShowFileLocation = async () => {
  localSttCommands.modelsDir().then((path) => openPath(path));
};
```
🛠️ Refactor suggestion
Remove duplicated handleShowFileLocation function
The handleShowFileLocation function is identical in both BasicModelsManagement (lines 140-142) and ProModelsManagement (lines 180-183). Extract it to the parent component or a shared utility.
Move the function to the parent component and pass it as a prop:
```diff
 export function STTViewLocal({
   selectedSTTModel,
   setSelectedSTTModel,
   sttModels,
   setSttModels,
   downloadingModels,
   handleModelDownload,
 }: STTViewProps) {
+  const handleShowFileLocation = async () => {
+    localSttCommands.modelsDir().then((path) => openPath(path));
+  };
+
   // ... rest of the component
   return (
     <div className="space-y-6">
       <BasicModelsManagement
         on={!!servers.data?.internal}
         modelsToShow={modelsToShow}
         selectedSTTModel={selectedSTTModel}
         setSelectedSTTModel={setSelectedSTTModel}
         downloadingModels={downloadingModels}
         handleModelDownload={handleModelDownload}
+        handleShowFileLocation={handleShowFileLocation}
       />
       {amAvailable && (
         <ProModelsManagement
           on={!!servers.data?.external}
           selectedSTTModel={selectedSTTModel}
           setSelectedSTTModel={setSelectedSTTModel}
           downloadingModels={downloadingModels}
           handleModelDownload={handleModelDownload}
+          handleShowFileLocation={handleShowFileLocation}
         />
       )}
     </div>
   );
 }
```

Then remove the local definitions in both `BasicModelsManagement` and `ProModelsManagement`.
🤖 Prompt for AI Agents
In apps/desktop/src/components/settings/components/ai/stt-view-local.tsx around
lines 140-142 and 180-183, the handleShowFileLocation function is duplicated in
BasicModelsManagement and ProModelsManagement; move a single implementation up
to their parent component (or a shared utility) that calls
localSttCommands.modelsDir().then(path => openPath(path)), expose it as a prop
(e.g., onShowFileLocation) and pass it into both BasicModelsManagement and
ProModelsManagement, then remove the local handleShowFileLocation definitions
from both child components.
Actionable comments posted: 2
🔭 Outside diff range comments (3)
apps/desktop/src/components/right-panel/hooks/useTranscript.ts (3)
56-71: Fix listener cleanup race to prevent leaks when the effect is torn down before `listen` resolves. If the effect unmounts before `.listen(...)` resolves, `unlisten` remains null and the handler may leak. Add a `disposed` flag to unlisten immediately upon resolution if already cleaned up.

Apply this diff:

```diff
-    let unlisten: (() => void) | null = null;
+    let unlisten: (() => void) | null = null;
+    let disposed = false;
     listenerEvents.sessionEvent.listen(({ payload }) => {
       if (payload.type === "finalWords") {
         setFinalWords(payload.words as Word[]);
-      } else if (payload.type === "partialWords") {
+        setPartialWords([]);
+      } else if (payload.type === "partialWords") {
         setPartialWords((payload.words as Word[]).map(w => ({ ...w, confidence: 0 })));
       }
     }).then((fn) => {
       unlisten = fn;
+      if (disposed) {
+        // Effect already cleaned up; remove listener immediately
+        fn();
+      }
     });

     return () => {
-      if (unlisten) {
-        unlisten();
-      }
+      disposed = true;
+      if (unlisten) unlisten();
     };
```
31-42: Include `isEnhanced` in the React Query key to avoid stale data when the source function changes. The queryFn switches between `getWords` and `getWordsOnboarding` based on `isEnhanced`, but the queryKey doesn't include `isEnhanced`. React Query will not refetch when `isEnhanced` flips, risking stale data from the wrong source.

Apply this diff:

```diff
-  const existingWords = useQuery({
-    enabled: !!sessionId,
-    queryKey: ["session", "words", sessionId],
+  const existingWords = useQuery({
+    enabled: !!sessionId,
+    queryKey: ["session", "words", sessionId, isEnhanced ? "enhanced" : "basic"],
     queryFn: async () => {
       const onboardingSessionId = await dbCommands.onboardingSessionId();
       const fn = (sessionId === onboardingSessionId && isEnhanced)
         ? dbCommands.getWordsOnboarding
         : dbCommands.getWords;
       return fn(sessionId!);
     },
   });
```
13-13: Enforce Rules of Hooks: call `useSession` unconditionally. Calling `useSession` only when `sessionId` is truthy breaks React's Hooks rules (hooks must be invoked in the same order on every render). Refactor so the hook is always called, then guard its result:

• File: `apps/desktop/src/components/right-panel/hooks/useTranscript.ts`, line ~13

Suggested diff:

```diff
-  const isEnhanced = sessionId
-    ? useSession(sessionId, (s) => !!s.session.enhanced_memo_html)
-    : false;
+  // Always invoke the hook; use a sentinel for sessionId
+  const enhanced = useSession(sessionId ?? '', (s) => !!s.session.enhanced_memo_html);
+  const isEnhanced = !!sessionId && enhanced;
```

If passing `""` to `useSession` isn't safe, introduce a wrapper hook that no-ops when `sessionId` is null:

```typescript
// In your contexts package
export function useOptionalSession<T>(
  sessionId: string | undefined,
  selector: (s: Session) => T
): T | undefined {
  const ctx = useContext(SessionContext);
  return sessionId ? selector(ctx.sessions[sessionId]) : undefined;
}

// In useTranscript.ts
const isEnhanced = !!useOptionalSession(
  sessionId,
  (s) => !!s.session.enhanced_memo_html
);
```
♻️ Duplicate comments (2)
apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (2)
172-184: Remove duplicated `handleShowFileLocation` function. The `handleShowFileLocation` function is identical to the one in `BasicModelsManagement` (lines 141-143). This creates unnecessary code duplication. Move the function to the parent component and pass it as a prop to both child components to eliminate duplication.
188-203: Make Pro model filtering dynamic instead of hardcoded. The hardcoded filter for only `am-whisper-large-v3` and `am-parakeet-v2` will become outdated as Argmax continues adding new Pro models. This approach is not scalable and will miss future models.

Consider implementing a more flexible approach:

```diff
-  const models = await localSttCommands.listSupportedModels().then((models) =>
-    models.filter((model) => model.key === "am-whisper-large-v3" || model.key === "am-parakeet-v2")
-  );
+  const models = await localSttCommands.listSupportedModels().then((models) =>
+    models.filter((model) => model.key.startsWith("am-") || model.isPro === true)
+  );
```

Alternatively, maintain a configuration file of Pro model keys that can be updated as new models are released.
🧹 Nitpick comments (3)
packages/tiptap/src/styles/transcript.css (1)
48-51: Consider semantic classes instead of string-prefix selectors. Relying on string prefixes is brittle and hard to maintain. Compute a discrete confidence level at render time and style by class.

Example (JS-side):

- Map confidence to levels: low (<=0.4), medium (<=0.7), high (>0.7)
- Render spans with classes: `confidence--low|medium|high`

Example (CSS):

```css
span.confidence--low { color: #777; }
span.confidence--medium { color: inherit; }
span.confidence--high { color: inherit; }
```

I can draft the small utility and integration in the transcript rendering code if you want.
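A sketch of that small utility, following the thresholds proposed above; the `confidenceClass` name and the exact cutoffs are assumptions, not existing code.

```typescript
// Map a numeric confidence to a discrete class so CSS can avoid
// string-prefix matching: low <= 0.4, medium <= 0.7, high > 0.7.
type ConfidenceLevel = "low" | "medium" | "high";

function confidenceClass(confidence: number): string {
  const level: ConfidenceLevel =
    confidence <= 0.4 ? "low" : confidence <= 0.7 ? "medium" : "high";
  return `confidence--${level}`;
}
```

The renderer would then emit `class={confidenceClass(word.confidence)}` instead of a raw `data-confidence` attribute.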
plugins/listener/src/manager.rs (2)
94-104: Avoid leaking transcripts to stdout; gate and fix home path expansion

- Printing raw transcripts to stdout can unintentionally surface PII in dev logs. Gate it behind an env var and send to stderr.
- "~" is not expanded by std::fs; the current path likely won't resolve. Build a concrete path from HOME/USERPROFILE (fallback to a local file).

Apply:

```diff
-        if let Ok(mut file) = OpenOptions::new()
-            .create(true)
-            .append(true)
-            .open("~/out.jsonl")
-        {
-            if let Ok(json) = serde_json::to_string(response) {
-                println!("{}", json);
-                let _ = writeln!(file, "{}", json);
-            }
-        }
+        let path = std::env::var_os("HOME")
+            .or_else(|| std::env::var_os("USERPROFILE"))
+            .map(std::path::PathBuf::from)
+            .map(|p| p.join("out.jsonl"))
+            .unwrap_or_else(|| std::path::PathBuf::from("out.jsonl"));
+
+        if let Ok(mut file) = OpenOptions::new()
+            .create(true)
+            .append(true)
+            .open(path)
+        {
+            if let Ok(json) = serde_json::to_string(response) {
+                if std::env::var_os("HYPRNOTE_DEBUG_STREAM_LOG_STDOUT").is_some() {
+                    eprintln!("{}", json);
+                }
+                let _ = writeln!(file, "{}", json);
+            }
+        }
```

90-90: Gate the debug-only logger function itself. The call site is behind `debug_assertions`, but the function is still compiled in release and may trigger `dead_code` warnings. Gate the function too.

```diff
-    fn log(response: &owhisper_interface::StreamResponse) {
+    #[cfg(debug_assertions)]
+    fn log(response: &owhisper_interface::StreamResponse) {
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- apps/desktop/src/components/right-panel/hooks/useTranscript.ts (1 hunks)
- apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (7 hunks)
- apps/desktop/src/stores/index.ts (0 hunks)
- apps/desktop/src/stores/ui.ts (0 hunks)
- crates/ws/src/client.rs (1 hunks)
- packages/tiptap/src/styles/transcript.css (1 hunks)
- plugins/listener/src/manager.rs (1 hunks)
💤 Files with no reviewable changes (2)
- apps/desktop/src/stores/index.ts
- apps/desktop/src/stores/ui.ts
✅ Files skipped from review due to trivial changes (1)
- crates/ws/src/client.rs
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit Configuration File
**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- plugins/listener/src/manager.rs
- apps/desktop/src/components/right-panel/hooks/useTranscript.ts
- apps/desktop/src/components/settings/components/ai/stt-view-local.tsx
🧬 Code Graph Analysis (1)
apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (3)
- packages/ui/src/lib/utils.ts (1)
  - `cn` (4-6)
- apps/desktop/src/hooks/use-license.ts (1)
  - `useLicense` (9-104)
- plugins/connector/src/ext.rs (2)
  - `models` (151-151)
  - `models` (155-196)
🪛 Gitleaks (8.27.2)
apps/desktop/src/components/settings/components/ai/stt-view-local.tsx
190-190: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (windows, windows-latest)
- GitHub Check: ci (macos, macos-latest)
🔇 Additional comments (9)
apps/desktop/src/components/right-panel/hooks/useTranscript.ts (1)
60-60: No special handling of `confidence === 0` found—setting it to zero is safe. I searched for any logic that treats `confidence === 0` specially (e.g. hiding or thresholding words) and only found code in `packages/tiptap/src/transcript/utils.ts` that reads confidence marks via:

- `const confidenceMark = node.marks?.find(mark => mark.type === "confidence");`
- `const confidence = confidenceMark?.attrs?.confidence ?? null;`

There's no UI logic checking for `confidence === 0` or filtering out words with zero confidence. You can leave:

```typescript
setPartialWords(
  (payload.words as Word[]).map(w => ({ ...w, confidence: 0 }))
);
```

as-is without raising display concerns.
apps/desktop/src/components/settings/components/ai/stt-view-local.tsx (8)
7-7: LGTM! The addition of the `useLicense` import is appropriate for the new license gating functionality implemented in Pro models.

116-119: LGTM! The expansion of props passed to `ProModelsManagement` correctly provides the necessary state management functions that were previously missing placeholders.

150-150: LGTM! The color change from `bg-red-300` to `bg-gray-100` for the off-state indicator provides better visual consistency with the Pro Models section.

212-212: LGTM! The consistent use of `bg-gray-100` for the off-state indicator aligns with the Basic Models section styling.

215-215: LGTM! The updated description text clearly communicates the Pro plan requirement and benefits of these models.

223-229: LGTM! The license gating implementation correctly disables Pro model interactions when the license is invalid, and all necessary props are now properly passed to `ModelEntry`.

262-267: LGTM! The license check integration in the styling and click handler correctly prevents unauthorized access to Pro models while maintaining proper visual feedback.

293-293: LGTM! The `disabled` prop correctly prevents interaction with the "Show in Finder" button when the license is invalid.
```diff
         setFinalWords(payload.words as Word[]);
       } else if (payload.type === "partialWords") {
-        setPartialWords((payload.words as Word[]).map(w => ({ ...w, confidence: 0.1 })));
+        setPartialWords((payload.words as Word[]).map(w => ({ ...w, confidence: 0 })));
       }
```
Clear partial words when receiving finalWords to avoid stale overlays
Without clearing partialWords on a finalWords event, the UI can render outdated partials on top of finalized text.
Apply this diff:
```diff
-      setFinalWords(payload.words as Word[]);
+      setFinalWords(payload.words as Word[]);
+      // clear partials on finalized batch to avoid stale overlays
+      setPartialWords([]);
```
🤖 Prompt for AI Agents
In apps/desktop/src/components/right-panel/hooks/useTranscript.ts around lines
58 to 61, the handler for "finalWords" does not clear partialWords which can
leave stale partial overlays visible; update the branch handling payload.type
=== "finalWords" to clear partialWords (e.g., call setPartialWords([])) before
or immediately after calling setFinalWords(payload.words as Word[]) so any prior
partialWords are removed when final words arrive.
```css
span[data-confidence^="0"] {
  opacity: 0.3;
  color: #666;
}
```
💡 Verification agent
🧩 Analysis chain
Selector is over-broad: dims 0.5–0.9 confidence text; likely a UI regression
CSS attribute selectors are string-based. [data-confidence^="0"] matches any value starting with "0", including "0.5"–"0.9". This applies low-contrast styling to medium/high confidence tokens, which likely contradicts the previous intent (only 0.0–0.4). Also, combining opacity: 0.3 with color: #666 risks failing accessibility contrast.
Proposed fix: precisely target 0.0–0.4 and avoid global opacity.
Apply this diff to restore intended targeting and improve accessibility:
```diff
-span[data-confidence^="0"] {
-  opacity: 0.3;
-  color: #666;
-}
+/* Low confidence: 0.0–0.4 */
+span[data-confidence^="0.0"],
+span[data-confidence^="0.1"],
+span[data-confidence^="0.2"],
+span[data-confidence^="0.3"],
+span[data-confidence^="0.4"] {
+  /* Avoid element opacity to preserve child contrast; adjust color only */
+  color: #777; /* or use a design token/var if available */
+}
```

Follow-ups:
- Verify if 1.0 previously had a distinct style; if needed, reintroduce `[data-confidence="1"]` styling.
- If confidence string formats vary (e.g., "0.40" vs "0.4"), confirm they still start with the same prefixes; otherwise normalize at render time.
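A hedged sketch of render-time normalization (the function name is illustrative): round to one decimal and serialize a canonical string so "0.40", "0.4", and floating-point noise all produce the same attribute value.

```typescript
// Canonicalize confidence before writing data-confidence, so
// prefix selectors like [data-confidence^="0.4"] match reliably.
function normalizeConfidence(confidence: number): string {
  const clamped = Math.min(1, Math.max(0, confidence));
  return (Math.round(clamped * 10) / 10).toFixed(1);
}
```

Note that full confidence serializes as "1.0" under this scheme, so a `[data-confidence="1"]` rule would need to target "1.0" instead.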
🏁 Script executed:
```shell
#!/bin/bash
# Search for all occurrences of the data-confidence attribute
rg -n 'data-confidence' .
# Search for any code references to confidence values
rg -n 'confidence' .
```

Length of output: 279858
🏁 Script executed:
```shell
#!/bin/bash
# Show the implementation of confidence mark in marks.ts
sed -n '1,60p' packages/tiptap/src/transcript/marks.ts
```

Length of output: 917
🏁 Script executed:
```shell
#!/bin/bash
# Print lines around the selector to inspect context
sed -n '30,80p' packages/tiptap/src/styles/transcript.css
```

Length of output: 362
Fix over-broad confidence selector in transcript.css

The rule

```css
span[data-confidence^="0"] {
  opacity: 0.3;
  color: #666;
}
```

matches every 0.x value (0.5–0.9 included), dimming medium/high-confidence words and risking poor contrast with opacity. It should only target low confidence (0.0–0.4) and avoid using element opacity.
File: packages/tiptap/src/styles/transcript.css (around lines 48–51)
Proposed diff:
```diff
 span[data-confidence^="0"] {
-  opacity: 0.3;
-  color: #666;
-}
+  /* removed overly broad selector */
+}
+/* Low confidence: 0.0–0.4 */
+span[data-confidence="0"],
+span[data-confidence="0.1"],
+span[data-confidence="0.2"],
+span[data-confidence="0.3"],
+span[data-confidence="0.4"] {
+  /* use color only—preserves child contrast */
+  color: #777; /* replace with design token if available */
+}
```

Follow-ups:
- Add a style for perfect confidence (`[data-confidence="1"]`) if needed.
- Confirm that rendered `data-confidence` values are normalized (e.g. "0" vs "0.0").
🤖 Prompt for AI Agents
In packages/tiptap/src/styles/transcript.css around lines 48–51, the selector
span[data-confidence^="0"] is too broad (matches 0.x through 0.9) and uses
opacity which dims text and harms contrast; change the rule to target only
low-confidence values (explicitly list the values you render, e.g.
[data-confidence="0"], [data-confidence="0.0"], [data-confidence="0.1"],
[data-confidence="0.2"], [data-confidence="0.3"], [data-confidence="0.4"]) and
remove opacity in favor of a muted color (use a specific hex/rgba color for
better contrast control), add an optional rule for perfect confidence
([data-confidence="1"]) if desired, and ensure the code that renders
data-confidence normalizes values (e.g. "0" vs "0.0") or switch to applying a
low-confidence class server/client-side for clearer styling.
Actionable comments posted: 0
♻️ Duplicate comments (2)
plugins/local-stt/js/bindings.gen.ts (1)
60-60: Remove duplicate `TAURI_CHANNEL` type definition. `TAURI_CHANNEL` is imported from `@tauri-apps/api/core` on line 67 but also defined as a type alias on line 60. This creates a naming conflict.

```diff
-export type TAURI_CHANNEL<TSend> = null
```

plugins/local-stt/src/ext.rs (1)
403-409: Potential issue with server startup failure handling. If `start_server` fails after updating the store and stopping the previous server, the system is left in an inconsistent state with no running server and a potentially invalid model setting.

Consider implementing rollback logic:

```diff
 async fn set_current_model(&self, model: SupportedSttModel) -> Result<(), crate::Error> {
     let store = self.local_stt_store();
+    // Save previous model for potential rollback
+    let previous_model = self.get_current_model().ok();
+
     store.set(crate::StoreKey::DefaultModel, model.clone())?;
     self.stop_server(None).await?;
-    self.start_server(Some(model)).await?;
+
+    // Attempt to start the new server
+    if let Err(e) = self.start_server(Some(model.clone())).await {
+        // Rollback on failure
+        if let Some(prev) = previous_model {
+            let _ = store.set(crate::StoreKey::DefaultModel, prev);
+        }
+        return Err(e);
+    }
     Ok(())
 }
```
🧹 Nitpick comments (1)
plugins/local-stt/src/ext.rs (1)
68-105: Consider simplifying the duplicated connection logic. Both match arms contain identical logic for checking existing servers and starting new ones. Consider extracting this to reduce duplication.

```diff
 async fn get_connection(&self) -> Result<Connection, crate::Error> {
     let model = self.get_current_model()?;
-    match model {
-        SupportedSttModel::Am(_) => {
-            let existing_api_base = {
-                let state = self.state::<crate::SharedState>();
-                let guard = state.lock().await;
-                guard.external_server.as_ref().map(|s| s.base_url.clone())
-            };
-
-            let conn = match existing_api_base {
-                Some(api_base) => Connection { base_url: api_base },
-                None => {
-                    let api_base = self.start_server(Some(model)).await?;
-                    Connection { base_url: api_base }
-                }
-            };
-            Ok(conn)
-        }
-        SupportedSttModel::Whisper(_) => {
-            let existing_api_base = {
-                let state = self.state::<crate::SharedState>();
-                let guard = state.lock().await;
-                guard.internal_server.as_ref().map(|s| s.base_url.clone())
-            };
-
-            let conn = match existing_api_base {
-                Some(api_base) => Connection { base_url: api_base },
-                None => {
-                    let api_base = self.start_server(Some(model)).await?;
-                    Connection { base_url: api_base }
-                }
-            };
-            Ok(conn)
-        }
-    }
+    let existing_api_base = {
+        let state = self.state::<crate::SharedState>();
+        let guard = state.lock().await;
+        match &model {
+            SupportedSttModel::Am(_) => guard.external_server.as_ref().map(|s| s.base_url.clone()),
+            SupportedSttModel::Whisper(_) => guard.internal_server.as_ref().map(|s| s.base_url.clone()),
+        }
+    };
+
+    let conn = match existing_api_base {
+        Some(api_base) => Connection { base_url: api_base },
+        None => {
+            let api_base = self.start_server(Some(model)).await?;
+            Connection { base_url: api_base }
+        }
+    };
+    Ok(conn)
 }
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- apps/desktop/src-tauri/src/ext.rs (3 hunks)
- apps/desktop/src-tauri/src/lib.rs (1 hunks)
- plugins/local-stt/js/bindings.gen.ts (1 hunks)
- plugins/local-stt/src/commands.rs (5 hunks)
- plugins/local-stt/src/ext.rs (11 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/desktop/src-tauri/src/lib.rs
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit Configuration File
**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- apps/desktop/src-tauri/src/ext.rs
- plugins/local-stt/src/commands.rs
- plugins/local-stt/js/bindings.gen.ts
- plugins/local-stt/src/ext.rs
🧬 Code Graph Analysis (3)
apps/desktop/src-tauri/src/ext.rs (2)
- plugins/local-stt/js/bindings.gen.ts (2)
  - `WhisperModel` (61-61)
  - `SupportedSttModel` (59-59)
- plugins/local-llm/js/bindings.gen.ts (1)
  - `SupportedModel` (59-59)

plugins/local-stt/src/commands.rs (3)
- plugins/local-stt/js/bindings.gen.ts (3)
  - `ServerType` (57-57)
  - `SttModelInfo` (58-58)
  - `SupportedSttModel` (59-59)
- plugins/local-stt/src/ext.rs (6)
  - `is_model_downloaded` (49-52)
  - `is_model_downloaded` (107-127)
  - `set_current_model` (37-40)
  - `set_current_model` (403-409)
  - `start_server` (24-27)
  - `start_server` (130-248)
- plugins/local-llm/src/commands.rs (3)
  - `is_model_downloaded` (44-51)
  - `set_current_model` (111-116)
  - `start_server` (76-78)

plugins/local-stt/src/ext.rs (7)
- crates/file/src/lib.rs (2)
  - `download_file_parallel` (135-254)
  - `s` (268-268)
- plugins/local-stt/js/bindings.gen.ts (4)
  - `WhisperModel` (61-61)
  - `SupportedSttModel` (59-59)
  - `ServerType` (57-57)
  - `GgmlBackend` (56-56)
- plugins/connector/js/bindings.gen.ts (1)
  - `Connection` (109-109)
- crates/whisper-local/src/lib.rs (1)
  - `list_ggml_backends` (22-67)
- plugins/local-stt/src/server/internal.rs (1)
  - `run_server` (61-88)
- plugins/local-stt/src/server/external.rs (1)
  - `run_server` (35-86)
- crates/am/src/model.rs (1)
  - `model_dir` (29-34)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: ci (windows, windows-latest)
🔇 Additional comments (22)
plugins/local-stt/src/commands.rs (8)
4-6: LGTM! The imports are properly updated to support the new unified model abstraction with `SttModelInfo`, `SupportedSttModel`, and `SUPPORTED_MODELS`.

24-26: Good refactoring to use the centralized model list. The function now correctly derives model info from `SUPPORTED_MODELS`, eliminating the need for hard-coded model lists and making the code more maintainable.

30-37: Consistent error handling pattern applied. The function correctly accepts `SupportedSttModel` and follows the established error handling pattern with `map_err`.

41-46: Properly updated to use the unified model type. The function signature is correctly updated to accept `SupportedSttModel` and maintains the simple boolean return pattern.

50-58: Async implementation aligns with the trait definition. The function is properly updated to be async and maintains consistent error handling with `map_err`.

62-66: Return type correctly updated to `SupportedSttModel`. The function properly returns the new unified model type with appropriate error handling.

70-77: Function correctly made async to match trait requirements. The set operation is properly awaited and error handling is consistent.

81-86: Server startup correctly accepts an optional model parameter. The function signature properly changed from `server_type` to `model`, aligning with the new model-centric architecture where the server type is derived from the model type.

plugins/local-stt/js/bindings.gen.ts (2)
16-42: API signatures correctly updated to use `SupportedSttModel`. All STT-related commands have been properly updated to accept `SupportedSttModel` instead of the legacy `WhisperModel`, enabling support for both Whisper and AM models through a unified interface.

55-59: New types properly support the multi-backend architecture. The addition of `AmModel`, `SttModelInfo`, and the union type `SupportedSttModel` provides a clean abstraction for handling multiple STT backends.

plugins/local-stt/src/ext.rs (9)

7-15: LGTM! Clean import organization for the new model abstraction. The imports are properly updated to support the new download interface and unified model types.

22-22: Good addition of the connection abstraction. The new `get_connection` method provides a cleaner API for obtaining server endpoints.

107-127: Model download check properly handles both model types. The implementation correctly delegates to AM-specific or Whisper-specific download verification logic.

130-140: Server type correctly derived from model type. The logic properly maps AM models to the External server and Whisper models to the Internal server.

160-166: Proper model type validation for internal server. The code correctly ensures only Whisper models are used with the internal server.

195-201: Proper model type validation for external server. The code correctly ensures only AM models are used with the external server.

309-323: Good extraction of common progress callback logic. The closure properly encapsulates the shared progress reporting logic, reducing duplication between AM and Whisper download branches.

332-336: Consistent error handling for AM model downloads. Good addition of error logging and failure signaling (-1) for AM downloads, matching the Whisper error handling pattern.

338-342: Proper error handling for tar unpacking. The code correctly handles and reports unpacking errors with appropriate logging and failure signaling.
apps/desktop/src-tauri/src/ext.rs (3)
3-6: Imports properly updated for unified model support. The new imports correctly bring in the types needed for the unified STT model architecture.

43-51: STT setup correctly uses the new model abstraction. The code properly defaults to the `QuantizedSmall` Whisper model and passes it to the server startup, ensuring the server knows which model to load.

57-57: LLM setup uses appropriate default model. The code correctly defaults to `HyprLLM` for the LLM setup.
Actionable comments posted: 0
🔭 Outside diff range comments (1)
plugins/webhook/openapi.json (1)
396-399: Add top-level security to satisfy CKV_OPENAPI_4 and set a secure default

Static analysis flags the absence of a global security rule. Defining a top-level default (e.g., API key) improves consistency and ensures unprotected operations aren't added by mistake. Operation-level security (like webhook_signature) will continue to override the default per OpenAPI spec.
Apply this diff to add a global security default:
  },
+ "security": [
+   { "api_key": [] }
+ ],
  "tags": [
🧹 Nitpick comments (3)
plugins/webhook/openapi.json (3)
10-12: Populate or remove empty license.name

The InfoObject license name is an empty string. Either use a valid SPDX identifier (e.g., MIT, Apache-2.0) or omit the license block if not applicable.
For example:
  "license": {
-   "name": ""
+   "name": "Proprietary"
  },
298-301: Strengthen WebhookEvent.data schema with explicit typing (union of known events)

Currently, data lacks a schema and is only a description. Define it as a union of known event payloads to improve validation and client generation.

- "data": {
-   "description": "Event payload"
- },
+ "data": {
+   "description": "Event payload",
+   "oneOf": [
+     { "$ref": "#/components/schemas/NoteEvent" },
+     { "$ref": "#/components/schemas/RecordingEvent" },
+     { "$ref": "#/components/schemas/TranscriptionEvent" }
+   ]
+ },

If payloads can include additional event types, consider adding a catch-all `{ "type": "object", "additionalProperties": true }` to the oneOf list.
371-380: Align WebhookVerification.timestamp type with description

Description says "Unix timestamp" but the type is string. Prefer integer with int64 for clarity and better tooling support.

  "timestamp": {
-   "type": "string",
-   "description": "Unix timestamp",
-   "example": "1704880200"
+   "type": "integer",
+   "format": "int64",
+   "description": "Unix timestamp (seconds since epoch)",
+   "example": 1704880200
  }
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- apps/desktop/src/components/editor-area/note-header/listen-button.tsx (2 hunks)
- apps/desktop/src/locales/en/messages.po (7 hunks)
- apps/desktop/src/locales/ko/messages.po (7 hunks)
- crates/transcribe-whisper-local/src/service/streaming.rs (1 hunks)
- crates/whisper-local/src/error.rs (1 hunks)
- plugins/webhook/openapi.json (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- crates/transcribe-whisper-local/src/service/streaming.rs
🚧 Files skipped from review as they are similar to previous changes (4)
- apps/desktop/src/components/editor-area/note-header/listen-button.tsx
- crates/whisper-local/src/error.rs
- apps/desktop/src/locales/ko/messages.po
- apps/desktop/src/locales/en/messages.po
🧰 Additional context used
🪛 Checkov (3.2.334)
plugins/webhook/openapi.json
[HIGH] 1-407: Ensure that the global security field has rules defined
(CKV_OPENAPI_4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (windows, windows-latest)
- GitHub Check: ci (macos, macos-latest)
🔇 Additional comments (1)
plugins/webhook/openapi.json (1)
407-407: EOF newline addition — LGTMThanks for fixing the missing newline at EOF.
Actionable comments posted: 5
🔭 Outside diff range comments (2)
apps/desktop/src/components/right-panel/hooks/useTranscript.ts (1)
57-66: Fix async listener cleanup race to prevent leaked handlers

If the effect unmounts before listen() resolves, unlisten remains null at cleanup and the handler leaks when fn arrives later.
Apply this diff to guard disposal and immediately unlisten if already cleaned up:
  let unlisten: (() => void) | null = null;
+ let disposed = false;

- listenerEvents.sessionEvent.listen(({ payload }) => {
+ listenerEvents.sessionEvent.listen(({ payload }) => {
    if (payload.type === "finalWords") {
      setFinalWords((existing) => [...existing, ...payload.words]);
    } else if (payload.type === "partialWords") {
      setPartialWords((payload.words as Word[]).map(w => ({ ...w, confidence: -1 })));
    }
- }).then((fn) => {
-   unlisten = fn;
- });
+ }).then((fn) => {
+   if (disposed) {
+     // effect already cleaned up; immediately unlisten
+     fn();
+     return;
+   }
+   unlisten = fn;
+ });

And in the cleanup (below), set disposed = true before calling unlisten:

- return () => {
-   if (unlisten) {
-     unlisten();
-   }
- };
+ return () => {
+   disposed = true;
+   if (unlisten) {
+     unlisten();
+   }
+ };

packages/tiptap/src/transcript/utils.ts (1)
124-138: Preserve interim signal when parsing editor content

`fromEditorToWords` currently drops the interim mark and forces confidence: null for all words, making serialization lossy and breaking round-trips for interim tokens.
Apply this diff to recognize the interim mark and map it back to the negative-confidence sentinel:
- for (const node of speakerBlock.content) {
+ for (const node of speakerBlock.content) {
    if (node.type !== "text" || !node.text) {
      continue;
    }
-   const wordTexts = node.text.split(/\s+/).filter(Boolean);
+   const wordTexts = node.text.split(/\s+/).filter(Boolean);
+   const hasInterim =
+     Array.isArray((node as any).marks) &&
+     (node as any).marks.some((m: any) => m.type === "interim");
    for (const wordText of wordTexts) {
      words.push({
        text: wordText,
        speaker,
-       confidence: null,
+       confidence: hasInterim ? -1 : null,
        start_ms: null,
        end_ms: null,
      });
    }
  }
♻️ Duplicate comments (4)
apps/desktop/src/components/right-panel/hooks/useTranscript.ts (1)
58-61: Clear partialWords upon finalWords to avoid stale overlays

Finalized words arrive but partialWords aren't cleared, leaving stale partial overlays above finalized text.
Apply this diff:
  if (payload.type === "finalWords") {
-   setFinalWords((existing) => [...existing, ...payload.words]);
+   setFinalWords((existing) => [...existing, ...payload.words]);
+   // clear partials when final words are committed
+   setPartialWords([]);
  } else if (payload.type === "partialWords") {
    setPartialWords((payload.words as Word[]).map(w => ({ ...w, confidence: -1 })));
  }

plugins/local-stt/src/server/external.rs (1)
43-49: Hardcoded port 6942 is brittle—make it configurable via env (minimal change)

Port collisions are likely on fixed ports. At least read from an env var with a sane default to avoid breaking call sites now; you can evolve to param injection later.
- let port = 6942;
+ // Allow override for CI/dev; default remains stable.
+ let port: u16 = std::env::var("HYPR_AM_PORT")
+     .ok()
+     .and_then(|s| s.parse().ok())
+     .unwrap_or(6942);
  let (mut rx, child) = cmd.args(["--port", &port.to_string()]).spawn()?;
- let base_url = format!("http://localhost:{}", port);
+ let base_url = format!("http://localhost:{}", port);
  let (shutdown_tx, mut shutdown_rx) = tokio::sync::watch::channel(());
  let client = hypr_am::Client::new(&base_url);
48-51: Avoid opacity on text; use color-only for accessibility and consistency with prior guidance

Opacity on text compounds with parent opacity and harms contrast. Keep the presence-only selector, but drop opacity.
Apply this diff:
- span[data-interim] {
-   opacity: 0.3;
-   color: #666;
- }
+ span[data-interim] {
+   /* Muted color only to preserve contrast of nested content */
+   color: #777; /* consider a design token if available */
+ }
417-422: Model switch can leave system inconsistent if server start fails; add rollback
`set_current_model` persists the new model, stops servers, then attempts to start. If `start_server` fails, the persisted model changes but no server is running.

- let store = self.local_stt_store();
- store.set(crate::StoreKey::DefaultModel, model.clone())?;
- self.stop_server(None).await?;
- self.start_server(Some(model)).await?;
- Ok(())
+ let store = self.local_stt_store();
+ let prev_model = store.get(crate::StoreKey::DefaultModel)?;
+ store.set(crate::StoreKey::DefaultModel, model.clone())?;
+ self.stop_server(None).await?;
+
+ if let Err(err) = self.start_server(Some(model.clone())).await {
+     // Roll back persisted model on failure
+     store.set(crate::StoreKey::DefaultModel, prev_model)?;
+     return Err(err);
+ }
+ Ok(())

Optional: also restart the previous server on rollback to restore service continuity.
🧹 Nitpick comments (9)
apps/desktop/src/components/right-panel/hooks/useTranscript.ts (3)
28-30: Chronological sort is good; consider pushing unknown timestamps to the end to reduce jumpiness

Words without start_ms currently sort to the very beginning; this can cause early "jumps" in the UI. Prefer placing unknown-timestamp words last.
Apply this diff:
- return allWords.sort((a, b) => (a.start_ms ?? 0) - (b.start_ms ?? 0));
+ return allWords.sort(
+   (a, b) =>
+     (a.start_ms ?? Number.POSITIVE_INFINITY) - (b.start_ms ?? Number.POSITIVE_INFINITY),
+ );
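The end-sorting comparator can be exercised standalone. A minimal sketch, where `TimedWord` is a hypothetical stand-in for the app's `Word` type; using MAX_SAFE_INTEGER rather than Infinity keeps the comparator well-defined when two timestamps are both unknown (Infinity - Infinity would be NaN):

```typescript
interface TimedWord {
  text: string;
  start_ms: number | null;
}

// Unknown timestamps sort to the end; ties among unknowns keep their
// relative order because Array.prototype.sort is stable.
function sortWords(words: TimedWord[]): TimedWord[] {
  const key = (w: TimedWord) => w.start_ms ?? Number.MAX_SAFE_INTEGER;
  return [...words].sort((a, b) => key(a) - key(b));
}
```
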
51-53: Use isLive in guard and deps to reduce duplication and re-subscribes

You already compute isLive; use it in the effect guard and dependency array to simplify and avoid unnecessary re-subscriptions.
Apply this diff:
- if (ongoingSessionState.status !== "running_active" || ongoingSessionState.sessionId !== sessionId) {
+ if (!isLive) {
    return;
  }
@@
- }, [ongoingSessionState.status, ongoingSessionState.sessionId, sessionId]);
+ }, [isLive]);

Also applies to: 72-72
61-61: Confirm downstream handles confidence = -1 as a sentinel for partials

If any consumer clamps confidence to [0,1] or interprets negatives as errors, this may regress UI logic. Verify callers expect this sentinel.
If negatives aren’t universally safe, prefer a separate flag (e.g., isPartial) or set confidence to 0 while tracking partial-ness separately.
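If the sentinel turns out to be unsafe, the separate-flag idea might look like this sketch (the `Word` shape and the `isPartial` name here are hypothetical, not the app's actual bindings):

```typescript
interface Word {
  text: string;
  confidence: number | null;
  isPartial?: boolean; // explicit flag instead of the -1 sentinel
}

// Tags incoming partial words without overloading `confidence`.
function markPartial(words: Word[]): Word[] {
  return words.map((w) => ({ ...w, isPartial: true }));
}

// Consumers can keep clamping confidence to [0, 1] safely.
function displayConfidence(w: Word): number | null {
  if (w.confidence === null) return null;
  return Math.min(1, Math.max(0, w.confidence));
}
```

This keeps `confidence` strictly in its documented range while partial-ness travels as its own field.
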
plugins/local-stt/src/server/external.rs (3)
10-17: Avoid unwrap in health(); simplify with match/if-let

Use a direct match to avoid unwrap and tighten readability.

- pub async fn health(&self) -> bool {
-     let res = self.client.status().await;
-     if res.is_err() {
-         return false;
-     }
-
-     matches!(res.unwrap().status, hypr_am::ServerStatusType::Ready)
- }
+ pub async fn health(&self) -> bool {
+     match self.client.status().await {
+         Ok(s) => matches!(s.status, hypr_am::ServerStatusType::Ready),
+         Err(_) => false,
+     }
+ }
25-36: Inline the init result; drop the temporary binding

Minor cleanup: return directly to reduce one binding.

- let r = self
-     .client
-     .init(hypr_am::InitRequest::new(self.api_key.clone()).with_model(model, models_dir))
-     .await?;
-
- Ok(r)
+ self.client
+     .init(hypr_am::InitRequest::new(self.api_key.clone()).with_model(model, models_dir))
+     .await
+     .map_err(Into::into)
50-81: Safer event handling and less noisy logs
- Remove unwrap by matching Option directly.
- Consider debug for stdout and warn for stderr to reduce noise and surface problems.
- tokio::spawn(async move {
-     loop {
-         tokio::select! {
-             _ = shutdown_rx.changed() => {
-                 tracing::info!("external_server_shutdown");
-                 break;
-             }
-             event = rx.recv() => {
-                 if event.is_none() {
-                     break;
-                 }
-
-                 match event.unwrap() {
-                     tauri_plugin_shell::process::CommandEvent::Stdout(bytes) => {
-                         if let Ok(text) = String::from_utf8(bytes) {
-                             let text = text.trim();
-                             tracing::info!("{}", text);
-                         }
-                     }
-                     tauri_plugin_shell::process::CommandEvent::Stderr(bytes) => {
-                         if let Ok(text) = String::from_utf8(bytes) {
-                             let text = text.trim();
-                             tracing::info!("{}", text);
-                         }
-                     }
-                     _ => {}
-                 }
-             }
-         }
-     }
- });
+ tokio::spawn(async move {
+     loop {
+         tokio::select! {
+             _ = shutdown_rx.changed() => {
+                 tracing::info!("external_server_shutdown");
+                 break;
+             }
+             event = rx.recv() => {
+                 match event {
+                     Some(tauri_plugin_shell::process::CommandEvent::Stdout(bytes)) => {
+                         if let Ok(text) = String::from_utf8(bytes) {
+                             let text = text.trim();
+                             tracing::debug!("{}", text);
+                         }
+                     }
+                     Some(tauri_plugin_shell::process::CommandEvent::Stderr(bytes)) => {
+                         if let Ok(text) = String::from_utf8(bytes) {
+                             let text = text.trim();
+                             tracing::warn!("{}", text);
+                         }
+                     }
+                     Some(_) => {}
+                     None => break,
+                 }
+             }
+         }
+     }
+ });
169-174: Flatten promise chain with async/await for readability

Avoid nested thens in the file watcher callback; using async/await keeps the control flow straightforward and consistent with modern TS patterns.

- return watch(sttPath, (_event) => {
-   localSttCommands.stopServer(null).then((stopped) => {
-     if (stopped) {
-       localSttCommands.getCurrentModel().then((model) => {
-         localSttCommands.startServer(model);
-       });
-     }
-   });
- }, { delayMs: 1000 });
+ return watch(
+   sttPath,
+   async (_event) => {
+     const stopped = await localSttCommands.stopServer(null);
+     if (stopped) {
+       const model = await localSttCommands.getCurrentModel();
+       await localSttCommands.startServer(model);
+     }
+   },
+   { delayMs: 1000 },
+ );
107-125: Whisper download check: simplify and tighten

The loop over a single tuple is future-proof but currently redundant. Consider simplifying to direct checks; also prefer early returns with explicit paths for readability.

- SupportedSttModel::Whisper(model) => {
-     let model_path = self.models_dir().join(model.file_name());
-
-     for (path, expected) in [(model_path, model.model_size_bytes())] {
-         if !path.exists() {
-             return Ok(false);
-         }
-
-         let actual = hypr_file::file_size(path)?;
-         if actual != expected {
-             return Ok(false);
-         }
-     }
-
-     Ok(true)
- }
+ SupportedSttModel::Whisper(model) => {
+     let path = self.models_dir().join(model.file_name());
+     if !path.exists() {
+         return Ok(false);
+     }
+     let actual = hypr_file::file_size(&path)?;
+     Ok(actual == model.model_size_bytes())
+ }
82-95: Silent error handling may hide important issues. The logging function silently ignores all errors during file operations and serialization. This could make debugging difficult if logging fails.
Consider adding minimal error logging:
  fn log(id: uuid::Uuid, response: &owhisper_interface::StreamResponse) {
      use std::fs::OpenOptions;
      use std::io::Write;

      if let Ok(mut file) = OpenOptions::new().create(true).append(true).open(
          dirs::home_dir()
              .unwrap()
              .join(format!("transcript_{}.jsonl", id)),
      ) {
          if let Ok(json) = serde_json::to_string(response) {
-             let _ = writeln!(file, "{}", json);
+             if let Err(e) = writeln!(file, "{}", json) {
+                 eprintln!("Failed to write transcript log: {}", e);
+             }
+         } else {
+             eprintln!("Failed to serialize transcript response");
          }
+     } else {
+         eprintln!("Failed to open transcript log file");
      }
  }
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (14)
- apps/desktop/src/components/right-panel/hooks/useTranscript.ts (2 hunks)
- apps/desktop/src/routes/app.tsx (1 hunks)
- crates/whisper-local/src/error.rs (1 hunks)
- packages/tiptap/src/styles/transcript.css (1 hunks)
- packages/tiptap/src/transcript/index.tsx (2 hunks)
- packages/tiptap/src/transcript/marks.ts (2 hunks)
- packages/tiptap/src/transcript/utils.ts (2 hunks)
- plugins/listener/Cargo.toml (2 hunks)
- plugins/listener/assets/f7952672-5d18-4f75-8aa0-74ab8b02dac3.jsonl (1 hunks)
- plugins/listener/src/manager.rs (5 hunks)
- plugins/local-stt/Cargo.toml (2 hunks)
- plugins/local-stt/src/ext.rs (11 hunks)
- plugins/local-stt/src/server/external.rs (1 hunks)
- plugins/local-stt/src/server/internal.rs (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- plugins/listener/assets/f7952672-5d18-4f75-8aa0-74ab8b02dac3.jsonl
🚧 Files skipped from review as they are similar to previous changes (3)
- plugins/local-stt/src/server/internal.rs
- plugins/local-stt/Cargo.toml
- plugins/listener/Cargo.toml
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit Configuration File
**/*.{js,ts,tsx,rs}: 1. No error handling.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- packages/tiptap/src/transcript/index.tsx
- apps/desktop/src/routes/app.tsx
- packages/tiptap/src/transcript/utils.ts
- crates/whisper-local/src/error.rs
- apps/desktop/src/components/right-panel/hooks/useTranscript.ts
- packages/tiptap/src/transcript/marks.ts
- plugins/local-stt/src/ext.rs
- plugins/local-stt/src/server/external.rs
- plugins/listener/src/manager.rs
🧬 Code Graph Analysis (3)
packages/tiptap/src/transcript/index.tsx (1)
packages/tiptap/src/transcript/marks.ts (1)
InterimMark (3-43)
plugins/local-stt/src/server/external.rs (3)
- plugins/local-stt/src/server/internal.rs (2)
  - health (55-58)
  - health (113-115)
- crates/am/src/client.rs (4)
  - status (25-34)
  - init (36-51)
  - new (11-16)
  - new (99-106)
- plugins/local-stt/js/bindings.gen.ts (1)
  - AmModel (55-55)
plugins/listener/src/manager.rs (1)
owhisper/owhisper-server/src/commands/run/state.rs (1)
words (80-83)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (macos, macos-latest)
- GitHub Check: ci (windows, windows-latest)
🔇 Additional comments (14)
plugins/local-stt/src/server/external.rs (2)
2-7: ServerHandle shape/visibility looks good

Publicly exposing only base_url while keeping api_key and client private is the right call; aligns with the new model-centric flow and prevents accidental key leakage.

83-89: Handle construction is coherent

`api_key`, `base_url`, and `client` are correctly wired into the handle; shutdown is kept private and used via terminate(), which is appropriate.
packages/tiptap/src/transcript/index.tsx (1)
13-14: LGTM: interim mark integration is correct

Import and extension registration for InterimMark look correct and consistent with the new mark implementation.
Also applies to: 55-56
packages/tiptap/src/transcript/utils.ts (1)
70-80: Confirm sentinel semantics: negative confidence denotes "interim" — repo scan

I ran the ripgrep you suggested; the only match is:
- packages/tiptap/src/transcript/utils.ts:70 —
`if (word.confidence !== null && word.confidence < 0) { ... }`

No other comparisons against `confidence` were found in packages. If negative confidence is the intended sentinel, please:

- Update the TypeScript types/docs/serialization to allow/describe negative values, or
- Prefer adding an explicit `interim` flag to avoid overloading `confidence`, and
- Audit all uses of `confidence` (e.g., `rg -n 'confidence' packages`) to ensure no code assumes confidence ∈ [0,1].

packages/tiptap/src/transcript/marks.ts (1)
26-33: LGTM: selector aligns with CSS and utils

parseHTML tag selector span[data-interim] matches the new CSS and utils encoder.
crates/whisper-local/src/error.rs (1)
7-9: Enum variant reorder is safe; semantics unchanged

Reordering `LocalWhisperError` within the enum does not affect behavior. Conversion via `#[from]` and Display-based serialization remain intact.

apps/desktop/src/routes/app.tsx (1)
169-174: Consider restart when server wasn't running

Currently, restart only occurs when `stopServer(null)` returns `true`. If no server was running (returns `false`), you might still want to start the current model after model file changes. Verify this matches the intended behavior.
plugins/local-stt/src/ext.rs (1)
234-238: AM server init result is ignored

The `server.init(am_model, data_dir).await` result is discarded. If init fails, we still store and return the server base URL, potentially leading to a non-functional server state.
plugins/listener/src/manager.rs (6)
36-36: LGTM! Improved logging with unique identifier. The logging now includes the unique ID, which will help with debugging and tracking specific transcript sessions.

58-68: LGTM! Clean logic for handling final transcripts. The logic correctly filters partial words that come after the final words and returns the appropriate `Diff` structure.

70-76: LGTM! Proper handling of partial transcripts. The logic correctly updates the partial words and returns an empty final words vector for non-final responses.
111-268: LGTM! Comprehensive test coverage with realistic data. The new test `test_f7952672_5d18_4f75_8aa0_74ab8b02dac3` provides excellent coverage of the transcript manager functionality with realistic speech recognition data, including proper testing of both partial and final transcript handling.

457-624: LGTM! Updated test expectations match new behavior. The test expectations have been properly updated to reflect the new `Diff`-based return structure, with mostly empty final_history entries as expected for the test data pattern.
3-3: Add UUID initialization to Default implementation. The `TranscriptManager` struct now includes an `id` field of type `uuid::Uuid`, but the `Default` implementation doesn't initialize it. This will cause a compilation error since `uuid::Uuid` doesn't implement `Default`.

Apply this diff to fix the initialization:

-#[derive(Debug, Default, Clone)]
+#[derive(Debug, Clone)]
 pub struct TranscriptManager {
     id: uuid::Uuid,
     partial_words: Vec<owhisper_interface::Word>,
 }

And add a proper `Default` implementation:

+impl Default for TranscriptManager {
+    fn default() -> Self {
+        Self {
+            id: uuid::Uuid::new_v4(),
+            partial_words: Vec::new(),
+        }
+    }
+}

Likely an incorrect or invalid review comment.
  interim: {
    default: null,
    parseHTML: element => {
-     const value = element.getAttribute("data-confidence");
+     const value = element.getAttribute("data-interim");
      return value ? parseFloat(value) : null;
    },
🛠️ Refactor suggestion
Fix parseHTML: parsing “true” with parseFloat yields NaN
data-interim is used as a presence-only marker. parseFloat("true") returns NaN, causing attributes.interim to become NaN, which then serializes as data-interim="NaN".
Apply this diff to treat presence as boolean true:
- interim: {
+ interim: {
default: null,
parseHTML: element => {
- const value = element.getAttribute("data-interim");
- return value ? parseFloat(value) : null;
+ const value = element.getAttribute("data-interim");
+ // Presence-only attribute: any value (incl. empty string) => true
+ return value !== null && value !== "false" && value !== "0";
},
renderHTML: attributes => {
- if (attributes.interim === null) {
- return {};
- }
- return {
- "data-interim": attributes.interim,
- };
+ if (!attributes.interim) return {};
+ // Emit presence-only attribute
+ return { "data-interim": "" };
},
🤖 Prompt for AI Agents
In packages/tiptap/src/transcript/marks.ts around lines 8 to 13, the parseHTML
implementation uses parseFloat on the data-interim attribute which yields NaN
for presence-only values like "true"; change it to treat presence as a boolean
flag by returning true when the attribute exists (use
element.hasAttribute("data-interim") or check value !== null) and return null
when it does not, replacing the parseFloat logic so attributes.interim becomes
true instead of NaN.
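The presence-only parsing rule is easy to check in isolation. A minimal sketch mirroring the suggested semantics (assumption: any attribute value except "false" and "0" counts as set):

```typescript
// Mirrors the suggested parseHTML logic for a presence-only attribute.
function parseInterim(value: string | null): boolean {
  return value !== null && value !== "false" && value !== "0";
}
```
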
  if is_final {
      self.process_final_words(&mut diff, words);
      let last_final_word_end = words.last().unwrap().end;
Potential panic on empty words vector.
Line 57 calls .unwrap() on words.last() without checking if the words vector is empty. This could panic if a final response contains no words.
Apply this diff to add proper error handling:
- let last_final_word_end = words.last().unwrap().end;
+ let last_final_word_end = match words.last() {
+ Some(word) => word.end,
+ None => return Diff::default(),
🤖 Prompt for AI Agents
In plugins/listener/src/manager.rs around line 57, the code calls
words.last().unwrap() which will panic if the words vector is empty; update this
to handle the empty case explicitly (e.g., use match or if let to check
words.last(), or check words.is_empty() first) and return or propagate an
appropriate error/early-return instead of unwrapping; ensure downstream code
that uses last_final_word_end is only executed when a valid last word exists and
add a clear error log or Result return to handle the empty-words case.
  if let Ok(mut file) = OpenOptions::new().create(true).append(true).open(
      dirs::home_dir()
          .unwrap()
          .join(format!("transcript_{}.jsonl", id)),
  ) {
Potential panic on missing home directory.
Line 88 calls .unwrap() on dirs::home_dir(), which could panic if the home directory cannot be determined (e.g., in some containerized environments).
Apply this diff to handle the missing home directory gracefully:
- if let Ok(mut file) = OpenOptions::new().create(true).append(true).open(
- dirs::home_dir()
- .unwrap()
- .join(format!("transcript_{}.jsonl", id)),
- ) {
+ if let Some(home_dir) = dirs::home_dir() {
+ if let Ok(mut file) = OpenOptions::new().create(true).append(true).open(
+ home_dir.join(format!("transcript_{}.jsonl", id)),
+ ) {
+ if let Ok(json) = serde_json::to_string(response) {
+ let _ = writeln!(file, "{}", json);
+ }
+ }
🤖 Prompt for AI Agents
In plugins/listener/src/manager.rs around lines 86 to 90, the code calls
dirs::home_dir().unwrap() which can panic if the home directory is unavailable;
change this to handle the Option safely by checking dirs::home_dir() first
(e.g., match or if let), and if None either fall back to a safe directory such
as std::env::current_dir() or return an error/result early; then build the path
from that safe directory and open the file, ensuring you propagate or log any
errors instead of unwrapping.
  async fn get_connection(&self) -> Result<Connection, crate::Error> {
      let model = self.get_current_model()?;

      match model {
          SupportedSttModel::Am(_) => {
              let existing_api_base = {
                  let state = self.state::<crate::SharedState>();
                  let guard = state.lock().await;
                  guard.external_server.as_ref().map(|s| s.base_url.clone())
              };

              let conn = match existing_api_base {
                  Some(api_base) => Connection { base_url: api_base },
                  None => {
                      let api_base = self.start_server(Some(model)).await?;
                      Connection { base_url: api_base }
                  }
              };
              Ok(conn)
          }
Potential use-after-move of model inside match
`match model` moves `model`. Using `model` inside the arms and passing it to `start_server(Some(model))` risks a "use of moved value" error unless `SupportedSttModel` is `Copy`. Make the match borrow and clone when needed.
- let model = self.get_current_model()?;
-
- match model {
+ let model = self.get_current_model()?;
+
+ match &model {
SupportedSttModel::Am(_) => {
let existing_api_base = {
let state = self.state::<crate::SharedState>();
let guard = state.lock().await;
guard.external_server.as_ref().map(|s| s.base_url.clone())
};
let conn = match existing_api_base {
Some(api_base) => Connection { base_url: api_base },
None => {
- let api_base = self.start_server(Some(model)).await?;
+ let api_base = self.start_server(Some(model.clone())).await?;
Connection { base_url: api_base }
}
};
Ok(conn)
}
- SupportedSttModel::Whisper(_) => {
+ SupportedSttModel::Whisper(_) => {
let existing_api_base = {
let state = self.state::<crate::SharedState>();
let guard = state.lock().await;
guard.internal_server.as_ref().map(|s| s.base_url.clone())
};
let conn = match existing_api_base {
Some(api_base) => Connection { base_url: api_base },
None => {
- let api_base = self.start_server(Some(model)).await?;
+ let api_base = self.start_server(Some(model.clone())).await?;
Connection { base_url: api_base }
}
};
Ok(conn)
}
}Also applies to: 95-103
🤖 Prompt for AI Agents
In plugins/local-stt/src/ext.rs around lines 68 to 87 (and similarly 95 to 103),
the code matches on `model` which moves it, then later tries to use it in
`start_server(Some(model))`; change the match to avoid moving `model` by
matching a reference (e.g., `match &model` or `match model_ref`) or clone the
value when passing into `start_server` (e.g.,
`start_server(Some(model.clone()))`) and ensure `SupportedSttModel` implements
Clone if you choose cloning; update both occurrences to use the same non-moving
pattern.
  let internal_url = if let Some(server) = &guard.internal_server {
      if server.health().await {
          Some(server.base_url.clone())
      } else {
          None
      }
  } else {
      None
  };

  let external_url = if let Some(server) = &guard.external_server {
      if server.health().await {
          Some(server.base_url.clone())
      } else {
          None
      }
  } else {
      None
  };
🛠️ Refactor suggestion
Don’t hold mutex across await in health checks (risk of deadlock/latency)
`server.health().await` is called while holding the `SharedState` lock. Avoid awaiting while holding a mutex to prevent deadlocks and unnecessary contention.
- let guard = state.lock().await;
-
- let internal_url = if let Some(server) = &guard.internal_server {
- if server.health().await {
- Some(server.base_url.clone())
- } else {
- None
- }
- } else {
- None
- };
-
- let external_url = if let Some(server) = &guard.external_server {
- if server.health().await {
- Some(server.base_url.clone())
- } else {
- None
- }
- } else {
- None
- };
+ // Clone handles out of the lock, then await health checks.
+ let (internal_server, external_server) = {
+ let guard = state.lock().await;
+ (guard.internal_server.clone(), guard.external_server.clone())
+ };
+
+ let internal_url = if let Some(server) = internal_server {
+ if server.health().await {
+ Some(server.base_url.clone())
+ } else {
+ None
+ }
+ } else {
+ None
+ };
+
+ let external_url = if let Some(server) = external_server {
+ if server.health().await {
+ Some(server.base_url.clone())
+ } else {
+ None
+ }
+ } else {
+ None
+ };
If the server handles aren’t Clone, extract only what’s needed for the health check into cloneable types (e.g., an Arc around the handle) or restructure health() to accept a lightweight reference that doesn’t require holding the store lock.
Also applies to: 309-314
🤖 Prompt for AI Agents
In plugins/local-stt/src/ext.rs around lines 289-307 (and similarly 309-314),
the code calls server.health().await while holding the SharedState lock; avoid
awaiting while holding a mutex. Fix by extracting the minimal cloneable data or
an Arc reference to the server handles while the lock is held (e.g., clone the
Option<Arc<ServerHandle>> or base_url), then drop the lock and perform the async
health() await calls; after the awaits, reacquire the lock if you need to mutate
SharedState and set internal_url/external_url from the results. Ensure server
handles used for health are Clone or wrapped in Arc so you never await while the
original mutex is held.
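The clone-out-of-the-lock pattern can be sketched with std types. `ServerHandle`, `SharedState`, and the synchronous `health()` below are hypothetical stand-ins; the real code uses an async mutex and awaits `health()`, but the shape is the same — the slow call happens only after the guard is dropped:

```rust
use std::sync::{Arc, Mutex};

#[derive(Clone)]
struct ServerHandle {
    base_url: String,
}

impl ServerHandle {
    // Stand-in for the async `health()` check.
    fn health(&self) -> bool {
        !self.base_url.is_empty()
    }
}

#[derive(Default)]
struct SharedState {
    internal_server: Option<ServerHandle>,
    external_server: Option<ServerHandle>,
}

fn healthy_urls(state: &Arc<Mutex<SharedState>>) -> (Option<String>, Option<String>) {
    // Hold the lock only long enough to clone the handles out.
    let (internal, external) = {
        let guard = state.lock().unwrap();
        (guard.internal_server.clone(), guard.external_server.clone())
    }; // guard dropped here, before any slow (or awaited) call

    // Health checks run without the state lock held.
    let check = |s: Option<ServerHandle>| s.filter(|s| s.health()).map(|s| s.base_url);
    (check(internal), check(external))
}

fn main() {
    let state = Arc::new(Mutex::new(SharedState {
        internal_server: Some(ServerHandle { base_url: "http://127.0.0.1:8080".into() }),
        external_server: None,
    }));
    let (internal_url, external_url) = healthy_urls(&state);
    assert_eq!(internal_url.as_deref(), Some("http://127.0.0.1:8080"));
    assert_eq!(external_url, None);
}
```

Deriving `Clone` on the handle (or wrapping it in `Arc`, as the comment suggests) is what makes the short critical section possible.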
Summary by cubic
Added support for Argmax (AM) speech models for pro users, including model download, selection, and integration into the local STT plugin.
New Features
Refactors