feat: strategic research items — positioning, MCP improvements, TUI e…#73
Conversation
…nhancements

- Reframe README messaging: subtraction-first unification story
- Add onboarding-guide persona for setup abandonment reduction
- MCP: Ollama native structured output for commit-msg task
- MCP: fast-path pre-checks bypass Ollama for trivial diffs
- MCP: JSONL task logging for TUI observability
- TUI: VRAM auto-detection (NVIDIA, AMD, Apple Silicon)
- TUI: Task Log screen showing MCP tool invocations
- Add cloud billing guide (docs/) with official pricing links
- Tests: GPU detection, model recommendations, task log view
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings. Use the following commands to manage reviews:
📝 Walkthrough

Docs refocused to a single-configuration narrative; new cloud-billing and onboarding guides added. MCP server logs per-tool JSONL, parses structured Ollama outputs, records timing/errors, and adds a commit-message fast path. TUI adds a Task Log screen, GPU detection, and recommended model suggestions.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant MCP as MCP Server
    participant Ollama
    participant Logger as Task Logger
    participant Storage as User Config Dir
    User->>MCP: Invoke tool (e.g., ollama_commit_msg)
    alt Fast-path determined locally
        MCP->>MCP: fastPathCommitMsg -> result
        MCP->>Logger: Create log entry (status, elapsed)
    else Requires LLM call
        MCP->>Ollama: callOllama(task, format, input)
        Ollama-->>MCP: Structured JSON or text
        MCP->>MCP: Parse structured output or fallback
        MCP->>Logger: Create log entry (status, elapsed, error)
    end
    Logger->>Storage: Append JSONL entry
    MCP-->>User: Return tool result
```
```mermaid
sequenceDiagram
    participant User
    participant TUI as TUI (Health Check)
    participant GPU as GPU Detector
    participant Platform as System API
    User->>TUI: Open Health Check
    TUI->>GPU: detectGPU()
    GPU->>Platform: Query (nvidia-smi / system_profiler / sysfs)
    Platform-->>GPU: GPU info & memory
    GPU->>GPU: Map to platform & recommended models
    GPU-->>TUI: gpuInfo + recommended supervisor/logic
    TUI->>User: Display GPU status & recommendations
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.11.4): level=error msg="[linters_context] typechecking error: pattern ./...: directory prefix . does not contain main module or its selected dependencies"
Actionable comments posted: 2
🧹 Nitpick comments (4)
docs/cloud-billing-guide.md (1)
Lines 13-19: Consider verifying pricing URLs periodically. The official pricing links are valuable references. Since pricing pages can change URLs over time, consider adding a note to verify these links if the guide hasn't been updated recently, or integrate link checking into CI.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/cloud-billing-guide.md` around lines 13 - 19, add a short note under the "## Official Pricing References" section advising readers to periodically verify the listed pricing URLs (Anthropic/Claude, Google/Gemini, OpenAI) because pricing pages can change, and optionally mention enabling automated link-checking in CI to catch broken/updated links; update the markdown near the existing list items to include this verification guidance.

personas/onboarding-guide.md (1)
Lines 76-78: Curl-pipe-to-shell pattern has inherent risks. While `curl -fsSL https://ollama.com/install.sh | sh` is the official Ollama installation method, curl-pipe-to-shell is generally discouraged for security-conscious users. Consider adding a note about verifying the script or pointing to Ollama's documentation for alternative installation methods.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@personas/onboarding-guide.md` around lines 76 - 78, replace the raw curl-pipe-to-shell install suggestion with a safer note: keep the existing commands but add a short caution after the Linux line explaining the security risks of piping remote scripts into sh, and advise users to inspect the script first (e.g., `curl -fsSL https://ollama.com/install.sh -o install.sh && less install.sh`) or to follow alternative installation methods linked on Ollama's official docs page; ensure the snippet still lists the macOS brew/download options and add a link to Ollama's installation documentation for verification/alternatives.

tools/tui/main.go (2)
Lines 762-789: Consider log rotation for unbounded file growth. The task log file (`~/.config/nexus/logs/mcp-tasks.jsonl`) will grow indefinitely. While capping display to 50 entries is good for the UI, the file itself could become large over time. Consider adding log rotation (e.g., truncate to last N entries periodically, or rotate by date) in a future iteration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/tui/main.go` around lines 762 - 789, loadTaskLog currently reads the entire mcp-tasks.jsonl file without bounding its on-disk size; implement simple rotation/truncation to prevent unbounded growth by trimming the file to the last N lines (e.g., 5000) or rotating by date whenever the file exceeds a size threshold. Inside loadTaskLog or a helper referenced from it (use logFile, loadTaskLog, taskLogEntry, taskLogMsg to locate), detect file size/line count, read only the tail (last N lines) or read all and then write back only the tail to the same path (atomic write via temp file + rename) to truncate/rotate; ensure concurrent reads are safe by using a temp file + os.Rename and preserve file permissions.
Lines 119-141: Apple Silicon detection may false-positive on Intel Macs. The `system_profiler SPHardwareDataType` command will succeed on Intel Macs, but won't have the "Chip:" line. However, if "Memory:" is found without "Chip:", the function returns an empty `info.Name`, which correctly triggers the `if info.Name != ""` check at line 138, falling through to AMD detection. The logic is correct, but consider adding a check for Intel processors to return early, or document that Intel Macs will fall through to "unknown" platform.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/tui/main.go` around lines 119 - 141, The system_profiler parsing block can misclassify Intel Macs as Apple Silicon when only Memory is present; update the parsing in tools/tui/main.go (the gpuInfo parsing block that sets info.Platform = "apple") to detect Intel-specific lines (e.g., lines starting with "Processor Name:" or containing "Intel") and, when found, set info.Platform = "intel" and return that info (or return an "unknown" platform sentinel) immediately so Intel machines don't fall through as Apple; reference the gpuInfo struct and the parsing block that checks "Chip:" and "Memory:" to implement this early-detection/early-return behavior.
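The early-return classification the comment asks for could look like the sketch below. The "Chip:" and "Processor Name:" field names match real `system_profiler SPHardwareDataType` output; the platform strings and function name are illustrative, not taken from the project's code.

```go
package main

import (
	"fmt"
	"strings"
)

// classifyMacHardware scans system_profiler SPHardwareDataType output and
// classifies Apple Silicon vs Intel before any fallback detection runs.
// Apple Silicon reports a "Chip:" line; Intel Macs report "Processor Name:".
func classifyMacHardware(out string) (platform, chip string) {
	for _, line := range strings.Split(out, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "Chip:"):
			// e.g. "Chip: Apple M2 Pro" — definitely Apple Silicon.
			return "apple", strings.TrimSpace(strings.TrimPrefix(line, "Chip:"))
		case strings.HasPrefix(line, "Processor Name:"):
			// Intel Macs have this line and no "Chip:" line, so return
			// early instead of falling through to other detectors.
			return "intel", strings.TrimSpace(strings.TrimPrefix(line, "Processor Name:"))
		}
	}
	return "unknown", ""
}

func main() {
	p, c := classifyMacHardware("  Chip: Apple M2 Pro\n  Memory: 32 GB\n")
	fmt.Println(p, c) // apple Apple M2 Pro
}
```

The key design point is that a "Memory:" line alone never decides the platform, which is exactly the false-positive path the review flags.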
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@personas/onboarding-guide.md`:
- Around line 53-62: The onboarding-guide model recommendation table is out of
sync with the TUI implementation: update the 16 GB VRAM row so the Logic Model
matches the RecommendedModels() implementation in tools/tui/main.go (which
returns "llama3.1:8b" for the 16GB tier); locate the 16 GB row in
personas/onboarding-guide.md and change its Logic Model entry from "qwen2.5:14b"
to "llama3.1:8b" (or alternatively update RecommendedModels() if the code is
wrong) so both the markdown and the RecommendedModels() function agree.
In `@tools/tui/main.go`:
- Around line 168-184: The RecommendedModels method on type gpuInfo currently
groups 16GB with the <=18GB case, causing a mismatch with the onboarding guide;
add a dedicated 16GB branch in gpuInfo.RecommendedModels (e.g., a case mem <=
16*1024) that returns "qwen2.5-coder:7b" and "qwen2.5:14b", or alternatively
update the doc to match the existing logic—locate the function RecommendedModels
and either insert the new mem <= 16*1024 case returning the doc-specified models
or change the documentation to reflect the current <=18*1024 return values.
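The dedicated 16 GB branch the comments describe can be sketched as a tiered switch. Only the 16 GB tier's models (`qwen2.5-coder:7b` / `qwen2.5:14b`) and the existing tier's `llama3.1:8b` come from the review text; the other thresholds, model names, and the function signature are hypothetical placeholders, not the project's actual table.

```go
package main

import "fmt"

// recommendedModels sketches the tier lookup with a dedicated 16 GB branch
// inserted ahead of the existing <=18 GB case, as the review suggests.
// Tiers other than 16 GB and 18 GB are illustrative assumptions.
func recommendedModels(memMB int) (supervisor, logic string) {
	switch {
	case memMB <= 4*1024:
		return "qwen2.5:3b", "qwen2.5:3b" // hypothetical low-VRAM tier
	case memMB <= 16*1024:
		return "qwen2.5-coder:7b", "qwen2.5:14b" // doc-specified 16 GB tier
	case memMB <= 18*1024:
		return "qwen2.5-coder:7b", "llama3.1:8b" // existing tier per review
	default:
		return "qwen2.5-coder:14b", "qwen2.5:32b" // hypothetical high tier
	}
}

func main() {
	s, l := recommendedModels(16 * 1024)
	fmt.Println(s, l) // qwen2.5-coder:7b qwen2.5:14b
}
```

Because `switch` cases are evaluated top to bottom, placing the 16 GB case before the 18 GB case is what keeps the two tiers distinct.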
---
Nitpick comments:
In `@docs/cloud-billing-guide.md`:
- Around line 13-19: Add a short note under the "## Official Pricing References"
section advising readers to periodically verify the listed pricing URLs
(Anthropic/Claude, Google/Gemini, OpenAI) because pricing pages can change, and
optionally mention enabling automated link-checking in CI to catch
broken/updated links; update the markdown near the existing list items to
include this verification guidance.
In `@personas/onboarding-guide.md`:
- Around line 76-78: Replace the raw curl-pipe-to-shell install suggestion with
a safer note: keep the existing commands but add a short caution after the Linux
line explaining the security risks of piping remote scripts into sh and advise
users to inspect the script first (e.g., curl -fsSL
https://ollama.com/install.sh -o install.sh && less install.sh) or to follow
alternative installation methods linked on Ollama's official docs page; ensure
the snippet still lists the macOS brew/download options and add a link to
Ollama's installation documentation for verification/alternatives.
In `@tools/tui/main.go`:
- Around line 762-789: loadTaskLog currently reads the entire mcp-tasks.jsonl
file without bounding its on-disk size; implement simple rotation/truncation to
prevent unbounded growth by trimming the file to the last N lines (e.g., 5000)
or rotating by date whenever the file exceeds a size threshold. Inside
loadTaskLog or a helper referenced from it (use logFile, loadTaskLog,
taskLogEntry, taskLogMsg to locate), detect file size/line count, read only the
tail (last N lines) or read all and then write back only the tail to the same
path (atomic write via temp file + rename) to truncate/rotate; ensure concurrent
reads are safe by using a temp file + os.Rename and preserve file permissions.
- Around line 119-141: The system_profiler parsing block can misclassify Intel
Macs as Apple Silicon when only Memory is present; update the parsing in
tools/tui/main.go (the gpuInfo parsing block that sets info.Platform = "apple")
to detect Intel-specific lines (e.g., lines starting with "Processor Name:" or
containing "Intel") and, when found, set info.Platform = "intel" and return that
info (or return an "unknown" platform sentinel) immediately so Intel machines
don't fall through as Apple; reference the gpuInfo struct and the parsing block
that checks "Chip:" and "Memory:" to implement this early-detection/early-return
behavior.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: ca0b2010-30e2-467a-ab64-309898ee27e7
📒 Files selected for processing (6)
- README.md
- docs/cloud-billing-guide.md
- personas/onboarding-guide.md
- tools/mcp/server.mjs
- tools/tui/main.go
- tools/tui/main_test.go
- Add dedicated 16GB VRAM branch in RecommendedModels()
- Sync onboarding-guide model table with TUI implementation
- Add log rotation (cap mcp-tasks.jsonl at 5000 lines)
- Handle Intel Macs in Apple Silicon detection (early return)
- Add pricing URL verification note to billing guide
- Add curl-pipe-to-shell security note to onboarding persona
- Add 16GB test case for RecommendedModels()
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tools/tui/main.go (1)
Lines 1143-1146: ⚠️ Potential issue | 🟠 Major — Close the health-check HTTP response body.

The response from `client.Get(ollamaURL)` at line 1144 is discarded without closing its body, causing connection leaks on repeated health checks. This is particularly problematic in a TUI application where health checks run repeatedly.

Suggested fix:

```diff
-	if _, err := client.Get(ollamaURL); err == nil {
+	if resp, err := client.Get(ollamaURL); err == nil {
+		_ = resp.Body.Close()
 		h.ollamaUp = true
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/tui/main.go` around lines 1143 - 1146, The health-check currently discards the response from client.Get(ollamaURL) and never closes the response body, causing connection leaks; update the health-check so that when client.Get(ollamaURL) returns (err == nil) you capture the response (e.g., resp), set h.ollamaUp accordingly, and ensure resp.Body.Close() is called (use defer resp.Body.Close() immediately after checking err == nil) to release the connection; locate the call to client.Get and the variables ollamaURL and h.ollamaUp to apply this change.
🧹 Nitpick comments (1)
personas/onboarding-guide.md (1)
Lines 77-80: Prefer the inspected installer path as the default recommendation. You already include a security note; make the inspect-first flow the primary command and keep `curl | sh` as an explicitly "quick path" alternative.

As per coding guidelines, `personas/**`: Review persona files for PII, hardcoded secrets, prompt injection patterns, and unsafe command examples.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@personas/onboarding-guide.md` around lines 77 - 80, Change the onboarding guide to make the inspect-first installer the primary recommendation and relegate the curl|sh shortcut to a clearly labeled "quick path" alternative: replace the current primary Linux line `curl -fsSL https://ollama.com/install.sh | sh` with the inspected flow `curl -fsSL https://ollama.com/install.sh -o install.sh && less install.sh && sh install.sh`, keep the original piped command but move it under a "Quick install (less secure)" note, and preserve the existing security note text. Also scan the personas files for PII/hardcoded secrets, prompt-injection patterns, and other unsafe command examples as per the personas/** guidelines and remove or remediate any findings.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@personas/onboarding-guide.md`:
- Around line 121-127: The onboarding snippet currently uses a destructive
heredoc with "cat > ... .env << 'EOF'" to overwrite the entire .env; change it
to a safe update pattern that preserves existing variables: detect or create the
.env, update or append the NEXUS_LOCAL_AI, OLLAMA_HOST_URL,
NEXUS_SUPERVISOR_MODEL, and NEXUS_LOGIC_MODEL entries (e.g., use grep/sed to
replace existing keys or append missing ones, or write to a temp file and merge)
instead of replacing the file outright; update the snippet that writes
NEXUS_LOCAL_AI, OLLAMA_HOST_URL, NEXUS_SUPERVISOR_MODEL and NEXUS_LOGIC_MODEL so
it safely modifies the .env without destructive overwrite.
In `@tools/tui/main.go`:
- Around line 772-785: The rotation code reads logFile, trims lines, writes to
tmp and renames without any error handling or synchronization, which can lose
concurrent appends; fix by acquiring an exclusive lock on the log file
descriptor around the read-modify-write (use syscall.Flock or
golang.org/x/sys/unix.Flock) before calling os.ReadFile, then perform
write-to-temp using os.CreateTemp, write and call file.Sync on the temp file,
close it, check and handle any errors from os.Rename(logFile) and fallback
appropriately, and finally release the lock; ensure every filesystem operation
(Read, CreateTemp, Write, Sync, Rename) checks/returns errors instead of
ignoring them so the calling function can handle failures.
---
Outside diff comments:
In `@tools/tui/main.go`:
- Around line 1143-1146: The health-check currently discards the response from
client.Get(ollamaURL) and never closes the response body, causing connection
leaks; update the health-check so that when client.Get(ollamaURL) returns (err
== nil) you capture the response (e.g., resp), set h.ollamaUp accordingly, and
ensure resp.Body.Close() is called (use defer resp.Body.Close() immediately
after checking err == nil) to release the connection; locate the call to
client.Get and the variables ollamaURL and h.ollamaUp to apply this change.
---
Nitpick comments:
In `@personas/onboarding-guide.md`:
- Around line 77-80: Change the onboarding guide to make the inspect-first
installer the primary recommendation and relegate the curl|sh shortcut to a
clearly labeled "quick path" alternative: replace the current primary Linux line
`curl -fsSL https://ollama.com/install.sh | sh` with the inspected flow `curl
-fsSL https://ollama.com/install.sh -o install.sh && less install.sh && sh
install.sh`, keep the original piped command but move it under a "Quick install
(less secure)" note, and preserve the existing security note text. Also scan the
personas files for PII/hardcoded secrets, prompt-injection patterns, and other
unsafe command examples as per the personas/** guidelines and remove or
remediate any findings.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: 351a3b26-dad5-4c38-bedc-f434bd021992
📒 Files selected for processing (4)
- docs/cloud-billing-guide.md
- personas/onboarding-guide.md
- tools/tui/main.go
- tools/tui/main_test.go
✅ Files skipped from review due to trivial changes (1)
- docs/cloud-billing-guide.md
🚧 Files skipped from review as they are similar to previous changes (1)
- tools/tui/main_test.go
```go
data, err := os.ReadFile(logFile)
if err != nil {
	return taskLogMsg{}
}
lines := strings.Split(strings.TrimSpace(string(data)), "\n")

// Log rotation: cap file at 5000 lines
const maxLines = 5000
if len(lines) > maxLines {
	lines = lines[len(lines)-maxLines:]
	tmp := logFile + ".tmp"
	if os.WriteFile(tmp, []byte(strings.Join(lines, "\n")+"\n"), 0644) == nil {
		os.Rename(tmp, logFile)
	}
}
```
Fix log file rotation to handle concurrent writes and errors.
The rotation logic (lines 778-785) reads the log file, then overwrites it via WriteFile+Rename without checking errors. If another process is concurrently appending to the log (between ReadFile and Rename), writes can be silently lost. Additionally, the Rename error is completely unchecked.
Either: (1) implement exclusive locking around the entire read-modify-write sequence, (2) keep the TUI read-only and handle rotation elsewhere, or (3) use atomic operations (e.g., fd-based locking via flock).
As-is, this violates Go error-handling patterns and risks data loss under concurrent access.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/tui/main.go` around lines 772 - 785, The rotation code reads logFile,
trims lines, writes to tmp and renames without any error handling or
synchronization, which can lose concurrent appends; fix by acquiring an
exclusive lock on the log file descriptor around the read-modify-write (use
syscall.Flock or golang.org/x/sys/unix.Flock) before calling os.ReadFile, then
perform write-to-temp using os.CreateTemp, write and call file.Sync on the temp
file, close it, check and handle any errors from os.Rename(logFile) and fallback
appropriately, and finally release the lock; ensure every filesystem operation
(Read, CreateTemp, Write, Sync, Rename) checks/returns errors instead of
ignoring them so the calling function can handle failures.
…ll default

- Close HTTP response body in checkHealth() to prevent connection leaks
- Replace destructive cat-heredoc .env overwrite with key-level sed/append
- Make inspect-first the primary Ollama install command in onboarding persona
🧹 Nitpick comments (2)
tools/tui/main.go (2)
Lines 107-117: NVIDIA detection: consider handling multi-GPU and malformed output. If the system has multiple GPUs, `nvidia-smi` returns multiple lines. The current code only parses the first line (correct behavior), but if the output format differs (e.g., a comma in the GPU name), `SplitN(..., 2)` may misparse. Additionally, if `parts[1]` doesn't parse as an integer, `mem` stays 0, which would cause incorrect model recommendations (the ≤4GB tier). This is a minor edge case since the fallback is safe, but worth noting.

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/tui/main.go` around lines 107 - 117, The NVIDIA parsing code should handle multi-line output and malformed names: read the command output, pick the first non-empty line, and split the GPU name and memory by the last comma (not SplitN) to avoid breaking when the name contains commas; parse memory with strconv.Atoi and check the error before using it (fall back to 0 only on parse failure). Update the block around exec.Command(...) that produces line/parts and the mem variable and return path that builds gpuInfo (referencing the exec.Command call, the line variable, the parts split logic, the mem integer, and the gpuInfo return) to implement these fixes.
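The last-comma parsing strategy from this prompt can be sketched as follows. The input format matches real `nvidia-smi --query-gpu=name,memory.total --format=csv,noheader,nounits` output ("name, memoryMB" per GPU); the function name and return shape are illustrative.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseNvidiaSMI takes the first non-empty line of nvidia-smi CSV output and
// splits name from memory on the *last* comma, so a comma inside the GPU
// name can't misparse. A bad memory field yields ok=false instead of a
// silent 0 that would select the wrong model tier.
func parseNvidiaSMI(out string) (name string, memMB int, ok bool) {
	for _, line := range strings.Split(out, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue // skip blank lines, keep first real GPU line
		}
		i := strings.LastIndex(line, ",")
		if i < 0 {
			return "", 0, false // malformed: no comma at all
		}
		name = strings.TrimSpace(line[:i])
		mem, err := strconv.Atoi(strings.TrimSpace(line[i+1:]))
		if err != nil {
			return name, 0, false // name known, memory unparseable
		}
		return name, mem, true
	}
	return "", 0, false
}

func main() {
	// Multi-GPU output: only the first line is used.
	n, m, ok := parseNvidiaSMI("NVIDIA GeForce RTX 4090, 24564\nNVIDIA T4, 15360\n")
	fmt.Println(n, m, ok) // NVIDIA GeForce RTX 4090 24564 true
}
```

Checking the `strconv.Atoi` error explicitly is what distinguishes "no GPU memory reported" from the legitimate low-VRAM tier.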
Lines 838-852: Long tool/model names may break column alignment. The `%-24s` and `%-22s` format specifiers left-align but don't truncate. If a tool or model name exceeds these widths, the table columns will misalign. Consider truncating long values.

♻️ Optional: truncate long values for consistent alignment

```diff
+func truncateStr(s string, max int) string {
+	if len(s) > max {
+		return s[:max-1] + "…"
+	}
+	return s
+}
+
 // In taskLogView:
-s += fmt.Sprintf("  %-24s %-22s %8s %s %s\n",
-	e.Tool, e.Model, dur, status, m.styles.subtle.Render(ts))
+s += fmt.Sprintf("  %-24s %-22s %8s %s %s\n",
+	truncateStr(e.Tool, 24), truncateStr(e.Model, 22), dur, status, m.styles.subtle.Render(ts))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/tui/main.go` around lines 838 - 852, Long tool/model names in the table break alignment because fmt.Sprintf uses "%-24s" and "%-22s" without truncation; update the rendering to truncate e.Tool to 24 characters and e.Model to 22 characters (e.g., add a small helper like truncate(s, n) or inline substring logic) before passing them into fmt.Sprintf in the loop over m.taskLog so the columns remain aligned; reference e.Tool, e.Model and the fmt.Sprintf call that builds the line and ensure truncation appends an ellipsis or simply cuts to the specified width to preserve alignment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tools/tui/main.go`:
- Around line 107-117: The NVIDIA parsing code should handle multi-line output
and malformed names: read the command output, pick the first non-empty line, and
split the GPU name and memory by the last comma (not SplitN) to avoid breaking
when the name contains commas; parse memory with strconv.Atoi and check the
error before using it (fall back to 0 only on parse failure). Update the block
around exec.Command(...) that produces line/parts and the mem variable and
return path that builds gpuInfo (referencing the exec.Command call, the line
variable, the parts split logic, the mem integer, and the gpuInfo return) to
implement these fixes.
- Around line 838-852: Long tool/model names in the table break alignment
because fmt.Sprintf uses "%-24s" and "%-22s" without truncation; update the
rendering to truncate e.Tool to 24 characters and e.Model to 22 characters
(e.g., add a small helper like truncate(s, n) or inline substring logic) before
passing them into fmt.Sprintf in the loop over m.taskLog so the columns remain
aligned; reference e.Tool, e.Model and the fmt.Sprintf call that builds the line
and ensure truncation appends an ellipsis or simply cuts to the specified width
to preserve alignment.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: 5e6e7602-67ed-473c-b4b4-8e901d89f316
📒 Files selected for processing (2)
personas/onboarding-guide.mdtools/tui/main.go
✅ Files skipped from review due to trivial changes (1)
- personas/onboarding-guide.md
Remove unnecessary needs dependency — test-install-cycle does its own checkout and build, doesn't consume build-and-test artifacts. All 5 CI jobs now start immediately, cutting wall-clock time roughly in half.
- Split nvidia-smi output by last comma to handle names with commas
- Take first line only for multi-GPU systems
- Add truncateCol() to prevent task log column misalignment
Actionable comments posted: 1
♻️ Duplicate comments (1)
tools/tui/main.go (1)
Lines 778-785: ⚠️ Potential issue | 🟠 Major — Keep task-log rotation out of the TUI or make it atomic.

This read/trim/write/rename sequence can still drop concurrent appends from the MCP server, and `os.Rename` is still unchecked. Since this process is only a reader, the safest fix is to stop rotating here, or to guard the whole sequence with a real lock and checked filesystem operations.

As per coding guidelines, `**/*.go`: Review Go code for error handling, resource leaks, and adherence to Bubbletea v2 patterns. This is a TUI application using the Charm ecosystem.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/tui/main.go` around lines 778 - 785, The log-rotation snippet that reads/trims/writes/renames (using maxLines, lines, tmp, os.WriteFile and os.Rename against logFile) can drop concurrent appends and ignores errors; either remove/disable this rotation from the TUI reader entirely (since this process should only read logs), or make it atomic and safe: acquire a filesystem/process-level lock around the entire read-trim-write-rename sequence, check and handle all errors from os.ReadFile/os.WriteFile/os.Rename, write to a temp file and fsync before rename, and only perform the rotation from a single responsible writer (or move rotation into the MCP server) so this code in the TUI (where lines and maxLines are used) no longer races with concurrent appenders.
🧹 Nitpick comments (1)
tools/tui/main.go (1)
Lines 1275-1279: Use display-width truncation here, not byte slicing.

`len` and `s[:max-1]` can split UTF-8 and still misalign columns for wide glyphs, so this only fixes ASCII cases. In this Charm/Lip Gloss UI, a width-aware truncation helper would be safer.

As per coding guidelines, `**/*.go`: Review Go code for error handling, resource leaks, and adherence to Bubbletea v2 patterns. This is a TUI application using the Charm ecosystem.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/tui/main.go` around lines 1275 - 1279, The truncateCol function uses byte-length and slicing which can split UTF-8 and mis-handle wide glyphs; change truncateCol to use a display-width aware truncation (e.g., github.com/mattn/go-runewidth) instead of len and s[:..]. Replace the body of truncateCol to call runewidth.Truncate(s, max, "…") (or implement a rune-iteration that accumulates runewidth.StringWidth per rune and appends "…" when exceeding max) so columns align correctly for multibyte and wide characters.
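A stdlib-only step toward that fix is sketched below: it truncates by rune count, so multibyte characters are never split. True display-width alignment for wide CJK glyphs still needs a width-aware library such as `github.com/mattn/go-runewidth` (its `Truncate` helper), which this sketch deliberately avoids to stay dependency-free.

```go
package main

import "fmt"

// truncateCol cuts a string to at most max runes, appending an ellipsis when
// it truncates. Unlike byte slicing (s[:max-1]), converting to []rune first
// guarantees no multibyte character is ever split mid-sequence.
func truncateCol(s string, max int) string {
	r := []rune(s)
	if len(r) <= max {
		return s
	}
	return string(r[:max-1]) + "…"
}

func main() {
	fmt.Println(truncateCol("ollama_commit_msg_tool_long", 10)) // ollama_co…
	fmt.Println(truncateCol("héllo", 10))                       // héllo
}
```

Rune count still approximates display width (a wide glyph occupies two terminal cells), so for perfect column alignment the runewidth-based approach from the review prompt remains the better endpoint.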
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tools/tui/main.go`:
- Around line 772-775: The read of logFile collapses all os.ReadFile failures
into an empty taskLogMsg{}, hiding real errors; change the handler in the
function that reads logFile so that if errors.Is(err, os.ErrNotExist) you return
taskLogMsg{} (empty), but for any other err return a taskLogMsg that carries the
error (e.g., set an Error/Err/Msg field on taskLogMsg with err or err.Error())
so the UI can surface permission/path/read failures. Update callers/rendering to
display that error field if present.
---
Duplicate comments:
In `@tools/tui/main.go`:
- Around line 778-785: The log-rotation snippet that reads/trims/writes/renames
(using maxLines, lines, tmp, os.WriteFile and os.Rename against logFile) can
drop concurrent appends and ignores errors; either remove/disable this rotation
from the TUI reader entirely (since this process should only read logs), or make
it atomic and safe: acquire a filesystem/process-level lock around the entire
read-trim-write-rename sequence, check and handle all errors from
os.ReadFile/os.WriteFile/os.Rename, write to a temp file and fsync before
rename, and only perform the rotation from a single responsible writer (or move
rotation into the MCP server) so this code in the TUI (where lines and maxLines
are used) no longer races with concurrent appenders.
---
Nitpick comments:
In `@tools/tui/main.go`:
- Around line 1275-1279: The truncateCol function uses byte-length and slicing
which can split UTF-8 and mis-handle wide glyphs; change truncateCol to use a
display-width aware truncation (e.g., github.com/mattn/go-runewidth) instead of
len and s[:..]. Replace the body of truncateCol to call runewidth.Truncate(s,
max, "…") (or implement a rune-iteration that accumulates runewidth.StringWidth
per rune and appends "…" when exceeding max) so columns align correctly for
multibyte and wide characters.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: 3b7b5fdb-dc06-4ebd-b931-968bbe83c3a2
📒 Files selected for processing (1)
tools/tui/main.go
```go
data, err := os.ReadFile(logFile)
if err != nil {
	return taskLogMsg{}
}
```
Don't collapse log-file read failures into the empty state.
Returning taskLogMsg{} for any os.ReadFile error makes permission/path problems render as “No MCP tasks recorded yet,” which weakens the new observability screen. Treat os.ErrNotExist as empty, but surface other read errors in the UI state.
As per coding guidelines, **/*.go: Review Go code for error handling, resource leaks, and adherence to Bubbletea v2 patterns. This is a TUI application using the Charm ecosystem.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/tui/main.go` around lines 772 - 775, The read of logFile collapses all
os.ReadFile failures into an empty taskLogMsg{}, hiding real errors; change the
handler in the function that reads logFile so that if errors.Is(err,
os.ErrNotExist) you return taskLogMsg{} (empty), but for any other err return a
taskLogMsg that carries the error (e.g., set an Error/Err/Msg field on
taskLogMsg with err or err.Error()) so the UI can surface permission/path/read
failures. Update callers/rendering to display that error field if present.
PR #73 added a subtraction-first philosophy reframe (18 lines). Our rewrite supersedes it with full feature coverage, tool compatibility matrix, research citations, hardware routing table, and roadmap.
…nhancements