test(output): add direct unit tests for app/output.py (#871)#1169

Merged
rrajan94 merged 2 commits into Tracer-Cloud:main from Davidson3556:test/output-unit-tests-871
May 1, 2026

Conversation

@Davidson3556
Contributor

Fixes #871

Describe the changes you have made in this PR -

Adds a direct unit test module for app/output.py at tests/test_output.py.
The module previously had no dedicated tests despite controlling progress
spinners, output-format selection, and humanised messages.

Coverage added:

  • get_output_format() — explicit override, NO_COLOR (set and empty),
    SLACK_WEBHOOK_URL, TTY/non-TTY, override precedence over NO_COLOR.
  • _humanise_message() — empty input, registered tool display names,
    fallback for unknown tools, "No new actions" collapse, integrations
    list extraction, validity → confidence formatting, datadog: prefix
    stripping, unrecognised pass-through.
  • _fmt_timing() — parametrised across the ms/seconds boundary
    (0, 1, 250, 999, 1000, 1500, 12345).
  • ProgressTracker.start() and complete() in text mode — node label,
    ellipsis prefix, fallback label for unknown nodes, dot marker, timing,
    humanised message rendering, recorded events, plus the error path
    and its marker, update_subtext no-op in text mode, singleton/reset
    behaviour, and ProgressEvent dataclass defaults.

Tests are deterministic: env vars are scrubbed via an autouse fixture,
time.monotonic is monkeypatched where elapsed time matters, and the
output format is forced to text via TRACER_OUTPUT_FORMAT so no real
TTY is required.

Demo/Screenshot for feature changes and bug fixes -

[Two screenshots from 2026-04-30 (19:23 and 19:24) attached showing the test run output]

Code Understanding and AI Usage

Did you use AI assistance (ChatGPT, Claude, Copilot, etc.) to write any part of this code?

  • No, I wrote all the code myself
  • Yes, I used AI assistance (continue below)

If you used AI assistance:

  • I have reviewed every single line of the AI-generated code
  • I can explain the purpose and logic of each function/component I added
  • I have tested edge cases and understand how the code handles them
  • I have modified the AI output to follow this project's coding standards and conventions

Explain your implementation approach:
The problem: app/output.py had no direct test module, so refactors to
the progress tracker, format detection, or message humanisation could
silently break the CLI's output without a failing test.

Approach: a single tests/test_output.py that exercises each public and
helper function directly, instead of going through the graph runner.
Helpers like _humanise_message and _fmt_timing are pure functions,
so they get straightforward input/output assertions (parametrised for
_fmt_timing to keep the boundary cases obvious).

For ProgressTracker, there are two render paths (rich spinner
vs plain text). The issue scope is text mode only — testing the rich
spinner would mean asserting on threaded ANSI output, which is exactly
the kind of brittle terminal snapshot the issue tells us to avoid. So
the tests force TRACER_OUTPUT_FORMAT=text via a fixture and assert
on the captured print() output (with ANSI codes stripped).
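Stripping escape codes before asserting keeps the text-mode checks independent of colour handling; a minimal helper (the name and regex here are illustrative, not from the PR) could be:

```python
import re

# Matches SGR escape sequences such as "\x1b[32m" and "\x1b[0m".
_ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")


def strip_ansi(text: str) -> str:
    """Remove colour/style escape codes so assertions see plain text."""
    return _ANSI_RE.sub("", text)
```

With pytest, the captured output would come from capsys.readouterr().out before being passed through strip_ansi().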

Determinism comes from three things: an autouse fixture that scrubs
the env vars get_output_format() reads (TRACER_OUTPUT_FORMAT,
NO_COLOR, SLACK_WEBHOOK_URL, TRACER_VERBOSE); monkeypatching
time.monotonic for the timing assertions; and a force_text_mode
fixture applied via @pytest.mark.usefixtures to keep tracker tests
off the rich path.

One subtlety worth calling out: ProgressTracker._finish calls
time.monotonic twice (the default expression passed to dict.pop is
evaluated unconditionally, like any call argument, even when the key
is present), so the patched clock has to yield three values, not two.
That's commented in the test.
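The subtlety is ordinary Python argument evaluation: the default passed to dict.pop is computed before the call, whether or not the key is present. Reproduced in isolation (all names here are illustrative):

```python
import itertools

ticks = itertools.count(start=100.0, step=0.25)


def fake_monotonic() -> float:
    """Stand-in for a monkeypatched time.monotonic: each call advances 0.25s."""
    return next(ticks)


starts = {"node": fake_monotonic()}  # tick 1: start() records 100.0

# dict.pop's default expression is evaluated unconditionally, so it
# consumes tick 2 (100.25) even though "node" is present ...
start = starts.pop("node", fake_monotonic())
# ... and the elapsed-time call consumes tick 3 (100.5).
elapsed = fake_monotonic() - start

assert start == 100.0
assert elapsed == 0.5  # three clock values consumed, not two
```

The 0.25 steps are exact binary floats, so equality assertions are safe here.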


Checklist before requesting a review

  • I have added proper PR title and linked to the issue
  • I have performed a self-review of my code
  • I can explain the purpose of every function, class, and logic block I added
  • I understand why my changes work and have tested them thoroughly
  • I have considered potential edge cases and how my code handles them
  • If it is a core feature, I have added thorough tests
  • My code follows the project's style guidelines and conventions

Note: Please check Allow edits from maintainers if you would like us to assist in the PR.

Cover get_output_format(), _humanise_message(), _fmt_timing(), and
ProgressTracker.start()/complete() in text mode. Tests are deterministic
(env vars scrubbed via autouse fixture, time.monotonic patched, output
format forced to text) so no real TTY is required.
@greptile-apps
Contributor

greptile-apps Bot commented May 1, 2026

Greptile Summary

Adds tests/test_output.py — the first dedicated unit test module for app/output.py. The tests cover get_output_format() (7 cases), _humanise_message() (9 cases), _fmt_timing() (parametrised across the ms/seconds boundary), and ProgressTracker in text mode (start, complete, error, update_subtext no-op, singleton lifecycle, and ProgressEvent dataclass defaults). All tests are deterministic: env vars are scrubbed via an autouse fixture, time.monotonic is monkeypatched where elapsed time matters, and the rich spinner path is bypassed by forcing TRACER_OUTPUT_FORMAT=text.

Confidence Score: 5/5

Safe to merge — adds deterministic unit tests with no production code changes.

All findings are P2 style suggestions. The test logic, clock arithmetic, fixture ordering, and expected values are all correct. No production code is modified.

No files require special attention.

Important Files Changed

Filename: tests/test_output.py
Overview: New test module adding comprehensive coverage for app/output.py; tests are deterministic, well-structured, and correct — two minor P2 style findings (singleton not reset in autouse fixture, slightly misleading clock comment).

Sequence Diagram

sequenceDiagram
    participant T as Test
    participant F as Fixtures
    participant O as app/output.py
    participant R as tool registry

    Note over F: autouse: _clear_output_env<br/>(deletes env vars)
    Note over F: per-test: force_text_mode<br/>(sets TRACER_OUTPUT_FORMAT=text)

    T->>O: get_output_format()
    O-->>T: "text" / "rich" / env override

    T->>O: _humanise_message(msg)
    O->>R: resolve_tool_display_name(action)
    R-->>O: display name
    O-->>T: humanised string

    T->>O: _fmt_timing(elapsed_ms)
    O-->>T: "Nms" or "N.Ns"

    T->>O: ProgressTracker()
    Note over O: reads get_output_format()<br/>at __init__ time → _rich=False
    T->>O: tracker.start(node)
    O-->>T: print("  … Label")
    Note over T: monkeypatch time.monotonic → clock iter
    T->>O: tracker.complete(node)
    O-->>T: print("  ● Label  Xms  msg")

    T->>O: get_tracker(reset=True)
    O-->>T: new ProgressTracker singleton
    T->>O: reset_tracker()
    O-->>T: new ProgressTracker singleton


Address review feedback on PR Tracer-Cloud#1169:

- Autouse fixture now also resets ``app.output._tracker`` so a tracker
  created in one test cannot leak its ``_rich`` flag into later tests
  in the broader suite.
- Reword the ``time.monotonic`` clock comment so it accurately describes
  ``dict.pop``'s unconditional default-expression evaluation rather than
  attributing it to "eager" call semantics.
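A sketch of that singleton reset, using a namespace stand-in for the app.output module (the attribute name _tracker comes from the feedback above; everything else is illustrative):

```python
import types

import pytest

# Stand-in for the app.output module; in the real suite the fixture
# would patch the imported module instead of this namespace.
fake_output = types.SimpleNamespace(_tracker=object())


@pytest.fixture(autouse=True)
def _reset_tracker_singleton(monkeypatch):
    """Clear the module-level tracker so _rich state captured at
    __init__ time in one test cannot leak into later tests."""
    monkeypatch.setattr(fake_output, "_tracker", None)
```

monkeypatch restores the previous attribute value when the test ends, so later tests see a fresh state either way.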
@Davidson3556
Contributor Author

@davincios @VaibhavUpreti kindly review

@rrajan94
Collaborator

rrajan94 commented May 1, 2026

Validated locally: tests/test_output.py, make test-cov, make lint, make format-check, make typecheck all pass. LGTM.

@rrajan94 rrajan94 merged commit abfac99 into Tracer-Cloud:main May 1, 2026
10 checks passed
@github-actions
Contributor

github-actions Bot commented May 1, 2026

🌊 Merged. @Davidson3556 is now permanently woven into git history. No take-backs. 😄


👋 Join us on Discord - OpenSRE : hang out, contribute, or hunt for features and issues. Everyone's welcome.
