
test(tools): add unit tests for Azure Monitor Logs tool #1007

Merged
yashksaini-coder merged 5 commits into Tracer-Cloud:main from 7vignesh:issue/996-azure-monitor-logs-tool-tests
Apr 27, 2026

Conversation

@7vignesh (Contributor)

Fixes #996

This PR adds unit tests for AzureMonitorLogsTool with focused coverage on tool contract, source availability, parameter extraction, bounded limits, and runtime behavior.

Added coverage includes:

  • BaseToolContract contract test
  • is_available true/false scenarios for required Azure fields
  • extract_params mapping + default values
  • _bounded_limit behavior
  • run happy path (mocked httpx.post with KQL table payload)
  • run HTTP error path (mocked httpx.post failure)
  • run unavailable path (missing credentials)

This is a test-only change and does not modify production tool logic.
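The mocked happy path feeds the tool a KQL table payload (the `tables`/`columns`/`rows` shape of the Azure Monitor Logs query API) and asserts on the parsed rows. As a rough, self-contained illustration of that parsing step, with the helper name and exact logic here being assumptions rather than the tool's actual code:

```python
# Illustrative sketch only: a minimal parser for the KQL table payload shape
# that the mocked happy-path test feeds in. The real tool's parsing logic
# may differ; `rows_from_kql_payload` is a hypothetical name.


def rows_from_kql_payload(payload: dict) -> list[dict]:
    tables = payload.get("tables", [])
    if not tables:
        return []
    table = tables[0]
    # Column names come from the "columns" list; each row is positional.
    names = [col["name"] for col in table.get("columns", [])]
    return [dict(zip(names, row)) for row in table.get("rows", [])]


payload = {
    "tables": [
        {
            "columns": [{"name": "TimeGenerated"}, {"name": "Message"}],
            "rows": [["2026-04-27T12:00:00Z", "hello"]],
        }
    ]
}
print(rows_from_kql_payload(payload))
```

A test can then assert on the zipped row dicts rather than on raw positional lists, which keeps the assertions readable.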


Validation run for this change:

  • ✅ python -m pytest tests/tools/test_azure_monitor_logs_tool.py (15 passed)
  • ✅ python -m ruff check tests/tools/test_azure_monitor_logs_tool.py
  • ✅ python -m ruff format --check tests/tools/test_azure_monitor_logs_tool.py
  • ✅ python -m mypy tests/tools/test_azure_monitor_logs_tool.py (no issues)

Code Understanding and AI Usage

Did you use AI assistance (ChatGPT, Claude, Copilot, etc.) to write any part of this code?

  • No, I wrote all the code myself
  • Yes, I used AI assistance (continue below)

If you used AI assistance:

  • I have reviewed every single line of the AI-generated code
  • I can explain the purpose and logic of each function/component I added
  • I have tested edge cases and understand how the code handles them
  • I have modified the AI output to follow this project's coding standards and conventions

Explain your implementation approach:

I followed the existing repository pattern for function-based tool tests and patched at the httpx layer because this tool performs direct HTTP calls. The tests are deterministic and focus on behavior specified in the issue: availability/extraction rules, bounded limits, successful parsing, graceful error handling, and missing-credential behavior.
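The patch-at-the-httpx-layer pattern described here can be sketched in a self-contained way. The stand-in module object, the URL, and the simplified tool body below are illustrative assumptions, not the real AzureMonitorLogsTool:

```python
# Self-contained illustration of monkeypatching at the HTTP layer. A
# stand-in `fakehttpx` namespace replaces the real httpx dependency so this
# runs without it; in the actual tests, monkeypatch.setattr targets
# "app.tools.AzureMonitorLogsTool.httpx.post" instead.
import types
from unittest.mock import MagicMock

# Stand-in for the httpx module as imported inside the tool's module.
fakehttpx = types.SimpleNamespace(post=None)


def query_logs(workspace_id: str) -> list:
    # Simplified tool body (hypothetical URL): calls through the
    # module-level httpx reference, which is the attribute a test patches.
    response = fakehttpx.post(f"https://logs.example/workspaces/{workspace_id}/query")
    response.raise_for_status()
    return response.json()["tables"][0]["rows"]


mocked_response = MagicMock()
mocked_response.raise_for_status.return_value = None
mocked_response.json.return_value = {
    "tables": [{"rows": [["2026-04-27T12:00:00Z", "hello"]]}]
}
fakehttpx.post = lambda *_args, **_kwargs: mocked_response

print(query_logs("workspace-123"))
```

Patching the `post` attribute on the module under test (rather than on `httpx` globally) keeps the test deterministic and scoped to the tool.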


Checklist before requesting a review

  • I have added proper PR title and linked to the issue
  • I have performed a self-review of my code
  • I can explain the purpose of every function, class, and logic block I added
  • I understand why my changes work and have tested them thoroughly
  • I have considered potential edge cases and how my code handles them
  • If it is a core feature, I have added thorough tests
  • My code follows the project's style guidelines and conventions

Copilot AI review requested due to automatic review settings April 27, 2026 14:24

greptile-apps Bot commented Apr 27, 2026

Greptile Summary

Adds unit tests for AzureMonitorLogsTool covering the BaseToolContract, is_available parametrized scenarios, extract_params defaults, _bounded_limit, and run (happy path, HTTP error, missing credentials). The tests are well-structured and follow existing project conventions; two minor coverage gaps were found but neither blocks merge.

Confidence Score: 5/5

Safe to merge — test-only change with no production code modifications and correct assertions throughout.

All findings are P2 (style/coverage suggestions). The production logic is unchanged, existing tests are not broken, and the new tests correctly exercise the main behavior paths.

No files require special attention.

Important Files Changed

  • tests/tools/test_azure_monitor_logs_tool.py: New test file covering contract, is_available, extract_params, _bounded_limit, and run (happy/error/unavailable) paths for AzureMonitorLogsTool; two minor coverage gaps in _bounded_limit and error-path assertions.

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[test_is_available_requires_verified_workspace_and_token] --> B{_azure_available}
    B -->|all fields present & verified| C[True]
    B -->|any field missing/false| D[False]

    E[test_extract_params_maps_fields_and_defaults] --> F[_azure_extract_params]
    F --> G[stripped workspace_id / access_token / endpoint + hardcoded defaults]

    H[test_bounded_limit_caps_requested_limit] --> I[_bounded_limit]
    I -->|limit > max_results| J[returns max_results]
    I -->|limit > _MAX_HARD_LIMIT, untested| K[returns 200]

    L[test_run_happy_path] --> M[query_azure_monitor_logs]
    M -->|httpx.post mocked OK| N[parse tables/columns/rows -> return rows]

    O[test_run_http_error_path] --> P[query_azure_monitor_logs]
    P -->|raise_for_status raises| Q[return error dict]

    R[test_run_unavailable_without_credentials] --> S[query_azure_monitor_logs]
    S -->|empty workspace/token| T[early return available=False]

Reviews (1): Last reviewed commit: "test(tools): add Azure Monitor Logs tool..."

P2 _bounded_limit hard-cap path untested

The single _bounded_limit(300, 100) call only exercises the max_results cap, leaving the _MAX_HARD_LIMIT = 200 branch completely uncovered. Passing max_results > 200 (e.g., _bounded_limit(300, 500)) should return 200, not 300. If that branch were accidentally removed, this test would still pass.

Suggested change

def test_bounded_limit_caps_requested_limit() -> None:
    assert _bounded_limit(300, 100) == 100


def test_bounded_limit_respects_hard_cap() -> None:
    # max_results > _MAX_HARD_LIMIT (200): the hard cap should win
    assert _bounded_limit(300, 500) == 200

Comment on lines +135 to +147
def test_run_http_error_path(monkeypatch: pytest.MonkeyPatch) -> None:
    mocked_response = MagicMock()
    mocked_response.raise_for_status.side_effect = Exception("401 Client Error: Unauthorized")
    mocked_response.json.return_value = {}

    monkeypatch.setattr(
        "app.tools.AzureMonitorLogsTool.httpx.post",
        lambda *_args, **_kwargs: mocked_response,
    )

    result = query_azure_monitor_logs(
        workspace_id="workspace-123",
        access_token="token-abc",

P2 Error-path assertions are incomplete

The test verifies "error" and "source" but not available or rows. If the production code ever changed to return "available": True or omit "rows": [] on failure, this test would not catch the regression.

Suggested change

    assert "error" in result
    assert "401" in result["error"]
    assert result["source"] == "azure"
    assert result["available"] is False
    assert result["rows"] == []

Copilot AI left a comment


Pull request overview

Adds a dedicated unit test suite for AzureMonitorLogsTool to validate tool registration/contract behavior and key runtime paths without modifying production code.

Changes:

  • Adds BaseToolContract contract coverage for the function-based @tool registration.
  • Adds tests for is_available, extract_params defaults/mapping, and _bounded_limit.
  • Adds run tests covering a mocked success response, mocked HTTP failure, and missing-credentials behavior.


Comment on lines +26 to +74
@pytest.mark.parametrize(
    "sources,expected",
    [
        (
            {
                "azure": {
                    "connection_verified": True,
                    "workspace_id": "workspace-123",
                    "access_token": "token-abc",
                }
            },
            True,
        ),
        (
            {
                "azure": {
                    "connection_verified": False,
                    "workspace_id": "workspace-123",
                    "access_token": "token-abc",
                }
            },
            False,
        ),
        (
            {
                "azure": {
                    "connection_verified": True,
                    "workspace_id": "",
                    "access_token": "token-abc",
                }
            },
            False,
        ),
        (
            {
                "azure": {
                    "connection_verified": True,
                    "workspace_id": "workspace-123",
                    "access_token": "",
                }
            },
            False,
        ),
    ],
)
def test_is_available_requires_verified_workspace_and_token(sources: dict, expected: bool) -> None:
    rt = _registered_tool()
    assert rt.is_available(sources) is expected

@7vignesh (Contributor Author)

@muddlebee
Quick note on CI: an earlier quality run failed on an old/stale check snapshot, but after rerunning on the latest head commit, all checks passed. Current CI is green.


@yashksaini-coder yashksaini-coder left a comment


Hey @7vignesh, solid work overall — the structure is clean, monkeypatching at the httpx layer is the right approach, and the parametrized is_available coverage is well thought out. Two gaps I'd want closed before merging:


_ensure_take_clause has zero test coverage and it's not a trivial function — it has 3 distinct branches that directly affect the KQL query sent to Azure:

  1. empty query → replaces with full default query
  2. query already has take → leaves it unchanged
  3. query without take → appends | take {limit}

If branch 1 or 3 broke, the tool would send unbounded queries to Azure Monitor and potentially OOM or hit rate limits. Worth adding a quick parametrize block here:

@pytest.mark.parametrize(
    "query,limit,expected",
    [
        ("", 10, "AppTraces | order by TimeGenerated desc | take 10"),
        ("AppTraces | order by TimeGenerated desc", 5, "AppTraces | order by TimeGenerated desc | take 5"),
        ("AppTraces | take 100", 5, "AppTraces | take 100"),  # existing take is preserved
    ],
)
def test_ensure_take_clause_branches(query: str, limit: int, expected: str) -> None:
    from app.tools.AzureMonitorLogsTool import _ensure_take_clause
    assert _ensure_take_clause(query, limit) == expected


The happy path test verifies the parsed response correctly, but the monkeypatched httpx.post uses lambda *_args, **_kwargs so it silently swallows whatever URL, headers, and payload the tool sends. If the URL construction (f"{base_url}/v1/workspaces/{workspace}/query") or the Bearer token header broke, this test would still pass because the mock doesn't care.

Worth capturing the call args and asserting the basics:

captured: dict[str, Any] = {}

def fake_post(url: str, **kwargs: Any) -> MagicMock:
    captured["url"] = url
    captured["headers"] = kwargs.get("headers", {})
    return mocked_response

monkeypatch.setattr("app.tools.AzureMonitorLogsTool.httpx.post", fake_post)
# ... run the tool ...
assert "workspace-123" in captured["url"]
assert captured["headers"]["Authorization"] == "Bearer token-abc"

7vignesh commented Apr 27, 2026

Hey @yashksaini-coder, addressed both points:

  • _ensure_take_clause branch coverage added
  • Happy-path request assertions added

Pushed a ruff format fix; all CI checks are green.

@muddlebee ready to merge!

@muddlebee muddlebee requested review from muddlebee and removed request for muddlebee April 27, 2026 18:58
@yashksaini-coder yashksaini-coder left a comment


Thanks for addressing the _ensure_take_clause coverage and the happy-path request assertions — those were the two blockers I flagged and both are now properly fixed.

However, there are still 4 open inline comment threads from greptile and Copilot that weren't addressed in this round. The contributor comment says "ready to merge" but these are still unresolved:

1. _bounded_limit hard-cap is untested (line 124, greptile thread)
The current test only exercises the max_results cap: _bounded_limit(300, 100) == 100. That's the first ceiling. But _bounded_limit has a second, harder ceiling — _MAX_HARD_LIMIT = 200 — and it's completely untested.

_bounded_limit(500, 300) should return 200, not 300. Right now there's no test for this. This matters because _MAX_HARD_LIMIT is a safety control that prevents unbounded data retrieval regardless of what max_results is set to. If that path breaks, the agent could silently start pulling unlimited rows from Azure Monitor. Add: assert _bounded_limit(500, 300) == 200 and assert _bounded_limit(0, 100) == 1.

2. Error-path assertions are incomplete (line 173/178, greptile + Copilot threads)
test_run_http_error_path asserts "error" in result and "401" in result["error"] — but the production error return is:

{"source": "azure", "available": False, "error": str(err), "rows": []}

The test doesn't assert result["available"] is False or result["rows"] == []. These are contract-level guarantees that callers depend on. If the return dict ever accidentally removes available or flips it to True on error, no test will catch it.

3. is_available missing sources={} case (line 75, Copilot thread)
_azure_available correctly handles a missing azure key via sources.get("azure", {}), but there's no parametrize case for sources={} (no azure key at all). This is real-world input the agent can receive. Takes one line to add to the parametrize list.

None of these are hard — they're small test additions, but the greptile and Copilot threads on them are still open and the PR shouldn't merge with open review threads unacknowledged.


Still needs a sources={} parametrize case here (per the open Copilot thread at line 75). _azure_available({}) should return False — the production code handles it correctly via .get("azure", {}), but there's no test confirming that. Add:

({}, False),  # no azure key at all

to the parametrize list.

- Add sources={} parametrize case to is_available test (no azure key)
- Add _bounded_limit hard-ceiling test (_MAX_HARD_LIMIT=200 enforced when max_results > 200)
- Add _bounded_limit minimum-of-one test (zero and negative inputs)
- Add available=False and rows=[] assertions to HTTP error path test

Co-authored-by: Copilot <[email protected]>
@yashksaini-coder yashksaini-coder left a comment


All review threads addressed. The 4 gaps flagged by greptile and Copilot are now fixed:

  • sources={} parametrize case added to is_available test
  • _bounded_limit hard ceiling (_MAX_HARD_LIMIT=200) now tested: _bounded_limit(500, 300) == 200
  • _bounded_limit minimum-of-one boundary tested
  • HTTP error path now asserts available=False and rows=[]

All 21 tests pass, ruff clean. Good to merge.

@yashksaini-coder yashksaini-coder merged commit 11a88eb into Tracer-Cloud:main Apr 27, 2026
7 checks passed
@7vignesh (Contributor Author)

Thanks for the detailed review, @yashksaini-coder! Good call on the bot threads — I'll make sure to go through Greptile and Copilot review comments before marking as ready next time.


Development

Successfully merging this pull request may close these issues.

Add unit tests for AzureMonitorLogsTool

3 participants