Feat/add registry import failure test #1142
Conversation
Greptile Summary
This PR adds two regression tests to tests/tools/test_registry.py.
Confidence Score: 5/5. Safe to merge: only test code is added, no production logic is changed, and all findings are minor P2 style points. Both new tests are structurally sound and follow established patterns in the file. The only actionable items are P2 style suggestions (a now-stale `# pragma: no cover` annotation; see the comments below). No files require special attention; consider a follow-up to remove the stale pragma from registry.py.
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
A[_load_registry_snapshot] --> B[iter module names]
B --> C{_import_tool_module}
C -->|ModuleNotFoundError| D[log warning & continue]
C -->|RuntimeError / Exception| E[log warning & continue\n← NEW TEST covers this path]
C -->|success| F[_collect_registered_tools_from_module]
F --> G{duplicate tool name\nacross modules?}
G -->|yes| H[log warning & skip\n← NEW TEST covers this path]
G -->|no| I[add to tools_by_name]
D --> B
E --> B
H --> B
I --> B
B -->|done| J[return sorted tuple]
```
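For orientation, here is a rough sketch of the load loop the flowchart describes. The helper names (`_load_registry_snapshot`, `_iter_tool_module_names`, `_import_tool_module`, `_collect_registered_tools_from_module`) come from the flowchart itself; the bodies, signatures, logger, and tool-object shape are assumptions for illustration, not the project's actual code.

```python
# Rough sketch of the flow above; helper names are from the flowchart,
# everything else is assumed for illustration.
def _load_registry_snapshot() -> tuple:
    tools_by_name: dict[str, object] = {}
    for module_name in _iter_tool_module_names():
        try:
            module = _import_tool_module(module_name)
        except ModuleNotFoundError:
            logger.warning("Tool module not found: %s", module_name)
            continue
        except Exception as exc:  # path exercised by the new import-failure test
            logger.warning("Failed to import tool module %s: %s", module_name, exc)
            continue  # without this, `module` on the next line is unbound
        for tool_obj in _collect_registered_tools_from_module(module):
            if tool_obj.name in tools_by_name:  # path exercised by the duplicate-name test
                logger.warning("Duplicate tool name %r; skipping", tool_obj.name)
                continue
            tools_by_name[tool_obj.name] = tool_obj
    return tuple(tools_by_name[name] for name in sorted(tools_by_name))
```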
Reviews (1): Last reviewed commit: "Add registry regression test for import ..."
```python
def test_registry_regression_import_failures(
    monkeypatch: pytest.MonkeyPatch, caplog: pytest.LogCaptureFixture
) -> None:
    """Test that registry gracefully skips modules with import failures."""
    module: Any = ModuleType("app.tools.valid_tool")

    @tool(
        name="valid_tool",
        description="A valid tool.",
        source="knowledge",
    )
    def valid_tool() -> dict[str, str]:
        return {"status": "ok"}

    valid_tool.__module__ = module.__name__
    module.valid_tool = valid_tool

    def mock_import(name: str) -> ModuleType:
        if name == "broken_module":
            raise RuntimeError("Module initialization failed")
        return module

    monkeypatch.setattr(
        registry_module,
        "_iter_tool_module_names",
        lambda: ["broken_module", "valid_tool"],
    )
    monkeypatch.setattr(
        registry_module,
        "_import_tool_module",
        mock_import,
    )

    with caplog.at_level(logging.WARNING, logger="app.tools.registry"):
        tools = registry_module.get_registered_tools()

    tool_names = [t.name for t in tools]

    assert "valid_tool" in tool_names
    assert len(tools) >= 1
    assert registry_module.get_registered_tool_map()["valid_tool"].run() == {
        "status": "ok"
    }
```
**`# pragma: no cover` now stale in production code**

The RuntimeError raised by `mock_import` is caught by the general `except Exception as exc` block in registry.py (line 131), which carries the comment `# pragma: no cover - defensive logging path`. That annotation was added precisely because no test exercised this path, but this test now does. The pragma will continue to tell the coverage tool to ignore the branch, so coverage reports will keep treating it as excluded rather than covered even though it is exercised. Consider removing `# pragma: no cover` from registry.py's `except Exception` handler alongside this PR (or in a follow-up); otherwise the coverage signal remains misleading.
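As a standalone illustration of why the stale pragma matters (this is not the registry code; `import_or_none` and the test name are made up): coverage.py's default exclusion pattern matches `# pragma: no cover`, so the annotated `except` block is left out of the report even when a test drives execution through it.

```python
# Standalone illustration, not app/tools/registry.py. With the pragma in place,
# coverage.py excludes the whole except block from its report, even though the
# test below actually executes it.
def import_or_none(importer, name):
    try:
        return importer(name)
    except Exception:  # pragma: no cover - defensive logging path
        return None


def test_import_failure_is_swallowed():
    def broken(_name):
        raise RuntimeError("boom")

    # This drives execution through the excluded branch, but the coverage
    # report will not credit it until the pragma is removed.
    assert import_or_none(broken, "anything") is None
```

Once a test covers the branch, dropping the pragma lets the report reflect that, which is the follow-up the comment suggests.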
```python
    with caplog.at_level(logging.WARNING, logger="app.tools.registry"):
        tools = registry_module.get_registered_tools()
```
`assert len(tools) >= 1` is fully implied by the `assert "valid_tool" in tool_names` assertion directly above it. Removing this line tightens the test and avoids the impression that an empty list with a false-positive `in` check could ever pass.
Suggested change:

```python
    with caplog.at_level(logging.WARNING, logger="app.tools.registry"):
        tools = registry_module.get_registered_tools()

    tool_names = [t.name for t in tools]

    assert "valid_tool" in tool_names
    assert registry_module.get_registered_tool_map()["valid_tool"].run() == {
        "status": "ok"
    }
```
c98329d to 491f285 (force-push)
@Aaryan-549 pls fix the CI and review points :)

@muddlebee Yes yes, forgot to put a note here that I am still working on it. My bad.
…b.com/Aaryan-549/opensre into feat/add-registry-import-failure-test
Hey @muddlebee Have a look now!

🌮 @Aaryan-549's PR: showed up unannounced, improved everything, left zero bugs. Just like a perfect taco. 🌮
👋 Join us on Discord - OpenSRE: hang out, contribute, or hunt for features and issues. Everyone's welcome.

Fixes #1115
Describe the changes you have made in this PR -
Added a regression test to verify the tool registry properly handles modules that fail to import. The test checks that when a module raises
an exception during import, the registry skips it gracefully and continues loading valid tools from other modules.
The test creates a mock scenario with a broken module and a valid module, then verifies:
- the broken module is skipped and registry loading continues rather than aborting, and
- the valid tool is still registered and its `run()` output is returned intact.
Demo/Screenshot for feature changes and bug fixes -
Test passes with current code:
tests/tools/test_registry.py::test_registry_regression_import_failures PASSED
Test catches the regression when the `continue` statement is removed from the exception handler:

`UnboundLocalError: cannot access local variable 'module' where it is not associated with a value`

This proves the test correctly detects when import error handling is broken.
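For illustration only (simplified names, not the actual registry code), a self-contained reproduction of that failure mode looks roughly like this:

```python
# Simplified, self-contained reproduction of the bug class the test guards
# against; the real code lives in app/tools/registry.py with different helpers.
def load_tools(module_names, import_module):
    tools = []
    for name in module_names:
        try:
            module = import_module(name)
        except Exception:
            pass  # BUG: should `continue`; execution falls through to the next line
        tools.extend(getattr(module, "tools", []))  # `module` may never have been assigned
    return tools


def failing_import(name):
    raise RuntimeError("Module initialization failed")


try:
    load_tools(["broken_module"], failing_import)
except UnboundLocalError as exc:
    # On Python 3.11+ this prints "cannot access local variable 'module' where
    # it is not associated with a value", matching the traceback quoted above.
    print(f"regression reproduced: {exc}")
```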
Code Understanding and AI Usage
Did you use AI assistance (ChatGPT, Claude, Copilot, etc.) to write any part of this code?
If you used AI assistance:
Explain your implementation approach:
The test mirrors the pattern from the duplicate tool names regression test. It uses monkeypatch to mock the module import process, creates
a scenario where one module fails with RuntimeError while another succeeds, then verifies the registry recovers gracefully.
Key parts:

- `mock_import()` function simulates import failures for "broken_module"

The implementation catches a real bug: if the `continue` statement after exception handling is removed, the code tries to use an uninitialized `module` variable, causing `UnboundLocalError`.

Checklist before requesting a review