docs(azure-monitor): add Azure Monitor integration guide #999
Conversation
Greptile Summary

This PR adds …

Confidence Score: 4/5. Safe to merge after fixing the verify SOURCE label in the expected output snippet. One P1 factual inaccuracy: the verify snippet shows `env` but the code produces `local env`, which will confuse users comparing terminal output against the docs. All other content (env vars, endpoint, cap, KQL examples, troubleshooting) is verified against the live code and correct.

Important files changed: `docs/azure-monitor.mdx` (the Verify section expected output block)
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant OpenSRE
    participant EntraID as Microsoft Entra ID
    participant AzureLogAnalytics as Azure Log Analytics
    User->>EntraID: POST /oauth2/v2.0/token (client_credentials)
    EntraID-->>User: Bearer token (~60 min TTL)
    User->>OpenSRE: Set AZURE_LOG_ANALYTICS_TOKEN
    Note over OpenSRE: Investigation triggered
    OpenSRE->>OpenSRE: _ensure_take_clause(query, limit)
    OpenSRE->>AzureLogAnalytics: POST /v1/workspaces/{workspace_id}/query
    AzureLogAnalytics-->>OpenSRE: tables with columns and rows
    OpenSRE->>OpenSRE: flatten rows up to effective_limit
    OpenSRE-->>User: source, rows, total_returned
```
Reviews (1): Last reviewed commit: "docs(azure-monitor): register guide in d..."
> The verify step is a credential-shape check — it does not call the workspace. To exercise the live path, point OpenSRE at a synthetic alert that names `azure` as the source and inspect the resulting evidence.
>
> ## Example KQL queries
>
> Recent application errors:
>
> ```kql
> …
> ```
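To make the live-path suggestion in the quoted excerpt concrete, a synthetic alert could be sketched as a plain dict. The field names here (`title`, `source`, `query`) are hypothetical illustrations, not the project's actual alert schema; only the `azure` source name comes from the doc text.

```python
# Hypothetical synthetic alert; field names are illustrative, not the real schema.
synthetic_alert = {
    "title": "Synthetic: App error spike",
    "source": "azure",  # names `azure` so the investigation exercises the live path
    "query": "AppTraces | where SeverityLevel >= 3",
}
```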
Verify output: SOURCE field is wrong

The expected output shows `env` in the SOURCE column, but `catalog.py` sets the source label to `"local env"` for all environment-variable-backed integrations (in `_service_metadata`). A user running `opensre integrations verify azure` with env vars configured will see `local env` in the output, not `env`, causing a mismatch when following the docs.

The correct expected output line should read:

```
azure    local env    passed    Azure Log Analytics credentials are configured for workspace <id> at https://api.loganalytics.io
```

Env-var-backed integrations are emitted with source label `'local env'` by `app/integrations/catalog.py:_service_metadata`, not `'env'`. Update the expected output snippet so users matching the docs against terminal output see the same string.
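The labeling behavior this comment describes can be sketched in a few lines. `source_label` and the non-env label are illustrative assumptions, not the real `_service_metadata` code; the point is only that env-var-backed credentials surface as `local env`, never `env`.

```python
def source_label(from_env: bool) -> str:
    # Hypothetical sketch of the labeling the review describes:
    # env-var-backed integrations get "local env", not "env".
    # The fallback label for file-backed credentials is an assumption.
    return "local env" if from_env else "integrations.json"


verify_row = ("azure", source_label(from_env=True), "passed")
```

A docs snippet showing `env` would therefore never match what the verify table actually prints.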
Thanks for the catch — fixed in
Verified the fix: `docs/azure-monitor.mdx` now shows `local env` in the SOURCE column, matching `_service_metadata` in `catalog.py`. Greptile finding resolved. Re-confirming the rest of the docs against the codebase: env var names, default endpoint, 200-row hard cap, default limit 50 all match. LGTM.
Fixes #997
Describe the changes you have made in this PR -
`AzureMonitorLogsTool` was the only major log-source integration without a `docs/<name>.mdx` page. This PR adds `docs/azure-monitor.mdx` modeled on the existing integration docs (Coralogix, Better Stack, etc.) and registers the new page under the Local integrations group in `docs/docs.json` so it shows up in the rendered nav.

The doc covers everything the issue asks for:
- what `query_azure_monitor_logs` returns
- required access: the `Log Analytics Reader` role, network egress
- a `curl` snippet to mint a bearer token
- configuration via env vars (`AZURE_LOG_ANALYTICS_WORKSPACE_ID`, `AZURE_LOG_ANALYTICS_TOKEN`, `AZURE_LOG_ANALYTICS_ENDPOINT`, etc.) and via `~/.tracer/integrations.json`
- verification with `opensre integrations verify azure`

Variable names, defaults, and the 200-row hard cap are all sourced from the live code (`app/integrations/catalog.py`, `app/tools/AzureMonitorLogsTool/__init__.py`, `app/integrations/verify.py`) so the doc matches what the agent actually reads at runtime.

Commits are split atomically:

- `4c2ca0d` docs(azure-monitor): add Azure Monitor integration guide (the new `.mdx` file)
- `0abb3be` docs(azure-monitor): register guide in docs navigation (the one-line `docs/docs.json` insertion)

Demo/Screenshot for feature changes and bug fixes -
Docs-only PR. Local quality gate is clean against this branch:
Code Understanding and AI Usage
Did you use AI assistance (ChatGPT, Claude, Copilot, etc.) to write any part of this code?
Explain your implementation approach:
The issue spelled out the structure exactly (overview, prerequisites, credentials, configuration, verification, KQL examples, troubleshooting), so the work was mostly grounding each section in the real code paths instead of inventing flags. I read `app/integrations/catalog.py` to pin the env-var names (`AZURE_LOG_ANALYTICS_WORKSPACE_ID`, `AZURE_LOG_ANALYTICS_TOKEN`, `AZURE_LOG_ANALYTICS_ENDPOINT`, `AZURE_TENANT_ID`, `AZURE_SUBSCRIPTION_ID`, `AZURE_MAX_RESULTS`) and the persistent-store schema; `app/tools/AzureMonitorLogsTool/__init__.py` for the 200-row hard cap, the default `take` clause, and the request shape OpenSRE actually emits (`POST /v1/workspaces/<id>/query` with `timespan=PT<N>M`); and `app/integrations/verify.py:_verify_azure` for the exact verify-output string the troubleshooting section refers back to.

I considered following the Coralogix doc style verbatim (it's the shortest), but Better Stack is a closer fit: both are token-auth REST integrations with a sovereign-cloud endpoint variation. So I matched its layout (Setup → Investigation tool → Verify → Troubleshooting → Security) and added an explicit credential-acquisition section because Azure's OAuth2 client-credentials flow is the most error-prone step. The KQL examples are deliberately mundane (`AppTraces`, `AppDependencies`) so a reader can paste them straight into the Logs blade without tweaking schema names. The `docs/docs.json` change is the one-line nav registration that makes the page render in the docs site.

Checklist before requesting a review