feat(uptime): Add AI-powered assertion suggestions backend#108382
feat(uptime): Add AI-powered assertion suggestions backend#108382
Conversation
Add a new endpoint that uses Seer's LLM proxy to generate assertion
suggestions for uptime monitors based on preview check responses.
- New endpoint: POST /organizations/{org}/uptime-assertion-suggestions/
- Uses has_seer_access() to respect both gen-ai-features flag and
hide_ai_features org opt-out
- Parses preview check responses and sends to Seer for analysis
- Converts Seer suggestions to uptime checker assertion JSON format
- Rate limited to 1 req/5s per user, 10 req/60s per org
- Includes comprehensive unit tests
- Use orjson with OPT_INDENT_2 for pretty-printing in prompt building - Widen user param type to User | AnonymousUser to match request.user - Add None checks before `in` operator on Optional debug strings
|
🚨 Warning: This pull request contains Frontend and Backend changes! It's discouraged to make changes to Sentry's Frontend and Backend in a single pull request. The Frontend and Backend are not atomically deployed. If the changes are interdependent of each other, they must be separated into two pull requests and be made forward or backwards compatible, such that the Backend or Frontend can be safely deployed independently. Have questions? Please ask in the |
|
the frontend change being flagged here was automated and intentional via getsantry |
- Remove logger.info that logged full response body (potential PII exposure) - Remove response_data from debug string on missing status_code - Truncate response body to 16KB in LLM prompt to prevent context overflow
0b91cf7 to
82b48fa
Compare
Falsy but valid bodies like `[]`, `0`, `false` were being treated as "N/A" due to truthiness check.
There was a problem hiding this comment.
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
- Include plain-text/HTML response bodies directly in prompt instead of JSON-encoding them (which escaped newlines and wrapped in quotes) - Remove redundant has_seer_access check from generate_assertion_suggestions since the endpoint already gates on it at line 89 - Simplify function signature (remove unused org/user params)
wedamija
left a comment
There was a problem hiding this comment.
Approved for testing in hackweek
| # Truncate response bodies longer than this in the LLM prompt to avoid | ||
| # exceeding the model's context window. | ||
| MAX_BODY_LENGTH = 16_000 |
There was a problem hiding this comment.
I wonder if allowing an arbitrary http body can be a security risk at all... Although I guess we also allow errors to be passed into Seer, and they can also contain prompt injection attacks
There was a problem hiding this comment.
Good call. This is the one Seer integration that embeds raw user-controlled content (HTTP response bodies) rather than serialized event data.
The attack surface is bounded — Gemini's response_schema constrains output to the ASSERTION_SUGGESTIONS_SCHEMA, and the Pydantic model validates parsing — so the worst case is bad suggestions, not data exfiltration. But it's still worth mitigating.
I've added two things:
- A system prompt instruction telling the model to treat the HTTP response data as untrusted content for analysis only
- XML delimiter tags (
<http_response_body>,<http_response_headers>) around the untrusted content to help the model distinguish data from instructions
These are standard LLM prompt injection mitigations. Not bulletproof, but meaningful given the constrained output schema.
There was a problem hiding this comment.
yeah, there's not really much that can be injected, since this is a pure LLM call with no tool usage, etc. the worst they could do is extract the system prompt, but that's already public anyways.
| - Body: {body_str}""" | ||
|
|
||
|
|
||
| def suggestion_to_assertion_json(suggestion: SuggestedAssertion) -> dict[str, Any]: |
There was a problem hiding this comment.
Is there a way to just have seer structure these so that we don't need to then convert them?
There was a problem hiding this comment.
The conversion layer is intentional — the uptime checker format is too complex for reliable structured LLM output. Compare header assertions:
LLM returns (flat):
{"assertion_type": "header", "comparison": "equals", "header_name": "content-type", "expected_value": "application/json"}Checker needs (nested):
{"op": "header_check", "key_op": {"cmp": "equals"}, "key_operand": {"header_op": "literal", "value": "content-type"}, "value_op": {"cmp": "equals"}, "value_operand": {"header_op": "literal", "value": "application/json"}}A few reasons to keep the split:
- Simpler schema = more reliable LLM output (especially for Gemini's structured output)
- The conversion is deterministic and testable — easier to debug than LLM formatting issues
- Decouples the LLM prompt from checker internals — if the checker format changes, only
suggestion_to_assertion_jsonneeds updating - We need both shapes anyway: the API response includes the flat fields (
confidence,explanation) alongside the convertedassertion_json
…estions Wrap untrusted HTTP response data in XML delimiter tags and add a system prompt instruction to treat the content as data only. This helps the LLM distinguish between instructions and user-controlled response bodies.
Replace hardcoded 200 example with placeholder so Gemini uses the actual observed status code from the response.
src/sentry/uptime/endpoints/organization_uptime_assertion_suggestions.py
Show resolved
Hide resolved
Match the comment pattern from the preview check endpoint to explain why active_regions[0] is safe after validation.
## Summary - Adds an "AI Suggestions" button to the uptime alert form and detector forms that opens a drawer with AI-generated assertion suggestions - Each suggestion is rendered as a card showing the assertion type, comparison, expected value, confidence score, and explanation - Users can apply individual suggestions or all at once; applying sets the form's assertion field - Gated behind `uptime-ai-assertion-suggestions` + `gen-ai-features` + `uptime-runtime-assertions` flags and `!hideAiFeatures` ### New files - `assertionSuggestionsButton.tsx` — button component that collects form data and triggers the API call - `assertionSuggestionsDrawerContent.tsx` — drawer content with suggestion cards and apply actions - `assertionSuggestionCard.tsx` — individual suggestion card component - `connectedAssertionSuggestionsButton.tsx` — form-connected wrapper for detector forms - `types.tsx` — shared TypeScript types for suggestions ### Modified files - `uptimeAlertForm.tsx` — conditionally renders the suggestions button - `detectors/components/forms/uptime/index.tsx` — adds button to new/edit detector forms ## Test plan - [ ] `CI=true pnpm test static/app/views/alerts/rules/uptime/assertionSuggestionsButton.spec.tsx` - [ ] `CI=true pnpm test static/app/views/alerts/rules/uptime/assertionSuggestionsDrawerContent.spec.tsx` - [ ] `CI=true pnpm test static/app/views/alerts/rules/uptime/assertionSuggestionCard.spec.tsx` - [ ] Verify button hidden when any feature flag is missing - [ ] Verify `hideAiFeatures` org option hides the button Depends on: #108382
## Summary - Adds an "AI Suggestions" button to the uptime alert form and detector forms that opens a drawer with AI-generated assertion suggestions - Each suggestion is rendered as a card showing the assertion type, comparison, expected value, confidence score, and explanation - Users can apply individual suggestions or all at once; applying sets the form's assertion field - Gated behind `uptime-ai-assertion-suggestions` + `gen-ai-features` + `uptime-runtime-assertions` flags and `!hideAiFeatures` ### New files - `assertionSuggestionsButton.tsx` — button component that collects form data and triggers the API call - `assertionSuggestionsDrawerContent.tsx` — drawer content with suggestion cards and apply actions - `assertionSuggestionCard.tsx` — individual suggestion card component - `connectedAssertionSuggestionsButton.tsx` — form-connected wrapper for detector forms - `types.tsx` — shared TypeScript types for suggestions ### Modified files - `uptimeAlertForm.tsx` — conditionally renders the suggestions button - `detectors/components/forms/uptime/index.tsx` — adds button to new/edit detector forms ## Test plan - [ ] `CI=true pnpm test static/app/views/alerts/rules/uptime/assertionSuggestionsButton.spec.tsx` - [ ] `CI=true pnpm test static/app/views/alerts/rules/uptime/assertionSuggestionsDrawerContent.spec.tsx` - [ ] `CI=true pnpm test static/app/views/alerts/rules/uptime/assertionSuggestionCard.spec.tsx` - [ ] Verify button hidden when any feature flag is missing - [ ] Verify `hideAiFeatures` org option hides the button Depends on: #108382
## Summary
- Adds a new API endpoint `POST
/api/0/organizations/{org}/uptime-assertion-suggestions/` that runs a
preview check and uses Seer's LLM proxy to analyze the HTTP response and
suggest useful monitoring assertions
- Introduces `seer_assertions.py` module with prompt engineering,
structured output parsing (via Gemini), and conversion logic to
translate AI suggestions into the uptime checker's assertion JSON format
- Gates the endpoint behind `gen-ai-features` +
`uptime-runtime-assertions` flags and respects `hide_ai_features`
organization opt-out via `has_seer_access()`
- Includes rate limiting (1 req/5s per user, 10 req/60s per org) since
each call hits both the uptime checker and Seer
### New files
-
`src/sentry/uptime/endpoints/organization_uptime_assertion_suggestions.py`
— API endpoint
- `src/sentry/uptime/seer_assertions.py` — Seer integration, prompt,
parsing, assertion conversion
- `tests/sentry/uptime/test_seer_assertions.py` — unit tests for
parsing, prompting, and conversion logic
## Test plan
- [ ] Unit tests pass: `pytest
tests/sentry/uptime/test_seer_assertions.py`
- [ ] Endpoint integration: verify 403 when flags disabled, 200 with
valid preview check data
- [ ] Verify rate limiting behaves correctly
- [ ] Confirm `hide_ai_features` org option blocks access even when
flags are enabled
Depends on: #108178 (feature flag registration, already merged)
Replaces: #108377 (closed due to force-push race)
---------
Co-authored-by: getsantry[bot] <66042841+getsantry[bot]@users.noreply.github.com>
## Summary - Adds an "AI Suggestions" button to the uptime alert form and detector forms that opens a drawer with AI-generated assertion suggestions - Each suggestion is rendered as a card showing the assertion type, comparison, expected value, confidence score, and explanation - Users can apply individual suggestions or all at once; applying sets the form's assertion field - Gated behind `uptime-ai-assertion-suggestions` + `gen-ai-features` + `uptime-runtime-assertions` flags and `!hideAiFeatures` ### New files - `assertionSuggestionsButton.tsx` — button component that collects form data and triggers the API call - `assertionSuggestionsDrawerContent.tsx` — drawer content with suggestion cards and apply actions - `assertionSuggestionCard.tsx` — individual suggestion card component - `connectedAssertionSuggestionsButton.tsx` — form-connected wrapper for detector forms - `types.tsx` — shared TypeScript types for suggestions ### Modified files - `uptimeAlertForm.tsx` — conditionally renders the suggestions button - `detectors/components/forms/uptime/index.tsx` — adds button to new/edit detector forms ## Test plan - [ ] `CI=true pnpm test static/app/views/alerts/rules/uptime/assertionSuggestionsButton.spec.tsx` - [ ] `CI=true pnpm test static/app/views/alerts/rules/uptime/assertionSuggestionsDrawerContent.spec.tsx` - [ ] `CI=true pnpm test static/app/views/alerts/rules/uptime/assertionSuggestionCard.spec.tsx` - [ ] Verify button hidden when any feature flag is missing - [ ] Verify `hideAiFeatures` org option hides the button Depends on: #108382
Summary
POST /api/0/organizations/{org}/uptime-assertion-suggestions/that runs a preview check and uses Seer's LLM proxy to analyze the HTTP response and suggest useful monitoring assertionsseer_assertions.pymodule with prompt engineering, structured output parsing (via Gemini), and conversion logic to translate AI suggestions into the uptime checker's assertion JSON formatgen-ai-features+uptime-runtime-assertionsflags and respectshide_ai_featuresorganization opt-out viahas_seer_access()New files
src/sentry/uptime/endpoints/organization_uptime_assertion_suggestions.py— API endpointsrc/sentry/uptime/seer_assertions.py— Seer integration, prompt, parsing, assertion conversiontests/sentry/uptime/test_seer_assertions.py— unit tests for parsing, prompting, and conversion logicTest plan
pytest tests/sentry/uptime/test_seer_assertions.pyhide_ai_featuresorg option blocks access even when flags are enabledDepends on: #108178 (feature flag registration, already merged)
Replaces: #108377 (closed due to force-push race)