feat(autopilot): Add finish reason to prompt (#107229)
Conversation
src/sentry/autopilot/tasks.py (outdated)

```diff
@@ -370,9 +373,17 @@ def run_missing_sdk_integration_detector_for_project(

 # Output

-Return a JSON array of missing integration names using exact names from the docs.
-Example: `["zodErrorsIntegration"]`
-If none missing: `[]`"""
+Return a JSON object with:
+- `missing_integrations`: Array of missing integration names using exact names from the docs
+- `finish_reason`: A short snake_case string describing the outcome:
+  - `done`: Successfully analyzed the project (even if no integrations are missing)
+  - `missing_sentry_init`: Could not find Sentry initialization code (`Sentry.init` or `sentry_sdk.init`)
+  - `missing_dependency_file`: Could not find any dependency file for the project
+  - For other issues, use a descriptive snake_case reason (e.g., `docs_unavailable`)
+
+Example success: `{{"missing_integrations": ["zodErrorsIntegration"], "finish_reason": "done"}}`
+Example no missing: `{{"missing_integrations": [], "finish_reason": "done"}}`
+Example no init: `{{"missing_integrations": [], "finish_reason": "missing_sentry_init"}}`"""
```
Should we use constants for the return values, so that we ensure they are consistent?
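One way to do that, sketched here purely as an assumption (neither the enum nor the names below appear in the PR), is a small string-mixin enum that both the prompt template and any downstream checks can reference:

```python
from enum import Enum


class FinishReason(str, Enum):
    """Hypothetical constants for the well-known finish reasons.

    The prompt also allows free-form snake_case reasons (e.g. "docs_unavailable"),
    so callers should not assume every value maps to a member.
    """

    DONE = "done"
    MISSING_SENTRY_INIT = "missing_sentry_init"
    MISSING_DEPENDENCY_FILE = "missing_dependency_file"


def describe(reason: str) -> str:
    """Map a reason string back to a known member name, falling back to the raw value."""
    try:
        return FinishReason(reason).name
    except ValueError:
        return reason
```

Because the class inherits from `str`, members compare equal to the literal strings embedded in the prompt, so the template and the post-processing cannot silently drift apart.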
```diff
@@ -385,6 +396,7 @@ def run_missing_sdk_integration_detector_for_project(
     # Extract the structured result
     result = state.get_artifact("missing_integrations", MissingSdkIntegrationsResult)
     missing_integrations = result.missing_integrations if result else []
```
Bug: The call to state.get_artifact can raise a ValidationError if the AI response is missing the required finish_reason field. This unhandled exception causes the task to fail silently.
Severity: HIGH
Suggested Fix
Wrap the call to state.get_artifact in a try...except pydantic.ValidationError block to handle cases where the artifact data is malformed. Alternatively, make the finish_reason field in the MissingSdkIntegrationsResult model optional with a default value to prevent validation from failing.
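The two suggested fixes can also be combined. A minimal sketch follows; `MissingSdkIntegrationsResult` and `parse_obj` come from the report above, while the wrapper function and field defaults are illustrative assumptions, not the actual Sentry code:

```python
from typing import Optional

import pydantic


class MissingSdkIntegrationsResult(pydantic.BaseModel):
    missing_integrations: list[str] = []
    # Alternative fix: make the field optional with a default, so
    # responses that lack `finish_reason` still validate.
    finish_reason: Optional[str] = None


def parse_artifact(raw: object) -> Optional[MissingSdkIntegrationsResult]:
    """Primary fix: catch ValidationError instead of letting it escape,
    so the caller's `if result else []` fallback is actually reachable."""
    try:
        return MissingSdkIntegrationsResult.parse_obj(raw)
    except pydantic.ValidationError:
        return None


result = parse_artifact({"missing_integrations": ["zodErrorsIntegration"]})
missing_integrations = result.missing_integrations if result else []
```

With this shape, a well-formed response parses normally, a response missing `finish_reason` still yields a usable result, and genuinely malformed data degrades to an empty list instead of aborting the whole task.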
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI agent. Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not valid.
Location: src/sentry/autopilot/tasks.py#L387
Potential issue: The `state.get_artifact` method at line 387 uses Pydantic's `parse_obj` to validate data against the `MissingSdkIntegrationsResult` model, which requires a `finish_reason` field. If the AI model returns data without this field, `parse_obj` raises a `ValidationError`. This exception is not handled locally, so the `result` variable is never assigned, and the subsequent fallback logic for a `None` result is never reached. The entire task then fails, caught by a generic `except Exception` block that logs a non-specific error and prevents any Sentry issues from being created for the missing integrations.
src/sentry/autopilot/tasks.py (outdated)

```diff
+Return a JSON object with:
+- `missing_integrations`: Array of missing integration names using exact names from the docs
+- `finish_reason`: A short snake_case string describing the outcome:
+  - `done`: Successfully analyzed the project (even if no integrations are missing)
```
'done' may indicate that the task was completed unsuccessfully as well - should we call this "success" instead?
Cursor Bugbot has reviewed your changes and found 1 potential issue.
shellmayr left a comment:
Looks good overall - left some comments 👍
No description provided.