
Documentation Index

Fetch the complete documentation index at: https://opensre.com/docs/llms.txt

Use this file to discover all available pages before exploring further.

OpenSRE is an open-source framework for building AI SRE agents that investigate production incidents using your existing observability stack, cloud context, and runbooks.
Install OpenSRE, run onboarding, then investigate a sample alert:
opensre onboard
opensre investigate -i tests/e2e/kubernetes/fixtures/datadog_k8s_alert.json
OpenSRE can be self-hosted. It is built on LangGraph, so you can run it on your own infrastructure using the LangGraph runtime. Before deploying, set LLM_PROVIDER and the matching provider key (for example, ANTHROPIC_API_KEY when LLM_PROVIDER=anthropic).
OpenSRE supports multiple providers, including Anthropic, OpenAI, OpenRouter, and Gemini via LLM_PROVIDER plus the matching API key. Additional providers and overrides are documented in .env.example.
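As a minimal sketch, provider selection happens through environment variables before OpenSRE starts. LLM_PROVIDER and ANTHROPIC_API_KEY are named above; the key value here is a placeholder, and any other provider-specific variables should be taken from .env.example:

```shell
# Choose the LLM provider; OpenSRE reads this at startup.
export LLM_PROVIDER=anthropic

# Supply the matching provider key (placeholder value, not a real credential).
export ANTHROPIC_API_KEY="replace-with-your-key"

echo "Provider: $LLM_PROVIDER"
```

Switching providers means changing LLM_PROVIDER and exporting that provider's key instead (e.g. an OpenAI key when LLM_PROVIDER=openai).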
OpenSRE integrates with 60+ systems across observability, cloud, incident management, data platforms, and collaboration tools, so most common stacks are covered. See the Integrations section of the docs for connector-specific setup steps.
Running opensre starts an interactive incident-response shell where you can describe issues in plain language, stream investigations live, and ask grounded follow-up questions in the same session.
OpenSRE is designed for security-sensitive environments and uses structured, auditable workflows. Anonymous telemetry can be disabled with OPENSRE_NO_TELEMETRY=1. For vulnerability reports, email [email protected].
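A short sketch of the telemetry opt-out described above; OPENSRE_NO_TELEMETRY comes from the text, and it must be set in the environment before any opensre command runs:

```shell
# Disable anonymous telemetry for all subsequent opensre invocations
# in this shell session ("1" opts out, per the docs above).
export OPENSRE_NO_TELEMETRY=1

echo "Telemetry opt-out: $OPENSRE_NO_TELEMETRY"
```

To make the opt-out permanent, place the export in your shell profile or in the environment of the service that runs OpenSRE.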