Note: AI Guard is not available in all Datadog sites.
AI Guard can automatically evaluate LLM calls made through supported AI ecosystem packages, without requiring manual API calls. When your application uses one of the supported packages, the Datadog SDK instruments it to evaluate those calls through AI Guard automatically. No code changes are required.
Supported frameworks and libraries
| Package | Supported Versions | SDK Version |
|---|---|---|
| LangChain | >= 0.1.20 | >= 3.14.0 |

| Package | Supported Versions | SDK Version |
|---|---|---|
| AI SDK | v6 | >= 5.95.0 |
Set up the Datadog Agent
SDKs use the Datadog Agent to send AI Guard data to Datadog. The Agent must be running and accessible to your application.
If you don't use the Datadog Agent, the AI Guard evaluator API still works, but you can't see AI Guard traces in Datadog.
Required environment variables
Set the following environment variables in your application:
| Variable | Value |
|---|---|
| DD_AI_GUARD_ENABLED | true |
| DD_API_KEY | <YOUR_API_KEY> |
| DD_APP_KEY | <YOUR_APPLICATION_KEY> |
| DD_ENV | <YOUR_ENVIRONMENT> |
| DD_SERVICE | <YOUR_SERVICE> |
| DD_TRACE_ENABLED | true |
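For example, a Python application could export these variables and launch under automatic instrumentation. This is a sketch: `app.py` and the placeholder values are illustrative, not part of this documentation.

```shell
# Sketch: enable AI Guard and tracing for a hypothetical app.py
export DD_AI_GUARD_ENABLED=true
export DD_API_KEY=<YOUR_API_KEY>
export DD_APP_KEY=<YOUR_APPLICATION_KEY>
export DD_ENV=<YOUR_ENVIRONMENT>
export DD_SERVICE=<YOUR_SERVICE>
export DD_TRACE_ENABLED=true

# ddtrace-run enables automatic instrumentation for supported packages
ddtrace-run python app.py
```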
By default, automatic integrations follow the blocking configuration set in the AI Guard service settings. To disable blocking for a specific service, set DD_AI_GUARD_BLOCK to false (equivalent to the block option in the SDK and REST API):
| Variable | Value |
|---|---|
| DD_AI_GUARD_BLOCK | false |
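For instance, a service that should be evaluated but never blocked could combine the two flags like this (an illustrative fragment):

```shell
# Sketch: keep automatic evaluations on, but disable blocking
# for this one service, overriding the AI Guard service settings.
export DD_AI_GUARD_ENABLED=true
export DD_AI_GUARD_BLOCK=false
```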
Integrations
The LangChain integration automatically applies AI Guard evaluations to calls made through the LangChain Python SDK.
Traced operations
AI Guard automatically evaluates the following LangChain operations:
- LLMs: `llm.invoke()`, `llm.ainvoke()`
- Chat models: `chat_model.invoke()`, `chat_model.ainvoke()`
- Tools: `BaseTool.invoke()`, `BaseTool.ainvoke()`
The AI SDK integration automatically applies AI Guard evaluations to text and object generation, embeddings, and tool calls.
Traced operations
Further reading
Additional helpful documentation, links, and articles: