
Manual Integrations

AI Guard isn't available in all Datadog sites.

Manual integrations require additional configuration to enable AI Guard protection. Follow the instructions for each framework to set up AI Guard evaluations.

Supported frameworks and libraries

Python

Framework      | Supported Versions | SDK Version
Amazon Strands | >= 1.29.0          | >= 4.7.0
LiteLLM Proxy  | >= 1.78.5          | >= 4.8.0

Set up the Datadog Agent

SDKs use the Datadog Agent to send AI Guard data to Datadog. The Agent must be running and accessible to your application.

If you don't use the Datadog Agent, the AI Guard evaluator API still works, but you can't see AI Guard traces in Datadog.

Required environment variables

Set the following environment variables in your application:

Variable            | Value
DD_AI_GUARD_ENABLED | true
DD_API_KEY          | <YOUR_API_KEY>
DD_APP_KEY          | <YOUR_APPLICATION_KEY>
DD_ENV              | <YOUR_ENVIRONMENT>
DD_SERVICE          | <YOUR_SERVICE>
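
You can export these variables in your shell or process manager. As an illustration only, the following sketch shows one way to set them from Python before ddtrace is imported; the placeholder values are assumptions you must replace:

import os

# Placeholder values: replace with your own keys, environment, and service name.
os.environ.setdefault("DD_AI_GUARD_ENABLED", "true")
os.environ.setdefault("DD_API_KEY", "<YOUR_API_KEY>")
os.environ.setdefault("DD_APP_KEY", "<YOUR_APPLICATION_KEY>")
os.environ.setdefault("DD_ENV", "<YOUR_ENVIRONMENT>")
os.environ.setdefault("DD_SERVICE", "<YOUR_SERVICE>")

import ddtrace  # noqa: E402  # import after the variables are set so ddtrace picks them up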

Integrations

Amazon Strands

Python

The Amazon Strands integration enables AI Guard evaluations for applications built with the Amazon Strands Agents SDK.

Setup

Install dd-trace-py v4.7.0 or later:

pip install "ddtrace>=4.7.0"

Next, define the entry point for the integration with a plugin or hook provider:

  • Plugin (recommended):
from ddtrace.appsec.ai_guard import AIGuardStrandsPlugin

agent = Agent(
    model=model,
    plugins=[AIGuardStrandsPlugin()]
)

  • HookProvider (legacy):
from ddtrace.appsec.ai_guard import AIGuardStrandsHookProvider

agent = Agent(
    model=model,
    hooks=[AIGuardStrandsHookProvider()]
)
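
For reference, the following is a minimal end-to-end sketch, assuming the Strands Agents SDK's Agent and BedrockModel classes and an illustrative model ID; swap in the model provider and model your application actually uses:

from strands import Agent
from strands.models import BedrockModel

from ddtrace.appsec.ai_guard import AIGuardStrandsPlugin

# Illustrative model choice; use whichever Strands model provider you already have.
model = BedrockModel(model_id="anthropic.claude-3-5-sonnet-20240620-v1:0")

# The plugin hooks AI Guard evaluations into the agent's execution.
agent = Agent(
    model=model,
    plugins=[AIGuardStrandsPlugin()],
)

if __name__ == "__main__":
    print(agent("Summarize the latest deployment logs."))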

LiteLLM Proxy

Python

The LiteLLM Proxy integration enables AI Guard evaluations for applications using the LiteLLM Proxy.

Setup

Install dd-trace-py v4.8.0 or later:

pip install "ddtrace>=4.8.0"

In a Python file next to your LiteLLM configuration file (for example, guardrails.py), import Datadog’s LiteLLM guardrail:

from ddtrace.appsec.ai_guard.integrations.litellm import DatadogAIGuardGuardrail

__all__ = ["DatadogAIGuardGuardrail"]

Add the imported guardrail to your configuration file:

guardrails:
  - guardrail_name: datadog_ai_guard
    litellm_params:
      guardrail: guardrails.DatadogAIGuardGuardrail
      mode: [pre_call, post_call]
      on_input: true
      on_output: true
      block: true

The guardrail supports all three modes: pre_call, post_call, and during_call.

By default, the guardrail follows the blocking configuration set in the AI Guard service settings. To disable blocking, set the block parameter to false (equivalent to the block option in the SDK and REST API).
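
Once the proxy is running with this configuration, any OpenAI-compatible client can call it. The snippet below is an illustrative sketch, assuming the proxy listens on its default port (4000) and that gpt-4o-mini is a model name defined in your LiteLLM config; with block enabled, requests that AI Guard flags are rejected instead of being forwarded.

from openai import OpenAI

# The API key is whatever key your LiteLLM Proxy expects (for example, a virtual key).
client = OpenAI(
    base_url="http://localhost:4000",  # assumed default LiteLLM Proxy address
    api_key="<YOUR_LITELLM_KEY>",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model name defined in your LiteLLM config
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)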
