Integrations#

aiobotocore#

The aiobotocore integration will trace all AWS calls made with the aiobotocore library. This integration is not enabled by default.

Enabling#

The aiobotocore integration is not enabled by default. Use patch() to enable the integration:

from ddtrace import patch
patch(aiobotocore=True)

Configuration#

ddtrace.config.aiobotocore['tag_no_params']

This opts out of the default behavior of adding span tags for a narrow set of API parameters.

To avoid collecting any API parameters, set ddtrace.config.aiobotocore['tag_no_params'] = True or set the environment variable DD_AWS_TAG_NO_PARAMS=true.

Default: False

aiopg#

Instrument aiopg to report a span for each executed Postgres query:

from ddtrace import patch
import aiopg

# If not patched yet, you can patch aiopg specifically
patch(aiopg=True)

# This will report a span with the default settings
async with aiopg.connect(DSN) as db:
    with (await db.cursor()) as cursor:
        await cursor.execute("SELECT * FROM users WHERE id = 1")

Configuration#

ddtrace.config.aiopg["service"]

The service name reported by default for aiopg spans.

This option can also be set with the DD_AIOPG_SERVICE environment variable.

Default: "postgres"

algoliasearch#

The Algoliasearch integration will add tracing to your Algolia searches.

import ddtrace.auto

from algoliasearch import algoliasearch
client = algoliasearch.Client(<ID>, <API_KEY>)
index = client.init_index(<INDEX_NAME>)
index.search("your query", args={"attributesToRetrieve": "attribute1,attribute2"})

Configuration#

ddtrace.config.algoliasearch['collect_query_text']

Whether to pass the text of your query to Datadog. Since this may contain sensitive data, it’s off by default.

Default: False

aredis#

The aredis integration traces aredis requests.

Enabling#

The aredis integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(aredis=True)

Configuration#

ddtrace.config.aredis["service"]

The service name reported by default for aredis traces.

This option can also be set with the DD_AREDIS_SERVICE environment variable.

Default: "redis"

ddtrace.config.aredis["cmd_max_length"]

Max allowable size for the aredis command span tag. Anything beyond the max length will be replaced with "...".

This option can also be set with the DD_AREDIS_CMD_MAX_LENGTH environment variable.

Default: 1000
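The documented truncation behavior can be sketched as follows (a hypothetical helper illustrating the rule above; ddtrace's actual internals may differ):

```python
def truncate_command(command: str, max_length: int = 1000) -> str:
    # Anything beyond max_length is replaced with "...", keeping the
    # resulting tag value within the configured limit.
    if len(command) <= max_length:
        return command
    return command[: max_length - 3] + "..."
```

For example, a multi-kilobyte SET command would be cut to the first 997 characters plus "...".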

ddtrace.config.aredis["resource_only_command"]

The span resource will only include the command executed. To include all arguments in the span resource, set this value to False.

This option can also be set with the DD_REDIS_RESOURCE_ONLY_COMMAND environment variable.

Default: True

asgi#

The asgi middleware traces all requests to an ASGI-compliant application.

To configure tracing manually:

from ddtrace.contrib.asgi import TraceMiddleware

# app = <your asgi app>
app = TraceMiddleware(app)

Alternatively, use ddtrace-run when serving your application. For example, if serving with Uvicorn:

ddtrace-run uvicorn app:app

The middleware also supports using a custom function for handling exceptions for a trace:

from ddtrace.contrib.asgi import TraceMiddleware

def custom_handle_exception_span(exc, span):
    span.set_tag("http.status_code", 501)

# app = <your asgi app>
app = TraceMiddleware(app, handle_exception_span=custom_handle_exception_span)

To retrieve the request span from the scope of an ASGI request use the span_from_scope function:

from ddtrace.contrib.asgi import span_from_scope

def handle_request(scope, send):
    span = span_from_scope(scope)
    if span:
        span.set_tag(...)
    ...
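Conceptually, span_from_scope works because the middleware stashes the request span in the mutable ASGI scope dict and handlers look it up later. The sketch below illustrates the pattern; the key name and structure are assumptions for illustration, not ddtrace internals:

```python
DATADOG_SCOPE_KEY = "datadog"  # illustrative key name, not ddtrace's actual key

def store_request_span(scope: dict, span) -> None:
    # A tracing middleware can stash the request span in the scope dict...
    scope.setdefault(DATADOG_SCOPE_KEY, {})["request_spans"] = [span]

def span_from_scope(scope: dict):
    # ...and application code retrieves it later, or None if tracing
    # was not enabled for this request.
    spans = scope.get(DATADOG_SCOPE_KEY, {}).get("request_spans")
    return spans[0] if spans else None
```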

Configuration#

ddtrace.config.asgi['distributed_tracing']

Whether to use distributed tracing headers from requests received by your ASGI app.

Default: True

ddtrace.config.asgi['service_name']

The service name reported for your ASGI app.

Can also be configured via the DD_SERVICE environment variable.

Default: 'asgi'

ddtrace.config.asgi['obfuscate_404_resource']

Indicates whether to obfuscate resource name for spans that result in a 404 response code.

This setting also applies to other integrations built on ASGI, including FastAPI and Starlette.

Can also be configured via the DD_ASGI_OBFUSCATE_404_RESOURCE environment variable.

Default: False

DD_TRACE_WEBSOCKET_MESSAGES_ENABLED#

Indicates whether to trace websocket messages.

Default: True

DD_TRACE_WEBSOCKET_MESSAGES_INHERIT_SAMPLING#

Indicates whether websocket message spans should inherit sampling from the handshake span.

Default: True

DD_TRACE_WEBSOCKET_MESSAGES_SEPARATE_TRACES#

Indicates whether websocket message spans should be on their own trace.

If disabled, websocket messages will have the handshake as parent span.

If disabled, DD_TRACE_WEBSOCKET_MESSAGES_INHERIT_SAMPLING will be ignored.

Default: True
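The interaction between these settings can be sketched as follows. This is a hypothetical helper, not ddtrace internals, and the environment-variable parsing shown is an assumption:

```python
import os

def _flag(name: str, default: str = "true") -> bool:
    # Hypothetical truthy parsing; ddtrace's actual parsing may differ.
    return os.environ.get(name, default).strip().lower() in ("1", "true")

def websocket_message_trace_plan() -> dict:
    separate = _flag("DD_TRACE_WEBSOCKET_MESSAGES_SEPARATE_TRACES")
    inherit = _flag("DD_TRACE_WEBSOCKET_MESSAGES_INHERIT_SAMPLING")
    if not separate:
        # Messages are parented to the handshake span, and the
        # inherit-sampling setting is ignored.
        return {"parent": "handshake", "inherit_sampling_ignored": True}
    # Messages get their own trace; sampling inheritance applies.
    return {"parent": None, "inherit_sampling": inherit}
```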

aiohttp#

The aiohttp integration traces requests made with the client or to the server.

The client is automatically instrumented while the server must be manually instrumented using middleware.

Client#

Enabling#

The client integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(aiohttp=True)

Configuration#

ddtrace.config.aiohttp_client['distributed_tracing']

Include distributed tracing headers in requests sent from the aiohttp client.

This option can also be set with the DD_AIOHTTP_CLIENT_DISTRIBUTED_TRACING environment variable.

Default: True

ddtrace.config.aiohttp_client['split_by_domain']

Whether or not to use the domain name of requests as the service name.

Default: False

ddtrace.config.aiohttp['disable_stream_timing_for_mem_leak']

Whether or not to address a potential memory leak in the aiohttp integration. When set to True, this flag may cause streamed response span timing to be inaccurate.

Default: False

Server#

Enabling#

Automatic instrumentation is not available for the server; instead, the provided trace_app function must be used:

from aiohttp import web
from ddtrace.contrib.aiohttp import trace_app

# create your application
app = web.Application()
app.router.add_get('/', home_handler)

# trace your application handlers
trace_app(app, service='async-api')
web.run_app(app, port=8000)

Integration settings are attached to your application under the datadog_trace namespace. You can read or update them as follows:

# disables distributed tracing for all received requests
app['datadog_trace']['distributed_tracing_enabled'] = False

Available settings are:

  • service (default: aiohttp-web): set the service name used by the tracer. Usually this configuration must be updated with a meaningful name.

  • distributed_tracing_enabled (default: True): enable distributed tracing during the middleware execution, so that a new span is created with the given trace_id and parent_id injected via request headers.

When a request span is created, a new Context for this logical execution is attached to the request object, so that it can be used in the application code:

async def home_handler(request):
    ctx = request['datadog_context']
    # do something with the tracing Context

All HTTP tags are supported for this integration.

aiohttp-jinja2#

The aiohttp_jinja2 integration adds tracing of template rendering.

Enabling#

The integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(aiohttp_jinja2=True)

aiokafka#

This integration instruments the aiokafka library (https://github.com/aio-libs/aiokafka) to trace event streaming.

Enabling#

The aiokafka integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(aiokafka=True)
import aiokafka
...

Configuration#

ddtrace.config.aiokafka["service"]

The service name reported by default for your kafka spans.

This option can also be set with the DD_AIOKAFKA_SERVICE environment variable.

Default: "kafka"

ddtrace.config.aiokafka["distributed_tracing_enabled"]

Whether to enable distributed tracing between Kafka messages.

This option can also be set with the DD_KAFKA_PROPAGATION_ENABLED environment variable.

Default: False

aiomysql#

The aiomysql integration instruments the aiomysql library to trace MySQL queries.

Enabling#

The integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(aiomysql=True)

Configuration#

ddtrace.config.aiomysql["service"]

The service name reported by default for aiomysql spans.

This option can also be set with the DD_AIOMYSQL_SERVICE environment variable.

Default: "mysql"

anthropic#

The Anthropic integration instruments the Anthropic Python library to emit traces for message requests made to Anthropic models.

All traces submitted from the Anthropic integration are tagged by:

  • service, env, version: see the Unified Service Tagging docs.

  • anthropic.request.model: Anthropic model used in the request.

  • anthropic.request.api_key: Anthropic API key used to make the request (obfuscated to match the Anthropic UI representation sk-...XXXX where XXXX is the last 4 characters of the key).

  • anthropic.request.parameters: Parameters used in anthropic package call.
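The API key obfuscation described above can be sketched as follows (a hypothetical helper illustrating the documented representation, not ddtrace's implementation):

```python
def obfuscate_api_key(api_key: str) -> str:
    # Keep only the last four characters, matching the "sk-...XXXX"
    # representation shown in the Anthropic UI.
    return "sk-..." + api_key[-4:]
```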

Enabling#

The Anthropic integration is enabled automatically when you use ddtrace-run or import ddtrace.auto.

Note that these commands also enable the httpx integration which traces HTTP requests from the Anthropic library.

Alternatively, use patch() to manually enable the Anthropic integration:

from ddtrace import patch

patch(anthropic=True)

Configuration#

ddtrace.config.anthropic["service"]

The service name reported by default for Anthropic requests.

Alternatively, set this option with the DD_ANTHROPIC_SERVICE environment variable.

asyncio#

This integration provides context management for tracing the execution flow of concurrent execution of asyncio.Task.

asyncpg#

The asyncpg integration traces database requests made using connection and cursor objects.

Enabling#

The integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(asyncpg=True)

Configuration#

ddtrace.config.asyncpg['service']

The service name reported by default for asyncpg connections.

This option can also be set with the DD_ASYNCPG_SERVICE environment variable.

Default: "postgres"

avro#

The Avro integration will trace all Avro read / write calls made with the avro library. This integration is enabled by default.

Enabling#

The avro integration is enabled by default. If it has been disabled, use patch() to re-enable it:

from ddtrace import patch
patch(avro=True)

Azure Functions#

The azure_functions integration traces all HTTP requests to your Azure Function app.

Enabling#

The azure_functions integration is enabled by default when using import ddtrace.auto.

Configuration#

ddtrace.config.azure_functions["service"]

The service name reported by default for Azure Function apps.

This option can also be set with the DD_SERVICE environment variable.

Default: "azure_functions"

ddtrace.config.azure_functions['distributed_tracing']

Whether to parse distributed tracing headers from requests or messages received by your Azure Function apps.

This option can also be set with the DD_AZURE_FUNCTIONS_DISTRIBUTED_TRACING environment variable.

Default: True

botocore#

The Botocore integration will trace all AWS calls made with the botocore library. Libraries like Boto3 that use Botocore will also be patched.

Enabling#

The botocore integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(botocore=True)

To patch only specific botocore modules, pass a list of the module names instead:

from ddtrace import patch
patch(botocore=['s3', 'sns'])

Configuration#

ddtrace.config.botocore['distributed_tracing']

Whether to inject distributed tracing data to requests in SQS, SNS, EventBridge, Kinesis Streams and Lambda.

Can also be enabled with the DD_BOTOCORE_DISTRIBUTED_TRACING environment variable.

Example:

from ddtrace import config

# Enable distributed tracing
config.botocore['distributed_tracing'] = True

Default: True

ddtrace.config.botocore['invoke_with_legacy_context']

This preserves legacy behavior when tracing directly invoked Python and Node Lambda functions instrumented with datadog-lambda-python < v41 or datadog-lambda-js < v3.58.0.

Legacy support for older libraries is available by setting ddtrace.config.botocore['invoke_with_legacy_context'] = True or the environment variable DD_BOTOCORE_INVOKE_WITH_LEGACY_CONTEXT=true.

Default: False

ddtrace.config.botocore['operations'][<operation>].error_statuses = "<error statuses>"

Definition of which HTTP status codes to consider when marking a span as an error.

By default response status codes of '500-599' are considered as errors for all endpoints.

Example marking 404 and 5xx as errors for s3.headobject API calls:

from ddtrace import config

config.botocore['operations']['s3.headobject'].error_statuses = '404,500-599'

See HTTP - Custom Error Codes documentation for more examples.
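The "404,500-599" spec format can be sketched as a parser plus a membership check (hypothetical helpers illustrating the documented format, not ddtrace's code):

```python
def parse_error_statuses(spec: str):
    # Parse a spec like "404,500-599" into inclusive (low, high) ranges.
    ranges = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            low, high = part.split("-", 1)
            ranges.append((int(low), int(high)))
        else:
            ranges.append((int(part), int(part)))
    return ranges

def is_error_status(status: int, ranges) -> bool:
    # A status marks the span as an error if it falls in any range.
    return any(low <= status <= high for low, high in ranges)
```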

ddtrace.config.botocore['tag_no_params']

This opts out of the default behavior of collecting a narrow set of API parameters as span tags.

To avoid collecting any API parameters, set ddtrace.config.botocore['tag_no_params'] = True or the environment variable DD_AWS_TAG_NO_PARAMS=true.

Default: False

ddtrace.config.botocore['instrument_internals']

This opts into collecting spans for some internal functions, including parsers.ResponseParser.parse.

Can also be enabled with the DD_BOTOCORE_INSTRUMENT_INTERNALS environment variable.

Default: False

ddtrace.config.botocore['dynamodb_primary_key_names_for_tables']

This enables DynamoDB API calls to be instrumented with span pointers. Many DynamoDB API calls do not include the Item’s Primary Key fields as separate values, so they need to be provided to the tracer separately. This field should be structured as a dict keyed by the table names as str. Each value should be the set of primary key field names (as str) for the associated table. The set may have exactly one or two elements, depending on the Table’s Primary Key schema.

In Python this would look like:

ddtrace.config.botocore['dynamodb_primary_key_names_for_tables'] = {
    'table_name': {'key1', 'key2'},
    'other_table': {'other_key'},
}

Can also be enabled with the DD_BOTOCORE_DYNAMODB_TABLE_PRIMARY_KEYS environment variable which is parsed as a JSON object with strings for keys and lists of strings for values.

This would look something like:

export DD_BOTOCORE_DYNAMODB_TABLE_PRIMARY_KEYS='{
    "table_name": ["key1", "key2"],
    "other_table": ["other_key"]
}'

Default: {}
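Converting the JSON environment-variable form into the dict-of-sets form shown above can be sketched as (a hypothetical helper, not ddtrace's parser):

```python
import json

def parse_table_primary_keys(raw: str) -> dict:
    # Keys are table names; values (lists of strings in the JSON form)
    # become sets of primary key field names.
    return {table: set(keys) for table, keys in json.loads(raw).items()}
```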

ddtrace.config.botocore['add_span_pointers']

This enables the addition of span pointers to spans associated with successful AWS API calls.

Alternatively, you can set this option with the DD_BOTOCORE_ADD_SPAN_POINTERS environment variable.

Default: True

boto2#

The boto integration will trace all AWS calls made via boto2.

Enabling#

The boto integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(boto=True)

Configuration#

ddtrace.config.boto['tag_no_params']

This opts out of the default behavior of collecting a narrow set of API parameters as span tags.

To avoid collecting any API parameters, set ddtrace.config.boto['tag_no_params'] = True or the environment variable DD_AWS_TAG_NO_PARAMS=true.

Default: False

Bottle#

The bottle integration traces the Bottle web framework. Add the following plugin to your app:

import bottle
from ddtrace import tracer
from ddtrace.contrib.bottle import TracePlugin

app = bottle.Bottle()
plugin = TracePlugin(service="my-web-app")
app.install(plugin)

All HTTP tags are supported for this integration.

Configuration#

ddtrace.config.bottle['distributed_tracing']

Whether to parse distributed tracing headers from requests received by your bottle app.

Can also be enabled with the DD_BOTTLE_DISTRIBUTED_TRACING environment variable.

Default: True

Example:

from ddtrace import config

# Enable distributed tracing
config.bottle['distributed_tracing'] = True

Celery#

The Celery integration will trace all tasks that are executed in the background. Functions and class based tasks are traced only if the Celery API is used, so calling the function directly or via the run() method will not generate traces. However, calling apply(), apply_async() and delay() will produce tracing data. To trace your Celery application, call the patch method:

import celery
from ddtrace import patch

patch(celery=True)
app = celery.Celery()

@app.task
def my_task():
    pass

class MyTask(app.Task):
    def run(self):
        pass

Configuration#

ddtrace.config.celery['distributed_tracing']

Whether or not to pass distributed tracing headers to Celery workers. Note: this flag applies to both Celery workers and callers separately.

On the caller: enabling propagation causes the caller and worker to share a single trace while disabling causes them to be separate.

On the worker: enabling propagation causes context to propagate across tasks, such as when Task A queues work for Task B, or if Task A retries. Disabling propagation causes each celery.run task to be in its own separate trace.

Can also be enabled with the DD_CELERY_DISTRIBUTED_TRACING environment variable.

Default: False

ddtrace.config.celery['producer_service_name']

Sets the service name for the producer.

Default: 'celery-producer'

ddtrace.config.celery['worker_service_name']

Sets the service name for the worker.

Default: 'celery-worker'

CherryPy#

The CherryPy trace middleware tracks request timings. It uses CherryPy hooks and creates a tool to track requests and errors.

Usage#

To install the middleware, add:

from ddtrace.trace import tracer
from ddtrace.contrib.cherrypy import TraceMiddleware

and create a TraceMiddleware object:

traced_app = TraceMiddleware(cherrypy, service="my-cherrypy-app")

Configuration#

ddtrace.config.cherrypy['distributed_tracing']

Whether to parse distributed tracing headers from requests received by your CherryPy app.

Can also be enabled with the DD_CHERRYPY_DISTRIBUTED_TRACING environment variable.

Default: True

ddtrace.config.cherrypy['service']

The service name reported for your CherryPy app.

Can also be configured via the DD_SERVICE environment variable.

Default: 'cherrypy'

Example: here is the end result in a sample app:

import cherrypy

from ddtrace.contrib.cherrypy import TraceMiddleware
TraceMiddleware(cherrypy, service="my-cherrypy-app")

@cherrypy.tools.tracer()
class HelloWorld(object):
    def index(self):
        return "Hello World"
    index.exposed = True

cherrypy.quickstart(HelloWorld())

Claude Agent SDK#

This integration instruments the claude-agent-sdk library.

Enabling#

The claude_agent_sdk integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(claude_agent_sdk=True)
import claude_agent_sdk
...

Global Configuration#

ddtrace.config.claude_agent_sdk["service"]

The service name reported by default for claude_agent_sdk spans.

This option can also be set with the DD_CLAUDE_AGENT_SDK_SERVICE environment variable.

Consul#

Instrument Consul to trace KV queries.

Only supports tracing for the synchronous client.

import ddtrace.auto will automatically patch your Consul client.

from ddtrace import patch
import consul

# If not patched yet, you can patch consul specifically
patch(consul=True)

# This will report a span with the default settings
client = consul.Consul(host="127.0.0.1", port=8500)
client.get("my-key")

Configuration#

ddtrace.config.consul["service"]

The service name reported by default for consul spans.

This option can also be set with the DD_CONSUL_SERVICE environment variable.

Default: "consul"

Coverage#

The Coverage.py integration traces test code coverage when using pytest or unittest.

Enabling#

The Coverage.py integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Alternatively, use patch() to manually enable the integration:

from ddtrace import patch
patch(coverage=True)

Note: Coverage.py instrumentation is only enabled if pytest or unittest instrumentation is enabled.

CrewAI#

The CrewAI integration instruments the CrewAI Python library to emit traces for crew/task/agent/tool executions.

All traces submitted from the CrewAI integration are tagged by:

  • service, env, version: see the Unified Service Tagging docs.

Enabling#

The CrewAI integration is enabled automatically when you use ddtrace-run or import ddtrace.auto.

Alternatively, use patch() to manually enable the CrewAI integration:

from ddtrace import patch

patch(crewai=True)

Configuration#

ddtrace.config.crewai["service"]

The service name reported by default for CrewAI requests.

Alternatively, set this option with the DD_CREWAI_SERVICE environment variable.

datadog_lambda#

The aws_lambda integration currently enables traces to be sent before an impending timeout in an AWS Lambda function instrumented with the Datadog Lambda Python package.

Enabling#

The aws_lambda integration is enabled automatically for AWS Lambda functions which have been instrumented with Datadog.

Configuration#

This integration is configured automatically when ddtrace-run or import ddtrace.auto is used.

Important

You can configure some features with environment variables.

ddtrace.contrib.internal.aws_lambda.DD_APM_FLUSH_DEADLINE_MILLISECONDS#

Used to determine when to submit spans before a timeout occurs. When the remaining time in an AWS Lambda invocation is less than DD_APM_FLUSH_DEADLINE_MILLISECONDS, the tracer will attempt to submit the current active spans and all finished spans.

Default: 100

For additional configuration refer to Instrumenting Python Serverless Applications by Datadog.

Django#

The Django integration traces requests, views, template renderers, database and cache calls in a Django application.

Enable Django tracing automatically via ddtrace-run:

ddtrace-run python manage.py runserver

Django tracing can also be enabled manually:

import ddtrace.auto

To have Django capture the tracer logs, ensure the LOGGING variable in settings.py looks similar to:

LOGGING = {
    'loggers': {
        'ddtrace': {
            'handlers': ['console'],
            'level': 'WARNING',
        },
    },
}

Configuration#

Important

Note that in-code configuration must be run before Django is instrumented. This means that in-code configuration will not work with ddtrace-run, and it must occur before any call to patch() or import ddtrace.auto.

ddtrace.config.django['distributed_tracing_enabled']

Whether or not to parse distributed tracing headers from requests received by your Django app.

Default: True

ddtrace.config.django['service_name']

The service name reported for your Django app.

Can also be configured via the DD_SERVICE environment variable.

Default: 'django'

ddtrace.config.django['cache_service_name']

The service name reported for your Django app cache layer.

Can also be configured via the DD_DJANGO_CACHE_SERVICE_NAME environment variable.

Default: 'django'

ddtrace.config.django['database_service_name']

A string reported as the service name of the Django app database layer.

Can also be configured via the DD_DJANGO_DATABASE_SERVICE_NAME environment variable.

Takes precedence over database_service_name_prefix.

Default: ''

ddtrace.config.django['database_service_name_prefix']

A string to be prepended to the service name reported for your Django app database layer.

Can also be configured via the DD_DJANGO_DATABASE_SERVICE_NAME_PREFIX environment variable.

The database service name is the name of the database appended with ‘db’. Has a lower precedence than database_service_name.

Default: ''
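The precedence rules above can be sketched as follows (a hypothetical helper illustrating the documented rules; the 'default' database alias is used only for illustration):

```python
def django_db_service_name(db_name: str, prefix: str = "", override: str = "") -> str:
    # override models database_service_name, which takes precedence over
    # database_service_name_prefix. Otherwise the service name is the
    # prefix plus the database name with "db" appended.
    if override:
        return override
    return "{}{}db".format(prefix, db_name)
```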

ddtrace.config.django["trace_fetch_methods"]

Whether or not to trace fetch methods.

Can also be configured via the DD_DJANGO_TRACE_FETCH_METHODS environment variable.

Default: False

DD_DJANGO_TRACING_MINIMAL#

Enables minimal tracing mode for performance-sensitive applications. When enabled, this disables Django ORM, cache, and template instrumentation while keeping middleware instrumentation enabled. This can significantly reduce overhead by removing Django-specific spans while preserving visibility into the underlying database drivers, cache clients, and other integrations.

This is equivalent to setting:

  • DD_DJANGO_INSTRUMENT_TEMPLATES=false

  • DD_DJANGO_INSTRUMENT_DATABASES=false

  • DD_DJANGO_INSTRUMENT_CACHES=false

For example, with DD_DJANGO_INSTRUMENT_DATABASES=false, Django ORM query spans are disabled but database driver spans (e.g., psycopg, MySQLdb) will still be created, providing visibility into the actual database queries without the Django ORM overhead.

Consider using this option if your application is performance-sensitive and the additional Django-layer spans are not required for your observability needs.

Default: True

New in version v3.15.0.

ddtrace.config.django['instrument_middleware']

Whether or not to instrument middleware.

Can also be enabled with the DD_DJANGO_INSTRUMENT_MIDDLEWARE environment variable.

Default: True

ddtrace.config.django['instrument_templates']

Whether or not to instrument template rendering.

Can be enabled with the DD_DJANGO_INSTRUMENT_TEMPLATES=true or DD_DJANGO_TRACING_MINIMAL=false environment variables.

Default: False

ddtrace.config.django['instrument_databases']

Whether or not to instrument databases.

Can be enabled with the DD_DJANGO_INSTRUMENT_DATABASES=true or DD_DJANGO_TRACING_MINIMAL=false environment variables.

Default: False

ddtrace.config.django['instrument_caches']

Whether or not to instrument caches.

Can be enabled with the DD_DJANGO_INSTRUMENT_CACHES=true or DD_DJANGO_TRACING_MINIMAL=false environment variables.

Default: False

ddtrace.config.django.http['trace_query_string']

Whether or not to include the query string as a tag.

Default: False

ddtrace.config.django['include_user_name']

Whether or not to include the authenticated user’s name/id as a tag on the root request span.

Can also be configured via the DD_DJANGO_INCLUDE_USER_NAME environment variable.

Default: True

ddtrace.config.django['include_user_email']

(ASM) Whether or not to include the authenticated user’s email (if available) as a tag on the root request span on a user event.

Can also be configured via the DD_DJANGO_INCLUDE_USER_EMAIL environment variable.

Default: False

ddtrace.config.django['include_user_login']

(ASM) Whether or not to include the authenticated user’s login (if available) as a tag on the root request span on a user event.

Can also be configured via the DD_DJANGO_INCLUDE_USER_LOGIN environment variable.

Default: True

ddtrace.config.django['include_user_realname']

(ASM) Whether or not to include the authenticated user’s real name (if available) as a tag on the root request span on a user event.

Can also be configured via the DD_DJANGO_INCLUDE_USER_REALNAME environment variable.

Default: False

ddtrace.config.django['use_handler_resource_format']

Whether or not to use the resource format “{method} {handler}”. Can also be enabled with the DD_DJANGO_USE_HANDLER_RESOURCE_FORMAT environment variable.

The default resource format for Django >= 2.2.0 is otherwise “{method} {urlpattern}”.

Default: False

ddtrace.config.django['use_handler_with_url_name_resource_format']

Whether or not to use the resource format “{method} {handler}.{url_name}”. Can also be enabled with the DD_DJANGO_USE_HANDLER_WITH_URL_NAME_RESOURCE_FORMAT environment variable.

This configuration applies only for Django <= 2.2.0.

Default: False

ddtrace.config.django['use_legacy_resource_format']

Whether or not to use the legacy resource format “{handler}”. Can also be enabled with the DD_DJANGO_USE_LEGACY_RESOURCE_FORMAT environment variable.

The default resource format for Django >= 2.2.0 is otherwise “{method} {urlpattern}”.

Default: False

Example:

from ddtrace import config

# Enable distributed tracing
config.django['distributed_tracing_enabled'] = True

# Override service name
config.django['service_name'] = 'custom-service-name'

Headers tracing is supported for this integration.

dogpile.cache#

Instrument dogpile.cache to report all cached lookups.

This will add spans around the calls to your cache backend (e.g. redis, memory, etc). The spans will also include the following tags:

  • key/keys: The key(s) dogpile passed to your backend. Note that this will be the output of the region’s function_key_generator, but before any key mangling is applied (i.e. the region’s key_mangler).

  • region: Name of the region.

  • backend: Name of the backend class.

  • hit: If the key was found in the cache.

  • expired: If the key is expired. This is only relevant if the key was found.

While cache tracing will generally already have keys in tags, some caching setups will not have useful tag values: for example, when you’re using consistent hashing with memcached, the key(s) will appear as a mangled hash.

# Patch before importing dogpile.cache
from ddtrace import patch
patch(dogpile_cache=True)

from dogpile.cache import make_region

region = make_region().configure(
    "dogpile.cache.pylibmc",
    expiration_time=3600,
    arguments={"url": ["127.0.0.1"]},
)

@region.cache_on_arguments()
def hello(name):
    # Some complicated, slow calculation
    return "Hello, {}".format(name)

Dramatiq#

Enabling#

The dramatiq integration will trace background tasks as marked by the @dramatiq.actor decorator. To trace your dramatiq app, call the patch method:

import dramatiq
from ddtrace import patch

patch(dramatiq=True)

@dramatiq.actor
def my_background_task():
    # do something
    pass

@dramatiq.actor
def my_other_task(content):
    # do something
    pass

if __name__ == "__main__":
    my_background_task.send()
    my_other_task.send("mycontent")
    # Can also call the methods with options
    # my_other_task.send_with_options(args=("mycontent",), max_retries=3)

You may also enable dramatiq tracing automatically via ddtrace-run:

ddtrace-run python app.py

Elasticsearch#

The Elasticsearch integration will trace Elasticsearch queries.

Enabling#

The elasticsearch integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
from elasticsearch import Elasticsearch

patch(elasticsearch=True)
# This will report spans with the default instrumentation
es = Elasticsearch(port=ELASTICSEARCH_CONFIG['port'])
# Example of instrumented query
es.indices.create(index='books', ignore=400)

OpenSearch is also supported (opensearch-py):

from ddtrace import patch
from opensearchpy import OpenSearch

patch(elasticsearch=True)
os = OpenSearch()
# Example of instrumented query
os.indices.create(index='books', ignore=400)

Configuration#

ddtrace.config.elasticsearch['service']

The service name reported for your elasticsearch app.

Example:

from ddtrace import config

# Override service name
config.elasticsearch['service'] = 'custom-service-name'

Falcon#

To trace the falcon web framework, install the trace middleware:

import falcon
from ddtrace.contrib.falcon import TraceMiddleware

mw = TraceMiddleware('my-falcon-app')
falcon.API(middleware=[mw])

You can also use the autopatching functionality:

import falcon
from ddtrace import patch

patch(falcon=True)

app = falcon.API()

To disable distributed tracing when using autopatching, set the DD_FALCON_DISTRIBUTED_TRACING environment variable to False.

Headers tracing is supported for this integration.

Fastapi#

The fastapi integration will trace requests to and from FastAPI.

Enabling#

The fastapi integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
from fastapi import FastAPI

patch(fastapi=True)
app = FastAPI()

When registering your own ASGI middleware using FastAPI’s add_middleware() function, keep in mind that Datadog spans close after your middleware’s call to await self.app() returns. This means that accesses of span data from within the middleware should be performed prior to this call.
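The ordering constraint can be sketched with a bare ASGI-style middleware; ddtrace and FastAPI are not required for the illustration, and current_span below is a stand-in for reading span data via ddtrace's tracer:

```python
# Sketch of the span-access timing described above. `current_span` is a
# stand-in for reading the active ddtrace span; `events` records ordering.
import asyncio

events = []

def current_span():  # stand-in for reading ddtrace span data
    events.append("span read")
    return {"trace_id": 123}  # hypothetical span data

class TimingMiddleware:
    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        # Read span data HERE, before delegating: the Datadog request
        # span is still open at this point.
        span = current_span()
        await self.app(scope, receive, send)
        # By the time control returns here, the request span may already
        # be closed, so span reads at this point are unreliable.
        events.append("after app")

async def inner_app(scope, receive, send):
    events.append("app ran")

async def main():
    mw = TimingMiddleware(inner_app)
    await mw({"type": "http"}, None, None)

asyncio.run(main())
print(events)  # the span is read before the inner app runs
```

The same ordering applies regardless of how many middlewares are stacked: each one must read span data before its own `await self.app(...)`.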

Configuration#

ddtrace.config.fastapi['service_name']

The service name reported for your fastapi app.

Can also be configured via the DD_SERVICE environment variable.

Default: 'fastapi'

ddtrace.config.fastapi['request_span_name']

The span name for a fastapi request.

Default: 'fastapi.request'

See asgi configuration for details on resource name obfuscation.

Example:

from ddtrace import config

# Override service name
config.fastapi['service_name'] = 'custom-service-name'

# Override request span name
config.fastapi['request_span_name'] = 'custom-request-span-name'

Flask#

The Flask integration will add tracing to all requests to your Flask application.

This integration will track the entire Flask lifecycle including user-defined endpoints, hooks, signals, and template rendering.

To configure tracing manually:

import ddtrace.auto

from flask import Flask

app = Flask(__name__)


@app.route('/')
def index():
    return 'hello world'


if __name__ == '__main__':
    app.run()

You may also enable Flask tracing automatically via ddtrace-run:

ddtrace-run python app.py

Note that if you are using Runtime Code Analysis to detect vulnerabilities (DD_IAST_ENABLED=1) and your main app.py file contains code outside the app.run() call (e.g. routes or utility functions), you will need to import and call ddtrace_iast_flask_patch() before app.run() to ensure the code inside the main module is patched so that propagation works:

from flask import Flask
from ddtrace.appsec._iast import ddtrace_iast_flask_patch

app = Flask(__name__)

if __name__ == '__main__':
    ddtrace_iast_flask_patch()
    app.run()

Configuration#

ddtrace.config.flask['distributed_tracing_enabled']

Whether to parse distributed tracing headers from requests received by your Flask app.

Default: True

ddtrace.config.flask['service_name']

The service name reported for your Flask app.

Can also be configured via the DD_SERVICE environment variable.

Default: 'flask'

ddtrace.config.flask['collect_view_args']

Whether to add request tags for view function argument values.

Default: True

ddtrace.config.flask['template_default_name']

The default template name to use when one does not exist.

Default: <memory>

ddtrace.config.flask['trace_signals']

Whether to trace Flask signals (before_request, after_request, etc).

Default: True

Example:

from ddtrace import config

# Enable distributed tracing
config.flask['distributed_tracing_enabled'] = True

# Override service name
config.flask['service_name'] = 'custom-service-name'

All HTTP tags are supported for this integration.

Flask Cache#

The tracer supports both Flask-Cache and Flask-Caching.

To initialize a traced cache:

Cache = get_traced_cache(service='my-flask-cache-app')

Here is the end result, in a sample app:

from flask import Flask

from ddtrace.contrib.flask_cache import get_traced_cache

app = Flask(__name__)

# get the traced Cache class
Cache = get_traced_cache(service='my-flask-cache-app')

# use the Cache as usual with your preferred CACHE_TYPE
cache = Cache(app, config={'CACHE_TYPE': 'simple'})

def counter():
    # this access is traced
    conn_counter = cache.get("conn_counter")

Use a specific Cache implementation with:

from ddtrace.contrib.flask_cache import get_traced_cache

from flask_caching import Cache

Cache = get_traced_cache(service='my-flask-cache-app', cache_cls=Cache)

futures#

The futures integration propagates the current active tracing context to tasks spawned using a ThreadPoolExecutor. The integration ensures that when operations are executed in another thread, those operations can continue the previously generated trace.
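The kind of propagation this integration automates can be sketched with stdlib contextvars (ddtrace propagates its own tracing context, not a contextvar you manage, but the mechanics are analogous):

```python
# A worker thread in a ThreadPoolExecutor does not see the submitting
# thread's context unless it is carried over explicitly; the futures
# integration does the equivalent hand-off for ddtrace's tracing context.
import contextvars
from concurrent.futures import ThreadPoolExecutor

active_trace = contextvars.ContextVar("active_trace", default=None)

def worker():
    return active_trace.get()

active_trace.set("trace-123")  # hypothetical trace identifier

with ThreadPoolExecutor() as executor:
    # Without propagation, the worker thread sees no active trace:
    bare = executor.submit(worker).result()
    # Copying the caller's context into the submission restores it:
    ctx = contextvars.copy_context()
    propagated = executor.submit(ctx.run, worker).result()

print(bare, propagated)  # None trace-123
```

With the integration enabled, none of this bookkeeping is needed: spans started inside submitted tasks simply attach to the submitting trace.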

Enabling#

The futures integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(futures=True)

gevent#

The gevent integration adds support for tracing across greenlets.

Note

If ddtrace-run is not being used then be sure to import ddtrace.auto before importing from the gevent library. If ddtrace-run is being used then no additional configuration is required.

Enabling#

The integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(gevent=True)

Example of the context propagation:

import gevent
from ddtrace.trace import tracer

def my_parent_function():
    with tracer.trace("web.request") as span:
        span.service = "web"
        gevent.spawn(worker_function)


def worker_function():
    # then trace its child
    with tracer.trace("greenlet.call") as span:
        span.service = "greenlet"
        ...

        with tracer.trace("greenlet.child_call") as child:
            ...

google-adk#

The Google ADK integration instruments the Google ADK Python SDK to create spans for Agent requests.

All traces submitted from the Google ADK integration are tagged by:

  • service, env, version: see the Unified Service Tagging docs.

Enabling#

The Google ADK integration is enabled automatically when you use ddtrace-run or import ddtrace.auto.

Configuration#

ddtrace.config.google_adk["service"]

The service name reported by default for Google ADK requests.

Set this option with the DD_GOOGLE_ADK_SERVICE environment variable.

google-genai#

The Google GenAI integration instruments the Google GenAI Python SDK to trace LLM requests made to Gemini and VertexAI models.

All traces submitted from the Google GenAI integration are tagged by:

  • service, env, version: see the Unified Service Tagging docs.

Enabling#

The Google GenAI integration is enabled automatically when you use ddtrace-run or import ddtrace.auto.

Alternatively, use patch() to manually enable the Google GenAI integration:

from ddtrace import config, patch

patch(google_genai=True)

Configuration#

ddtrace.config.google_genai["service"]

The service name reported by default for Google GenAI requests.

Alternatively, set this option with the DD_GOOGLE_GENAI_SERVICE environment variable.

graphql#

This integration instruments graphql-core queries.

Enabling#

The graphql integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(graphql=True)
import graphql
...

Configuration#

ddtrace.config.graphql["service"]

The service name reported by default for graphql instances.

This option can also be set with the DD_SERVICE environment variable.

Default: "graphql"

ddtrace.config.graphql["resolvers_enabled"]

To enable graphql.resolve spans, set DD_TRACE_GRAPHQL_RESOLVERS_ENABLED to True.

Default: False

Enabling instrumentation for resolvers will produce a graphql.resolve span for every graphql field. For complex graphql queries this could produce large traces.

ddtrace.config.graphql["_error_extensions"]

Enable setting user-provided error extensions on span events for graphql errors.

Default: None

Grpc#

The gRPC integration traces the client and server using the interceptor pattern.

Enabling#

The gRPC integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(grpc=True)

# use grpc like usual

Configuration#

ddtrace.config.grpc["service"]

The service name reported by default for gRPC client instances.

This option can also be set with the DD_GRPC_SERVICE environment variable.

Default: "grpc-client"

ddtrace.config.grpc_server["service"]

The service name reported by default for gRPC server instances.

This option can also be set with the DD_SERVICE or DD_GRPC_SERVER_SERVICE environment variables.

Default: "grpc-server"

gunicorn#

ddtrace works with Gunicorn.

Note

If you cannot wrap your Gunicorn server with the ddtrace-run command and it uses gevent workers be sure to import ddtrace.auto as early as possible in your application’s lifecycle. Do not use ddtrace-run with import ddtrace.auto.
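One way to follow this note is to put the import at the very top of the module Gunicorn loads, before anything that pulls in gevent. A deployment sketch (the module and factory names are hypothetical):

```python
# wsgi.py -- deployment sketch (hypothetical module/app names).
# Start Gunicorn plainly (gunicorn -k gevent wsgi:application), NOT via
# ddtrace-run, and import ddtrace.auto before any gevent import happens.
import ddtrace.auto  # noqa: F401  -- must come first

from myapp import create_app  # hypothetical application factory

application = create_app()
```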

httplib#

Trace HTTP requests made with the standard library httplib/http.client module.

Enabling#

The httplib integration is disabled by default. When using ddtrace-run or import ddtrace.auto, it can be enabled with the DD_PATCH_MODULES or DD_TRACE_HTTPLIB_ENABLED environment variable:

DD_PATCH_MODULES=httplib:true ddtrace-run ....

Configuration#

ddtrace.config.httplib['distributed_tracing']

Include distributed tracing headers in requests sent from httplib.

This option can also be set with the DD_HTTPLIB_DISTRIBUTED_TRACING environment variable.

Default: True
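What header injection means concretely: Datadog's standard propagation headers are added to the outgoing request so the downstream service can continue the same trace. A minimal sketch (the header names are Datadog's standard propagation headers; the values are made up):

```python
# Illustrative only: this shows the shape of the injected headers, not
# ddtrace's injection code. Values here are invented for the example.
headers = {
    "x-datadog-trace-id": "1234567890",
    "x-datadog-parent-id": "987654321",
    "x-datadog-sampling-priority": "1",
}

# A downstream service reconstructs the trace context from the headers:
trace_id = int(headers["x-datadog-trace-id"])
parent_id = int(headers["x-datadog-parent-id"])
print(trace_id, parent_id)  # 1234567890 987654321
```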

Headers tracing is supported for this integration.

httpx#

The httpx integration traces all HTTP requests made with the httpx library.

Enabling#

The httpx integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Alternatively, use patch() to manually enable the integration:

from ddtrace import patch
patch(httpx=True)

# use httpx like usual

Configuration#

ddtrace.config.httpx['service']

The default service name for httpx requests. By default the httpx integration will not define a service name and inherit its service name from its parent span.

If you are making calls to uninstrumented third party applications you can set this setting or use the ddtrace.config.httpx['split_by_domain'] setting.

This option can also be set with the DD_HTTPX_SERVICE environment variable.

Default: None

ddtrace.config.httpx['distributed_tracing']

Whether or not to inject distributed tracing headers into requests.

Default: True

ddtrace.config.httpx['split_by_domain']

Whether or not to use the domain name of requests as the service name.

This setting takes precedence over ddtrace.config.httpx['service']

Default: False

Headers tracing is supported for this integration.

HTTP Tagging is supported for this integration.

Jinja2#

The jinja2 integration traces template loading, compilation, and rendering. Auto instrumentation is available using patch(). The following is an example:

from ddtrace import patch
from jinja2 import Environment, FileSystemLoader

patch(jinja2=True)

env = Environment(
    loader=FileSystemLoader("templates")
)
template = env.get_template('mytemplate.html')

Configuration#

ddtrace.config.jinja2["service"]

The service name reported by default for jinja2 spans.

This option can also be set with the DD_JINJA2_SERVICE environment variable.

By default, the service name is set to None, so it is inherited from the parent span. If there is no parent span and the service name is not overridden the agent will drop the traces.

Default: None

Kafka#

This integration instruments the confluent-kafka library (https://github.com/confluentinc/confluent-kafka-python) to trace event streaming.

Enabling#

The kafka integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(kafka=True)
import confluent_kafka
...

Configuration#

ddtrace.config.kafka["service"]

The service name reported by default for your kafka spans.

This option can also be set with the DD_KAFKA_SERVICE environment variable.

Default: "kafka"

ddtrace.config.kafka["distributed_tracing_enabled"]

Whether to enable distributed tracing between Kafka messages.

This option can also be set with the DD_KAFKA_PROPAGATION_ENABLED environment variable.

Default: False

Note: Data Streams Monitoring (DD_DATA_STREAMS_ENABLED=true) or distributed tracing (DD_KAFKA_PROPAGATION_ENABLED=true) will only work if Kafka message headers are supported. If log.message.format.version is set in the Kafka broker configuration, it must be set to 0.11.0.0 or higher.

kombu#

Instrument kombu to report AMQP messaging.

import ddtrace.auto and ddtrace-run will not automatically patch your Kombu client, as doing so would conflict with the Celery integration. You must specifically request that kombu be patched, as in the example below.

Note: To permit distributed tracing for the kombu integration you must enable the tracer with priority sampling. Refer to the documentation here: https://ddtrace.readthedocs.io/en/stable/advanced_usage.html#priority-sampling

Without enabling distributed tracing, spans within a trace generated by the kombu integration might be dropped without the whole trace being dropped.

Run with DD_PATCH_MODULES=kombu:true:

import ddtrace.auto
import kombu

from ddtrace import patch

# If not patched yet, you can patch kombu specifically
patch(kombu=True)

# This will report a span with the default settings
conn = kombu.Connection("amqp://guest:guest@127.0.0.1:5672//")
conn.connect()
task_queue = kombu.Queue('tasks', kombu.Exchange('tasks'), routing_key='tasks')
to_publish = {'hello': 'world'}
producer = conn.Producer()
producer.publish(to_publish,
                 exchange=task_queue.exchange,
                 routing_key=task_queue.routing_key,
                 declare=[task_queue])

Configuration#

ddtrace.config.kombu["service"]

The service name reported by default for kombu spans.

This option can also be set with the DD_KOMBU_SERVICE environment variable.

Default: "kombu"

LangChain#

The LangChain integration instruments the LangChain Python library to emit traces for requests made to the LLMs, chat models, embeddings, chains, and vector store interfaces.

All traces submitted from the LangChain integration are tagged by:

  • service, env, version: see the Unified Service Tagging docs.

  • langchain.request.provider: LLM provider used in the request.

  • langchain.request.model: LLM/Chat/Embeddings model used in the request.

  • langchain.request.api_key: LLM provider API key used to make the request (obfuscated into the format ...XXXX where XXXX is the last 4 digits of the key).

Note: For langchain>=0.1.0, this integration drops tracing support for the following deprecated langchain operations in favor of the recommended alternatives in the langchain changelog docs. This includes:

  • langchain.chain.Chain.run/arun with langchain.chain.Chain.invoke/ainvoke

  • langchain.embeddings.openai.OpenAIEmbeddings.embed_documents with langchain_openai.OpenAIEmbeddings.embed_documents

  • langchain.vectorstores.pinecone.Pinecone.similarity_search with langchain_pinecone.PineconeVectorStore.similarity_search

Note: For langchain>=0.2.0, this integration does not patch langchain-community if it is not available, as langchain-community is no longer a required dependency of langchain>=0.2.0. This means that this integration will not trace the following:

  • Embedding calls made using langchain_community.embeddings.*

  • Vector store similarity search calls made using langchain_community.vectorstores.*

  • Total cost metrics for OpenAI requests

Enabling#

The LangChain integration is enabled automatically when you use ddtrace-run or import ddtrace.auto.

Note that these commands also enable the requests and aiohttp integrations which trace HTTP requests to LLM providers, as well as the openai integration which traces requests to the OpenAI library.

Alternatively, use patch() to manually enable the LangChain integration:

from ddtrace import config, patch

# Note: be sure to configure the integration before calling patch()!
# e.g. config.langchain["logs_enabled"] = True

patch(langchain=True)

# to trace synchronous HTTP requests
# patch(langchain=True, requests=True)

# to trace asynchronous HTTP requests (to the OpenAI library)
# patch(langchain=True, aiohttp=True)

# to include underlying OpenAI spans from the OpenAI integration
# patch(langchain=True, openai=True)

Configuration#

ddtrace.config.langchain["service"]

The service name reported by default for LangChain requests.

Alternatively, set this option with the DD_LANGCHAIN_SERVICE environment variable.

LangGraph#

The LangGraph integration instruments the LangGraph Python library to emit traces for graph and node invocations.

All traces submitted from the LangGraph integration are tagged by:

  • service, env, version: see the Unified Service Tagging docs.

Enabling#

The LangGraph integration is enabled automatically when you use ddtrace-run or import ddtrace.auto. Alternatively, use patch() to manually enable the LangGraph integration:

from ddtrace import patch
patch(langgraph=True)

Configuration#

ddtrace.config.langgraph["service"]

The service name reported by default for LangGraph requests.

Alternatively, set this option with the DD_LANGGRAPH_SERVICE environment variable.

LiteLLM#

The LiteLLM integration instruments the LiteLLM Python SDK and proxy server.

All traces submitted from the LiteLLM integration are tagged by:

  • service, env, version: see the Unified Service Tagging docs.

  • litellm.request.model: Model used in the request. This may be just the model name (e.g. gpt-3.5-turbo) or the model name with the route defined (e.g. openai/gpt-3.5-turbo).

  • litellm.request.host: Host where the request is sent (if specified).

Enabling#

The LiteLLM integration is enabled automatically when you use ddtrace-run or import ddtrace.auto.

Alternatively, use patch() to manually enable the LiteLLM integration:

from ddtrace import patch

patch(litellm=True)

Configuration#

ddtrace.config.litellm["service"]

The service name reported by default for LiteLLM requests.

Alternatively, set this option with the DD_LITELLM_SERVICE environment variable.

Logbook#

Datadog APM traces can be integrated with the logs produced by logbook by:

1. Having ddtrace patch the logbook module. This will configure a patcher which appends trace related values to the log.

2. Ensuring the logger has a format which emits the new values from the log record.

3. For log correlation between APM and logs, the easiest format is JSON, so that no further configuration is needed in the Datadog UI, assuming the Datadog trace values are at the top level of the JSON.

Enabling#

Patch logbook#

Logbook support is auto-enabled when ddtrace-run is used together with a structured logging format (e.g. JSON). To disable this integration, set the environment variable DD_LOGS_INJECTION=false.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(logbook=True)

Proper Formatting#

The trace values are patched to every log at the top level of the record. In order to correlate logs, it is highly recommended to use JSON logs which can be achieved by using a handler with a proper formatting:

from logbook import FileHandler

handler = FileHandler('output.log', format_string='{{"message": "{record.message}",'
                                                  '"dd.trace_id": "{record.extra[dd.trace_id]}",'
                                                  '"dd.span_id": "{record.extra[dd.span_id]}",'
                                                  '"dd.env": "{record.extra[dd.env]}",'
                                                  '"dd.service": "{record.extra[dd.service]}",'
                                                  '"dd.version": "{record.extra[dd.version]}"}}')
handler.push_application()

Note that the extra field does not have a dd object but rather only a dd.trace_id, dd.span_id, etc. To access the trace values inside extra, please use the [] operator.

This creates an application-wide handler that formats logs as JSON with all the Datadog trace values at the top level, where they can be automatically parsed by the Datadog backend.

For more information, please see the attached guide for the Datadog Logging Product: https://docs.datadoghq.com/logs/log_collection/python/

Logging#

Datadog APM traces can be integrated with the logs product by:

1. Having ddtrace patch the logging module. This will add trace attributes to the log record.

2. Updating the log formatter used by the application. In order to inject tracing information using the log the formatter must be updated to include the tracing attributes from the log record.

Enabling#

Patch logging#

Datadog support for the built-in logging module is enabled by default when you either run your application with the ddtrace-run command or import ddtrace.auto in your code. If you are using the ddtrace library directly, you can enable logging support by calling ddtrace.patch(logging=True). Note: directly enabling integrations via ddtrace.patch(...) is not recommended.

Update Log Format#

Make sure that your log format supports the following attributes: dd.trace_id, dd.span_id, dd.service, dd.env, dd.version. These values will be automatically added to the log record by the ddtrace library.

Example:

import logging
from ddtrace.trace import tracer

FORMAT = ('%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] '
          '[dd.service=%(dd.service)s dd.env=%(dd.env)s '
          'dd.version=%(dd.version)s '
          'dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] '
          '- %(message)s')
logging.basicConfig(format=FORMAT)
log = logging.getLogger()
log.level = logging.INFO


@tracer.wrap()
def hello():
    log.info('Hello, World!')

hello()

Note that most host based setups log by default to UTC time. If the log timestamps aren’t automatically in UTC, the formatter can be updated to use UTC:

import time
logging.Formatter.converter = time.gmtime

For more information, please see the attached guide on common timestamp issues: https://docs.datadoghq.com/logs/guide/logs-not-showing-expected-timestamp/

Loguru#

Datadog APM traces can be integrated with the logs produced by loguru by:

1. Having ddtrace patch the loguru module. This will configure a patcher which appends trace related values to the log.

2. Ensuring the logger has a format which emits the new values from the log record.

3. For log correlation between APM and logs, the easiest format is JSON, so that no further configuration is needed in the Datadog UI, assuming the Datadog trace values are at the top level of the JSON.

Enabling#

Patch loguru#

Loguru support is auto-enabled when ddtrace-run is used together with a structured logging format (e.g. JSON). To disable this integration, set the environment variable DD_LOGS_INJECTION=false.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(loguru=True)

Proper Formatting#

The trace values are patched to every log at the top level of the record. In order to correlate logs, it is highly recommended to use JSON logs. Here are two ways to do this:

  1. Use the built-in serialize function within the library that emits the entire log record into a JSON log:

    from loguru import logger
    
    logger.add("app.log", serialize=True)
    

This will emit the entire log record with the trace values into the file "app.log".

  2. Create a custom format that includes the trace values in JSON format:

    import json

    from loguru import logger

    def serialize(record):
        subset = {
            "message": record["message"],
            "dd.trace_id": record["dd.trace_id"],
            "dd.span_id": record["dd.span_id"],
            "dd.env": record["dd.env"],
            "dd.version": record["dd.version"],
            "dd.service": record["dd.service"],
        }
        return json.dumps(subset)

    def log_format(record):
        record["extra"]["serialized"] = serialize(record)
        return "{extra[serialized]}\n"

    logger.add("app.log", format=log_format)
    

This emits the log with the trace values at the top level of a JSON object, along with the message. The log will not include every field in the record, only the values included in the subset object within the serialize function.

For more information, please see the attached guide for the Datadog Logging Product: https://docs.datadoghq.com/logs/log_collection/python/

Mako#

The mako integration traces template rendering. Auto instrumentation is available using import ddtrace.auto. The following is an example:

import ddtrace.auto

from mako.template import Template

t = Template(filename="index.html")

MCP#

The MCP (Model Context Protocol) integration instruments the MCP Python library to emit traces for client tool calls and server tool executions.

All traces submitted from the MCP integration are tagged by:

  • service, env, version: see the Unified Service Tagging docs.

Enabling#

The MCP integration is enabled automatically when you use ddtrace-run or import ddtrace.auto. Alternatively, use patch() to manually enable the MCP integration:

from ddtrace import patch
patch(mcp=True)

Configuration#

ddtrace.config.mcp["service"]

The service name reported by default for MCP requests.

Alternatively, set this option with the DD_MCP_SERVICE environment variable.

ddtrace.config.mcp["distributed_tracing"]

Whether or not to enable distributed tracing for MCP requests.

Alternatively, set this option with the DD_MCP_DISTRIBUTED_TRACING environment variable.

Default: True

MariaDB#

The MariaDB integration instruments the MariaDB library to trace queries.

Enabling#

The MariaDB integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(mariadb=True)

Configuration#

ddtrace.config.mariadb["service"]

The service name reported by default for MariaDB spans.

This option can also be set with the DD_MARIADB_SERVICE environment variable.

Default: "mariadb"

Molten#

The molten web framework is automatically traced by ddtrace:

import ddtrace.auto
from molten import App, Route

def hello(name: str, age: int) -> str:
    return f'Hello {age} year old named {name}!'
app = App(routes=[Route('/hello/{name}/{age}', hello)])

You may also enable molten tracing automatically via ddtrace-run:

ddtrace-run python app.py

Configuration#

ddtrace.config.molten['distributed_tracing']

Whether to parse distributed tracing headers from requests received by your Molten app.

Default: True

ddtrace.config.molten['service_name']

The service name reported for your Molten app.

Can also be configured via the DD_SERVICE or DD_MOLTEN_SERVICE environment variables.

Default: 'molten'

All HTTP tags are supported for this integration.

mysql-connector#

The mysql integration instruments the mysql library to trace MySQL queries.

Enabling#

The mysql integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(mysql=True)

Configuration#

ddtrace.config.mysql["service"]

The service name reported by default for mysql spans.

This option can also be set with the DD_MYSQL_SERVICE environment variable.

Default: "mysql"

ddtrace.config.mysql["trace_fetch_methods"]

Whether or not to trace fetch methods.

Can also be configured via the DD_MYSQL_TRACE_FETCH_METHODS environment variable.

Default: False

Only the default full-Python integration works. The binary C connector, provided by _mysql_connector, is not supported.

Help on mysql.connector can be found on: https://dev.mysql.com/doc/connector-python/en/

mysqlclient#

The mysqldb integration instruments the mysqlclient library to trace MySQL queries.

Enabling#

The integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(mysqldb=True)

Configuration#

ddtrace.config.mysqldb["service"]

The service name reported by default for spans.

This option can also be set with the DD_MYSQLDB_SERVICE environment variable.

Default: "mysql"

ddtrace.config.mysqldb["trace_fetch_methods"]

Whether or not to trace fetch methods.

Can also be configured via the DD_MYSQLDB_TRACE_FETCH_METHODS environment variable.

Default: False

ddtrace.config.mysqldb["trace_connect"]

Whether or not to trace connecting.

Can also be configured via the DD_MYSQLDB_TRACE_CONNECT environment variable.

Default: False

This package works for mysqlclient. Only the default full-Python integration works. The binary C connector provided by _mysql is not supported.

Help on mysqlclient can be found on: https://mysqlclient.readthedocs.io/

OpenAI#

The OpenAI integration instruments the OpenAI Python library to emit traces for requests made to the models, completions, chat completions, images, embeddings, audio, files, and moderations endpoints.

All traces submitted from the OpenAI integration are tagged by:

  • service, env, version: see the Unified Service Tagging docs.

  • openai.request.endpoint: OpenAI API endpoint used in the request.

  • openai.request.method: HTTP method type used in the request.

  • openai.request.model: OpenAI model used in the request.

  • openai.organization.name: OpenAI organization name used in the request.

  • openai.organization.id: OpenAI organization ID used in the request (when available).

  • openai.user.api_key: OpenAI API key used to make the request (obfuscated to match the OpenAI UI representation sk-...XXXX where XXXX is the last 4 digits of the key).
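The obfuscated key format described above can be illustrated with a one-line helper (a sketch, not the integration's actual code):

```python
# Keep only the trailing 4 characters of the key, matching the
# sk-...XXXX representation shown in the OpenAI UI.
def obfuscate_api_key(key):
    return "sk-..." + key[-4:]

print(obfuscate_api_key("sk-abcdefghijklmnop1234"))  # sk-...1234
```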

Streamed Responses Support#

The OpenAI integration estimates prompt and completion token counts for streamed completion/chat completion responses if stream_options["include_usage"] is set to False in the request. This is because the usage field is not returned by default in streamed completion/chat completions, which is what the integration relies on for reporting token metrics.

The _est_tokens function implements token count estimations. It returns the average of simple token estimation techniques that do not rely on installing a tokenizer.
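A sketch of what tokenizer-free estimation can look like, averaging two rough heuristics (roughly 4 characters per token and about 1.33 tokens per word); this illustrates the approach, not ddtrace's actual _est_tokens implementation:

```python
# Average two cheap estimates so neither very long words nor heavy
# punctuation skews the count too far. No tokenizer install required.
def est_tokens(text):
    by_chars = len(text) / 4           # ~4 characters per token
    by_words = len(text.split()) * 4 / 3  # ~1.33 tokens per word
    return int((by_chars + by_words) / 2)

print(est_tokens("The quick brown fox jumps over the lazy dog"))  # 11
```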

Enabling#

The OpenAI integration is enabled automatically when you use ddtrace-run or import ddtrace.auto.

Note that these commands also enable the requests and aiohttp integrations which trace HTTP requests from the OpenAI library.

Alternatively, use patch() to manually enable the OpenAI integration:

from ddtrace import config, patch

# Note: be sure to configure the integration before calling ``patch()``!
# eg. config.openai["logs_enabled"] = True

patch(openai=True)

# to trace synchronous HTTP requests from the OpenAI library
# patch(openai=True, requests=True)

# to trace asynchronous HTTP requests from the OpenAI library
# patch(openai=True, aiohttp=True)

Configuration#

ddtrace.config.openai["service"]

The service name reported by default for OpenAI requests.

Alternatively, set this option with the DD_OPENAI_SERVICE environment variable.

OpenAI Agents#

The OpenAI Agents integration instruments the openai-agents Python library to emit traces for agent workflows.

All traces submitted from the OpenAI Agents integration are tagged by:

  • service, env, version: see the Unified Service Tagging docs.

Enabling#

The OpenAI Agents integration is enabled automatically when you use ddtrace-run or import ddtrace.auto.

Alternatively, use patch() to manually enable the OpenAI Agents integration:

from ddtrace import patch

patch(openai_agents=True)

Configuration#

ddtrace.config.openai_agents["service"]

The service name reported by default for OpenAI Agents requests.

Alternatively, set this option with the DD_OPENAI_AGENTS_SERVICE environment variable.

pylibmc#

Instrument pylibmc to report Memcached queries.

import ddtrace.auto will automatically patch your pylibmc client to make it work.

# Be sure to import pylibmc and not pylibmc.Client directly,
# otherwise you won't have access to the patched version
from ddtrace import patch
import pylibmc

# If not patched yet, you can patch pylibmc specifically
patch(pylibmc=True)

# One client instrumented with default configuration
client = pylibmc.Client(["localhost:11211"])
client.set("key1", "value1")

PynamoDB#

The PynamoDB integration traces all db calls made with the pynamodb library through the connection API.

Enabling#

The PynamoDB integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

import pynamodb
from ddtrace import patch, config
patch(pynamodb=True)

Configuration#

ddtrace.config.pynamodb["service"]

The service name reported by default for the PynamoDB instance.

This option can also be set with the DD_PYNAMODB_SERVICE environment variable.

Default: "pynamodb"

PyODBC#

The pyodbc integration instruments the pyodbc library to trace pyodbc queries.

Enabling#

The integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(pyodbc=True)

Configuration#

ddtrace.config.pyodbc["service"]

The service name reported by default for pyodbc spans.

This option can also be set with the DD_PYODBC_SERVICE environment variable.

Default: "pyodbc"

ddtrace.config.pyodbc["trace_fetch_methods"]

Whether or not to trace fetch methods.

Can also be configured via the DD_PYODBC_TRACE_FETCH_METHODS environment variable.

Default: False

pymemcache#

Instrument pymemcache to report memcached queries.

import ddtrace.auto will automatically patch the pymemcache Client:

from ddtrace import patch

# If not patched yet, patch pymemcache specifically
patch(pymemcache=True)

# Import reference to Client AFTER patching
import pymemcache
from pymemcache.client.base import Client

# This will report a span with the default settings
client = Client(('localhost', 11211))
client.set("my-key", "my-val")

Configuration#

ddtrace.config.pymemcache["service"]

The service name reported by default for pymemcache spans.

This option can also be set with the DD_PYMEMCACHE_SERVICE environment variable.

Default: "pymemcache"

Pymemcache HashClient will also be indirectly patched as it uses Client under the hood.

Pymongo#

Instrument pymongo to report MongoDB queries.

The pymongo integration works by wrapping pymongo’s MongoClient and AsyncMongoClient to trace network calls. Pymongo 3.0+ is supported for synchronous operations. AsyncMongoClient support requires pymongo 4.12+. import ddtrace.auto will automatically patch both client types.

from ddtrace import patch
import pymongo

patch(pymongo=True)

# Synchronous usage
client = pymongo.MongoClient()
db = client["test-db"]
db.teams.find({"name": "Toronto Maple Leafs"})

# Asynchronous usage (pymongo 4.12+)
from pymongo.asynchronous.mongo_client import AsyncMongoClient

async def example():
    client = AsyncMongoClient()
    db = client["test-db"]
    async for doc in db.teams.find({"name": "Toronto Maple Leafs"}):
        print(doc)
    await client.close()

Configuration#

ddtrace.config.pymongo["service"]
The service name reported by default for pymongo spans.

The option can also be set with the DD_PYMONGO_SERVICE environment variable.

Default: "pymongo"

pymysql#

The pymysql integration instruments the pymysql library to trace MySQL queries.

Enabling#

The integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(pymysql=True)

Configuration#

ddtrace.config.pymysql["service"]

The service name reported by default for pymysql spans.

This option can also be set with the DD_PYMYSQL_SERVICE environment variable.

Default: "mysql"

ddtrace.config.pymysql["trace_fetch_methods"]

Whether or not to trace fetch methods.

Can also be configured via the DD_PYMYSQL_TRACE_FETCH_METHODS environment variable.

Default: False

Pyramid#

To trace requests from a Pyramid application, trace your application config:

from pyramid.config import Configurator
from ddtrace.contrib.pyramid import trace_pyramid

settings = {
    'datadog_trace_service' : 'my-web-app-name',
}

config = Configurator(settings=settings)
trace_pyramid(config)

# use your config as normal.
config.add_route('index', '/')

Available settings are:

  • datadog_trace_service: change the pyramid service name

  • datadog_trace_enabled: sets if the Tracer is enabled or not

  • datadog_distributed_tracing: set it to False to disable Distributed Tracing

If you use the pyramid.tweens settings value to set the tweens for your application, you need to add ddtrace.contrib.pyramid:trace_tween_factory explicitly to the list. For example:

settings = {
    'datadog_trace_service' : 'my-web-app-name',
    'pyramid.tweens': 'your_tween_no_1\nyour_tween_no_2\nddtrace.contrib.pyramid:trace_tween_factory',
}

config = Configurator(settings=settings)
trace_pyramid(config)

# use your config as normal.
config.add_route('index', '/')

All HTTP tags are supported for this integration.

pytest#

The pytest integration traces test executions.

Enabling#

Enable traced execution of tests using pytest runner by running pytest --ddtrace or by modifying any configuration file read by pytest (pytest.ini, setup.cfg, …):

[pytest]
ddtrace = 1

If you need to disable it, the --no-ddtrace option takes precedence over --ddtrace and over any configuration file (pytest.ini, setup.cfg, …).

You can enable all integrations by using the --ddtrace-patch-all option alongside --ddtrace or by adding this to your configuration:

[pytest]
ddtrace = 1
ddtrace-patch-all = 1

Note

The ddtrace plugin for pytest has the side effect of importing the ddtrace package and starting a global tracer.

If this is causing issues for your pytest runs where traced execution of tests is not enabled, you can deactivate the plugin:

[pytest]
addopts = -p no:ddtrace

See the pytest documentation for more details.

Global Configuration#

ddtrace.config.pytest["service"]

The service name reported by default for pytest traces.

This option can also be set with the integration specific DD_PYTEST_SERVICE environment variable, or more generally with the DD_SERVICE environment variable.

Default: Name of the repository being tested, otherwise "pytest" if the repository name cannot be found.

ddtrace.config.pytest["operation_name"]

The operation name reported by default for pytest traces.

This option can also be set with the DD_PYTEST_OPERATION_NAME environment variable.

Default: "pytest.test"

pytest-bdd#

The pytest-bdd integration traces executions of scenarios and steps.

Enabling#

Please follow the instructions for enabling pytest integration.

Note

The ddtrace.pytest_bdd plugin for pytest-bdd has the side effect of importing the ddtrace package and starting a global tracer.

If this is causing issues for your pytest-bdd runs where traced execution of tests is not enabled, you can deactivate the plugin:

[pytest]
addopts = -p no:ddtrace.pytest_bdd

See the pytest documentation for more details.

pytest-benchmark#

The pytest-benchmark integration traces executions of pytest benchmarks.

protobuf#

The Protobuf integration will trace all Protobuf read / write calls made with the google.protobuf library. This integration is not enabled by default.

Enabling#

The protobuf integration is not enabled by default. Use patch() to enable the integration:

from ddtrace import patch
patch(protobuf=True)

Configuration#

psycopg#

The psycopg integration instruments the psycopg and psycopg2 libraries to trace Postgres queries.

Enabling#

The psycopg integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(psycopg=True)

Configuration#

ddtrace.config.psycopg["service"]

The service name reported by default for psycopg spans.

This option can also be set with the DD_PSYCOPG_SERVICE environment variable.

Default: "postgres"

ddtrace.config.psycopg["trace_fetch_methods"]

Whether or not to trace fetch methods.

Can also be configured via the DD_PSYCOPG_TRACE_FETCH_METHODS environment variable.

Default: False

ddtrace.config.psycopg["trace_connect"]

Whether or not to trace the psycopg.connect method.

Can also be configured via the DD_PSYCOPG_TRACE_CONNECT environment variable.

Default: False

Ray#

The ray integration traces:
  • Job lifetime (job submit, job run)

  • Task submission and execution

  • Actor method submission and execution

Enabling#

Ray instrumentation is experimental and disabled by default. To enable it, use one of the two methods below:

The recommended way to instrument Ray is to instrument the Ray cluster using ddtrace-run:

DD_PATCH_MODULES="ray:true,aiohttp:false,grpc:false,requests:false" ddtrace-run ray start --head

Setting DD_PATCH_MODULES this way reduces noise by sending only job-related spans.

You can also do it by starting Ray head with a tracing startup hook:

ray start --head --tracing-startup-hook=ddtrace.contrib.ray:setup_tracing

Note that this method does not provide full tracing capabilities if ray.init() is not called at the top of your job scripts.

Configuration#

The Ray integration can be configured using environment variables:

  • DD_TRACE_RAY_CORE_API: Enable tracing of Ray’s core API functions like ray.wait() (default: False)

  • DD_TRACE_RAY_ARGS_KWARGS: Enable tracing of arguments and keyword arguments passed to Ray tasks and actor methods (default: False)

  • DD_TRACE_EXPERIMENTAL_LONG_RUNNING_FLUSH_INTERVAL: Interval for resubmitting long-running spans (default: 120.0 seconds)

  • DD_TRACE_RAY_USE_ENTRYPOINT_AS_SERVICE_NAME: Whether to use the job entrypoint as the service name (default: False). If True, the entrypoint will be used as the service name if DD_SERVICE is not set and a job name is not specified in the metadata.

  • DD_TRACE_RAY_REDACT_ENTRYPOINT_PATHS: Whether to redact file paths in the job entrypoint (default: True). If True, file paths in the entrypoint will be redacted to avoid leaking sensitive information.

The Ray service name can be configured, in order of precedence, by:

  • specifying DD_SERVICE when initializing your Ray cluster.

  • setting DD_TRACE_RAY_USE_ENTRYPOINT_AS_SERVICE_NAME=True. In this case, the service name will be the name of your entrypoint script.

  • specifying in metadata during job submission:

    ray job submit --metadata-json='{"job_name": "my_model"}' -- python entrypoint.py
    

By default, the service name will be unnamed.ray.job.
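Following the precedence list above, service-name resolution can be sketched as (illustrative names, not ddtrace's internals):

```python
def resolve_ray_service(dd_service=None, use_entrypoint=False,
                        entrypoint=None, job_name=None):
    # Precedence sketch: DD_SERVICE first, then the entrypoint
    # (when opted in), then job metadata, then the default.
    if dd_service:
        return dd_service
    if use_entrypoint and entrypoint:
        return entrypoint
    if job_name:
        return job_name
    return "unnamed.ray.job"
```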

Notes#

  • The integration disables Ray’s built-in OpenTelemetry tracing to avoid duplicate telemetry.

  • Actor methods like ping and _polling are excluded from tracing to reduce noise.

  • Actors whose names start with an underscore (_) are not instrumented.

redis#

The redis integration traces redis requests.

Enabling#

The redis integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(redis=True)

Configuration#

ddtrace.config.redis["service"]

The service name reported by default for redis traces.

This option can also be set with the DD_REDIS_SERVICE environment variable.

Default: "redis"

ddtrace.config.redis["cmd_max_length"]

Max allowable size for the redis command span tag. Anything beyond the max length will be replaced with "...".

This option can also be set with the DD_REDIS_CMD_MAX_LENGTH environment variable.

Default: 1000
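A simplified sketch of the truncation behavior (illustrative, not ddtrace's exact implementation):

```python
def truncate_command(cmd: str, cmd_max_length: int = 1000) -> str:
    # Commands within the limit are kept as-is; longer commands
    # are cut at the limit and suffixed with "...".
    if len(cmd) <= cmd_max_length:
        return cmd
    return cmd[:cmd_max_length] + "..."
```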

ddtrace.config.redis["resource_only_command"]

The span resource will only include the command executed. To include all arguments in the span resource, set this value to False.

This option can also be set with the DD_REDIS_RESOURCE_ONLY_COMMAND environment variable.

Default: True

redis-py-cluster#

Instrument rediscluster to report Redis Cluster queries.

import ddtrace.auto will automatically patch your Redis Cluster client to make it work.

from ddtrace import patch
import rediscluster

# If not patched yet, you can patch redis specifically
patch(rediscluster=True)

# This will report a span with the default settings
client = rediscluster.StrictRedisCluster(startup_nodes=[{'host':'localhost', 'port':'7000'}])
client.get('my-key')

Configuration#

ddtrace.config.rediscluster["service"]
The service name reported by default for rediscluster spans.

The option can also be set with the DD_REDISCLUSTER_SERVICE environment variable.

Default: 'rediscluster'

ddtrace.config.rediscluster["cmd_max_length"]

Max allowable size for the rediscluster command span tag. Anything beyond the max length will be replaced with "...".

This option can also be set with the DD_REDISCLUSTER_CMD_MAX_LENGTH environment variable.

Default: 1000

ddtrace.config.rediscluster["resource_only_command"]

The span resource will only include the command executed. To include all arguments in the span resource, set this value to False.

This option can also be set with the DD_REDIS_RESOURCE_ONLY_COMMAND environment variable.

Default: True

Requests#

The requests integration traces all HTTP requests made with the requests library.

The default service name used is requests but it can be configured to match the services that the specific requests are made to.

Enabling#

The requests integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(requests=True)

# use requests like usual

Configuration#

ddtrace.config.requests['service']

The service name reported by default for requests queries. This value will be overridden if the split_by_domain setting is enabled.

This option can also be set with the DD_REQUESTS_SERVICE environment variable.

Default: "requests"

ddtrace.config.requests['distributed_tracing']

Whether or not to parse distributed tracing headers.

Default: True

ddtrace.config.requests['trace_query_string']

Whether or not to include the query string as a tag.

Default: False

ddtrace.config.requests['split_by_domain']

Whether or not to use the domain name of requests as the service name.

Default: False

RQ#

The RQ integration will trace your jobs.

Usage#

The rq integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(rq=True)

Worker Usage#

ddtrace-run can be used to easily trace your workers:

DD_SERVICE=myworker ddtrace-run rq worker

Configuration#

ddtrace.config.rq['distributed_tracing_enabled']
ddtrace.config.rq_worker['distributed_tracing_enabled']

If True the integration will connect the traces sent between the enqueuer and the RQ worker.

This option can also be set with the DD_RQ_DISTRIBUTED_TRACING_ENABLED environment variable on either the enqueuer or worker applications.

Default: True

ddtrace.config.rq['service']

The service name reported by default for RQ spans from the app.

This option can also be set with the DD_SERVICE or DD_RQ_SERVICE environment variables.

Default: rq

ddtrace.config.rq_worker['service']

The service name reported by default for RQ spans from workers.

This option can also be set with the DD_SERVICE environment variable.

Default: rq-worker

Sanic#

The Sanic integration will trace requests to and from Sanic.

Enable Sanic tracing automatically via ddtrace-run:

ddtrace-run python app.py

Sanic tracing can also be enabled explicitly:

import ddtrace.auto

from sanic import Sanic
from sanic.response import text

app = Sanic(__name__)

@app.route('/')
def index(request):
    return text('hello world')

if __name__ == '__main__':
    app.run()

Configuration#

ddtrace.config.sanic['distributed_tracing_enabled']

Whether to parse distributed tracing headers from requests received by your Sanic app.

Default: True

ddtrace.config.sanic['service_name']

The service name reported for your Sanic app.

Can also be configured via the DD_SERVICE environment variable.

Default: 'sanic'

Example:

from ddtrace import config

# Enable distributed tracing
config.sanic['distributed_tracing_enabled'] = True

# Override service name
config.sanic['service_name'] = 'custom-service-name'

Selenium#

The Selenium integration enriches Test Visibility data with extra tags and, if available, Real User Monitoring session replays.

Enabling#

The Selenium integration is enabled by default in test contexts (eg: pytest, or unittest). Use patch() to enable the integration:

from ddtrace import patch
patch(selenium=True)

When using pytest, the --ddtrace-patch-all flag is required in order for this integration to be enabled.

Configuration#

The Selenium integration can be configured using the following options:

DD_CIVISIBILITY_RUM_FLUSH_WAIT_MILLIS: The time in milliseconds to wait after flushing the RUM session.

Snowflake#

The snowflake integration instruments the snowflake-connector-python library to trace Snowflake queries.

Note that this integration is in beta.

Enabling#

The integration is not enabled automatically when using ddtrace-run or import ddtrace.auto.

Use the environment variable DD_TRACE_SNOWFLAKE_ENABLED=true or DD_PATCH_MODULES=snowflake:true to manually enable the integration.

Configuration#

ddtrace.config.snowflake["service"]

The service name reported by default for snowflake spans.

This option can also be set with the DD_SNOWFLAKE_SERVICE environment variable.

Default: "snowflake"

ddtrace.config.snowflake["trace_fetch_methods"]

Whether or not to trace fetch methods.

Can also be configured via the DD_SNOWFLAKE_TRACE_FETCH_METHODS environment variable.

Default: False

Starlette#

The Starlette integration will trace requests to and from Starlette.

Enabling#

The starlette integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
from starlette.applications import Starlette

patch(starlette=True)
app = Starlette()

Configuration#

ddtrace.config.starlette['distributed_tracing']

Whether to parse distributed tracing headers from requests received by your Starlette app.

Can also be enabled with the DD_STARLETTE_DISTRIBUTED_TRACING environment variable.

Default: True

ddtrace.config.starlette['service_name']

The service name reported for your starlette app.

Can also be configured via the DD_SERVICE environment variable.

Default: 'starlette'

ddtrace.config.starlette['request_span_name']

The span name for a starlette request.

Default: 'starlette.request'

See asgi configuration for details on resource name obfuscation.

Example:

from ddtrace import config

# Enable distributed tracing
config.starlette['distributed_tracing'] = True

# Override service name
config.starlette['service_name'] = 'custom-service-name'

# Override request span name
config.starlette['request_span_name'] = 'custom-request-span-name'

Structlog#

Datadog APM traces can be integrated with the logs produced by structlog by:

1. Having ddtrace patch the structlog module. This adds a processor at the beginning of the chain that adds trace attributes to the event_dict

2. For log correlation between APM and logs, the easiest format is JSON, so that no further configuration is needed in the Datadog UI, assuming the Datadog trace values are at the top level of the JSON

Enabling#

Patch structlog#

Structlog support is auto-enabled when ddtrace-run is used with structured logging (e.g. JSON). To disable this integration, set the environment variable DD_LOGS_INJECTION=false.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(structlog=True)

Proper Formatting#

The trace attributes are injected via a processor in the processor block of the configuration whether that be the default processor chain or a user-configured chain.

An example of a configuration that outputs to a file that can be injected into is as below:

from pathlib import Path

import structlog

structlog.configure(
    processors=[structlog.processors.JSONRenderer()],
    logger_factory=structlog.WriteLoggerFactory(file=Path("app").with_suffix(".log").open("wt")),
)
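The injected processor is an ordinary structlog processor callable. An illustrative (hypothetical, not ddtrace's actual processor) version that adds placeholder trace fields:

```python
def add_datadog_fields(logger, method_name, event_dict):
    # ddtrace's real processor fills these from the active span;
    # the zero values here are placeholders for illustration.
    event_dict.setdefault("dd.trace_id", 0)
    event_dict.setdefault("dd.span_id", 0)
    return event_dict
```

Because processors are plain callables taking (logger, method_name, event_dict), the integration can prepend one to any user-configured chain.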

For more information, please see the attached guide for the Datadog Logging Product: https://docs.datadoghq.com/logs/log_collection/python/

SQLAlchemy#

Enabling the SQLAlchemy integration is only necessary if there is no instrumentation available or enabled for the underlying database engine (e.g. pymysql, psycopg, mysql-connector, etc.).

To trace sqlalchemy queries, add instrumentation to the engine class using the patch method that must be called before importing sqlalchemy:

# patch before importing `create_engine`
from ddtrace import patch
patch(sqlalchemy=True)

# use SQLAlchemy as usual
from sqlalchemy import create_engine

engine = create_engine('sqlite:///:memory:')
engine.connect().execute("SELECT COUNT(*) FROM users")

SQLite#

The sqlite integration instruments the built-in sqlite module to trace SQLite queries.

Enabling#

The integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(sqlite=True)

Configuration#

ddtrace.config.sqlite["service"]

The service name reported by default for sqlite spans.

This option can also be set with the DD_SQLITE_SERVICE environment variable.

Default: "sqlite"

ddtrace.config.sqlite["trace_fetch_methods"]

Whether or not to trace fetch methods.

Can also be configured via the DD_SQLITE_TRACE_FETCH_METHODS environment variable.

Default: False
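Once patched, ordinary sqlite3 usage is traced automatically; for example (stdlib only):

```python
import sqlite3

# Each execute() call below would produce a span once the
# integration is enabled; the code itself is plain sqlite3.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")
row = conn.execute("SELECT name FROM users WHERE id = 1").fetchone()
```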

Subprocess#

The subprocess integration will add tracing to all subprocess executions started in your application. It will be automatically enabled if Application Security is enabled with:

DD_APPSEC_ENABLED=true

Configuration#

ddtrace.config.subprocess['sensitive_wildcards']

Comma-separated list of fnmatch-style wildcards. Subprocess parameters matching these wildcards will be scrubbed and replaced by a “?”.

Default: None for the config value but note that there are some wildcards always enabled in this integration that you can check on ddtrace.contrib.subprocess.constants.SENSITIVE_WORDS_WILDCARDS.
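A hypothetical sketch of fnmatch-style scrubbing (the function name and behavior details are illustrative, not ddtrace's implementation):

```python
import fnmatch

def scrub(args, sensitive_wildcards):
    # Any argument matching a sensitive wildcard is replaced by "?".
    return ["?" if any(fnmatch.fnmatch(arg, pat) for pat in sensitive_wildcards)
            else arg
            for arg in args]
```

For instance, scrubbing ["mysql", "--password=hunter2"] with the wildcard "*password*" would report ["mysql", "?"].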

Tornado#

The Tornado integration traces every RequestHandler defined in a Tornado web application. Auto instrumentation is available using the patch function, which must be called before importing the tornado library.

This integration supports Tornado >=6.1, which is asyncio-based. The integration properly handles async/await coroutines and functions that return Futures, ensuring accurate span durations and correct context propagation.

The following is an example:

# patch before importing tornado
from ddtrace import patch
patch(tornado=True)

import asyncio
import tornado.web
import tornado.httpserver

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

async def main():
    app = tornado.web.Application([
        (r"/", MainHandler),
    ])
    server = tornado.httpserver.HTTPServer(app)
    server.listen(8888)

    # Let the asyncio loop run forever
    await asyncio.Event().wait()

if __name__ == "__main__":
    asyncio.run(main())

When any type of RequestHandler is hit, a request root span is automatically created. If you want to trace more parts of your application, you can use the wrap() decorator and the trace() method as usual:

class MainHandler(tornado.web.RequestHandler):
    async def get(self):
        await self.notify()
        await self.blocking_method()
        with tracer.trace('tornado.before_write') as span:
            # trace more work in the handler
            pass

    @tracer.wrap('tornado.executor_handler')
    @tornado.concurrent.run_on_executor
    def blocking_method(self):
        # do something expensive
        pass

    @tracer.wrap('tornado.notify', service='tornado-notification')
    async def notify(self):
        # do something
        pass

If you are overriding the on_finish or log_exception methods on a RequestHandler, you will need to call the super method to ensure the tracer’s patched methods are called:

class MainHandler(tornado.web.RequestHandler):
    async def get(self):
        self.write("Hello, world")

    def on_finish(self):
        super(MainHandler, self).on_finish()
        # do other clean-up

    def log_exception(self, typ, value, tb):
        super(MainHandler, self).log_exception(typ, value, tb)
        # do other logging

Tornado settings can be used to change some tracing configuration, like:

settings = {
    'datadog_trace': {
        'default_service': 'my-tornado-app',
        'tags': {'env': 'production'},
        'distributed_tracing': False,
    },
}

app = tornado.web.Application([
    (r'/', MainHandler),
], **settings)

The available settings are:

  • default_service (default: tornado-web): set the service name used by the tracer. Usually this configuration must be updated with a meaningful name. Can also be configured via the DD_SERVICE environment variable.

  • tags (default: {}): set global tags that should be applied to all spans.

  • enabled (default: True): define if the tracer is enabled or not. If set to false, the code is still instrumented but no spans are sent to the APM agent.

  • distributed_tracing (default: None): enable distributed tracing if this service is called remotely from an instrumented application. Overrides the integration config, which is configured via the DD_TORNADO_DISTRIBUTED_TRACING environment variable. We suggest enabling it only for internal services where headers are under your control.

  • agent_hostname (default: localhost): define the hostname of the APM agent.

  • agent_port (default: 8126): define the port of the APM agent.

  • settings (default: {}): Tracer extra settings used to change, for instance, the filtering behavior.

unittest#

The unittest integration traces test executions.

Enabling#

The unittest integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Alternately, use patch() to manually enable the integration:

from ddtrace import patch
patch(unittest=True)

Global Configuration#

ddtrace.config.unittest["operation_name"]

The operation name reported by default for unittest traces.

This option can also be set with the DD_UNITTEST_OPERATION_NAME environment variable.

Default: "unittest.test"

ddtrace.config.unittest["strict_naming"]

Requires all unittest tests to start with test as stated in the Python documentation

This option can also be set with the DD_CIVISIBILITY_UNITTEST_STRICT_NAMING environment variable.

Default: True

urllib#

Instruments the standard library urllib.request module to trace HTTP requests and detect SSRF vulnerabilities. It is enabled by default if DD_IAST_ENABLED is set to True (for detecting sink points) and/or DD_ASM_ENABLED is set to True (for exploit prevention).

urllib3#

The urllib3 integration instruments tracing on http calls with optional support for distributed tracing across services the client communicates with.

Enabling#

The urllib3 integration is not enabled by default. Enable it with DD_PATCH_MODULES or DD_TRACE_URLLIB3_ENABLED when using either ddtrace-run (for example, DD_PATCH_MODULES=urllib3 ddtrace-run python app.py) or import ddtrace.auto (for example, DD_PATCH_MODULES=urllib3:true python app.py):

import ddtrace.auto
# use urllib3 like usual

Configuration#

ddtrace.config.urllib3['service']

The service name reported by default for urllib3 client instances.

This option can also be set with the DD_URLLIB3_SERVICE environment variable.

Default: "urllib3"

ddtrace.config.urllib3['distributed_tracing']

Whether or not to parse distributed tracing headers.

Default: True

ddtrace.config.urllib3['trace_query_string']

Whether or not to include the query string as a tag.

Default: False

ddtrace.config.urllib3['split_by_domain']

Whether or not to use the domain name of requests as the service name.

Default: False

valkey#

The valkey integration traces valkey requests.

Enabling#

The valkey integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(valkey=True)

Configuration#

ddtrace.config.valkey["service"]

The service name reported by default for valkey traces.

This option can also be set with the DD_VALKEY_SERVICE environment variable.

Default: "valkey"

ddtrace.config.valkey["cmd_max_length"]

Max allowable size for the valkey command span tag. Anything beyond the max length will be replaced with "...".

This option can also be set with the DD_VALKEY_CMD_MAX_LENGTH environment variable.

Default: 1000

ddtrace.config.valkey["resource_only_command"]

The span resource will only include the command executed. To include all arguments in the span resource, set this value to False.

This option can also be set with the DD_VALKEY_RESOURCE_ONLY_COMMAND environment variable.

Default: True
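An illustrative sketch of the resource_only_command behavior (not ddtrace's exact implementation):

```python
def span_resource(command_args, resource_only_command=True):
    # True: the resource is just "SET"; False: "SET key value".
    if resource_only_command:
        return command_args[0]
    return " ".join(command_args)
```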

vertexai#

The Vertex AI integration instruments the Vertex Generative AI SDK for Python for requests made to Google models.

All traces submitted from the Vertex AI integration are tagged by:

  • service, env, version: see the Unified Service Tagging docs.

  • vertexai.request.provider: LLM provider used in the request (e.g. google for Google models).

  • vertexai.request.model: Google model used in the request.

Enabling#

The Vertex AI integration is enabled automatically when you use ddtrace-run or import ddtrace.auto.

Alternatively, use patch() to manually enable the Vertex AI integration:

from ddtrace import config, patch

patch(vertexai=True)

Configuration#

ddtrace.config.vertexai["service"]

The service name reported by default for Vertex AI requests.

Alternatively, set this option with the DD_VERTEXAI_SERVICE environment variable.

Vertica#

The Vertica integration will trace queries made using the vertica-python library.

Vertica will be automatically instrumented with import ddtrace.auto, or when using the ddtrace-run command.

Vertica is instrumented on import. To instrument Vertica manually use the patch function. Note the ordering of the following statements:

from ddtrace import patch
patch(vertica=True)

import vertica_python

# use vertica_python like usual

To configure the Vertica integration globally you can use the Config API:

from ddtrace import config, patch
patch(vertica=True)

config.vertica['service_name'] = 'my-vertica-database'

vLLM#

The vLLM integration traces requests through the vLLM V1 engine.

Note: This integration only supports vLLM V1 (VLLM_USE_V1=1). V0 engine support has been removed as V0 is deprecated and will be removed in a future vLLM release.

Enabling#

The vLLM integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Alternatively, use patch() to manually enable the integration:

from ddtrace import patch
patch(vllm=True)

Configuration#

ddtrace.config.vllm["service"]

The service name reported by default for vLLM requests.

This option can also be set with the DD_VLLM_SERVICE environment variable.

Default: "vllm"

Architecture#

The integration uses engine-side tracing to capture all requests regardless of API entry point:

  1. Model Name Injection (LLMEngine.__init__ / AsyncLLM.__init__):

     • Extracts and stores the model name for span tagging

     • Forces log_stats=True to enable latency and token metrics collection

  2. Context Injection (Processor.process_inputs):

     • Injects Datadog trace context into trace_headers

     • Context propagates through the engine automatically

  3. Span Creation (OutputProcessor.process_outputs):

     • Creates spans when requests finish

     • Extracts data from RequestState and EngineCoreOutput

     • Decodes the prompt from token IDs for chat requests when text is unavailable

     • Works for all operations: completion, chat, embedding, cross-encoding

This design ensures:

  • All requests are traced (AsyncLLM, LLM, API server, chat)

  • Complete timing and token metrics from engine stats

  • Full prompt text capture (including chat conversations)

  • Minimal performance overhead
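The engine-side pattern described above can be sketched in plain Python. This is a conceptual illustration only, not ddtrace's actual implementation: the function names, dict fields, and span representation are all illustrative.

```python
# Sketch of engine-side tracing: context is injected when a request enters
# the processor, and a span is emitted only when the output processor sees
# the request finish.
import time

finished_spans = []


def process_inputs(request: dict, trace_headers: dict) -> dict:
    # Context injection: attach the trace context so it rides along with
    # the request as it moves through the engine.
    request["trace_headers"] = dict(trace_headers)
    request["queued_at"] = time.time()
    return request


def process_outputs(request: dict, output_text: str) -> None:
    # Span creation: build the span once the request has finished, using
    # timing recorded while it moved through the engine.
    finished_spans.append({
        "trace_headers": request["trace_headers"],
        "resource": "vllm.request",
        "latency.queue": request["started_at"] - request["queued_at"],
        "output": output_text,
    })


req = process_inputs({"prompt": "hi"}, {"x-datadog-trace-id": "123"})
req["started_at"] = time.time()
process_outputs(req, "hello!")
```

Because the span is created at the output processor, every entry point (AsyncLLM, offline LLM, API server) is covered by the same code path.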

Span Tags#

All spans are tagged with:

Request Information:

  • vllm.request.model: Model name

  • vllm.request.provider: "vllm"

Latency Metrics:

  • vllm.latency.ttft: Time to first token (seconds)

  • vllm.latency.queue: Queue wait time (seconds)

  • vllm.latency.prefill: Prefill phase time (seconds)

  • vllm.latency.decode: Decode phase time (seconds)

  • vllm.latency.inference: Total inference time (seconds)

LLMObs Tags (when LLMObs is enabled):

For completion/chat operations:

  • input_messages: Prompt text (auto-decoded for chat requests)

  • output_messages: Generated text

  • input_tokens: Number of input tokens

  • output_tokens: Number of generated tokens

  • temperature, max_tokens, top_p, n: Sampling parameters

  • num_cached_tokens: Number of KV cache hits

For embedding operations:

  • input_documents: Input text or token IDs

  • output_value: Embedding shape description

  • embedding_dim: Embedding dimension

  • num_embeddings: Number of embeddings returned

Supported Operations#

Async Streaming (AsyncLLM):

  • generate(): Text completion

  • encode(): Text embedding

Offline Batch (LLM):

  • generate(): Text completion

  • chat(): Multi-turn conversations

  • encode(): Text embedding

  • _cross_encoding_score(): Cross-encoding scores

API Server:

  • All OpenAI-compatible endpoints (automatically traced through the engine)

Requirements#

  • vLLM V1 (VLLM_USE_V1=1)

  • vLLM >= 0.10.2 (for V1 trace header propagation support)

Webbrowser#

The webbrowser integration traces HTTP requests made with the standard library webbrowser module and detects SSRF vulnerabilities. It is enabled by default if DD_IAST_ENABLED is set to True (for detecting sink points) and/or DD_ASM_ENABLED is set to True (for exploit prevention).

WSGI#

The Datadog WSGI middleware traces all WSGI requests.

Usage#

The middleware can be applied manually as follows:

from ddtrace.contrib.wsgi import DDWSGIMiddleware

# application is a WSGI application
application = DDWSGIMiddleware(application)
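To illustrate what a tracing middleware like this does, here is a minimal sketch of the WSGI wrapping pattern. It is not ddtrace's implementation: the middleware class, tag keys, and the dict standing in for a real span are all illustrative.

```python
# Minimal sketch of a tracing WSGI middleware: wrap the application,
# record request metadata, and capture the status code by intercepting
# start_response.
import time


class TracingMiddleware:
    def __init__(self, app, service="wsgi"):
        self.app = app
        self.service = service
        self.finished_spans = []

    def __call__(self, environ, start_response):
        span = {
            "service": self.service,
            "resource": f"{environ['REQUEST_METHOD']} {environ['PATH_INFO']}",
            "start": time.time(),
        }

        def traced_start_response(status, headers, exc_info=None):
            # The status line looks like "200 OK"; keep just the code.
            span["http.status_code"] = status.split(" ", 1)[0]
            return start_response(status, headers, exc_info)

        try:
            return self.app(environ, traced_start_response)
        finally:
            span["duration"] = time.time() - span["start"]
            self.finished_spans.append(span)


def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]


app = TracingMiddleware(application)
body = app({"REQUEST_METHOD": "GET", "PATH_INFO": "/"}, lambda s, h, e=None: None)
```

Wrapping at the WSGI layer means every request is traced regardless of which framework sits on top of the application.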

Configuration#

ddtrace.config.wsgi["service"]

The service name reported for the WSGI application.

This option can also be set with the DD_SERVICE environment variable.

Default: "wsgi"

ddtrace.config.wsgi["distributed_tracing"]

Configuration that allows distributed tracing to be enabled.

Default: True

All HTTP tags are supported for this integration.

yaaredis#

The yaaredis integration traces yaaredis requests.

Enabling#

The yaaredis integration is enabled automatically when using ddtrace-run or import ddtrace.auto.

Or use patch() to manually enable the integration:

from ddtrace import patch
patch(yaaredis=True)

Configuration#

ddtrace.config.yaaredis["service"]

The service name reported by default for yaaredis traces.

This option can also be set with the DD_YAAREDIS_SERVICE environment variable.

Default: "redis"

ddtrace.config.yaaredis["cmd_max_length"]

Max allowable size for the yaaredis command span tag. Anything beyond the max length will be replaced with "...".

This option can also be set with the DD_YAAREDIS_CMD_MAX_LENGTH environment variable.

Default: 1000
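The truncation behavior can be illustrated with a short sketch. The helper below is hypothetical, not the ddtrace-internal function, and the exact cut point around the limit is an assumption; it only shows the documented effect of anything beyond the maximum length being replaced with "...".

```python
# Illustrative sketch: commands within the limit pass through unchanged;
# longer commands are cut and suffixed with "...".
def truncate_command(cmd: str, max_length: int = 1000) -> str:
    if len(cmd) <= max_length:
        return cmd
    return cmd[: max_length - 3] + "..."


short = truncate_command("GET key")
long = truncate_command("SET " + "x" * 2000)
```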

ddtrace.config.yaaredis["resource_only_command"]

When set to True, the span resource will only include the command executed. To include all arguments in the span resource, set this value to False.

This option can also be set with the DD_REDIS_RESOURCE_ONLY_COMMAND environment variable.

Default: True