Elasticsearch Destination
Use Observability Pipelines’ Elasticsearch destination to send logs to Elasticsearch.
Setup
Set up the Elasticsearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
Set up the destination
- In the Mode dropdown menu, select Bulk or Data streams.
- Bulk mode
- Uses Elasticsearch’s Bulk API to send batched events directly into a standard index.
- Choose this mode when you want direct control over index naming and lifecycle management. Data is appended to the index you specify, and you are responsible for handling rollovers, deletions, and mappings.
- To configure Bulk mode:
- In the Index field, optionally enter the name of the Elasticsearch index. You can use template syntax to dynamically route logs to different indexes based on specific fields in your logs, for example `logs-{{service}}` (see the naming sketch after this list).
- Data streams mode
- Uses Elasticsearch Data Streams for log storage. Data streams automatically manage backing indexes and rollovers, making them ideal for time series log data.
- Choose this mode when you want Elasticsearch to manage the index lifecycle for you. Data streams ensure smooth rollovers, Index Lifecycle Management (ILM) compatibility, and optimized handling of time-based data.
- To configure Data streams mode, optionally define the data stream name (default is `logs-generic-default`) by entering the following information:
  - In the Type field, enter the category of data being ingested, for example `logs`.
  - In the Dataset field, specify the format or data source that describes the structure, for example `apache`.
  - In the Namespace field, enter the grouping for organizing your data streams, for example `production`.
- The UI shows a preview of the data stream name you configured. With the above example inputs, the data stream name that the Worker writes to is `logs-apache-production` (see the naming sketch after this list).
- Enter the name for the Elasticsearch index. See template syntax if you want to route logs to different indexes based on specific fields in your logs.
- Enter the Elasticsearch version.
- Optionally, toggle the switch to enable Buffering Options. Enable a configurable buffer on your destination to ensure that intermittent latency or an outage at the destination doesn't create immediate backpressure, and to allow events to continue to be ingested from your source (see the buffer sketch after this list). Disk buffers can also increase pipeline durability by writing logs to disk, ensuring buffered logs persist through a Worker restart. See Configurable buffers for destinations for more information.
- If left unconfigured, your destination uses a memory buffer with a capacity of 500 events.
- To configure a buffer on your destination:
- Select the buffer type you want to set (Memory or Disk).
- Enter the buffer size and select the unit.
- Maximum memory buffer size is 128 GB.
- Maximum disk buffer size is 500 GB.
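To make the two naming schemes concrete, here is a minimal, hypothetical Python sketch of how a Bulk mode index template such as `logs-{{service}}` and a Data streams name resolve for a given event. The `resolve_index` and `data_stream_name` helpers are illustrative stand-ins, not part of the Worker, and the Worker's template syntax may support more than the simple field substitution shown here.

```python
# Hypothetical sketch of the two naming schemes; not the Worker's code.
import re

def resolve_index(template: str, event: dict) -> str:
    """Bulk mode: replace {{field}} placeholders with values from the event."""
    return re.sub(
        r"\{\{\s*([\w.]+)\s*\}\}",
        lambda m: str(event.get(m.group(1), "unknown")),
        template,
    )

def data_stream_name(type_: str, dataset: str, namespace: str) -> str:
    """Data streams mode: names follow the <type>-<dataset>-<namespace> pattern."""
    return f"{type_}-{dataset}-{namespace}"

# Bulk mode: "logs-{{service}}" routes this event to the "logs-web" index.
print(resolve_index("logs-{{service}}", {"service": "web", "message": "GET /"}))

# Data streams mode: the example inputs above yield "logs-apache-production".
print(data_stream_name("logs", "apache", "production"))
```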
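The buffering behavior can also be pictured as a bounded queue between your source and the destination. The following Python sketch is a loose analogy under stated assumptions (a 500-event memory buffer and a hypothetical `send_to_elasticsearch` sender); it is not how the Worker is implemented.

```python
# Loose analogy for destination buffering; not the Worker's implementation.
import queue
import threading
import time

buffer = queue.Queue(maxsize=500)  # default memory buffer capacity

def send_to_elasticsearch(event: dict):
    pass  # placeholder for the real delivery call

def ingest(event: dict):
    buffer.put(event)  # blocks (backpressure) only once the buffer is full

def deliver():
    while True:
        event = buffer.get()
        try:
            send_to_elasticsearch(event)
        except ConnectionError:
            time.sleep(1)       # destination outage: retry later
            buffer.put(event)   # event stays buffered, not lost

threading.Thread(target=deliver, daemon=True).start()
ingest({"message": "buffered while the destination catches up"})
```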
Set the environment variables
- Elasticsearch authentication username:
  - Stored in the environment variable `DD_OP_DESTINATION_ELASTICSEARCH_USERNAME`.
- Elasticsearch authentication password:
  - Stored in the environment variable `DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD`.
- Elasticsearch endpoint URL:
  - Stored in the environment variable `DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL`.
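To illustrate how these variables fit together, the sketch below hand-builds a Bulk API request authenticated with HTTP basic auth. The request shape follows Elasticsearch's documented Bulk API (newline-delimited JSON posted to `/_bulk`), but the sketch itself is an assumption: the `logs-web` index and the `requests` dependency are stand-ins, and this is not the Worker's implementation.

```python
# Hypothetical sketch: how the three variables map onto a Bulk API request.
import json
import os

import requests  # assumed available; any HTTP client works

endpoint = os.environ["DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL"]
auth = (
    os.environ["DD_OP_DESTINATION_ELASTICSEARCH_USERNAME"],
    os.environ["DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD"],
)

# Bulk API NDJSON: for each event, an action line followed by a source line.
events = [{"message": "hello"}, {"message": "world"}]
body = "".join(
    json.dumps({"index": {"_index": "logs-web"}}) + "\n" + json.dumps(e) + "\n"
    for e in events
)

resp = requests.post(
    f"{endpoint.rstrip('/')}/_bulk",
    data=body,
    auth=auth,  # HTTP basic auth from the username/password variables
    headers={"Content-Type": "application/x-ndjson"},
)
resp.raise_for_status()
```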
How the destination works
Event batching
A batch of events is flushed when one of these parameters is met. See event batching for more information.
| Max Events | Max Bytes  | Timeout (seconds) |
|------------|------------|-------------------|
| None       | 10,000,000 | 1                  |
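Read together, the table says a batch ships as soon as it reaches 10,000,000 bytes or one second elapses, whichever comes first; there is no event-count cap. The sketch below illustrates that rule in Python. It is a simplification, not the Worker's code: a real implementation would also flush on a timer even when no new events arrive.

```python
# Simplified illustration of the flush rule above; not the Worker's code.
import json
import time

MAX_BYTES = 10_000_000   # Max Bytes from the table
TIMEOUT_SECONDS = 1.0    # Timeout from the table; Max Events is unlimited

class Batcher:
    def __init__(self, flush):
        self.flush = flush            # callback that ships a list of events
        self.events = []
        self.bytes = 0
        self.started = time.monotonic()

    def add(self, event: dict):
        self.events.append(event)
        self.bytes += len(json.dumps(event))
        # Flush when either threshold is met, whichever comes first.
        if self.bytes >= MAX_BYTES or time.monotonic() - self.started >= TIMEOUT_SECONDS:
            self._flush()

    def _flush(self):
        if self.events:
            self.flush(self.events)
        self.events, self.bytes = [], 0
        self.started = time.monotonic()

batcher = Batcher(flush=lambda evs: print(f"flushing {len(evs)} events"))
for i in range(5):
    batcher.add({"message": f"log line {i}"})
```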