List annotation queues. Optionally filter by project ID or queue IDs. These parameters are mutually exclusive.
If neither is provided, all queues in the organization are returned.
Arguments
Query Strings
Name
Type
Description
projectId
string
Filter annotation queues by project ID. Cannot be used together with queueIds.
queueIds
array
Filter annotation queues by queue IDs (comma-separated). Cannot be used together with projectId.
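The mutual-exclusivity rule above can be sketched as a small query-string builder. The function name is hypothetical and the endpoint path is not part of this reference excerpt, so only the parameter handling is shown:

```python
from urllib.parse import urlencode


def build_list_queues_query(project_id=None, queue_ids=None):
    """Build the query string for listing annotation queues.

    projectId and queueIds are mutually exclusive, per the table above;
    queueIds is sent comma-separated. Passing neither yields an empty
    query string, which returns all queues in the organization.
    """
    if project_id is not None and queue_ids is not None:
        raise ValueError("projectId and queueIds are mutually exclusive")
    params = {}
    if project_id is not None:
        params["projectId"] = project_id
    if queue_ids is not None:
        # Join the IDs with commas; urlencode percent-encodes the separator.
        params["queueIds"] = ",".join(queue_ids)
    return urlencode(params)
```

Rejecting the conflicting combination client-side surfaces the error before a request is ever sent.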
Response containing a list of LLM Observability annotation queues.
Field
Type
Description
data [required]
[object]
List of annotation queues.
attributes [required]
object
Attributes of an LLM Observability annotation queue.
created_at [required]
date-time
Timestamp when the queue was created.
created_by [required]
string
Identifier of the user who created the queue.
description [required]
string
Description of the annotation queue.
modified_at [required]
date-time
Timestamp when the queue was last modified.
modified_by [required]
string
Identifier of the user who last modified the queue.
name [required]
string
Name of the annotation queue.
owned_by [required]
string
Identifier of the user who owns the queue.
project_id [required]
string
Identifier of the project this queue belongs to.
id [required]
string
Unique identifier of the annotation queue.
type [required]
enum
Resource type of an LLM Observability annotation queue.
Allowed enum values: queues
{"data":[{"attributes":{"created_at":"2024-01-15T10:30:00Z","created_by":"00000000-0000-0000-0000-000000000002","description":"Queue for annotating customer support traces","modified_at":"2024-01-15T10:30:00Z","modified_by":"00000000-0000-0000-0000-000000000002","name":"My annotation queue","owned_by":"00000000-0000-0000-0000-000000000002","project_id":"a33671aa-24fd-4dcd-9b33-a8ec7dde7751"},"id":"b5e7f3a1-9c2d-4f8b-a1e6-3d7e9f0a2b4c","type":"queues"}]}
Create a new annotation queue. Only name, project_id, and description are accepted.
Fields such as created_by, owned_by, created_at, modified_by, and modified_at are set automatically by the backend.
Data object for creating an LLM Observability annotation queue.
attributes [required]
object
Attributes for creating an LLM Observability annotation queue.
description
string
Description of the annotation queue.
name [required]
string
Name of the annotation queue.
project_id [required]
string
Identifier of the project this queue belongs to.
type [required]
enum
Resource type of an LLM Observability annotation queue.
Allowed enum values: queues
{"data":{"attributes":{"description":"Queue for annotating customer support traces","name":"My annotation queue","project_id":"a33671aa-24fd-4dcd-9b33-a8ec7dde7751"},"type":"queues"}}
Response containing a single LLM Observability annotation queue.
Field
Type
Description
data [required]
object
Data object for an LLM Observability annotation queue.
attributes [required]
object
Attributes of an LLM Observability annotation queue.
created_at [required]
date-time
Timestamp when the queue was created.
created_by [required]
string
Identifier of the user who created the queue.
description [required]
string
Description of the annotation queue.
modified_at [required]
date-time
Timestamp when the queue was last modified.
modified_by [required]
string
Identifier of the user who last modified the queue.
name [required]
string
Name of the annotation queue.
owned_by [required]
string
Identifier of the user who owns the queue.
project_id [required]
string
Identifier of the project this queue belongs to.
id [required]
string
Unique identifier of the annotation queue.
type [required]
enum
Resource type of an LLM Observability annotation queue.
Allowed enum values: queues
{"data":{"attributes":{"created_at":"2024-01-15T10:30:00Z","created_by":"00000000-0000-0000-0000-000000000002","description":"Queue for annotating customer support traces","modified_at":"2024-01-15T10:30:00Z","modified_by":"00000000-0000-0000-0000-000000000002","name":"My annotation queue","owned_by":"00000000-0000-0000-0000-000000000002","project_id":"a33671aa-24fd-4dcd-9b33-a8ec7dde7751"},"id":"b5e7f3a1-9c2d-4f8b-a1e6-3d7e9f0a2b4c","type":"queues"}}
Response containing a single LLM Observability annotation queue.
Field
Type
Description
data [required]
object
Data object for an LLM Observability annotation queue.
attributes [required]
object
Attributes of an LLM Observability annotation queue.
created_at [required]
date-time
Timestamp when the queue was created.
created_by [required]
string
Identifier of the user who created the queue.
description [required]
string
Description of the annotation queue.
modified_at [required]
date-time
Timestamp when the queue was last modified.
modified_by [required]
string
Identifier of the user who last modified the queue.
name [required]
string
Name of the annotation queue.
owned_by [required]
string
Identifier of the user who owns the queue.
project_id [required]
string
Identifier of the project this queue belongs to.
id [required]
string
Unique identifier of the annotation queue.
type [required]
enum
Resource type of an LLM Observability annotation queue.
Allowed enum values: queues
{"data":{"attributes":{"created_at":"2024-01-15T10:30:00Z","created_by":"00000000-0000-0000-0000-000000000002","description":"Queue for annotating customer support traces","modified_at":"2024-01-15T10:30:00Z","modified_by":"00000000-0000-0000-0000-000000000002","name":"My annotation queue","owned_by":"00000000-0000-0000-0000-000000000002","project_id":"a33671aa-24fd-4dcd-9b33-a8ec7dde7751"},"id":"b5e7f3a1-9c2d-4f8b-a1e6-3d7e9f0a2b4c","type":"queues"}}
Response containing a custom LLM Observability evaluator configuration.
Field
Type
Description
data [required]
object
Data object for a custom LLM Observability evaluator configuration.
attributes [required]
object
Attributes of a custom LLM Observability evaluator configuration.
category
string
Category of the evaluator.
created_at [required]
date-time
Timestamp when the evaluator configuration was created.
created_by
object
A Datadog user associated with a custom evaluator configuration.
email
string
Email address of the user.
eval_name [required]
string
Name of the custom evaluator.
last_updated_by
object
A Datadog user associated with a custom evaluator configuration.
email
string
Email address of the user.
llm_judge_config
object
LLM judge configuration for a custom evaluator.
assessment_criteria
object
Criteria used to assess the pass/fail result of a custom evaluator.
max_threshold
double
Maximum numeric threshold for a passing result.
min_threshold
double
Minimum numeric threshold for a passing result.
pass_values
[string]
Specific output values considered as a passing result.
pass_when
boolean
When true, a boolean output of true is treated as passing.
inference_params [required]
object
LLM inference parameters for a custom evaluator.
frequency_penalty
double
Frequency penalty; penalizes tokens in proportion to how often they have already appeared, reducing verbatim repetition.
max_tokens
int64
Maximum number of tokens to generate.
presence_penalty
double
Presence penalty; penalizes tokens that have appeared at least once, encouraging the model to introduce new topics.
temperature
double
Sampling temperature for the LLM.
top_k
int64
Top-k sampling parameter.
top_p
double
Top-p (nucleus) sampling parameter.
last_used_library_prompt_template_name
string
Name of the last library prompt template used.
modified_library_prompt_template
boolean
Whether the library prompt template was modified.
output_schema
object
JSON schema describing the expected output format of the LLM judge.
parsing_type
enum
Output parsing type for a custom LLM judge evaluator.
Allowed enum values: structured_output,json
prompt_template
[object]
List of messages forming the LLM judge prompt template.
content
string
Text content of the message.
contents
[object]
Multi-part content blocks for the message.
type [required]
string
Content block type.
value [required]
object
Value of a prompt message content block.
text
string
Text content of the message block.
tool_call
object
A tool call within a prompt message.
arguments
string
JSON-encoded arguments for the tool call.
id
string
Unique identifier of the tool call.
name
string
Name of the tool being called.
type
string
Type of the tool call.
tool_call_result
object
A tool call result within a prompt message.
name
string
Name of the tool that produced this result.
result
string
The result returned by the tool.
tool_id
string
Identifier of the tool call this result corresponds to.
type
string
Type of the tool result.
role [required]
string
Role of the message author.
llm_provider
object
LLM provider configuration for a custom evaluator.
bedrock
object
AWS Bedrock-specific options for LLM provider configuration.
region
string
AWS region for Bedrock.
integration_account_id
string
Integration account identifier.
integration_provider
enum
Name of the LLM integration provider.
Allowed enum values: openai,amazon-bedrock,anthropic,azure-openai,vertex-ai,llm-proxy
model_name
string
Name of the LLM model.
vertex_ai
object
Google Vertex AI-specific options for LLM provider configuration.
location
string
Google Cloud region.
project
string
Google Cloud project ID.
target
object
Target application configuration for a custom evaluator.
application_name [required]
string
Name of the ML application this evaluator targets.
enabled [required]
boolean
Whether the evaluator is active for the target application.
eval_scope
enum
Scope at which to evaluate spans.
Allowed enum values: span,trace,session
filter
string
Filter expression to select which spans to evaluate.
root_spans_only
boolean
When true, only root spans are evaluated.
sampling_percentage
double
Percentage of traces to evaluate. Must be greater than 0 and at most 100.
updated_at [required]
date-time
Timestamp when the evaluator configuration was last updated.
id [required]
string
Unique name identifier of the evaluator configuration.
type [required]
enum
Type of the custom LLM Observability evaluator configuration resource.
Allowed enum values: evaluator_config
{"data":{"attributes":{"category":"Custom","created_at":"2024-01-15T10:30:00Z","created_by":{"email":"[email protected]"},"eval_name":"my-custom-evaluator","last_updated_by":{"email":"[email protected]"},"llm_judge_config":{"assessment_criteria":{"max_threshold":1,"min_threshold":0.7,"pass_values":["pass","yes"],"pass_when":true},"inference_params":{"frequency_penalty":0,"max_tokens":1024,"presence_penalty":0,"temperature":0.7,"top_k":50,"top_p":1},"last_used_library_prompt_template_name":"sentiment-analysis-v1","modified_library_prompt_template":false,"output_schema":{},"parsing_type":"structured_output","prompt_template":[{"content":"Rate the quality of the following response:","contents":[{"type":"text","value":{"text":"What is the sentiment of this review?","tool_call":{"arguments":"{\"location\": \"San Francisco\"}","id":"call_abc123","name":"get_weather","type":"function"},"tool_call_result":{"name":"get_weather","result":"sunny, 72F","tool_id":"call_abc123","type":"function"}}}],"role":"user"}]},"llm_provider":{"bedrock":{"region":"us-east-1"},"integration_account_id":"my-account-id","integration_provider":"openai","model_name":"gpt-4o","vertex_ai":{"location":"us-central1","project":"my-gcp-project"}},"target":{"application_name":"my-llm-app","enabled":true,"eval_scope":"span","filter":"@service:my-service","root_spans_only":true,"sampling_percentage":50},"updated_at":"2024-01-15T10:30:00Z"},"id":"my-custom-evaluator","type":"evaluator_config"}}
Data object for creating or updating a custom LLM Observability evaluator configuration.
attributes [required]
object
Attributes for creating or updating a custom LLM Observability evaluator configuration.
category
string
Category of the evaluator.
eval_name
string
Name of the custom evaluator. If provided, must match the eval_name path parameter.
llm_judge_config
object
LLM judge configuration for a custom evaluator.
assessment_criteria
object
Criteria used to assess the pass/fail result of a custom evaluator.
max_threshold
double
Maximum numeric threshold for a passing result.
min_threshold
double
Minimum numeric threshold for a passing result.
pass_values
[string]
Specific output values considered as a passing result.
pass_when
boolean
When true, a boolean output of true is treated as passing.
inference_params [required]
object
LLM inference parameters for a custom evaluator.
frequency_penalty
double
Frequency penalty; penalizes tokens in proportion to how often they have already appeared, reducing verbatim repetition.
max_tokens
int64
Maximum number of tokens to generate.
presence_penalty
double
Presence penalty; penalizes tokens that have appeared at least once, encouraging the model to introduce new topics.
temperature
double
Sampling temperature for the LLM.
top_k
int64
Top-k sampling parameter.
top_p
double
Top-p (nucleus) sampling parameter.
last_used_library_prompt_template_name
string
Name of the last library prompt template used.
modified_library_prompt_template
boolean
Whether the library prompt template was modified.
output_schema
object
JSON schema describing the expected output format of the LLM judge.
parsing_type
enum
Output parsing type for a custom LLM judge evaluator.
Allowed enum values: structured_output,json
prompt_template
[object]
List of messages forming the LLM judge prompt template.
content
string
Text content of the message.
contents
[object]
Multi-part content blocks for the message.
type [required]
string
Content block type.
value [required]
object
Value of a prompt message content block.
text
string
Text content of the message block.
tool_call
object
A tool call within a prompt message.
arguments
string
JSON-encoded arguments for the tool call.
id
string
Unique identifier of the tool call.
name
string
Name of the tool being called.
type
string
Type of the tool call.
tool_call_result
object
A tool call result within a prompt message.
name
string
Name of the tool that produced this result.
result
string
The result returned by the tool.
tool_id
string
Identifier of the tool call this result corresponds to.
type
string
Type of the tool result.
role [required]
string
Role of the message author.
llm_provider
object
LLM provider configuration for a custom evaluator.
bedrock
object
AWS Bedrock-specific options for LLM provider configuration.
region
string
AWS region for Bedrock.
integration_account_id
string
Integration account identifier.
integration_provider
enum
Name of the LLM integration provider.
Allowed enum values: openai,amazon-bedrock,anthropic,azure-openai,vertex-ai,llm-proxy
model_name
string
Name of the LLM model.
vertex_ai
object
Google Vertex AI-specific options for LLM provider configuration.
location
string
Google Cloud region.
project
string
Google Cloud project ID.
target [required]
object
Target application configuration for a custom evaluator.
application_name [required]
string
Name of the ML application this evaluator targets.
enabled [required]
boolean
Whether the evaluator is active for the target application.
eval_scope
enum
Scope at which to evaluate spans.
Allowed enum values: span,trace,session
filter
string
Filter expression to select which spans to evaluate.
root_spans_only
boolean
When true, only root spans are evaluated.
sampling_percentage
double
Percentage of traces to evaluate. Must be greater than 0 and at most 100.
id
string
Name of the evaluator. If provided, must match the eval_name path parameter.
type [required]
enum
Type of the custom LLM Observability evaluator configuration resource.
Allowed enum values: evaluator_config
{"data":{"attributes":{"category":"Custom","eval_name":"my-custom-evaluator","llm_judge_config":{"assessment_criteria":{"max_threshold":1,"min_threshold":0.7,"pass_values":["pass","yes"],"pass_when":true},"inference_params":{"frequency_penalty":0,"max_tokens":1024,"presence_penalty":0,"temperature":0.7,"top_k":50,"top_p":1},"last_used_library_prompt_template_name":"sentiment-analysis-v1","modified_library_prompt_template":false,"output_schema":{},"parsing_type":"structured_output","prompt_template":[{"content":"Rate the quality of the following response:","contents":[{"type":"text","value":{"text":"What is the sentiment of this review?","tool_call":{"arguments":"{\"location\": \"San Francisco\"}","id":"call_abc123","name":"get_weather","type":"function"},"tool_call_result":{"name":"get_weather","result":"sunny, 72F","tool_id":"call_abc123","type":"function"}}}],"role":"user"}]},"llm_provider":{"bedrock":{"region":"us-east-1"},"integration_account_id":"my-account-id","integration_provider":"openai","model_name":"gpt-4o","vertex_ai":{"location":"us-central1","project":"my-gcp-project"}},"target":{"application_name":"my-llm-app","enabled":true,"eval_scope":"span","filter":"@service:my-service","root_spans_only":true,"sampling_percentage":50}},"id":"my-custom-evaluator","type":"evaluator_config"}}