Merged
32 changes: 31 additions & 1 deletion .github/workflows/shared-test-integration.yml
@@ -4,7 +4,7 @@ on:
workflow_call:
inputs:
integration:
description: 'Integration type (agentops, mlflow, weave)'
description: 'Integration type (agentops, mlflow, weave, mcp, openai)'
required: true
type: string
os:
@@ -62,7 +62,16 @@ jobs:
sleep 10 # Wait for server to start
shell: bash

- name: Setup MCP server
if: inputs.integration == 'mcp'
run: |
cd python/tests/integration
python mcp_server.py &
sleep 10 # Wait for server to start
shell: bash

- name: Run ${{ inputs.integration }} baseline test
if: inputs.integration != 'openai'
env:
# Common environment variables
OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}
@@ -79,6 +88,7 @@
shell: bash

- name: Run ${{ inputs.integration }} integration tests
if: inputs.integration != 'openai'
env:
# Common environment variables
OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}
@@ -94,6 +104,26 @@
python ${{ inputs.integration }}_poml.py
shell: bash

- name: Run OpenAI response format test
if: inputs.integration == 'openai'
env:
OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
cd python/tests/integration
python openai_response_format.py
shell: bash

- name: Run OpenAI tool call test
if: inputs.integration == 'openai'
env:
OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
cd python/tests/integration
python openai_tool_call.py
shell: bash

- name: Run mlflow additional integration tests
if: inputs.integration == 'mlflow'
env:
14 changes: 12 additions & 2 deletions .github/workflows/test-after-publish.yml
@@ -45,7 +45,12 @@ jobs:
set -ex
for poml_file in examples/*.poml; do
echo "Testing $poml_file"
npx poml -f "$poml_file"
# Check if context file exists
if [ -f "${poml_file%.poml}.context.json" ]; then
npx poml -f "$poml_file" --context-file "${poml_file%.poml}.context.json"
else
npx poml -f "$poml_file"
fi
done
shell: bash
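
The shell parameter expansion above derives the context filename by stripping the `.poml` suffix and appending `.context.json`. If the same lookup is ever needed outside the workflow, it can be mirrored in Python with the standard library; this is a minimal sketch (the `context_file_for` helper name is illustrative, not part of any POML API):

```python
from pathlib import PurePosixPath

def context_file_for(poml_file: str) -> PurePosixPath:
    # Mirror of the shell expansion "${poml_file%.poml}.context.json":
    # replace the .poml suffix with .context.json.
    return PurePosixPath(poml_file).with_suffix(".context.json")

print(context_file_for("examples/demo.poml"))  # examples/demo.context.json
```

`PurePosixPath` is used so the result is deterministic across operating systems, matching the forward-slash paths used in the workflow.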

@@ -85,6 +90,11 @@ jobs:
set -ex
for poml_file in examples/*.poml; do
echo "Testing $poml_file"
poml-cli -f "$poml_file"
# Check if context file exists
if [ -f "${poml_file%.poml}.context.json" ]; then
poml-cli -f "$poml_file" --context-file "${poml_file%.poml}.context.json"
else
poml-cli -f "$poml_file"
fi
done
shell: bash
4 changes: 2 additions & 2 deletions .github/workflows/test-integration.yml
@@ -9,7 +9,7 @@ jobs:
test-linux:
strategy:
matrix:
integration: [agentops, mlflow, weave]
integration: [agentops, mlflow, weave, mcp, openai]
fail-fast: false
uses: ./.github/workflows/shared-test-integration.yml
with:
@@ -22,7 +22,7 @@
strategy:
matrix:
os: [windows-latest, macos-latest]
integration: [agentops, mlflow, weave]
integration: [agentops, mlflow, weave, mcp, openai]
fail-fast: false
uses: ./.github/workflows/shared-test-integration.yml
with:
2 changes: 2 additions & 0 deletions .gitignore
@@ -13,8 +13,10 @@ node_modules
__pycache__/
poml-*.tgz
/.env
/examples/.env
/logs
/mlruns
/mlartifacts
/site
/docs/typescript/reference/**/*.md
pomlruns
Binary file added docs/media/integration-agentops.png
Binary file added docs/media/integration-mlflow-prompt.png
Binary file added docs/media/integration-mlflow.png
Binary file added docs/media/integration-weave-prompt.png
Binary file added docs/media/integration-weave.png
99 changes: 99 additions & 0 deletions docs/python/integration/agentops.md
@@ -0,0 +1,99 @@
# AgentOps Integration

AgentOps is an observability platform designed for AI agents and LLM applications. The POML-AgentOps integration automatically traces your POML calls and sends them to AgentOps for monitoring, debugging, and analytics.

![AgentOps trace view showing POML operations](../../media/integration-agentops.png)

## Installation and Configuration

Install POML with AgentOps support:

```bash
pip install poml[agent]
```

Or install AgentOps separately:

```bash
pip install agentops
```

Set up your AgentOps API key as an environment variable:

```bash
export AGENTOPS_API_KEY="your-api-key-here"
```

You can obtain an API key from the [AgentOps dashboard](https://app.agentops.ai).

## Basic Usage

Enable POML tracing with AgentOps:

```python
import os
import poml
import agentops
from openai import OpenAI

# Initialize AgentOps; a trace is started automatically.
agentops.init()

# Enable POML tracing with AgentOps
poml.set_trace("agentops", trace_dir="pomlruns")

# Use POML as usual
client = OpenAI()
messages = poml.poml(
"explain_code.poml",
context={"code_path": "sample.py"},
format="openai_chat"
)

response = client.chat.completions.create(
model="gpt-5",
**messages
)

# The trace ends automatically when the script exits.
```

## What Gets Traced

When the AgentOps integration is enabled, POML automatically captures each POML call as an operation with:

- **Operation Name**: "poml"
- **Prompt Content**: The raw POML source
- **Context Variables**: All context variables passed to the POML call
- **Stylesheet**: Any stylesheet configuration
- **Result**: The processed prompt structure sent to the LLM

### Example Trace Data

```json
{
"resource_attributes": {
"imported_libraries": "[\"agentops\",\"poml\"]"
},
"span_attributes": {
"agentops": {
"span": {
"kind": "task"
}
},
"task": {
"input": "{\"args\": [\"../assets/explain_code.poml\", {\"code_path\": \"sample.py\"}, null], \"kwargs\": {}}",
"output": "{\"messages\": [{\"speaker\": \"human\", \"content\": \"# Task\\n\\nYou are a senior Python developer. Please explain the code.\\n\\n```\\ndef greet(name):\\n print(f\\\"Hello, {name}!\\\")\\n\\ndef add(a, b):\\n return a + b\\n\\ndef factorial(n):\\n if n == 0:\\n return 1\\n else:\\n return n * factorial(n - 1)\\n\\ndef is_even(num):\\n return num % 2 == 0\\n\\ndef main():\\n greet(\\\"Alice\\\")\\n x = 5\\n y = 7\\n print(f\\\"{x} + {y} = {add(x, y)}\\\")\\n print(f\\\"Factorial of {x} is {factorial(x)}\\\")\\n if is_even(x):\\n print(f\\\"{x} is even\\\")\\n else:\\n print(f\\\"{x} is odd\\\")\\n\\nif __name__ == \\\"__main__\\\":\\n main()\\n```\"}], \"runtime\": {\"temperature\": 0.7, \"maxTokens\": 256}}"
},
"operation": {
"name": "poml"
}
}
}
```
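
Because the trace payload is plain JSON, fields such as the operation name and the rendered messages can be pulled out with the standard library. Here is a sketch against the shape shown above; the `summarize_trace` helper is illustrative, not part of the POML or AgentOps APIs:

```python
import json

def summarize_trace(trace: dict) -> tuple:
    # The operation name lives under span_attributes.operation.name,
    # and task.output is itself a JSON-encoded string.
    name = trace["span_attributes"]["operation"]["name"]
    output = json.loads(trace["span_attributes"]["task"]["output"])
    return name, len(output["messages"])

# A minimal payload with the same nesting as the example trace data.
trace = {
    "span_attributes": {
        "task": {
            "output": json.dumps({"messages": [{"speaker": "human", "content": "hi"}]})
        },
        "operation": {"name": "poml"},
    }
}

print(summarize_trace(trace))  # ('poml', 1)
```

Note that `task.input` and `task.output` are double-encoded (JSON strings inside JSON), so they need a second `json.loads` pass before their fields can be inspected.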

## See Also

- [POML Tracing Guide](../trace.md)
- [AgentOps Documentation](https://docs.agentops.ai)
- [AgentOps Dashboard](https://app.agentops.ai)