
AI Observability Platform for LLMs, Apps & AI Agents

Complete observability for LLM applications

Monitor your entire AI application stack, not just the LLM calls. Logfire is a production-grade AI observability platform that also supports general observability. It helps you analyze, debug, and optimize AI systems faster. See LLM interactions and agent behavior alongside standard API requests and database queries in one unified view.

Companies who trust Pydantic Logfire

beauhurst
boostedai
crosby
deepscribe
fleetai
kraken
legora
Seekr
polarsh
stuut
workwhile
atlaai
Sophos
aignostics
amboss
motorola
epistemix
nous
pictet
simpleclub
voxmedia
xero
zenhub
tigerdata

Understanding

What is an AI observability platform?

An AI observability platform goes beyond traditional monitoring. While standard monitoring tells you that a system failed, an observability platform lets you identify the underlying cause. In the era of Large Language Models (LLMs) and autonomous agents, this distinction is critical.

An effective AI observability platform allows engineering teams to trace the lifecycle of a prompt, analyze token usage and latency per step, and benchmark model responses against groundedness and toxicity metrics.

The Full Picture

Break down silos: one tool for both AI and general observability

Most engineering teams are forced to use one observability tool for their backend application and a completely separate one for their LLMs. However, problems in production AI applications rarely come from the LLM alone. They hide in the seams: slow database queries that delay context retrieval, API timeouts during agent tool calls, inefficient vector searches, or memory leaks in background tasks. You need visibility across your entire application stack, not just the LLM calls.

What Logfire shows you

  • Complete application traces from request to response
  • Database queries, API calls, and business logic
  • Dashboards and application metrics
  • One platform with first-class AI & general observability for your entire application

What others show you

  • LLM request/response only
  • Missing context on performance bottlenecks
  • No visibility into retrieval quality
  • Separate tools for app monitoring

The Pydantic Stack

From prompt to validated output in one trace

See how Pydantic AI, AI Gateway, and Logfire work together. Define your schema with Pydantic models, extract structured data with an AI agent, route through Gateway for model flexibility, and observe the entire flow in Logfire.

import logfire
from pydantic import BaseModel
from pydantic_ai import Agent

logfire.configure()
logfire.instrument_pydantic_ai()


class City(BaseModel):
    name: str
    country: str
    population: int
    tourist_population: int
    landmarks: list[str]


agent = Agent(
    'gateway/openai:gpt-5',
    output_type=City,
    instructions='Extract information about the city',
)
result = agent.run_sync(
    'London is home to over nine million people, making it the largest city in the United Kingdom. '
    'Around thirty million tourists visit each year, drawn by landmarks like Big Ben, the Tower '
    'of London, and Buckingham Palace.'
)

logfire.info(f'Here is the output: {result.output}')
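The key guarantee in the snippet above is `output_type=City`: the agent's response is parsed and validated against the schema before your code ever sees it. Plain Pydantic shows what that validation enforces (the city values here are illustrative):

```python
from pydantic import BaseModel, ValidationError


class City(BaseModel):
    name: str
    country: str
    population: int
    tourist_population: int
    landmarks: list[str]


# A well-formed response validates cleanly into the schema...
city = City.model_validate({
    'name': 'London',
    'country': 'United Kingdom',
    'population': 9_000_000,
    'tourist_population': 30_000_000,
    'landmarks': ['Big Ben', 'Tower of London', 'Buckingham Palace'],
})

# ...while a malformed one raises a ValidationError instead of
# silently propagating bad data through your application.
try:
    City.model_validate({'name': 'London'})
except ValidationError:
    malformed_rejected = True
```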

Why Logfire for AI Observability?

OpenTelemetry-Native

Built on industry-standard OpenTelemetry. No vendor lock-in: export to any backend, or use our hosted platform.

Complete Application Traces

See your entire application: LLM calls, agent reasoning, database queries, API requests, vector searches, business logic, JS/TS frontend.

Logfire Acts as an MCP Server

Use your favorite coding assistant (like Claude Code, Open Code, and Cursor) to talk directly to your Logfire data inside your code editor.

Integrated Evaluation Framework

Use Pydantic Evals to continuously evaluate LLM outputs in production. Curate datasets from production traces and catch regressions before users do.

Real-Time Cost Tracking

Track LLM API costs in real-time. Identify expensive prompts, optimize model selection, and set budget alerts. See exactly where your AI spending goes.

Pydantic AI & AI Gateway Integration

Natively integrates with Pydantic AI and Pydantic AI Gateway for model routing & budget control across all major LLM providers.

From Local Dev to Production

See all app traces in real-time as you code. Catch bugs in development, carry the same observability through to production. No tool switching, no friction.

Multi-Language Support

Logfire works with all languages and features dedicated SDKs for Python, TypeScript/JavaScript, and Rust.

Query Your Data with SQL

Drill down into your traces with SQL, or describe what you need in natural language and have the SQL query generated for you.

Need SSO, custom data retention, or self-hosting? Talk to our team

Open Standards

Monitor your stack with OpenTelemetry

Logfire is built on OpenTelemetry, giving you a unified view of logs, traces, and metrics with no vendor lock-in. Our SDKs for Python, Rust, and TypeScript make instrumentation simple, and power features like live spans that render before they complete.

Logs

Structured and automatically redacted, with every log (span) linked to its trace. Search instantly or query with SQL.

Traces

One end-to-end timeline that combines APIs, databases, third-party calls, LLMs, and AI agents in one view.

Metrics

Track what matters to you: latency, errors, performance, cost, or any trend across your system. Set custom SLOs and alerts to keep your application reliable.

Integrations

Logfire works with your entire stack

Observability should not require a rewrite of your codebase. Built on open standards (OTel) with SDKs for Python, JavaScript/TypeScript, and Rust, Logfire supports auto-instrumentation for AI frameworks, web frameworks, databases, background workers, browsers, and more.

Rust

Built on the tracing + opentelemetry ecosystem

OpenTelemetry

Go, Java, .NET, Ruby, PHP, Erlang/Elixir, Swift, C++

Logfire is built on OpenTelemetry. Any language with an OpenTelemetry SDK can send traces, logs, and metrics to Logfire.

Insights

Query your data with full SQL

Query your data with full Postgres-flavored SQL — all the control, and for many users nothing new to learn. Even if you don't enjoy writing SQL, LLMs do: SQL plus an MCP server lets your IDE use Pydantic Logfire as a window into your app's execution, surfacing bottlenecks and opportunities as you (or the AI) write code.
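As a sketch of what such a query looks like, the following finds the slowest spans in the last hour. The `records` table and column names follow Logfire's documented schema, but treat the specifics here as illustrative:

```sql
-- Ten slowest spans over the last hour (illustrative query)
SELECT span_name, duration, trace_id
FROM records
WHERE start_timestamp > now() - interval '1 hour'
ORDER BY duration DESC
LIMIT 10;
```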

[Diagram: an IDE using the MCP server to query Logfire data]

Enterprise Ready

Enterprise-level AI observability

AI applications often process sensitive user data. As a result, enterprise-level AI observability platforms need to meet strict security, compliance, and data privacy standards. Pydantic Logfire is architected to meet the rigorous governance standards of enterprise engineering teams.

Data sovereignty & self-hosting

Industries with strict data residency requirements (Finance, Healthcare, Legal) can make use of our fully self-hosted enterprise plan.

SOC2 Type II certified

Logfire is SOC2 Type II certified. We did not receive any exceptions in our report. A copy is available upon request.

HIPAA compliant

Logfire is HIPAA compliant. We are able to offer Business Associate Agreements (BAAs) to customers on our enterprise plans.

GDPR compliance & EU data region

Pydantic is fully GDPR compliant. For customers who need data kept in the EU, we offer an EU Data Region.

Logfire is already making developers' lives easier

Ready to see your complete AI application?

Start monitoring your LLMs, agents, and entire application stack in minutes. 10 million free spans per month. No credit card required.

Frequently asked questions

FOR DEVELOPERS
Ready to start building?

Logfire has SDKs for Python, TypeScript/JavaScript, and Rust. The Python SDK is open source under the MIT license and wraps the OpenTelemetry Python package. By default it sends data to the Logfire platform, but you can send data to any OpenTelemetry Protocol (OTLP) compliant endpoint.