

A lightweight, production-ready framework for agent-to-agent communication, built on and extending Google's A2A protocol.



Welcome to the Protolink documentation.

This site provides an overview of the framework, its concepts, and how to use it in your projects.

Current release: see protolink on PyPI.



ProtoLink is a lightweight, production-ready Python framework for building distributed multi-agent systems where AI agents communicate directly with each other.

Each ProtoLink agent is a self-contained runtime that can embed an LLM, manage execution context, expose and consume tools (native or via MCP), and coordinate with other agents over a unified transport layer.
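The actual ProtoLink API may differ; as a rough illustration of that runtime idea, an agent that bundles a pluggable LLM with a tool registry could be sketched like this (all class, method, and variable names here are hypothetical stand-ins, not the real API):

```python
# Illustrative sketch only -- simplified stand-ins, not the real ProtoLink API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """A self-contained runtime: owns an LLM callable and a tool registry."""
    name: str
    llm: Callable[[str], str]                      # pluggable "brain"
    tools: dict[str, Callable] = field(default_factory=dict)

    def register_tool(self, name: str, fn: Callable) -> None:
        self.tools[name] = fn

    def handle(self, message: str) -> str:
        # Trivial dispatch: use a tool if the message names one,
        # otherwise fall back to the embedded LLM.
        for name, fn in self.tools.items():
            if message.startswith(name):
                return fn(message[len(name):].strip())
        return self.llm(message)

# A dummy LLM stands in for any API, local, or server-hosted model.
echo_llm = lambda prompt: f"[llm] {prompt}"
agent = Agent(name="calc", llm=echo_llm)
agent.register_tool("add", lambda args: str(sum(int(x) for x in args.split())))

print(agent.handle("add 2 3"))   # -> 5
print(agent.handle("hello"))     # -> [llm] hello
```

The point of the sketch is the shape, not the names: one object owns the model, the tools, and the message handling, so swapping any of them touches a single constructor argument.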

ProtoLink implements and extends Google’s Agent-to-Agent (A2A) specification for agent identity, capability declaration, and discovery, while going beyond A2A by enabling true agent-to-agent collaboration.

The framework emphasizes minimal boilerplate, explicit control, and production-readiness, making it suitable for both research and real-world systems.


The landscape of AI agents is shifting from monolithic scripts driven by a single model toward multi-agent systems, where specialized, autonomous agents collaborate to solve complex problems.

But today's frameworks often trap you in a walled garden:

  • 🔒 Locked into a specific LLM (OpenAI, Anthropic, etc.)
  • 🔒 Locked into a specific Transport for communication
  • 🔒 Locked into specific Tooling schemes
  • 🔒 Agents are just functions, not independent entities

Protolink breaks free from this model.

In Protolink, an Agent is an autonomous, self-contained object that serves as the core unit of your system. It is designed to be fully modular, so you can plug in whichever LLM, tool, transport, storage, observability (OpenTelemetry), and authentication stack you need.

Focus only on your agent's logic; leave communication, agent lifecycle, inference, tooling, authentication, memory, and logging to Protolink.

Unlike the base A2A specification, Protolink enables more open and flexible communication: agents can call another agent's LLM for reasoning, invoke its tools directly, or define custom communication schemes. This creates a flexible mesh in which specialized agents leverage each other's native capabilities without rigid orchestration bottlenecks.
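The real ProtoLink API may look different, but the mesh pattern above can be pictured with a minimal in-process sketch (all class and method names here are illustrative stand-ins, not the actual API):

```python
# Illustrative stand-in classes, not the real ProtoLink API.
class Agent:
    def __init__(self, name, llm=None):
        self.name, self.llm, self.tools, self.peers = name, llm, {}, {}

    def connect(self, other):
        # In ProtoLink this would go over a transport; here it is in-process.
        self.peers[other.name] = other

    def call_tool(self, peer_name, tool, *args):
        # Invoke a peer agent's tool directly -- no central orchestrator.
        return self.peers[peer_name].tools[tool](*args)

    def ask_llm(self, peer_name, prompt):
        # Borrow a peer's LLM for reasoning.
        return self.peers[peer_name].llm(prompt)

reasoner = Agent("reasoner", llm=lambda p: p.upper())   # dummy LLM
worker = Agent("worker")
worker.tools["double"] = lambda x: 2 * x

coordinator = Agent("coordinator")
coordinator.connect(reasoner)
coordinator.connect(worker)

print(coordinator.call_tool("worker", "double", 21))  # -> 42
print(coordinator.ask_llm("reasoner", "plan"))        # -> PLAN
```

Each agent reaches a peer's tools and LLM through the same peer handle, which is what removes the need for a central orchestrator in this picture.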

ProtoLink implements Google’s A2A protocol at the wire level, while providing a higher-level agent runtime that unifies client, server, transport, tools, and LLMs into a single composable abstraction: the Agent.

Concept     Google A2A                ProtoLink
Agent       Protocol-level concept    Runtime object
Transport   External server concern   Agent-owned
Client      Separate                  Built-in
LLM         Out of scope              First-class
Tools       Out of scope              Native + MCP
UX          Enterprise infra          Developer-first
  • Build agents quickly
    See Getting Started and Agents for the core concepts and basic setup.

  • Choose your transport
    Explore Transports to switch between HTTP, WebSocket, runtime, and future transports with minimal code changes.

  • Plug in LLMs & tools
    Use LLMs and Tools to wire in language models and both native & MCP tools as agent modules.

Key ideas

  • Unified Agent model: a single autonomous AI Agent instance handles both client and server responsibilities, incorporating LLMs and tools.
  • Flexible transports: HTTP, WebSocket, in‑process runtime, and planned JSON‑RPC / gRPC transports. Change one line of code to switch protocols.
  • LLM‑ready architecture: first‑class integration with API, local, and server‑hosted LLMs.
  • Tools as modules: native Python tools and MCP tools plugged directly into agents. Import tools from thousands of existing MCP servers instantly.
  • Resilience by design: by decoupling the Brain (LLM) from the Body (Agent), you are insulated from provider outages and pricing changes.
  • Developer freedom: the pluggable architecture means you own your stack. No vendor lock-in, no framework constraints—just clean, composable components.
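The "change one line of code to switch protocols" idea behind the flexible-transports bullet can be sketched with stand-in classes (hypothetical names, not the actual ProtoLink transport API):

```python
# Illustrative stand-ins for pluggable transports -- not the real API.
class Transport:
    def send(self, target: str, payload: str) -> str:
        raise NotImplementedError

class InProcessTransport(Transport):
    def send(self, target, payload):
        return f"runtime->{target}: {payload}"

class HTTPTransport(Transport):
    def send(self, target, payload):
        # A real implementation would POST to the target agent's endpoint.
        return f"http->{target}: {payload}"

def make_agent(transport: Transport):
    # The agent's logic is untouched; only the injected transport differs.
    return lambda target, msg: transport.send(target, msg)

send = make_agent(InProcessTransport())   # swap for HTTPTransport() -- one line
print(send("peer", "ping"))               # -> runtime->peer: ping
```

Because the agent depends only on the transport interface, moving from in-process to HTTP (or, per the bullet above, WebSocket or a future JSON-RPC/gRPC transport) changes a single constructor argument.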

Use this documentation to:

  • Install Protolink and run your first agent.
  • Understand how agents, transports, LLMs, and tools fit together.
  • Explore practical examples you can adapt to your own systems.

Protolink is open source under the MIT license. Contributions are welcome – see the repository’s Contributing section on GitHub.