AiraHUB is a sophisticated, decentralized registry and orchestration engine designed to manage and coordinate a network of independent AI agents. It goes beyond a simple service directory by incorporating intelligent planning, a persistent learning mechanism (ReasoningBank), and standardized communication protocols (MCP and A2A) to enable complex, multi-agent task execution.
- AiraHUB 3 builds on AiraHub 2; see the previous version at https://github.com/IhateCreatingUserNames2/AiraHub2
- Agent Registry & Discovery: Agents can register their capabilities (tools and skills), send heartbeats to indicate they are online, and be discovered by other services based on their metadata, tags, or supported skills.
- Intelligent Orchestration: Takes a high-level goal in natural language (e.g., "Analyze last quarter's sales data and generate a summary presentation") and automatically creates and executes a multi-step plan using the available tools from the agent network.
- ReasoningBank - A Learning System: Agents can contribute their task experiences (both successes and failures). AiraHUB distills these experiences into generalized, reusable reasoning patterns, allowing the orchestrator and other agents to learn from the collective's past activities.
- Standardized Communication:
- MCP (Model Context Protocol): Implements a JSON-RPC 2.0 based streamable HTTP endpoint for robust, bi-directional tool calls between the hub and agents.
- A2A (Agent-to-Agent): Facilitates direct, task-based communication between registered agents, brokered by the hub.
- Pluggable Architecture:
- Flexible Storage: Built on an abstract storage layer, defaulting to a simple, file-based SQLite database (no external DB required!), but can be extended to support other databases.
- Flexible AI Backends: Uses `LiteLLM` to support a wide range of LLMs (OpenAI, Anthropic, Google, OpenRouter, etc.) for planning and reasoning. Supports both API-based and local (sentence-transformers) models for generating vector embeddings.
- Universal Adapter: Provides a simplified MCP facade, exposing the entire power of the orchestration engine as a single, powerful tool for external clients (like large language models).
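For a sense of what the Universal Adapter simplifies, consider the shape of a single MCP-style call an external client might send it. This is a minimal sketch: the tool name `orchestrate`, the argument shape, and the endpoint path are assumptions for illustration, not the adapter's documented schema.

```python
import json

# Hypothetical JSON-RPC 2.0 "tools/call" request to the Universal Adapter.
# The tool name and arguments below are illustrative only; check the hub's
# /docs page for the authoritative schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "orchestrate",  # assumed tool name
        "arguments": {"goal": "Summarize last quarter's sales data"},
    },
}

# Serialized, this is the body an external client would POST to the adapter.
payload = json.dumps(request)
```

The point of the facade is that the client only ever sees this one tool, while the hub handles planning and multi-agent execution behind it.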
AiraHUB is built with a modular, service-oriented architecture using FastAPI.
```text
+------------------+      +-------------------------------------------------------+      +------------------+
|                  |      |                       AiraHUB                         |      |                  |
| External Client  |----->|        Adapter (Simplified MCP Facade)                |      |   Agent Alpha    |
| (e.g., LLM UI)   |      |                                                       |      |  (Weather Tool)  |
+------------------+      |-------------------------------------------------------|      +--------^---------+
                          |               Orchestration Service                   |               |
                          |   (Receives Goal -> Creates Plan -> Executes Plan)    |               | MCP
                          |          |                        ^                   |               |
                          |          v                        |                   |               v
                          |  +-------------------+  +----------------------+      |      +------------------+
                          |  | Reasoning Service |  |    Agent Service     |      |      |                  |
                          |  |  (ReasoningBank)  |  | (Tool & Agent Cache) |      |----->|    Agent Beta    |
                          |  +-------------------+  +----------------------+      |      | (Database Tool)  |
                          |          ^                        ^                   |      +------------------+
                          |          |                        |                   |
                          |  +---------------------------------------------------+|
                          |  |     Abstract Storage Layer (Default: SQLite)      ||
                          |  +---------------------------------------------------+|
                          +-------------------------------------------------------+
```
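The abstract storage layer in the diagram can be pictured as a small interface with a SQLite-backed default. The sketch below is a simplified, synchronous illustration using the standard-library `sqlite3` module; the actual project uses `aiosqlite` and its real method names and table schema may differ.

```python
import abc
import json
import sqlite3
from typing import Optional

class StorageBackend(abc.ABC):
    """Illustrative storage interface; the hub's real API may differ."""

    @abc.abstractmethod
    def save_agent(self, agent_id: str, metadata: dict) -> None: ...

    @abc.abstractmethod
    def get_agent(self, agent_id: str) -> Optional[dict]: ...

class SQLiteStorage(StorageBackend):
    """File-based default backend -- no external DB required."""

    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS agents (id TEXT PRIMARY KEY, metadata TEXT)"
        )

    def save_agent(self, agent_id: str, metadata: dict) -> None:
        self.conn.execute(
            "INSERT OR REPLACE INTO agents (id, metadata) VALUES (?, ?)",
            (agent_id, json.dumps(metadata)),
        )
        self.conn.commit()

    def get_agent(self, agent_id: str) -> Optional[dict]:
        row = self.conn.execute(
            "SELECT metadata FROM agents WHERE id = ?", (agent_id,)
        ).fetchone()
        return json.loads(row[0]) if row else None
```

Swapping in another database means implementing the same interface against a different driver, without touching the services above it.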
- Python 3.9 or higher
- An API key for an LLM provider supported by LiteLLM (e.g., OpenAI, Anthropic via OpenRouter).
- Clone the repository:

  ```bash
  git clone https://github.com/IhateCreatingUserNames2/AiraHUB3/airahub.git
  cd airahub
  ```

- Create and activate a virtual environment:

  ```bash
  python -m venv .venv
  # On Windows
  # .\.venv\Scripts\activate
  # On macOS/Linux
  source .venv/bin/activate
  ```

- Install the dependencies. A `requirements.txt` file for this project would look like the following; create it and then run the install command.

  `requirements.txt`:

  ```text
  fastapi
  uvicorn[standard]
  pydantic
  python-dotenv
  httpx
  aiosqlite
  litellm
  sentence-transformers
  numpy
  ```

  Install command:

  ```bash
  pip install -r requirements.txt
  ```
AiraHUB is configured using environment variables. Create a .env file in the root directory by copying the example below.
`.env` file:

```env
# --- LLM Configuration (uses LiteLLM) ---
# Set the model for planning and reasoning distillation. OpenRouter is a good choice.
LITELLM_MODEL_INFERENCE="openrouter/anthropic/claude-3-haiku"

# Provide your API key for the service you're using.
# For OpenRouter, get it from https://openrouter.ai/keys
OPENROUTER_API_KEY="sk-or-v1-..."

# --- Embedding Model Configuration ---
# Provider can be "sentence_transformers" (local, free) or "litellm" (API-based).
# If you have a GPU or a decent CPU, sentence_transformers is recommended.
EMBEDDING_PROVIDER="sentence_transformers"

# If using sentence_transformers, specify the model name. all-MiniLM-L6-v2 is a good lightweight default.
# If using litellm, specify a model like "text-embedding-ada-002" and provide the corresponding API key (e.g., OPENAI_API_KEY).
LITELLM_MODEL_EMBEDDING="all-MiniLM-L6-v2"

# --- Server Configuration ---
HOST="0.0.0.0"
PORT="8017"
DEBUG="false"  # Set to "true" for more verbose logging and auto-reload
```

Once your `.env` file is configured, start the server using Uvicorn:

```bash
uvicorn main:app --host 0.0.0.0 --port 8017 --reload
```

The `--reload` flag enables auto-reloading for development, which is useful if you set `DEBUG="true"`.
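To illustrate how these variables reach the application, here is a minimal configuration-loading sketch. The variable names mirror the `.env` example above, but the function and the returned dictionary are illustrative assumptions, not the hub's actual settings object; in practice `python-dotenv` would load the `.env` file before these reads.

```python
import os

def load_settings() -> dict:
    """Read hub configuration from the environment, with the defaults
    shown in the example .env file above. Illustrative only -- the real
    codebase may structure its settings differently."""
    return {
        "inference_model": os.getenv(
            "LITELLM_MODEL_INFERENCE", "openrouter/anthropic/claude-3-haiku"
        ),
        "embedding_provider": os.getenv("EMBEDDING_PROVIDER", "sentence_transformers"),
        "embedding_model": os.getenv("LITELLM_MODEL_EMBEDDING", "all-MiniLM-L6-v2"),
        "host": os.getenv("HOST", "0.0.0.0"),
        "port": int(os.getenv("PORT", "8017")),
        "debug": os.getenv("DEBUG", "false").lower() == "true",
    }

settings = load_settings()
```

Note the type coercion: environment variables are always strings, so `PORT` and `DEBUG` must be converted explicitly.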
You should see log output indicating that the server has started and connected to the SQLite database.
```text
INFO: Started server process [12345]
INFO: Waiting for application startup.
INFO: AIRA Hub starting its lifecycle (startup)...
INFO: SQLiteStorage initialized with database file: airahub.db
INFO: SQLite tables ensured (agents, reasoning_items, a2a_tasks).
INFO: Storage successfully initialized using SQLite at 'airahub.db'.
...
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8017 (Press CTRL+C to quit)
```
You can now access the interactive API documentation at http://localhost:8017/docs.
AiraHUB exposes a set of powerful, modular endpoints:
Endpoints compatible with Claude Desktop through `mcp-remote`:

- `/` [System]: Root endpoint for a simple health check.
- `/agents` [Agents]: Endpoints for registering, unregistering, discovering, and sending heartbeats for agents. This is the core of the service directory.
- `/mcp/stream` [MCP]: The main streamable HTTP endpoint for real-time, bi-directional tool calls following the MCP standard.
- `/a2a` [A2A]: Endpoints for the Agent-to-Agent communication protocol, allowing agents to discover each other's skills and delegate tasks.
- `/orchestrate` [Orchestration]: The high-level endpoint that accepts a natural language goal and executes it across the agent network.
- `/reasoning` [ReasoningBank]: Endpoints for agents to contribute their experiences and retrieve distilled insights to guide future tasks.
- `/adapter` [MCP Universal Adapter]: A simplified entry point that exposes the orchestration engine as a single, easy-to-use tool for external systems.
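As a concrete illustration of the `/agents` registration flow, the sketch below builds a hypothetical registration payload and a helper that POSTs it to the hub. The field names (`agent_id`, `tags`, `tools`) and the exact path are assumptions for illustration; consult the live `/docs` page for the authoritative request models.

```python
import json
import urllib.request

# Hypothetical registration payload -- field names are illustrative, not the
# hub's authoritative schema.
payload = {
    "agent_id": "agent-weather",
    "name": "Weather Agent",
    "tags": ["weather", "forecast"],
    "tools": [
        {
            "name": "get_forecast",
            "description": "Return a 3-day forecast for a city",
            "parameters": {"city": "string"},
        }
    ],
}

def register(hub_url: str = "http://localhost:8017") -> bytes:
    """POST the payload to the hub's agent registration endpoint.
    The /agents/register path is assumed here."""
    req = urllib.request.Request(
        f"{hub_url}/agents/register",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

After registering, an agent would send periodic heartbeats to the `/agents` endpoints so the directory knows it is still online.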
This example demonstrates the full power of AiraHUB.
- Agent Registration: Two agents start up and register themselves with the Hub by POSTing to `/register`. `Agent-Search` registers a tool named `find_product(query: str)`; `Agent-Analysis` registers a tool named `summarize_data(data: dict)`.
- User Request: A user sends a high-level goal to AiraHUB's orchestration endpoint.

  `POST /orchestrate` with body:

  ```json
  { "goal": "Find the new 'Quantum X1' laptop and summarize its specifications." }
  ```
- Planning Phase:
  - The Orchestration Service receives the goal.
  - It queries the Reasoning Service for similar past tasks, retrieving strategies like "For product searches, always use the `find_product` tool first."
  - It queries the Agent Service for all available tools.
  - It constructs a detailed prompt containing the goal, retrieved reasoning, and the list of tools, and sends it to the configured LLM.
  - The LLM returns a structured, multi-step plan:

    ```json
    [
      { "step": 1, "tool_name": "find_product", "arguments": { "query": "Quantum X1 laptop" }, "reasoning": "..." },
      { "step": 2, "tool_name": "summarize_data", "arguments": { "data": "{result_from_step_1}" }, "reasoning": "..." }
    ]
    ```
- Execution Phase:
  - The Orchestrator executes Step 1: It finds `Agent-Search` via the Agent Service and calls its `find_product` tool using the MCP protocol. `Agent-Search` returns the product specifications as JSON.
  - The Orchestrator executes Step 2: It calls `Agent-Analysis`'s `summarize_data` tool, passing the result from the first step. `Agent-Analysis` returns a concise text summary.
- Response: AiraHUB returns the final summary to the user, along with the full execution log and the plan that was created.
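The execution phase above hinges on threading one step's result into the next via `{result_from_step_N}` placeholders. The following is a minimal sketch of how such a plan runner could work, with stand-in local functions in place of remote agents; the dispatch logic and placeholder syntax handling are illustrative, not the hub's actual implementation.

```python
import re

# Stand-in tools; in AiraHUB these calls would go out over MCP to remote agents.
def find_product(query: str) -> dict:
    return {"product": query, "cpu": "8-core", "ram": "32GB"}

def summarize_data(data: dict) -> str:
    return ", ".join(f"{k}: {v}" for k, v in data.items())

TOOLS = {"find_product": find_product, "summarize_data": summarize_data}

def run_plan(plan: list) -> dict:
    """Execute steps in order, substituting {result_from_step_N} placeholders
    with the output of the referenced earlier step."""
    results = {}
    for step in plan:
        args = {}
        for key, value in step["arguments"].items():
            match = re.fullmatch(r"\{result_from_step_(\d+)\}", str(value))
            args[key] = results[int(match.group(1))] if match else value
        results[step["step"]] = TOOLS[step["tool_name"]](**args)
    return results

plan = [
    {"step": 1, "tool_name": "find_product",
     "arguments": {"query": "Quantum X1 laptop"}},
    {"step": 2, "tool_name": "summarize_data",
     "arguments": {"data": "{result_from_step_1}"}},
]
results = run_plan(plan)
```

The real orchestrator additionally resolves each tool to its owning agent, calls it over MCP, and records an execution log alongside the results.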
- Vision: To transform AiraHub from technical infrastructure into a vibrant economic ecosystem. This module allows third-party developers not only to register agents but also to host, manage, and monetize them directly on the AiraHub platform. AiraHub becomes something like an App Store for AI agents.
- Vision: To evolve AiraHub beyond a transactional request/response system into a platform for real-time, interactive collaboration between humans and agents, and among humans through agents. This module turns AiraHub into the "Twitch" or "Figma" for AI agents.
Contributions are welcome! Please feel free to open an issue to report a bug or suggest a feature, or submit a pull request with your improvements.
This project is licensed under the MIT License. See the LICENSE file for details.
