
One API, All AI Providers

The official AI library for the BoxLang JVM dynamic language.
Unified, fluent APIs to orchestrate multi-model workflows, autonomous agents, RAG pipelines, and AI-powered apps.

Your App → one line of code: aiChat( message ) → BoxLang AI, the unified intelligence hub: 🔌 14+ AI Providers, 🧠 RAG + Vector Search, 🤖 Agents + Tools, 📡 MCP Servers → AI Providers: OpenAI, Claude, Gemini, Grok, DeepSeek, Ollama, and 6 more.
✨ One API → unlimited AI power: switch providers, combine models, and orchestrate agents, all with simple, fluent code.
14+ AI Providers
10+ Vector DBs
30+ File Formats
20+ Memory Types
MCP Servers & Invokers
Multi-Tenant
Event Driven

Why BoxLang AI?

Build powerful AI workflows with one API — no vendor lock-in, full RAG & multi-provider support.

14+ AI Providers

One unified API for OpenAI, Claude, Gemini, Grok, Ollama, DeepSeek, Perplexity, Amazon Bedrock, Docker AI Models, and more. Switch providers with a single line.

// Default Provider
aiChat( msg )

// Specific Provider
aiChat( msg, { provider: "claude" } )

// Async chat with futures
aiChatAsync( msg, { provider: "grok" } )
  .then( r => println( r ) )
  .onError( e => println( e ) )
  .get()

Multi-Tenant Memory & Usage Tracking

Enterprise-grade memory isolation with userId and conversationId. 20+ memory types including vector search. Provider-agnostic request tagging with tenantId and usageMetadata for per-tenant billing and custom tracking.

// Multi-tenant memory
aiMemory(
    type: "vector",
    key: createUUID(),
    userId: "123",
    conversationId: "abc"
)

// Usage tracking
aiChat( msg, {
    tenantId: "org-123",
    usageMetadata: {
        costCenter: "eng",
        projectId: "proj-456",
        userId: "user-789"
    }
})
                    

AI Agents

Build autonomous agents with memory, tools, sub-agents, and reasoning. Perfect for complex workflows and multi-step tasks.

aiAgent(
  name: "Research Assistant",
  instructions: "Help research AI trends",
  memory: [ window, summary, chroma ],
  subAgents: [ research, coder ]
)
.tools( [ searchTool, dbTool ] )
.run( "Search AI trends" )

AI Pipelines

Composable workflows with models, messages, transformers. Build reusable templates for any AI task.

aiMessage( "Explain AI in one sentence" )
    .system( "You are a helpful assistant." )
    .toDefaultModel()
    .transform( r => r.content.uCase() )
    .run()

Real-Time Tools

Enable AI to call functions, access APIs, and interact with external systems in real-time with built-in tool support.

weatherTool = aiTool(
  "get_weather",
  "Get current weather for a location",
  location => {
    // Call your weather API
    return getWeatherData( location )
  }
)

Vector Memory & RAG

Semantic search with 10+ vector databases. Build RAG systems with document loaders for 30+ file formats, with easy batching and auto-chunking.

aiDocuments( "/docs", {
  type: "directory",
  recursive: true,
  extensions: ["md", "txt", "pdf"]
} ).toMemory(
    memory  = pinecone,
    options = { chunkSize: 1000, overlap: 200 }
);

Streaming Responses

Real-time streaming through pipelines for responsive applications. Perfect for live chat interfaces.

aiMessage( "Write about ${topic}" )
  .system( "You are ${style}" )
  .toDefaultModel()
  .stream(
    ( chunk ) => print( chunk.choices?.first()?.delta?.content ?: "" ),
    // input bindings
    { style: "poetic", topic: "nature" }
)
                    

Local AI with Ollama

Run models locally for privacy, offline use, and zero API costs. Full Ollama integration included.

// Start the Ollama server
docker compose -f docker-compose-ollama.yml up -d

// Configure BoxLang AI
settings: {
  provider: "ollama",
  model: "llama3"
}

// Chat away
aiChat( "Hello from local AI!" )

Document Loaders

Load PDFs, Word docs, CSVs, JSON, XML, Markdown, Web Scrapers, and 30+ formats. Perfect for RAG and document processing.

// Load a text file
docs = aiDocuments( "/path/to/document.txt" ).load()

// Load a directory of files
docs = aiDocuments( "/path/to/folder" ).load()

// Load from URL
docs = aiDocuments( "https://example.com/page.html" ).load()

// Load with auto-chunking
docs = aiDocuments( "/path/to/file.md" )
    .chunkSize( 500 )
    .overlap( 50 )
    .load()

MCP Server

BoxLang AI exposes MCP Server capabilities to create AI-powered microservices over either HTTP or STDIO transports. One easy endpoint by convention: http://app/~bxai/mcp.bxm

MCPServer( "myApp" )
  .setDescription( "My Application MCP Server" )
  .registerTool(
    aiTool( "search", "Search for documents", ( query ) => {
      return searchService.search( query )
    } )
  )
  .registerTool(
    aiTool( "calculate", "Perform calculations", ( expression ) => {
      return evaluate( expression )
    } )
  )

MCP Invokers

Call MCP Servers directly from BoxLang AI with built-in invokers. Simplify distributed AI workflows, create internal tools, and microservices.

// Create an MCP client
mcpClient = MCP( "http://localhost:3000" )

// Send a request to a tool
result = mcpClient.send( "searchDocs", {
  query: "BoxLang syntax",
  limit: 10
} )

// Check the response
if ( result.isSuccess() ) {
    println( result.getData() )
} else {
    println( "Error: " & result.getError() )
}

Structured Output

Extract type-safe, validated data from AI responses using classes, structs, or JSON schemas.

// With class
model = aiModel()
  .structuredOutput( new Product() )

// With struct
model = aiModel()
  .structuredOutput( {
    name: "",
    price: 0.0,
    inStock: false
} )

// With array
model = aiModel()
  .structuredOutput( [new Contact()] )

Supported Providers

Switch between providers, or use multiple providers within the same AI agent, with zero code changes. You can also create your own custom providers easily by implementing the provider interface. Never be locked in again. Be fluid!
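As a rough illustration of what a custom provider could look like (the interface name, method signature, and provider key below are assumptions for this sketch, not the module's documented API):

// Hypothetical custom provider; "IAIProvider" and invoke()
// are illustrative names only, not the actual interface
class implements="IAIProvider" {

    function invoke( required struct request ){
        // Call your in-house model endpoint and adapt its response
        var result = internalClient.chat( request.messages )
        return { content: result.text }
    }

}

// Once registered under a name, it works like any built-in provider
aiChat( msg, { provider: "myProvider" } )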

Multi-Model Orchestration with RAG

Full Stack AI: Combine vector search, multiple AI providers, and specialized agents in one workflow

Your App (user queries) → BoxLang AI (Unified API, RAG Pipeline, Agent Router) → AI Providers (GPT-4, Claude, Gemini, DeepSeek, Llama, and 7 more)

RAG Pipeline: Docs → Chunk → Vector DB → Search

Agent Orchestration: Router → Writer / Coder / Analyst → Result

One API, Unlimited Possibilities: Mix and match models per task, combine semantic search with any provider, orchestrate complex multi-agent workflows
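The flow above maps directly onto code. As a sketch assembled from the APIs shown elsewhere on this page (the agent names, instructions, and memory options are placeholders):

// Load docs into a vector memory for RAG
vectorMemory = aiMemory( "chroma", { collection: "kb" } )
aiDocuments( "/docs", { type: "directory" } ).toMemory( vectorMemory )

// Route work to specialist agents
router = aiAgent(
    name: "Router",
    instructions: "Delegate to the right specialist",
    memory: [ vectorMemory ],
    subAgents: [
        aiAgent( name: "Writer", instructions: "Polish prose" ),
        aiAgent( name: "Coder", instructions: "Write code" )
    ]
)

router.run( "Draft release notes from our docs" )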

Autonomous AI Agents

Build intelligent agents that think, reason, and act.
More than simple chatbots—agents maintain memory, use tools, delegate to specialists, and orchestrate complex workflows autonomously.

Agent Architecture: Everything Connected

One Agent, Unlimited Capabilities: Connect memories, tools, sub-agents, and AI providers in a single orchestration layer

AI Agent (the orchestrator): 🧠 reasoning engine, 🔄 state management, ⚡ pipeline integration
💾 Multi-Memory: simple memory, vector/semantic search, custom memory
🤖 LLM Bound: OpenAI GPT-4, Claude Sonnet, Ollama, or any LLM
🔧 Tools: API calls, database queries, calculations
👥 Sub-Agents: research agent, code review agent, data analysis agent
👤 User input (natural language, e.g. "Analyze Q4 sales") → 💬 intelligent response, guided by 📋 instructions and context ("You are a data analyst. Be precise, cite sources. Always use tools when data is required.")
✨ One agent orchestrates everything: memory + tools + sub-agents + custom LLMs
Autonomous Intelligence: Agents decide when to use tools, query memory, delegate to sub-agents, and reason about complex multi-step tasks—all automatically.

Memory Management

Attach one or more memories to each agent. Mix conversation history with vector search for hybrid intelligence.

Tool Calling

Agents automatically use tools to access APIs, databases, calculations, and external systems.

Sub-Agents

Delegate to specialized sub-agents for complex tasks. Parent agent automatically orchestrates delegation.

Pipeline Ready

Agents work seamlessly in pipelines. Chain multiple agents with transformers for advanced workflows.

Simple Agent with Memory & Tools

// Create tools
weatherTool = aiTool(
    "get_weather",
    "Get current weather",
    location => getWeatherData( location )
)

// Create agent with memory and tools
agent = aiAgent(
    name: "Assistant",
    instructions: "Help users with queries",
    memory: aiMemory( "simple" ),
    tools: [ weatherTool ]
)

// Run - agent uses tools automatically
response = agent.run(
    "What's the weather in Boston?"
)
println( response )

Agent with Sub-Agents

// Create specialized sub-agents
mathAgent = aiAgent(
    name: "MathAgent",
    instructions: "Expert in mathematics"
)

codeAgent = aiAgent(
    name: "CodeAgent",
    instructions: "Expert in programming"
)

// Parent agent delegates automatically
mainAgent = aiAgent(
    name: "Orchestrator",
    instructions: "Delegate to specialists",
    subAgents: [ mathAgent, codeAgent ]
)

// Parent decides which sub-agent to use
response = mainAgent.run(
    "Write code to calculate factorial"
)

RAG Agent with Multi-Memory

// Create vector memory
vectorMemory = aiMemory( "chroma", {
    collection: "docs",
    embeddingProvider: "openai"
} )

// Load documents
aiDocuments( "/docs", {
    type: "directory"
} ).toMemory( vectorMemory )

// Agent with multiple memories
agent = aiAgent(
    name: "Knowledge Assistant",
    instructions: "Answer using docs",
    memory: [
        aiMemory( "simple" ),    // Chat
        vectorMemory             // RAG
    ]
)

// Searches docs + remembers conversation
response = agent.run(
    "Explain authentication"
)

Agents in Pipelines

// Create agents
researcher = aiAgent(
    name: "Researcher"
)
summarizer = aiAgent(
    name: "Summarizer"
)
editor = aiAgent(
    name: "Editor"
)

// Chain agents in pipeline
pipeline = aiMessage()
    .user( "Research: ${topic}" )
    .to( researcher )
    .transform( r => "Summarize: " & r )
    .to( summarizer )
    .transform( r => "Polish: " & r )
    .to( editor )

result = pipeline.run( {
    topic: "Quantum Computing"
} )

What Makes Agents Powerful?

Auto State Management

Agents automatically handle message history, context, and state across interactions.

Reasoning Engine

Agents decide when and how to use tools, query memory, or delegate to sub-agents.

Multi-Memory Fan Out

Store data in multiple memory systems simultaneously for hybrid intelligence.

Streaming Support

Stream agent responses in real-time for responsive chat interfaces.

Event-Driven

Intercept agent lifecycle events for logging, monitoring, and custom workflows.

Multi-Tenant Ready

Track usage per tenant with built-in tenantId and usageMetadata support.

Supported Memories

Powerful multi-memory architecture where each agent can have one or more memories attached to it.
Mix standard conversation memories with vector-based semantic search for hybrid intelligence.
Want to use another memory provider? No problem, build your custom memory or custom vector memory provider easily!
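As a rough sketch of a custom memory (the base class name and method signatures here are assumptions for illustration, not the documented contract):

// Hypothetical custom memory; "BaseMemory", add(), retrieve(),
// and getKey() are illustrative names only
class extends="BaseMemory" {

    function add( required message ){
        // Persist the message to your own store, e.g. Redis
        redisClient.push( getKey(), message )
    }

    function retrieve( query ){
        // Return prior messages for this conversation
        return redisClient.range( getKey() )
    }

}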

Standard Memories

Windowed
Summary
Session
File
Cache
JDBC

Vector Memories

BoxVector
Hybrid
ChromaDB
PostgreSQL
Pinecone
Qdrant
Weaviate
Milvus
OpenSearch

Multi-Memory Architecture

Multi-Tenant Ready: Built-in isolation with userId and conversationId support across all memory types

AI Agent (an autonomous agent with instructions) → Multiple Memories (one or more memory types per agent) → Hybrid Intelligence (recent context + semantic search)

Supported Document Loaders

Load content from 30+ file formats, databases, APIs, and web sources into vector databases for RAG.
Need a custom loader? Build your own by extending BaseDocumentLoader.
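A custom loader extends BaseDocumentLoader, as noted above; the load() signature and the document shape in this sketch are assumptions for illustration:

// Hypothetical loader for an issue tracker; only BaseDocumentLoader
// comes from this page, the rest is illustrative
class extends="BaseDocumentLoader" {

    function load(){
        // Map each record to a document with content + metadata
        return trackerClient
            .issues()
            .map( issue => {
                return { content: issue.description, metadata: { id: issue.key } }
            } )
    }

}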

File Formats

Text
Markdown
PDF
CSV
JSON
XML
Logs

Web Sources

HTTP/HTTPS
Web Crawler
RSS/Atom

System Sources

Directory
SQL

RAG Pipeline with Document Loaders

Automatic Processing: Load, chunk, embed, and store documents with a single command

Source Files (PDFs, docs, web pages, databases) → Auto-Chunking (split into optimally sized chunks) → Vector Store (ready for semantic search)

BoxLang AI Events

This is what makes BoxLang AI so powerful: you can easily listen to and interact with entire AI workflows.
Hook into every step of the AI pipeline to add logging, monitoring, validation, or custom logic.

Request/Response Lifecycle

onAIMessageCreate
onAIRequestCreate
onAIProviderRequest
onAIProviderCreate
onAIModelCreate
onAIRequest
onAIResponse
onAIError
onAIRateLimitHit
onAITokenCount
onMissingAiProvider

Pipeline & Model Execution

onAITransformCreate
beforeAIPipelineRun
afterAIPipelineRun
beforeAIModelInvoke
afterAIModelInvoke
onAIToolCreate
beforeAIToolExecute
afterAIToolExecute

MCP Server Events

onMCPServerCreate
onMCPServerRemove
onMCPRequest
onMCPResponse
onMCPError
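The event names come from the lists above; the interceptor shape and the payload keys in this sketch are assumptions for illustration:

// Hypothetical interceptor class; method names match the event
// names above, but the data payload keys are assumptions
class {

    function onAIRequest( data ){
        writeLog( text = "AI request via #data.provider ?: 'default'#" )
    }

    function onAIError( data ){
        writeLog( text = "AI error: #data.message ?: ''#", type = "error" )
    }

}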

Event-Driven AI Architecture

Complete Observability: Every interaction triggers events you can hook into

Your Application (makes AI requests) → Event System (intercepts & notifies) → Your Listeners (log, validate, analyze)

Model Context Protocol

Expose tools as MCP Servers or consume external MCP services as MCP Clients.
Build microservices for AI agents with multi-tenant support and HTTP/STDIO transports.

MCP Server

Expose Tools
Multi-Tenant
Multi-Server
HTTP Transport
STDIO Transport
Auth & CORS

MCP Client (Invokers)

Consume Services
Use as AI Tools
Agent Integration
Chat Integration
Discovery
Response Handling

MCP Integration Architecture

Distributed AI: Connect agents with external tools and microservices via standardized protocol

MCP Servers (expose your tools & services) → AI Agents (use tools from servers) → Applications (AI-powered experiences)

Quick Start

Get started in minutes with simple examples. See our full documentation to dive deeper.

Installation

// For OS runtimes: install via the BoxLang binary
install-bx-module bx-ai

// For web runtimes: install via CommandBox
box install bx-ai

Configuration

// boxlang.json
{
  "modules": {
    "bxai": {
      "provider": "openai",
      "apiKey": "sk-..."
    }
  }
}

Simple Chat

// Basic chat
answer = aiChat( "Explain recursion" )
println( answer )

// With parameters
answer = aiChat(
    "Write a haiku about coding",
    { temperature: 0.9, model: "gpt-4" }
)

Structured Output

// Get JSON automatically
user = aiChat(
    "Create a user with name and email",
    { returnFormat: "json" }
)

println( user.name )
println( user.email )

Streaming

// Real-time responses
aiChatStream(
    "Tell me a story",
    ( chunk ) => {
        content = chunk.choices
            ?.first()
            ?.delta
            ?.content ?: ""
        print( content )
    }
)

AI Tools

// Create callable functions
weather = aiTool(
    name: "get_weather",
    description: "Get weather",
    callback: ( args ) => {
        return { temp: 72 }
    }
)

aiChat( "Weather in SF?", { tools: [weather] } )

Pipelines

// Build reusable workflows
pipeline = aiMessage()
    .system( "You are helpful" )
    .user( "Explain ${topic}" )
    .toDefaultModel()
    .transform( r => r.content )

result = pipeline.run( { topic: "AI" } )

AI Agents

// Autonomous agent
agent = aiAgent()
    .name( "Assistant" )
    .instructions( "Help research" )
    .memory( aiMemory( type: "windowed" ) )
    .tools( [searchTool] )

agent.chat( "Research AI trends" )

Document Processing

// Load documents for RAG
docs = aiDocuments( source: "docs/*.pdf" )

memory = aiMemory( type: "vector" )
memory.addDocuments( docs )

aiChat( "Summarize docs", { memory: memory } )

Async Operations

// Non-blocking requests
future = aiChatAsync( "Question 1" )
future2 = aiChatAsync( "Question 2" )

// Process results
future.then( r => println( r ) )
future2.then( r => println( r ) )

Built For Real-World Use Cases

From simple chatbots to complex AI pipelines

Chatbots & Assistants

Build conversational interfaces with memory and context awareness. Perfect for customer support and virtual assistants.

Code Generation

Generate, review, and explain code. Build AI-powered IDEs and development tools with real-time assistance.

RAG & Q&A Systems

Build knowledge bases that answer questions from your documents. Support 30+ file formats with vector search.

Content Generation

Create articles, documentation, marketing copy, and social media content. Automate content workflows.

Data Analysis

Extract insights from text and structured data. Build AI-powered analytics and reporting tools.

AI Agents & Workflows

Create autonomous agents that can research, analyze, and execute complex multi-step tasks.

Need Enterprise AI Implementations?

Ortus Solutions

Ortus Solutions offers professional services for multi-tenant AI platforms, RAG systems, and AI agent architectures. We built BoxLang AI — now we can help you build with it.

GSA Schedule Holder
20 Years Experience
250+ Projects

Resources

Everything you need to succeed with BoxLang AI

Documentation

Comprehensive guides, API reference, and tutorials

Read Docs

GitHub

Source code, examples, and issue tracking

View Repo

Community

Join our Slack channel and forums

Join Slack

BoxLang

Learn about the BoxLang language

Learn More

Want More Features?

BoxLang AI+ includes additional providers, advanced memory systems, enhanced tooling, and priority support.
