Codex MCP

The Codex MCP Server (non-official) connects your MCP client, such as Claude Code or Cursor, directly to the OpenAI Codex CLI. It lets you automate tasks with codex exec without interactive prompts, make sandboxed code edits that require your approval, and perform large-scale code analysis using simple @ file references.

Think of it as a direct line from your AI assistant to the powerful capabilities of the Codex CLI. You can use it for code reviews, automated refactoring, generating documentation, or even integrating it into CI pipelines.

Features

  • 📁 File reference support – Analyze entire codebases using @ syntax for files and directories.
  • 🛡️ Sandboxed operations – Multiple safety modes from read-only to full workspace access.
  • 📝 Structured change output – OLD/NEW patch format for clear code modifications.
  • 🔧 Multi-model support – Choose from gpt-5-codex, o3, codex-1, and other specialized models.
  • 🌐 Cross-platform compatibility – Windows, macOS, and Linux support with Node.js v18+.
  • 📊 Progress streaming – Real-time updates during long-running operations.
  • 🧠 Brainstorming tools – Structured ideation with SCAMPER, design-thinking, and lateral-thinking methodologies.

Use Cases

  • Code review automation – Analyze pull requests and suggest improvements by referencing entire directories with @src/ syntax
  • Legacy code refactoring – Use structured change mode to safely update outdated code patterns across multiple files
  • Technical documentation – Generate architecture explanations and code summaries by analyzing entire codebases
  • CI/CD integration – Automate code quality checks and security scans in pipeline environments using non-interactive modes, as in the example below
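
For example, a pipeline security scan could be phrased as a single prompt (illustrative only):

'use codex in read-only mode to scan @src/ for hard-coded secrets and summarize the findings';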

Installation

1. The quickest way to get started is with the one-line setup command. This registers the server with Claude Code using npx, which downloads and runs the package without a permanent global installation.

claude mcp add codex-cli -- npx -y @cexll/codex-mcp-server

After running this, type /mcp inside Claude Code to confirm the codex-cli server is active.

2. If you have it configured in Claude Desktop, you can import it directly into Claude Code:

Add the following entry under mcpServers in your Claude Desktop configuration file:

"codex-cli": {
  "command": "npx",
  "args": ["-y", "@cexll/codex-mcp-server"]
}

Run the import command:

claude mcp add-from-claude-desktop

3. Alternatively, configure the MCP server manually by editing your client's configuration file.

For the recommended npx usage, your configuration should look like this:

{
  "mcpServers": {
    "codex-cli": {
      "command": "npx",
      "args": ["-y", "@cexll/codex-mcp-server"]
    }
  }
}

If you installed the package globally (npm install -g @cexll/codex-mcp-server), the configuration is simpler:

{
  "mcpServers": {
    "codex-cli": {
      "command": "codex-mcp"
    }
  }
}

You can find the configuration file in these locations:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/claude/claude_desktop_config.json

4. The server gives you fine-grained control over how Codex interacts with your file system.

The sandbox: true parameter is a simple flag that enables fullAuto mode. This is equivalent to setting sandboxMode: "workspace-write" and approvalPolicy: "on-failure". It’s a convenient way to allow automated edits.
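
For example, this shorthand request (the prompt value is only illustrative):

{
  "prompt": "refactor @src/utils for better performance",
  "sandbox": true
}

is equivalent to the explicit form:

{
  "prompt": "refactor @src/utils for better performance",
  "sandboxMode": "workspace-write",
  "approvalPolicy": "on-failure"
}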

For more precise control, you can set the sandboxMode and approvalPolicy parameters yourself.

Sandbox Modes:

  • read-only – Allows analysis only; no file modifications.
  • workspace-write – Can modify files within the current workspace.
  • danger-full-access – Full system access, including network. Use with caution.

Approval Policies:

  • never – No approvals are required for any action.
  • on-request – Asks for approval before every action.
  • on-failure – Only asks for approval if an operation fails.
  • untrusted – Maximum security mode for high-risk changes.

5. Configuration Example (Balanced Automation):

This configuration enables Codex to write to files in your workspace and only prompts you for approval if an issue arises.

{
  "approvalPolicy": "on-failure",
  "sandboxMode": "workspace-write",
  "prompt": "refactor @src/utils for better performance"
}

The server includes smart defaults. If you set an approvalPolicy that implies write access, it automatically sets sandboxMode to "workspace-write" to prevent permission errors.
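
For instance, a request like this one (the prompt is illustrative, and treating on-failure as write-implying is an assumption based on the fullAuto pairing described above):

{
  "approvalPolicy": "on-failure",
  "prompt": "clean up unused imports in @src/"
}

would run with sandboxMode upgraded to "workspace-write", so the edit is not blocked by a read-only sandbox.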

Usage Examples

// Find potential bugs in a specific C source file
'use codex to find potential bugs in @src/core/main.c';

// Summarize the scripts section of a package.json file
'summarize the purpose of the scripts defined in @package.json';

// Write a unit test for a specific function within a file
'ask codex to write a unit test for the calculateTotal function in @src/utils/math.js';

// Analyze a Dockerfile and suggest optimizations
'analyze the Dockerfile (@Dockerfile) and suggest optimizations for a smaller image size';

// Explain the SOLID principles with code examples
'ask codex to explain the SOLID principles with code examples';

// Get a comparison of two web technologies
'ask codex for a comparison between WebSockets and Server-Sent Events';

// Ask for best practices related to web accessibility
'ask codex about accessibility best practices for a web form';

// Generate names for a new component library
'brainstorm names for a new component library using the convergent methodology';

// Generate alternative database schema designs
'use codex to generate three alternative database schemas for a multi-tenant application';

// Brainstorm user stories for a new feature
'ask codex to brainstorm user stories for a new feature that allows custom dashboards';

// Write and execute a shell script to rename files
'use codex to write and execute a shell script that renames all `.jpeg` files in `@assets/images` to `.jpg`';

// Safely install dependencies and run a test suite
'ask codex to safely install the dependencies from @package.json and run the test suite';

// Create a database migration script and preview the SQL
'use codex to create a migration script for our database and show me the SQL before running it';

// Use a powerful model in change mode to refactor React components
'ask codex using o3 in change mode to convert all class components in @src/components/ to functional components with hooks';

// Brainstorm with specific constraints for targeted results
"brainstorm marketing slogans for a new SaaS product with constraints: 'must be under 5 words, target developers, and mention speed'";

// Use sandbox mode to safely perform file system operations
'use codex with sandbox enabled to find all files larger than 1MB in the current directory and move them to a @tmp/large-files folder';

Available Tools

  • ask-codex: Sends a prompt to the Codex CLI, supporting file references (@), model selection, sandboxed execution, and structured diff outputs.
  • brainstorm: Generates ideas using structured methodologies like SCAMPER and design-thinking with domain-specific context.
  • ping: A simple test tool that echoes back a message to verify the server connection is active.
  • help: Displays the official Codex CLI help text and lists available commands.
  • fetch-chunk: Retrieves cached chunks from a previous changeMode response, used for handling large, structured edits.
  • timeout-test: A developer tool that runs for a specified duration to test long-running operations and timeout handling.
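
A call to ask-codex from the client side might look roughly like this (a sketch: prompt, sandbox, and changeMode follow the options described above, while the model field name is an assumption):

{
  "name": "ask-codex",
  "arguments": {
    "prompt": "review @src/services/ for error-handling gaps",
    "model": "gpt-5-codex",
    "sandbox": true,
    "changeMode": false
  }
}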

User Slash Commands

  • /analyze: Asks Codex to analyze specified files or directories, or to answer a general question.
  • /sandbox: Executes a prompt to test code or scripts within a controlled, sandboxed environment.
  • /help: Displays the Codex CLI help information directly in the client.
  • /ping: Tests the connection to the MCP server and echoes back an optional message.
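
Hypothetical invocations (the exact argument syntax may differ in your client):

/analyze @src/api/ explain the request lifecycle
/sandbox write and run a script that counts the lines of code in @src/
/ping hello from my editor
/help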

Model Parameters

  • gpt-5-codex – Default, optimized for coding tasks
  • gpt-5 – General purpose reasoning
  • o3 – Deep reasoning for complex problems
  • codex-1 – Software engineering specialization
  • o4-mini – Fast, efficient processing
  • codex-mini-latest – Low-latency code Q&A

FAQs

Q: What is the actual difference between sandbox: true and setting sandboxMode manually?
A: sandbox: true is a convenience flag. It’s a shortcut for setting fullAuto: true, which in turn sets sandboxMode: "workspace-write" and approvalPolicy: "on-failure". If you need more specific control, like read-only access or a never approval policy, you should set sandboxMode and approvalPolicy individually.

Q: I’m getting a Permission Error: Operation blocked by sandbox policy. What should I check first?
A: First, ensure your sandboxMode is not set to "read-only" if you are trying to perform a write operation. Second, try using the sandbox: true flag to let the server apply its smart defaults, which often resolves permission issues. Finally, make sure you are on the latest version (v1.2.0 or higher), as it includes logic to prevent these errors.

Q: How does changeMode=true work?
A: When you use changeMode=true, the server doesn’t just return the AI’s text response. Instead, it instructs Codex to output a structured patch. The server then caches this patch and returns a summary. The AI can then use the fetch-chunk tool to retrieve the patch piece by piece, which is useful for reviewing large changes without overwhelming the context window.
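
For illustration only, a chunk in the OLD/NEW patch format mentioned above might look roughly like this (a sketch, not the server's verbatim output):

File: src/utils/math.js
OLD:
function calculateTotal(items) {
  return items.reduce((sum, item) => sum + item.price);
}
NEW:
function calculateTotal(items) {
  return items.reduce((sum, item) => sum + item.price, 0);
}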

Q: How do I include multiple files in a single analysis?
A: Use the @ syntax with directory paths: 'analyze @src/ and explain the architecture' or reference specific files: 'review @src/utils.js @src/config.js for consistency'.


General MCP FAQs

Q: What exactly is the Model Context Protocol (MCP)?

A: MCP is an open standard, like a common language, that lets AI applications (clients) and external data sources or tools (servers) talk to each other. It helps AI models get the context (data, instructions, tools) they need from outside systems to give more accurate and relevant responses. Think of it as a universal adapter for AI connections.

Q: How is MCP different from OpenAI's function calling or plugins?

A: While OpenAI's tools allow models to use specific external functions, MCP is a broader, open standard. It covers not just tool use, but also providing structured data (Resources) and instruction templates (Prompts) as context. Being an open standard means it's not tied to one company's models or platform. OpenAI has even started adopting MCP in its Agents SDK.

Q: Can I use MCP with frameworks like LangChain?

A: Yes, MCP is designed to complement frameworks like LangChain or LlamaIndex. Instead of relying solely on custom connectors within these frameworks, you can use MCP as a standardized bridge to connect to various tools and data sources. There's potential for interoperability, like converting MCP tools into LangChain tools.

Q: Why was MCP created? What problem does it solve?

A: It was created because large language models often lack real-time information and connecting them to external data/tools required custom, complex integrations for each pair. MCP solves this by providing a standard way to connect, reducing development time, complexity, and cost, and enabling better interoperability between different AI models and tools.

Q: Is MCP secure? What are the main risks?

A: Security is a major consideration. While MCP includes principles like user consent and control, risks exist. These include potential server compromises leading to token theft, indirect prompt injection attacks, excessive permissions, context data leakage, session hijacking, and vulnerabilities in server implementations. Implementing robust security measures like OAuth 2.1, TLS, strict permissions, and monitoring is crucial.

Q: Who is behind MCP?

A: MCP was initially developed and open-sourced by Anthropic. However, it's an open standard with active contributions from the community, including companies such as Microsoft and VMware Tanzu, which maintain official SDKs.
