Use these options when you need more control over providers, policies, and integrations. For a quick start, see Config basics.
For background on project guidance, reusable capabilities, custom slash commands, subagent workflows, and integrations, see Customization. For configuration keys, see Configuration Reference.
Profiles
Profiles let you save named sets of configuration values and switch between them from the CLI.
Profiles are experimental and may change or be removed in future releases.
Profiles are not currently supported in the Codex IDE extension.
Define profiles under [profiles.<name>] in config.toml, then run codex --profile <name>:
model = "gpt-5.4"
approval_policy = "on-request"
model_catalog_json = "/Users/me/.codex/model-catalogs/default.json"
[profiles.deep-review]
model = "gpt-5-pro"
model_reasoning_effort = "high"
approval_policy = "never"
model_catalog_json = "/Users/me/.codex/model-catalogs/deep-review.json"
[profiles.lightweight]
model = "gpt-4.1"
approval_policy = "untrusted"
To make a profile the default, add profile = "deep-review" at the top level of config.toml. Codex loads that profile unless you override it on the command line.
Profiles can also override model_catalog_json. When both the top level and the selected profile set model_catalog_json, Codex prefers the profile value.
One-off overrides from the CLI
In addition to editing ~/.codex/config.toml, you can override configuration for a single run from the CLI:
- Prefer dedicated flags when they exist (for example, --model).
- Use -c/--config when you need to override an arbitrary key.
Examples:
# Dedicated flag
codex --model gpt-5.4
# Generic key/value override (value is TOML, not JSON)
codex --config model='"gpt-5.4"'
codex --config sandbox_workspace_write.network_access=true
codex --config 'shell_environment_policy.include_only=["PATH","HOME"]'
Notes:
- Keys can use dot notation to set nested values (for example, mcp_servers.context7.enabled=false).
- -c/--config values are parsed as TOML. When in doubt, quote the value so your shell doesn't split it on spaces.
- If the value can't be parsed as TOML, Codex treats it as a string.
Config and state locations
Codex stores its local state under CODEX_HOME (defaults to ~/.codex).
Common files you may see there:
- config.toml (your local configuration)
- auth.json (if you use file-based credential storage) or your OS keychain/keyring
- history.jsonl (if history persistence is enabled)
- Other per-user state such as logs and caches
For authentication details (including credential storage modes), see Authentication. For the full list of configuration keys, see Configuration Reference.
For shared defaults, rules, and skills checked into repos or system paths, see Team Config.
If you just need to point the built-in OpenAI provider at an LLM proxy, router, or data-residency enabled project, set openai_base_url in config.toml instead of defining a new provider. This changes the base URL for the built-in openai provider without requiring a separate model_providers.<id> entry.
openai_base_url = "https://us.api.openai.com/v1"
Project config files (.codex/config.toml)
In addition to your user config, Codex reads project-scoped overrides from .codex/config.toml files inside your repo. Codex walks from the project root to your current working directory and loads every .codex/config.toml it finds. If multiple files define the same key, the closest file to your working directory wins.
For security, Codex loads project-scoped config files only when the project is trusted. If the project is untrusted, Codex ignores project .codex/ layers, including .codex/config.toml, project-local hooks, and project-local rules. User and system layers remain separate and still load.
Relative paths inside a project config (for example, model_instructions_file) are resolved relative to the .codex/ folder that contains the config.toml.
Hooks (experimental)
Codex can also load lifecycle hooks from either hooks.json files or inline
[hooks] tables in config.toml files that sit next to active config layers.
In practice, the two most useful locations are:
- ~/.codex/hooks.json or ~/.codex/config.toml
- <repo>/.codex/hooks.json or <repo>/.codex/config.toml
Project-local hooks load only when the project .codex/ layer is trusted.
User-level hooks remain independent of project trust.
Turn hooks on with:
[features]
codex_hooks = true
Inline TOML hooks use the same event structure as hooks.json:
[[hooks.PreToolUse]]
matcher = "^Bash$"
[[hooks.PreToolUse.hooks]]
type = "command"
command = '/usr/bin/python3 "$(git rev-parse --show-toplevel)/.codex/hooks/pre_tool_use_policy.py"'
timeout = 30
statusMessage = "Checking Bash command"
If a single layer contains both hooks.json and inline [hooks], Codex loads
both and warns. Prefer one representation per layer.
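For comparison, the inline TOML above might correspond to a hooks.json along these lines (a sketch assuming a direct structural mapping from the TOML tables; check Hooks for the exact file shape):

```json
{
  "PreToolUse": [
    {
      "matcher": "^Bash$",
      "hooks": [
        {
          "type": "command",
          "command": "/usr/bin/python3 \"$(git rev-parse --show-toplevel)/.codex/hooks/pre_tool_use_policy.py\"",
          "timeout": 30,
          "statusMessage": "Checking Bash command"
        }
      ]
    }
  ]
}
```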
For the current event list, input fields, output behavior, and limitations, see Hooks.
Agent roles ([agents] in config.toml)
For subagent role configuration ([agents] in config.toml), see Subagents.
Project root detection
Codex discovers project configuration (for example, .codex/ layers and AGENTS.md) by walking up from the working directory until it reaches a project root.
By default, Codex treats a directory containing .git as the project root. To customize this behavior, set project_root_markers in config.toml:
# Treat a directory as the project root when it contains any of these markers.
project_root_markers = [".git", ".hg", ".sl"]
Set project_root_markers = [] to skip searching parent directories and treat the current working directory as the project root.
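Root discovery under this setting can be sketched as follows (the fallback when no marker exists anywhere is an assumption for illustration, not documented behavior):

```python
# Walk upward from the working directory until a directory contains one of
# the configured markers; an empty marker list means cwd is the root.
import pathlib


def find_project_root(cwd: pathlib.Path, markers=(".git",)) -> pathlib.Path:
    if not markers:
        return cwd  # project_root_markers = []: don't search parent directories
    for candidate in (cwd, *cwd.parents):
        if any((candidate / marker).exists() for marker in markers):
            return candidate
    return cwd  # assumed fallback: no marker found anywhere
```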
Custom model providers
A model provider defines how Codex connects to a model (base URL, wire API, authentication, and optional HTTP headers). Custom providers can’t reuse the reserved built-in provider IDs: openai, ollama, and lmstudio.
Define additional providers and point model_provider at them:
model = "gpt-5.4"
model_provider = "proxy"
[model_providers.proxy]
name = "OpenAI using LLM proxy"
base_url = "http://proxy.example.com"
env_key = "OPENAI_API_KEY"
[model_providers.local_ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
[model_providers.mistral]
name = "Mistral"
base_url = "https://api.mistral.ai/v1"
env_key = "MISTRAL_API_KEY"
Add request headers when needed:
[model_providers.example]
http_headers = { "X-Example-Header" = "example-value" }
env_http_headers = { "X-Example-Features" = "EXAMPLE_FEATURES" }
Use command-backed authentication when a provider needs Codex to fetch bearer tokens from an external credential helper:
[model_providers.proxy]
name = "OpenAI using LLM proxy"
base_url = "https://proxy.example.com/v1"
wire_api = "responses"
[model_providers.proxy.auth]
command = "/usr/local/bin/fetch-codex-token"
args = ["--audience", "codex"]
timeout_ms = 5000
refresh_interval_ms = 300000
The auth command receives no stdin and must print the token to stdout. Codex trims surrounding whitespace, treats an empty token as an error, and refreshes proactively at refresh_interval_ms; set refresh_interval_ms = 0 to refresh only after an authentication retry. Don’t combine [model_providers.<id>.auth] with env_key, experimental_bearer_token, or requires_openai_auth.
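A minimal credential helper might look like this (a hypothetical sketch: the cache path and error message are illustrative, not part of any Codex contract beyond "print the token to stdout"):

```python
#!/usr/bin/env python3
# Hypothetical credential helper for [model_providers.<id>.auth]. Codex runs
# the command with no stdin and reads the bearer token from stdout; it trims
# surrounding whitespace and treats empty output as an error.
import pathlib
import sys

TOKEN_CACHE = pathlib.Path.home() / ".cache" / "example-codex-token"  # assumed location


def read_token(cache: pathlib.Path) -> str:
    """Return the cached bearer token, or '' if unavailable."""
    try:
        return cache.read_text().strip()
    except OSError:
        return ""


token = read_token(TOKEN_CACHE)
if token:
    print(token)  # Codex uses this stdout line as the bearer token
else:
    print("no token available", file=sys.stderr)
    # A real helper should exit non-zero here so Codex treats the fetch as failed.
```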
Amazon Bedrock provider
Codex includes a built-in amazon-bedrock model provider. Set it directly as
model_provider; unlike custom providers, this built-in provider supports only
the nested AWS profile and region overrides.
model_provider = "amazon-bedrock"
model = "<bedrock-model-id>"
[model_providers.amazon-bedrock.aws]
profile = "default"
region = "eu-central-1"
If you omit profile, Codex uses the standard AWS credential chain. Set
region to the supported Bedrock region that should handle requests.
OSS mode (local providers)
Codex can run against a local “open source” provider (for example, Ollama or LM Studio) when you pass --oss. If you pass --oss without specifying a provider, Codex uses oss_provider as the default.
# Default local provider used with `--oss`
oss_provider = "ollama" # or "lmstudio"
Azure provider and per-provider tuning
[model_providers.azure]
name = "Azure"
base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
request_max_retries = 4
stream_max_retries = 10
stream_idle_timeout_ms = 300000
To change the base URL for the built-in OpenAI provider, use openai_base_url; don’t create [model_providers.openai], because you can’t override built-in provider IDs.
ChatGPT customers using data residency
Projects created with data residency enabled can define a model provider that sets base_url to the correct regional prefix.
model_provider = "openaidr"
[model_providers.openaidr]
name = "OpenAI Data Residency"
base_url = "https://us.api.openai.com/v1" # Replace 'us' with domain prefix
Model reasoning, verbosity, and limits
model_reasoning_summary = "none" # Disable summaries
model_verbosity = "low" # Shorten responses
model_supports_reasoning_summaries = true # Force reasoning
model_context_window = 128000 # Context window size
model_verbosity applies only to providers using the Responses API. Chat Completions providers will ignore the setting.
Approval policies and sandbox modes
Pick approval strictness (affects when Codex pauses) and sandbox level (affects file/network access).
For operational details to keep in mind while editing config.toml, see Common sandbox and approval combinations, Protected paths in writable roots, and Network access.
You can also use a granular approval policy (approval_policy = { granular = { ... } }) to allow or auto-reject individual prompt categories. This is useful when you want normal interactive approvals for some cases but want others, such as request_permissions or skill-script prompts, to fail closed automatically.
Set approvals_reviewer = "auto_review" to route eligible interactive approval
requests through automatic review. This changes the reviewer, not the sandbox
boundary.
Use [auto_review].policy for local reviewer policy instructions. Managed
guardian_policy_config takes precedence.
approval_policy = "untrusted" # Other options: on-request, never, or { granular = { ... } }
approvals_reviewer = "user" # Or "auto_review" for automatic review
sandbox_mode = "workspace-write"
allow_login_shell = false # Optional hardening: disallow login shells for shell tools
# Example granular approval policy:
# approval_policy = { granular = {
# sandbox_approval = true,
# rules = true,
# mcp_elicitations = true,
# request_permissions = false,
# skill_approval = false
# } }
[sandbox_workspace_write]
exclude_tmpdir_env_var = false # Allow $TMPDIR
exclude_slash_tmp = false # Allow /tmp
writable_roots = ["/Users/YOU/.pyenv/shims"]
network_access = false # Opt in to outbound network
[auto_review]
policy = """
Use your organization's automatic review policy.
"""
Named permission profiles
Set default_permissions to reuse a sandbox profile by name. Codex includes
the built-in profiles :read-only, :workspace, and :danger-no-sandbox:
default_permissions = ":workspace"
For custom profiles, point default_permissions at a name you define under
[permissions.<name>]:
default_permissions = "workspace"
[permissions.workspace.filesystem]
":project_roots" = { "." = "write", "**/*.env" = "none" }
glob_scan_max_depth = 3
[permissions.workspace.network]
enabled = true
mode = "limited"
[permissions.workspace.network.domains]
"api.openai.com" = "allow"
Use built-in names with a leading colon. Custom names don’t use a leading
colon and must have matching permissions tables.
Need the complete key list (including profile-scoped overrides and requirements constraints)? See Configuration Reference and Managed configuration.
In workspace-write mode, some environments keep .git/ and .codex/ read-only even when the rest of the workspace is writable. This is why commands like git commit may still require approval to run outside the sandbox. If you want Codex to block specific commands from running outside the sandbox (for example, git commit), use rules.
Disable sandboxing entirely (use only if your environment already isolates processes):
sandbox_mode = "danger-full-access"
Shell environment policy
shell_environment_policy controls which environment variables Codex passes to any subprocess it launches (for example, when running a tool command the model proposes). Start from a clean slate (inherit = "none") or a trimmed set (inherit = "core"), then layer on excludes, includes, and overrides to avoid leaking secrets while still providing the paths, keys, or flags your tasks need.
[shell_environment_policy]
inherit = "none"
set = { PATH = "/usr/bin", MY_FLAG = "1" }
ignore_default_excludes = false
exclude = ["AWS_*", "AZURE_*"]
include_only = ["PATH", "HOME"]
Patterns are case-insensitive globs (*, ?, [A-Z]); ignore_default_excludes = false keeps the automatic KEY/SECRET/TOKEN filter before your includes/excludes run.
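The filtering pipeline above can be sketched as follows (a sketch under stated assumptions: the default filter is modeled as names containing KEY, SECRET, or TOKEN, and inherit = "core" is not modeled):

```python
# Model the shell_environment_policy pipeline: inherit, default excludes,
# explicit excludes, include_only, then explicit overrides from `set`.
import fnmatch

DEFAULT_EXCLUDES = ["*KEY*", "*SECRET*", "*TOKEN*"]  # assumed shape of the built-in filter


def matches(name: str, patterns) -> bool:
    # Case-insensitive glob matching, per the note above.
    return any(fnmatch.fnmatchcase(name.upper(), p.upper()) for p in patterns)


def build_env(parent: dict, policy: dict) -> dict:
    env = {} if policy.get("inherit") == "none" else dict(parent)
    if not policy.get("ignore_default_excludes", False):
        env = {k: v for k, v in env.items() if not matches(k, DEFAULT_EXCLUDES)}
    env = {k: v for k, v in env.items() if not matches(k, policy.get("exclude", []))}
    include_only = policy.get("include_only")
    if include_only is not None:
        env = {k: v for k, v in env.items() if matches(k, include_only)}
    env.update(policy.get("set", {}))  # explicit overrides always win
    return env
```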
MCP servers
See the dedicated MCP documentation for configuration details.
Observability and telemetry
Enable OpenTelemetry (OTel) log export to track Codex runs (API requests, SSE/events, prompts, tool approvals/results). Disabled by default; opt in via [otel]:
[otel]
environment = "staging" # defaults to "dev"
exporter = "none" # set to otlp-http or otlp-grpc to send events
log_user_prompt = false # redact user prompts unless explicitly enabled
Choose an exporter:
[otel]
exporter = { otlp-http = {
endpoint = "https://otel.example.com/v1/logs",
protocol = "binary",
headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
}}
[otel]
exporter = { otlp-grpc = {
endpoint = "https://otel.example.com:4317",
headers = { "x-otlp-meta" = "abc123" }
}}
If exporter = "none", Codex records events but sends nothing. Exporters batch asynchronously and flush on shutdown. Event metadata includes service name, CLI version, env tag, conversation id, model, sandbox/approval settings, and per-event fields (see Config Reference).
What gets emitted
Codex emits structured log events for runs and tool usage. Representative event types include:
- codex.conversation_starts (model, reasoning settings, sandbox/approval policy)
- codex.api_request (attempt, status/success, duration, and error details)
- codex.sse_event (stream event kind, success/failure, duration, plus token counts on response.completed)
- codex.websocket_request and codex.websocket_event (request duration plus per-message kind/success/error)
- codex.user_prompt (length; content redacted unless explicitly enabled)
- codex.tool_decision (approved/denied and whether the decision came from config vs user)
- codex.tool_result (duration, success, output snippet)
OTel metrics emitted
When the OTel metrics pipeline is enabled, Codex emits counters and duration histograms for API, stream, and tool activity.
Each metric below also includes default metadata tags: auth_mode, originator, session_source, model, and app.version.
| Metric | Type | Fields | Description |
|---|---|---|---|
codex.api_request | counter | status, success | API request count by HTTP status and success/failure. |
codex.api_request.duration_ms | histogram | status, success | API request duration in milliseconds. |
codex.sse_event | counter | kind, success | SSE event count by event kind and success/failure. |
codex.sse_event.duration_ms | histogram | kind, success | SSE event processing duration in milliseconds. |
codex.websocket.request | counter | success | WebSocket request count by success/failure. |
codex.websocket.request.duration_ms | histogram | success | WebSocket request duration in milliseconds. |
codex.websocket.event | counter | kind, success | WebSocket message/event count by type and success/failure. |
codex.websocket.event.duration_ms | histogram | kind, success | WebSocket message/event processing duration in milliseconds. |
codex.tool.call | counter | tool, success | Tool invocation count by tool name and success/failure. |
codex.tool.call.duration_ms | histogram | tool, success | Tool execution duration in milliseconds by tool name and outcome. |
For more security and privacy guidance around telemetry, see Security.
Metrics
By default, Codex periodically sends a small amount of anonymous usage and health data back to OpenAI. This helps detect when Codex isn’t working correctly and shows what features and configuration options are being used, so the Codex team can focus on what matters most. These metrics don’t contain any personally identifiable information (PII). Metrics collection is independent of OTel log/trace export.
If you want to disable metrics collection entirely across Codex surfaces on a machine, set the analytics flag in your config:
[analytics]
enabled = false
Each metric includes its own fields plus the default context fields below.
Default context fields (applies to every event/metric)
- auth_mode: swic | api | unknown.
- model: name of the model used.
- app.version: Codex version.
Metrics catalog
Each metric includes the required fields plus the default context fields above. Metric names below omit the codex. prefix.
Most metric names are centralized in codex-rs/otel/src/metrics/names.rs; feature-specific metrics emitted outside that file are included here too.
If a metric includes the tool field, it reflects the internal tool used (for example, apply_patch or shell) and doesn't contain the actual shell command or patch Codex is trying to apply.
Runtime and model transport
| Metric | Type | Fields | Description |
|---|---|---|---|
api_request | counter | status, success | API request count by HTTP status and success/failure. |
api_request.duration_ms | histogram | status, success | API request duration in milliseconds. |
sse_event | counter | kind, success | SSE event count by event kind and success/failure. |
sse_event.duration_ms | histogram | kind, success | SSE event processing duration in milliseconds. |
websocket.request | counter | success | WebSocket request count by success/failure. |
websocket.request.duration_ms | histogram | success | WebSocket request duration in milliseconds. |
websocket.event | counter | kind, success | WebSocket message/event count by type and success/failure. |
websocket.event.duration_ms | histogram | kind, success | WebSocket message/event processing duration in milliseconds. |
responses_api_overhead.duration_ms | histogram | | Responses API overhead timing from websocket responses. |
responses_api_inference_time.duration_ms | histogram | | Responses API inference timing from websocket responses. |
responses_api_engine_iapi_ttft.duration_ms | histogram | | Responses API engine IAPI time-to-first-token timing. |
responses_api_engine_service_ttft.duration_ms | histogram | | Responses API engine service time-to-first-token timing. |
responses_api_engine_iapi_tbt.duration_ms | histogram | | Responses API engine IAPI time-between-token timing. |
responses_api_engine_service_tbt.duration_ms | histogram | | Responses API engine service time-between-token timing. |
transport.fallback_to_http | counter | from_wire_api | WebSocket-to-HTTP fallback count. |
remote_models.fetch_update.duration_ms | histogram | | Time to fetch remote model definitions. |
remote_models.load_cache.duration_ms | histogram | | Time to load the remote model cache. |
startup_prewarm.duration_ms | histogram | status | Startup prewarm duration by outcome. |
startup_prewarm.age_at_first_turn_ms | histogram | status | Startup prewarm age when the first real turn resolves it. |
cloud_requirements.fetch.duration_ms | histogram | | Workspace-managed cloud requirements fetch duration. |
cloud_requirements.fetch_attempt | counter | See note | Workspace-managed cloud requirements fetch attempts. |
cloud_requirements.fetch_final | counter | See note | Final workspace-managed cloud requirements fetch outcome. |
cloud_requirements.load | counter | trigger, outcome | Workspace-managed cloud requirements load outcome. |
The cloud_requirements.fetch_attempt metric includes trigger, attempt, outcome, and status_code fields. The cloud_requirements.fetch_final metric includes trigger, outcome, reason, attempt_count, and status_code fields.
Turn and tool activity
| Metric | Type | Fields | Description |
|---|---|---|---|
turn.e2e_duration_ms | histogram | | End-to-end time for a full turn. |
turn.ttft.duration_ms | histogram | | Time to first token for a turn. |
turn.ttfm.duration_ms | histogram | | Time to first model output item for a turn. |
turn.network_proxy | counter | active, tmp_mem_enabled | Whether the managed network proxy was active for the turn. |
turn.memory | counter | read_allowed, feature_enabled, config_use_memories, has_citations | Per-turn memory read availability and memory citation usage. |
turn.tool.call | histogram | tmp_mem_enabled | Number of tool calls in the turn. |
turn.token_usage | histogram | token_type, tmp_mem_enabled | Per-turn token usage by token type (total, input, cached_input, output, or reasoning_output). |
tool.call | counter | tool, success | Tool invocation count by tool name and success/failure. |
tool.call.duration_ms | histogram | tool, success | Tool execution duration in milliseconds by tool name and outcome. |
tool.unified_exec | counter | tty | Unified exec tool calls by TTY mode. |
approval.requested | counter | tool, approved | Tool approval request result (approved, approved_with_amendment, approved_for_session, denied, abort). |
mcp.call | counter | See note | MCP tool invocation result. |
mcp.call.duration_ms | histogram | See note | MCP tool invocation duration. |
mcp.tools.list.duration_ms | histogram | cache | MCP tool-list duration, including cache hit/miss state. |
mcp.tools.fetch_uncached.duration_ms | histogram | | Duration of uncached MCP tool fetches. |
mcp.tools.cache_write.duration_ms | histogram | | Duration of Codex Apps MCP tool-cache writes. |
hooks.run | counter | hook_name, source, status | Hook run count by hook name, source, and status. |
hooks.run.duration_ms | histogram | hook_name, source, status | Hook run duration in milliseconds. |
The mcp.call and mcp.call.duration_ms metrics include status; normal tool-call emissions also include tool, plus connector_id and connector_name when available. Blocked Codex Apps MCP calls may emit mcp.call with only status.
Threads, tasks, and features
| Metric | Type | Fields | Description |
|---|---|---|---|
feature.state | counter | feature, value | Feature values that differ from defaults (emit one row per non-default). |
status_line | counter | | Session started with a configured status line. |
model_warning | counter | | Warning sent to the model. |
thread.started | counter | is_git | New thread created, tagged by whether the working directory is in a Git repo. |
conversation.turn.count | counter | | User/assistant turns per thread, recorded at the end of the thread. |
thread.fork | counter | source | New thread created by forking an existing thread. |
thread.rename | counter | | Thread renamed. |
thread.side | counter | source | Side conversation created. |
thread.skills.enabled_total | histogram | | Number of skills enabled for a new thread. |
thread.skills.kept_total | histogram | | Number of enabled skills kept after prompt rendering. |
thread.skills.truncated | histogram | | Whether skill rendering truncated the enabled skills list (1 or 0). |
task.compact | counter | type | Number of compactions per type (remote or local), including manual and auto. |
task.review | counter | | Number of reviews triggered. |
task.undo | counter | | Number of undo actions triggered. |
task.user_shell | counter | | Number of user shell actions (! in the TUI, for example). |
shell_snapshot | counter | See note | Whether taking a shell snapshot succeeded. |
shell_snapshot.duration_ms | histogram | success | Time to take a shell snapshot. |
skill.injected | counter | status, skill | Skill injection outcomes by skill. |
plugins.startup_sync | counter | transport, status | Curated plugin startup sync attempts. |
plugins.startup_sync.final | counter | transport, status | Final curated plugin startup sync outcome. |
multi_agent.spawn | counter | role | Agent spawns by role. |
multi_agent.resume | counter | | Agent resumes. |
multi_agent.nickname_pool_reset | counter | | Agent nickname pool resets. |
The shell_snapshot metric includes success and, on failures, failure_reason.
Memory and local state
| Metric | Type | Fields | Description |
|---|---|---|---|
memory.phase1 | counter | status | Memory phase 1 job counts by status. |
memory.phase1.e2e_ms | histogram | | End-to-end duration for memory phase 1. |
memory.phase1.output | counter | | Memory phase 1 outputs written. |
memory.phase1.token_usage | histogram | token_type | Memory phase 1 token usage by token type. |
memory.phase2 | counter | status | Memory phase 2 job counts by status. |
memory.phase2.e2e_ms | histogram | | End-to-end duration for memory phase 2. |
memory.phase2.input | counter | | Memory phase 2 input count. |
memory.phase2.token_usage | histogram | token_type | Memory phase 2 token usage by token type. |
memories.usage | counter | kind, tool, success | Memory usage by kind, tool, and success/failure. |
external_agent_config.detect | counter | See note | External agent config detections by migration item type. |
external_agent_config.import | counter | See note | External agent config imports by migration item type. |
db.backfill | counter | status | Initial state DB backfill results (upserted, failed). |
db.backfill.duration_ms | histogram | status | Duration of the initial state DB backfill. |
db.error | counter | stage | Errors during state DB operations. |
The external_agent_config.detect and external_agent_config.import metrics include migration_type; skills migrations also include skills_count.
Windows sandbox
| Metric | Type | Fields | Description |
|---|---|---|---|
windows_sandbox.setup_success | counter | originator, mode | Windows sandbox setup successes. |
windows_sandbox.setup_failure | counter | originator, mode | Windows sandbox setup failures. |
windows_sandbox.setup_duration_ms | histogram | result, originator, mode | Windows sandbox setup duration. |
windows_sandbox.elevated_setup_success | counter | | Elevated Windows sandbox setup successes. |
windows_sandbox.elevated_setup_failure | counter | See note | Elevated Windows sandbox setup failures. |
windows_sandbox.elevated_setup_canceled | counter | See note | Canceled elevated Windows sandbox setup attempts. |
windows_sandbox.elevated_setup_duration_ms | histogram | result | Elevated Windows sandbox setup duration. |
windows_sandbox.elevated_prompt_shown | counter | | Elevated sandbox setup prompt shown. |
windows_sandbox.elevated_prompt_accept | counter | | Elevated sandbox setup prompt accepted. |
windows_sandbox.elevated_prompt_use_legacy | counter | | User chose legacy sandbox from the elevated prompt. |
windows_sandbox.elevated_prompt_quit | counter | | User quit from the elevated prompt. |
windows_sandbox.fallback_prompt_shown | counter | | Fallback sandbox prompt shown. |
windows_sandbox.fallback_retry_elevated | counter | | User retried elevated setup from the fallback prompt. |
windows_sandbox.fallback_use_legacy | counter | | User chose legacy sandbox from the fallback prompt. |
windows_sandbox.fallback_prompt_quit | counter | | User quit from the fallback prompt. |
windows_sandbox.legacy_setup_preflight_failed | counter | See note | Legacy Windows sandbox setup preflight failure. |
windows_sandbox.setup_elevated_sandbox_command | counter | | Elevated sandbox setup command invoked. |
windows_sandbox.createprocessasuserw_failed | counter | error_code, path_kind, exe, level | Windows CreateProcessAsUserW failures. |
The elevated setup failure metrics include code and message when Windows setup failure details are available, and may include originator when emitted from the shared setup path. The windows_sandbox.legacy_setup_preflight_failed metric includes originator when emitted from the shared setup path, but fallback-prompt preflight failures may not include any fields.
Feedback controls
By default, Codex lets users send feedback from /feedback. To disable feedback collection across Codex surfaces on a machine, update your config:
[feedback]
enabled = false
When disabled, /feedback shows a disabled message and Codex rejects feedback submissions.
Hide or surface reasoning events
If you want to reduce noisy “reasoning” output (for example in CI logs), you can suppress it:
hide_agent_reasoning = true
If you want to surface raw reasoning content when a model emits it:
show_raw_agent_reasoning = true
Enable raw reasoning only if it’s acceptable for your workflow. Some models/providers (like gpt-oss) don’t emit raw reasoning; in that case, this setting has no visible effect.
Notifications
Use notify to trigger an external program whenever Codex emits supported events (currently only agent-turn-complete). This is handy for desktop toasts, chat webhooks, CI updates, or any side-channel alerting that the built-in TUI notifications don’t cover.
notify = ["python3", "/path/to/notify.py"]
Example notify.py (truncated) that reacts to agent-turn-complete:
#!/usr/bin/env python3
import json, subprocess, sys
def main() -> int:
notification = json.loads(sys.argv[1])
if notification.get("type") != "agent-turn-complete":
return 0
title = f"Codex: {notification.get('last-assistant-message', 'Turn Complete!')}"
message = " ".join(notification.get("input-messages", []))
subprocess.check_output([
"terminal-notifier",
"-title", title,
"-message", message,
"-group", "codex-" + notification.get("thread-id", ""),
"-activate", "com.googlecode.iterm2",
])
return 0
if __name__ == "__main__":
sys.exit(main())
The script receives a single JSON argument. Common fields include:
- type (currently agent-turn-complete)
- thread-id (session identifier)
- turn-id (turn identifier)
- cwd (working directory)
- input-messages (user messages that led to the turn)
- last-assistant-message (last assistant message text)
Place the script somewhere on disk and point notify to it.
notify vs tui.notifications
- notify runs an external program (good for webhooks, desktop notifiers, CI hooks).
- tui.notifications is built in to the TUI and can optionally filter by event type (for example, agent-turn-complete and approval-requested).
- tui.notification_method controls how the TUI emits terminal notifications (auto, osc9, or bel).
- tui.notification_condition controls whether TUI notifications fire only when the terminal is unfocused or always.
In auto mode, Codex prefers OSC 9 notifications (a terminal escape sequence some terminals interpret as a desktop notification) and falls back to BEL (\x07) otherwise.
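The two encodings can be sketched as short escape-sequence builders (OSC 9 wraps the message in ESC ] 9 ; … BEL; the fallback is a bare BEL):

```python
# Build the terminal notification payloads described above.
def osc9_notification(message: str) -> str:
    """OSC 9 escape sequence: ESC ] 9 ; <message> BEL."""
    return f"\x1b]9;{message}\x07"


BEL_FALLBACK = "\x07"  # plain terminal bell
```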
See Configuration Reference for the exact keys.
History persistence
By default, Codex saves local session transcripts under CODEX_HOME (for example, ~/.codex/history.jsonl). To disable local history persistence:
[history]
persistence = "none"
To cap the history file size, set history.max_bytes. When the file exceeds the cap, Codex drops the oldest entries and compacts the file while keeping the newest records.
[history]
max_bytes = 104857600 # 100 MiB
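The compaction rule can be sketched as follows (a sketch of the described behavior, treating history as a list of serialized JSONL entries; the actual on-disk procedure may differ):

```python
# Drop the oldest entries until the remaining (newest) entries fit the cap.
def compact_history(entries: list[bytes], max_bytes: int) -> list[bytes]:
    total = sum(len(entry) for entry in entries)
    start = 0
    while total > max_bytes and start < len(entries):
        total -= len(entries[start])  # discard the oldest entry first
        start += 1
    return entries[start:]
```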
Clickable citations
If you use a terminal/editor integration that supports it, Codex can render file citations as clickable links. Configure file_opener to pick the URI scheme Codex uses:
file_opener = "vscode" # or cursor, windsurf, vscode-insiders, none
Example: a citation like /home/user/project/main.py:42 can be rewritten into a clickable vscode://file/...:42 link.
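The rewrite above can be sketched as a small formatter (assuming the `<scheme>://file/<absolute-path>:<line>` shape shown in the example):

```python
# Rewrite a "path:line" citation into a clickable editor URI.
def citation_to_uri(citation: str, scheme: str = "vscode") -> str:
    path, _, line = citation.rpartition(":")  # split on the last colon
    return f"{scheme}://file{path}:{line}"
```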
Project instructions discovery
Codex reads AGENTS.md (and related files) and includes a limited amount of project guidance in the first turn of a session. Two knobs control how this works:
- project_doc_max_bytes: how much to read from each AGENTS.md file
- project_doc_fallback_filenames: additional filenames to try when AGENTS.md is missing at a directory level
For a detailed walkthrough, see Custom instructions with AGENTS.md.
TUI options
Running codex with no subcommand launches the interactive terminal UI (TUI). Codex exposes some TUI-specific configuration under [tui], including:
- tui.notifications: enable/disable notifications (or restrict to specific types)
- tui.notification_method: choose auto, osc9, or bel for terminal notifications
- tui.notification_condition: choose unfocused or always for when notifications fire
- tui.animations: enable/disable ASCII animations and shimmer effects
- tui.alternate_screen: control alternate screen usage (set to never to keep terminal scrollback)
- tui.show_tooltips: show or hide onboarding tooltips on the welcome screen
tui.notification_method defaults to auto. In auto mode, Codex prefers OSC 9 notifications (a terminal escape sequence some terminals interpret as a desktop notification) when the terminal appears to support them, and falls back to BEL (\x07) otherwise.
See Configuration Reference for the full key list.