[{"content":"Both entrypoint and command define what runs when a container starts. They look similar, but they play different roles, and confusing them leads to surprising behavior.\nThe mental model When a container starts, Docker runs:\n&lt;entrypoint&gt; &lt;command&gt; entrypoint is the executable command is the default arguments passed to it If the image&rsquo;s Dockerfile has ENTRYPOINT [&quot;python&quot;] and CMD [&quot;app.py&quot;], the container runs python app.py.\nOverriding from Compose Both can be overridden in Compose:\nservices: app: image: python:3.12 entrypoint: [&#34;python&#34;] command: [&#34;app.py&#34;, &#34;--debug&#34;] This runs python app.py --debug regardless of what the image&rsquo;s Dockerfile defined.\nWhen you typically only set command Most of the time, you just want to run a different command on an image:\nservices: # Run tests instead of the default app tests: image: myapp command: [&#34;pytest&#34;, &#34;tests\/&#34;] # Run a one-off migration migrate: image: myapp command: [&#34;.\/migrate.sh&#34;] The image&rsquo;s ENTRYPOINT (often python, node, or \/entrypoint.sh) handles the executable, and you just pass different arguments.\nWhen you set entrypoint Override entrypoint when you want to bypass the image&rsquo;s default executable:\nservices: # Image normally starts the app, but here we want a shell debug: image: myapp entrypoint: [&#34;\/bin\/sh&#34;] command: [&#34;-c&#34;, &#34;env &amp;&amp; tail -f \/dev\/null&#34;] To completely disable the image&rsquo;s entrypoint and run a fresh command:\nservices: app: image: myapp entrypoint: [] command: [&#34;echo&#34;, &#34;hello from compose&#34;] Exec form vs shell form There are two syntaxes:\n# Exec form (recommended) \u2014 array, runs directly command: [&#34;python&#34;, &#34;app.py&#34;] # Shell form \u2014 string, runs through \/bin\/sh -c command: python app.py Exec form is preferred because:\nSignals (SIGTERM, etc.) 
reach your process directly (Tip #44) No shell expansion gotchas Faster startup (one less process) Use shell form only when you need shell features:\ncommand: bash -c &#34;wait-for-db &amp;&amp; python app.py&#34; Override at runtime docker compose run lets you override both at runtime:\n# Override command docker compose run --rm app pytest tests\/ # Override entrypoint with --entrypoint docker compose run --rm --entrypoint \/bin\/sh app This is the cleanest way to debug a misbehaving service (Tip #34).\nA common gotcha If you define command: in Compose but the image already has an ENTRYPOINT that doesn&rsquo;t expect arguments, things break:\n# Dockerfile ENTRYPOINT [&#34;\/start.sh&#34;] # compose.yml services: app: image: myapp command: [&#34;bash&#34;] # Becomes: \/start.sh bash \u2014 probably not what you want Either set entrypoint: [] to clear the image&rsquo;s entrypoint, or use the existing entrypoint as designed.\nFurther reading Compose specification: entrypoint Compose specification: command Related: Tip #34, Debugging with exec vs run Related: Tip #44, Signal handling in containers ","permalink":"https:\/\/lours.me\/posts\/compose-tip-059-entrypoint-vs-command\/","summary":"<p>Both <code>entrypoint<\/code> and <code>command<\/code> define what runs when a container starts. 
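The start-up rule is small enough to model directly. Here is an illustrative Python sketch (not Docker's actual code; `container_argv` is a made-up helper) of how the entrypoint and command lists combine into the process a container runs, including the gotcha where a command gets appended to an entrypoint that doesn't expect arguments:

```python
def container_argv(entrypoint, command):
    """Mimic how Docker builds the process to run:
    the entrypoint is the executable, the command
    supplies (default) arguments appended to it."""
    return list(entrypoint) + list(command)

# ENTRYPOINT ["python"] + CMD ["app.py"] -> python app.py
print(container_argv(["python"], ["app.py"]))   # ['python', 'app.py']

# The gotcha: an entrypoint that does not expect arguments
# still receives the command appended to it.
print(container_argv(["/start.sh"], ["bash"]))  # ['/start.sh', 'bash']

# entrypoint: [] clears the executable, so only command remains.
print(container_argv([], ["echo", "hello from compose"]))
```

Setting `entrypoint: []` in Compose corresponds to the empty first list above: the command runs on its own.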
They look similar, but they play different roles, and confusing them leads to surprising behavior.<\/p>\n<h2 id=\"the-mental-model\">The mental model<\/h2>\n<p>When a container starts, Docker runs:<\/p>\n<pre tabindex=\"0\"><code>&lt;entrypoint&gt; &lt;command&gt;\n<\/code><\/pre><ul>\n<li><code>entrypoint<\/code> is the executable<\/li>\n<li><code>command<\/code> is the default arguments passed to it<\/li>\n<\/ul>\n<p>If the image&rsquo;s Dockerfile has <code>ENTRYPOINT [&quot;python&quot;]<\/code> and <code>CMD [&quot;app.py&quot;]<\/code>, the container runs <code>python app.py<\/code>.<\/p>\n<h2 id=\"overriding-from-compose\">Overriding from Compose<\/h2>\n<p>Both can be overridden in Compose:<\/p>","title":"Docker Compose Tip #59: entrypoint vs command"},{"content":"You&rsquo;ve used secrets for sensitive data (Tip #22). For non-sensitive config files like nginx.conf or prometheus.yml, there&rsquo;s a parallel feature: configs.\nBasic usage Define a config at the top level, reference it in a service:\nconfigs: nginx_conf: file: .\/nginx.conf services: web: image: nginx configs: - source: nginx_conf target: \/etc\/nginx\/nginx.conf The file is mounted read-only inside the container at the target path. 
No need to declare a volume mount manually.\nInline content Skip the external file and define content inline:\nconfigs: prometheus_conf: content: | global: scrape_interval: 15s scrape_configs: - job_name: &#39;app&#39; static_configs: - targets: [&#39;app:8080&#39;] services: prometheus: image: prom\/prometheus configs: - source: prometheus_conf target: \/etc\/prometheus\/prometheus.yml Great for short configs that don&rsquo;t need a separate file.\nSetting permissions and ownership Control file mode, owner, and group when mounting:\nconfigs: app_conf: file: .\/app.conf services: app: image: myapp configs: - source: app_conf target: \/etc\/app\/app.conf uid: &#34;1000&#34; gid: &#34;1000&#34; mode: 0440 Variable interpolation in inline content Inline content: supports interpolation just like other Compose values:\nconfigs: app_conf: content: | log_level=${LOG_LEVEL:-info} api_url=${API_URL:?API_URL is required} env=${ENV:-dev} services: app: image: myapp configs: - source: app_conf target: \/etc\/app\/app.conf External configs Reference a config that already exists outside the Compose project (created with docker config create):\nconfigs: shared_conf: external: true name: company-shared-config services: app: image: myapp configs: - shared_conf Useful in Swarm or when multiple Compose projects share the same config.\nconfigs vs volumes Both put files in containers, but they&rsquo;re not the same:\nconfigs Volume mounts Source of truth Declared in compose file External directory Read-only Yes (always) Configurable Inline content Supported Not supported Best for Config files, small text data Persistent data, large files, dev source code If a file rarely changes and is part of the application config, use configs. 
If it&rsquo;s data that changes at runtime or large files, use a volume.\nFurther reading Compose specification: configs Related: Tip #22, Using secrets in Compose files ","permalink":"https:\/\/lours.me\/posts\/compose-tip-058-configs\/","summary":"<p>You&rsquo;ve used <code>secrets<\/code> for sensitive data (<a href=\"\/posts\/compose-tip-022-secrets\/\">Tip #22<\/a>). For non-sensitive config files like <code>nginx.conf<\/code> or <code>prometheus.yml<\/code>, there&rsquo;s a parallel feature: <code>configs<\/code>.<\/p>\n<h2 id=\"basic-usage\">Basic usage<\/h2>\n<p>Define a config at the top level, reference it in a service:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">configs<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">nginx_conf<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">file<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/nginx.conf<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">nginx<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span 
class=\"w\">    <\/span><span class=\"nt\">configs<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"nt\">source<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">nginx_conf<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">target<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">\/etc\/nginx\/nginx.conf<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>The file is mounted read-only inside the container at the target path. No need to declare a volume mount manually.<\/p>","title":"Docker Compose Tip #58: Using configs for config files"},{"content":"A container is running but your app feels sluggish. Is it CPU-bound? Leaking memory? Stuck on a runaway process? Compose gives you two essential commands to find out: docker compose top and docker compose stats.\ndocker compose top Show running processes inside each service&rsquo;s container:\ndocker compose top myapp-web-1 UID PID PPID C STIME TTY TIME CMD root 12345 12320 0 09:15 ? 00:00:02 nginx: master process nobody 12400 12345 0 09:15 ? 00:00:00 nginx: worker process myapp-api-1 UID PID PPID C STIME TTY TIME CMD node 12500 12480 2 09:15 ? 00:01:45 node server.js node 12600 12500 0 09:15 ? 00:00:12 node worker.js You see every process running inside each container with their CPU usage (C column) and CPU time. 
Perfect for spotting runaway workers or unexpected child processes.\nTarget a specific service if you don&rsquo;t want the full list:\ndocker compose top api docker compose stats Real-time CPU, memory, network, and disk I\/O for services in your Compose project, scoped automatically to the current project:\n# Live view, updates every second docker compose stats # One-shot snapshot instead of live view docker compose stats --no-stream # Focus on specific services docker compose stats api db Output:\nCONTAINER ID NAME CPU % MEM USAGE \/ LIMIT MEM % NET I\/O BLOCK I\/O a1b2c3d4e5f6 myapp-api-1 45.23% 312.5MiB \/ 512MiB 61.04% 1.2MB \/ 3.4MB 0B \/ 0B b2c3d4e5f6a7 myapp-web-1 0.12% 8.3MiB \/ 256MiB 3.24% 524kB \/ 1.1MB 0B \/ 0B c3d4e5f6a7b8 myapp-db-1 2.50% 98.4MiB \/ 1GiB 9.61% 245kB \/ 892kB 12MB \/ 45MB This tells you at a glance which service is hot. In the example above, api is using 45% CPU and near its memory limit, a good place to start investigating.\nUnlike docker stats (which shows every container on the host), docker compose stats shows only the containers in your current Compose project, no need to filter manually.\nCustom format Narrow down to just the columns you care about:\ndocker compose stats --no-stream \\ --format &#34;table {{.Name}}\\t{{.CPUPerc}}\\t{{.MemPerc}}&#34; NAME CPU % MEM % myapp-api-1 45.23% 61.04% myapp-web-1 0.12% 3.24% myapp-db-1 2.50% 9.61% Combining with resource limits These monitoring commands pair naturally with Tip #16 (resource limits). Set limits, then watch to see if services are hitting them:\nservices: api: image: myapp-api deploy: resources: limits: cpus: &#34;1.0&#34; memory: 512M # Watch the container approach its limits docker compose stats api A service consistently at 95%+ of its limits is a hint you need to either raise the limit or optimize the code.\nQuick debugging workflow When something feels off:\n# 1. See what&#39;s running docker compose ps # 2. Check resource usage docker compose stats --no-stream # 3. 
Drill into the hot service docker compose top api # 4. Check recent events for restarts or OOM kills docker compose events --since 5m This four-command sequence covers 80% of &ldquo;my stack feels slow&rdquo; investigations.\nFurther reading Docker Compose CLI: top Docker Compose CLI: stats Related: Tip #16, Setting resource limits Related: Tip #25, Using docker compose events ","permalink":"https:\/\/lours.me\/posts\/compose-tip-057-resource-monitoring\/","summary":"<p>A container is running but your app feels sluggish. Is it CPU-bound? Leaking memory? Stuck on a runaway process? Compose gives you two essential commands to find out: <code>docker compose top<\/code> and <code>docker compose stats<\/code>.<\/p>\n<h2 id=\"docker-compose-top\">docker compose top<\/h2>\n<p>Show running processes inside each service&rsquo;s container:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\">docker compose top\n<\/span><\/span><\/code><\/pre><\/div><pre tabindex=\"0\"><code>myapp-web-1\nUID      PID     PPID    C    STIME   TTY   TIME       CMD\nroot     12345   12320   0    09:15   ?     00:00:02   nginx: master process\nnobody   12400   12345   0    09:15   ?     00:00:00   nginx: worker process\n\nmyapp-api-1\nUID      PID     PPID    C    STIME   TTY   TIME       CMD\nnode     12500   12480   2    09:15   ?     00:01:45   node server.js\nnode     12600   12500   0    09:15   ?     00:00:12   node worker.js\n<\/code><\/pre><p>You see every process running inside each container with their CPU usage (C column) and CPU time. 
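If you script around these commands, the custom-format table is easy to post-process. A hedged Python sketch (`hot_services` and its thresholds are made-up names; the sample text is the `--format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemPerc}}"` output shown above) that flags services running hot:

```python
def hot_services(stats_table, cpu_threshold=40.0, mem_threshold=50.0):
    """Parse the table printed by `docker compose stats --no-stream`
    with a custom Name/CPUPerc/MemPerc format, and return the names
    of services exceeding either threshold."""
    hot = []
    for line in stats_table.strip().splitlines()[1:]:  # skip header row
        name, cpu, mem = line.split()
        if float(cpu.rstrip("%")) > cpu_threshold or float(mem.rstrip("%")) > mem_threshold:
            hot.append(name)
    return hot

sample = """\
NAME          CPU %     MEM %
myapp-api-1   45.23%    61.04%
myapp-web-1   0.12%     3.24%
myapp-db-1    2.50%     9.61%
"""
print(hot_services(sample))  # ['myapp-api-1']
```

A loop like this makes a decent CI or cron check: alert when a service sits above its thresholds between runs.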
Perfect for spotting runaway workers or unexpected child processes.<\/p>","title":"Docker Compose Tip #57: Container resource monitoring"},{"content":"The env_file key looks simple but has surprisingly rich behavior, you can load multiple files, mark some as optional, and control how they&rsquo;re parsed.\nThe two flavors of .env There are two completely different things called &ldquo;.env&rdquo; in Compose:\nThe project .env file at the same level as compose.yml, used for interpolation of ${VAR} in the Compose file itself (see Tip #42) env_file: at service level, loaded as runtime environment variables inside the container This post is about the second one.\nSingle file (basic form) services: app: image: myapp env_file: .env.app # .env.app DATABASE_URL=postgres:\/\/db:5432\/mydb LOG_LEVEL=info All key-value pairs end up as environment variables in the container.\nMultiple files with priority Chain multiple env files, later files override earlier ones:\nservices: app: image: myapp env_file: - .env.common # Shared defaults - .env.${ENV:-dev} # Environment-specific overrides - .env.local # Local developer tweaks (gitignored) Result for a missing key: nothing, as expected. For a key defined in .env.common and .env.local: the .env.local value wins.\nOptional files By default, a missing env file causes Compose to fail. Use required: false to make it optional:\nservices: app: image: myapp env_file: - path: .env.common required: true # Default - path: .env.local required: false # Skip if missing Great for .env.local files that each developer may or may not have.\nFormat option The default format treats each line as KEY=VALUE. 
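The merge rules described above — later env files override earlier ones — can be emulated in a few lines of Python. This is a simplified sketch of the default `KEY=VALUE` format, not Compose's actual parser (`parse_env_file` and `merge_env` are hypothetical helpers):

```python
def parse_env_file(text):
    """Minimal KEY=VALUE parser for the default env_file format:
    blank lines and # comments are ignored; the first '=' splits
    key from value."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key] = value
    return env

def merge_env(files, environment=None):
    """Later files override earlier ones; an `environment` mapping
    (the Compose `environment:` key) wins over everything."""
    result = {}
    for text in files:
        result.update(parse_env_file(text))
    result.update(environment or {})
    return result

common = "LOG_LEVEL=info\nDATABASE_URL=postgres://db:5432/mydb"
local = "LOG_LEVEL=debug"
print(merge_env([common, local]))                     # LOG_LEVEL comes from .env.local
print(merge_env([common], {"LOG_LEVEL": "warning"}))  # environment: wins
```

The second call also demonstrates the `env_file` vs `environment` precedence covered below.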
For more complex values (multi-line, JSON blobs, strings with special characters), use format: raw to parse the file differently, or just stick to the default format and quote your values:\nservices: app: env_file: - path: .env.app format: raw For most cases, the default format is what you want.\nInterpolation inside env files Env files support ${VAR} substitution, pulling from the host environment or the project .env:\n# .env.app DATABASE_URL=postgres:\/\/${DB_USER}:${DB_PASSWORD}@db:5432\/${DB_NAME} API_KEY=${API_KEY} As long as DB_USER, DB_PASSWORD, etc. are available on the host (or in the project .env), Compose resolves them before passing to the container.\nPrecedence: env_file vs environment When both env_file and environment define the same variable, environment wins:\nservices: app: env_file: - .env.app # LOG_LEVEL=info environment: LOG_LEVEL: debug # \u2190 this wins The running container sees LOG_LEVEL=debug. Useful for one-off overrides without editing the env file.\nA practical multi-environment setup my-app\/ \u251c\u2500\u2500 compose.yml \u251c\u2500\u2500 .env.common \u251c\u2500\u2500 .env.dev \u251c\u2500\u2500 .env.staging \u251c\u2500\u2500 .env.prod \u2514\u2500\u2500 .env.local # gitignored services: app: image: myapp env_file: - .env.common - path: .env.${ENV:-dev} required: true - path: .env.local required: false docker compose up # uses .env.dev ENV=staging docker compose up # uses .env.staging ENV=prod docker compose up # uses .env.prod Further reading Compose specification: env_file Environment variables precedence Related: Tip #2, Using --env-file for different environments Related: Tip #42, Variable substitution and defaults ","permalink":"https:\/\/lours.me\/posts\/compose-tip-056-env-file-advanced\/","summary":"<p>The <code>env_file<\/code> key looks simple but has surprisingly rich behavior, you can load multiple files, mark some as optional, and control how they&rsquo;re parsed.<\/p>\n<h2 id=\"the-two-flavors-of-env\">The two 
flavors of .env<\/h2>\n<p>There are two completely different things called &ldquo;.env&rdquo; in Compose:<\/p>\n<ul>\n<li><strong>The project <code>.env<\/code> file<\/strong> at the same level as <code>compose.yml<\/code>, used for <strong>interpolation<\/strong> of <code>${VAR}<\/code> in the Compose file itself (see <a href=\"\/posts\/compose-tip-042-variable-substitution\/\">Tip #42<\/a>)<\/li>\n<li><strong><code>env_file:<\/code> at service level<\/strong>, loaded as <strong>runtime environment variables<\/strong> inside the container<\/li>\n<\/ul>\n<p>This post is about the second one.<\/p>","title":"Docker Compose Tip #56: env_file advanced patterns"},{"content":"Most people know docker compose config as &ldquo;the validate command&rdquo;. It&rsquo;s much more than that. The same command can list resources, output JSON, hash services for change detection, and pin image digests for reproducibility.\nListing resources Ask Compose what&rsquo;s actually in your project, after all overrides and interpolation:\n# List all services docker compose config --services # List all volumes docker compose config --volumes # List all images used docker compose config --images # List all defined profiles docker compose config --profiles These are great in scripts when you need to loop over each service:\nfor svc in $(docker compose config --services); do echo &#34;=== Logs for $svc ===&#34; docker compose logs &#34;$svc&#34; | tail -20 done Filtering to a specific service Show only the resolved configuration for one service:\ndocker compose config web Useful when your Compose file is large and you only care about one piece.\nOutput formats By default the resolved configuration comes back as YAML. 
Switch to JSON for programmatic use:\n# Default YAML docker compose config # JSON for jq and friends docker compose config --format json # Extract just the environment of a service docker compose config --format json | jq &#39;.services.web.environment&#39; Hashing for change detection --hash gives you a reproducible hash of a service&rsquo;s configuration. Perfect for CI caching or detecting whether a service needs to be rebuilt:\n# Hash every service docker compose config --hash=&#39;*&#39; # web: 8f4e2a1b... # db: 3c7d9e5f... # Hash specific services docker compose config --hash=web,api If the hash doesn&rsquo;t change, the service configuration is identical and you can skip expensive operations like rebuilding.\nPinning image digests --resolve-image-digests replaces image tags with their current digests:\n# Input: compose.yml services: web: image: nginx:latest docker compose config --resolve-image-digests # Output services: web: image: nginx:latest@sha256:a1b2c3... Great for generating a production-ready, reproducible version of your Compose file.\nRaw vs resolved By default, Compose interpolates all ${VAR} references. 
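That substitution step can be illustrated with a toy Python version showing what `${VAR}`, `${VAR:-default}`, and `${VAR:?error}` resolve to. This is a simplified approximation of the real semantics (`interpolate` is a made-up helper; for instance, it treats an empty variable like an unset one):

```python
import re

# Matches ${VAR}, ${VAR:-default}, ${VAR:?error message}
_PATTERN = re.compile(r"\$\{(\w+)(?::-([^}]*)|:\?([^}]*))?\}")

def interpolate(text, env):
    """Toy version of Compose variable substitution."""
    def repl(match):
        name, default, err = match.group(1), match.group(2), match.group(3)
        if name in env and env[name]:
            return env[name]
        if err is not None:
            # ${VAR:?msg} fails when the variable is unset or empty
            raise ValueError(f"{name}: {err}")
        return default or ""
    return _PATTERN.sub(repl, text)

env = {"LOG_LEVEL": "debug"}
print(interpolate("log_level=${LOG_LEVEL:-info}", env))  # log_level=debug
print(interpolate("env=${ENV:-dev}", env))               # env=dev
```

Comparing this resolved output with the raw text is essentially the difference between the default output and `--no-interpolate`.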
Sometimes you want to see the file before substitution:\n# Resolved (default) docker compose config # Raw, no interpolation docker compose config --no-interpolate This helps debug which variables are being substituted vs left as-is.\nCombining with other commands Pipe the resolved config to other tools:\n# Save a resolved copy for production docker compose config --resolve-image-digests &gt; compose.resolved.yml # Validate environment without starting services docker compose config &gt; \/dev\/null &amp;&amp; echo &#34;Config OK&#34; # See the full rendered config with overrides applied docker compose -f compose.yml -f compose.prod.yml config Further reading Docker Compose CLI: config Related: Tip #1, Debug your configuration with config ","permalink":"https:\/\/lours.me\/posts\/compose-tip-055-config-advanced\/","summary":"<p>Most people know <code>docker compose config<\/code> as &ldquo;the validate command&rdquo;. It&rsquo;s much more than that. The same command can list resources, output JSON, hash services for change detection, and pin image digests for reproducibility.<\/p>\n<h2 id=\"listing-resources\">Listing resources<\/h2>\n<p>Ask Compose what&rsquo;s actually in your project, after all overrides and interpolation:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># List all services<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose config --services\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># List all volumes<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose config --volumes\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># List all images used<\/span>\n<\/span><\/span><span class=\"line\"><span 
class=\"cl\">docker compose config --images\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># List all defined profiles<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose config --profiles\n<\/span><\/span><\/code><\/pre><\/div><p>These are great in scripts when you need to loop over each service:<\/p>","title":"Docker Compose Tip #55: docker compose config advanced usage"},{"content":"This week, no Docker Compose tips, I&rsquo;ll be at Devoxx France 2026 at the Palais des Congr\u00e8s in Paris, from April 22 to April 24. I&rsquo;m lucky to be on stage three times, with topics ranging from AI agents to a text-based RPG powered by tiny language models, and as you&rsquo;d expect, containers and Compose are part of the story.\nHere&rsquo;s what I&rsquo;ll be talking about.\nWednesday, April 22 \u2014 17:50-18:20 (Tools-in-Action) Vos coding agents en mode YOLO&hellip; mais en toute s\u00e9curit\u00e9 There&rsquo;s a term for what happens when you use AI coding agents heavily: permission fatigue. You end up clicking through approval prompts without really reading them. To work around this, tools now offer YOLO modes (trust-all-tools, bypass permissions&hellip;) that give agents full autonomy.\nBut that autonomy comes at a cost. Agents can then do anything on your dev machine. Many developers have lost work or had their local environment silently altered.\nIn this Tools-in-Action, I&rsquo;ll present Docker Sandboxes, a solution for running AI coding agents autonomously without compromising developer workstation security. 
By combining isolated VMs and containers, it lets agents act freely while protecting the host, controlling network access, and securing keys and credentials.\n\ud83d\udc49 Session details\nThursday, April 23 \u2014 13:30-16:30 (3H Deep Dive, with Philippe Charri\u00e8re) Compose &amp; Dragons: le jeu de r\u00f4le des agents nourris aux Tiny Language Models Let&rsquo;s debunk some myths about tiny LLMs, the ones with fewer than 4 billion parameters. People often say they&rsquo;re useless, don&rsquo;t know anything, and are bad at function calling (so no MCP for them).\nThat&rsquo;s partly wrong, and we can do something about the rest.\nTogether with Philippe Charri\u00e8re, we&rsquo;ll build a text-mode dungeon crawler RPG live on stage, using Docker&rsquo;s AI capabilities. The whole thing runs on very small models (like Jan-nano-gguf 4b, qwen2.5 1.5b, granite-embedding 278m for RAG&hellip;). You&rsquo;ll see how to create NPCs with personalities, a dungeon master managing moves, combat, and conversations, all powered by models you can run on your laptop.\nOur game won&rsquo;t be Diablo, but you&rsquo;ll leave with recipes and tips to build your own lightweight AI agent systems.\n\ud83d\udc49 Session details\nFriday, April 24 \u2014 11:35-12:20 (Conference, with Nicolas De Loof) Docker Compose, votre Dev Toolkit pour AI &amp; Cloud Docker Compose is an essential tool for orchestrating local development environments. In the AI era, it takes another step forward.\nWith Nicolas De Loof, we&rsquo;ll show how the latest Docker Compose features let you declaratively define LLM-based applications. 
On the menu, live:\nDefining an LLM in a Compose file with the models key Accessing those models from application containers Using provider services to prepare or connect external resources Offloading heavy workloads (GPU) remotely with Docker Offload Using a single Compose file to orchestrate code, models, and execution context A concrete, fast-paced session for developers working on AI, agents, or modern multi-service architectures who want to see how Docker adapts to these new use cases.\n\ud83d\udc49 Session details\nCome say hi! If you&rsquo;re attending Devoxx France, come say hi between sessions or at one of these talks. Docker will also have a booth at the conference, and I&rsquo;ll be there quite often, so feel free to stop by for a chat. And if you&rsquo;re not coming this year, all sessions are recorded and will be available on the Devoxx YouTube channel afterwards.\nDocker Compose tips will be back the following week. See you soon!\n","permalink":"https:\/\/lours.me\/posts\/devoxx-france-2026\/","summary":"<p>This week, no Docker Compose tips, I&rsquo;ll be at <a href=\"https:\/\/devoxx.fr\/\">Devoxx France 2026<\/a> at the Palais des Congr\u00e8s in Paris, from April 22 to April 24. I&rsquo;m lucky to be on stage three times, with topics ranging from AI agents to a text-based RPG powered by tiny language models, and as you&rsquo;d expect, containers and Compose are part of the story.<\/p>\n<p>Here&rsquo;s what I&rsquo;ll be talking about.<\/p>\n<h2 id=\"wednesday-april-22--1750-1820-tools-in-action\">Wednesday, April 22 \u2014 17:50-18:20 (Tools-in-Action)<\/h2>\n<h3 id=\"vos-coding-agents-en-mode-yolo-mais-en-toute-s\u00e9curit\u00e9\">Vos coding agents en mode YOLO&hellip; mais en toute s\u00e9curit\u00e9<\/h3>\n<p>There&rsquo;s a term for what happens when you use AI coding agents heavily: <strong>permission fatigue<\/strong>. You end up clicking through approval prompts without really reading them. 
To work around this, tools now offer YOLO modes (<code>trust-all-tools<\/code>, <code>bypass permissions<\/code>&hellip;) that give agents full autonomy.<\/p>","title":"See you at Devoxx France 2026!"},{"content":"Not sure what docker compose up will actually do? Add --dry-run to preview every action without executing anything.\nBasic usage docker compose up --dry-run Compose shows exactly what it would do:\nDRY-RUN MODE - Container myapp-db-1 Creating DRY-RUN MODE - Container myapp-db-1 Created DRY-RUN MODE - Container myapp-web-1 Creating DRY-RUN MODE - Container myapp-web-1 Created DRY-RUN MODE - Container myapp-db-1 Starting DRY-RUN MODE - Container myapp-db-1 Started DRY-RUN MODE - Container myapp-web-1 Starting DRY-RUN MODE - Container myapp-web-1 Started Nothing is created, started, or modified. You just see the plan.\nWorks with most commands --dry-run isn&rsquo;t limited to up. It works with many Compose commands:\n# What would be stopped? docker compose down --dry-run # What would be removed? docker compose rm --dry-run # What would be pulled? docker compose pull --dry-run # What would be restarted? docker compose restart --dry-run Catching unexpected changes Dry-run is especially useful when you&rsquo;ve modified a Compose file and want to see what changed before applying:\n# You just edited compose.yml \u2014 what will happen? docker compose up -d --dry-run DRY-RUN MODE - Container myapp-db-1 Running DRY-RUN MODE - Container myapp-web-1 Recreating DRY-RUN MODE - Container myapp-web-1 Recreated DRY-RUN MODE - Container myapp-web-1 Starting DRY-RUN MODE - Container myapp-web-1 Started You can see that only web will be recreated \u2014 db stays running. No surprises.\nValidating override files When stacking multiple Compose files, dry-run helps verify the result before applying:\n# What does the CI override actually change? docker compose -f compose.yml -f compose.ci.yml up --dry-run # What does the production override do? 
docker compose -f compose.yml -f compose.prod.yml up --dry-run This is a great safety check before running overrides in unfamiliar environments.\nCombining with --verbose For even more detail, combine --dry-run with the --verbose flag on the docker compose command:\ndocker compose --verbose up --dry-run Dry-run vs config Both are useful for debugging, but they serve different purposes:\ndocker compose config \u2014 shows the resolved Compose file after interpolation and merging. It answers: &ldquo;what does my configuration look like?&rdquo; docker compose up --dry-run \u2014 shows what actions Compose would take given the current state. It answers: &ldquo;what will actually happen if I run this?&rdquo; Use config to debug your YAML. Use --dry-run to debug the execution plan.\nFurther reading Docker Compose CLI: --dry-run ","permalink":"https:\/\/lours.me\/posts\/compose-tip-054-dry-run\/","summary":"<p>Not sure what <code>docker compose up<\/code> will actually do? Add <code>--dry-run<\/code> to preview every action without executing anything.<\/p>\n<h2 id=\"basic-usage\">Basic usage<\/h2>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\">docker compose up --dry-run\n<\/span><\/span><\/code><\/pre><\/div><p>Compose shows exactly what it <em>would<\/em> do:<\/p>\n<pre tabindex=\"0\"><code>DRY-RUN MODE -  Container myapp-db-1  Creating\nDRY-RUN MODE -  Container myapp-db-1  Created\nDRY-RUN MODE -  Container myapp-web-1  Creating\nDRY-RUN MODE -  Container myapp-web-1  Created\nDRY-RUN MODE -  Container myapp-db-1  Starting\nDRY-RUN MODE -  Container myapp-db-1  Started\nDRY-RUN MODE -  Container myapp-web-1  Starting\nDRY-RUN MODE -  Container myapp-web-1  Started\n<\/code><\/pre><p>Nothing is created, started, or modified. 
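The dry-run event stream is also scriptable. A small Python sketch (`planned_actions` is a hypothetical helper; the line format is the one shown in the examples above and may differ between Compose versions) that reduces the output to the final planned action per container:

```python
def planned_actions(dry_run_output):
    """Summarize `docker compose up --dry-run` output: map each
    container to the last action Compose plans for it."""
    plan = {}
    for line in dry_run_output.strip().splitlines():
        # Lines look like: "DRY-RUN MODE -  Container myapp-web-1  Recreating"
        parts = line.split()
        if parts[:4] == ["DRY-RUN", "MODE", "-", "Container"]:
            name, action = parts[4], " ".join(parts[5:])
            plan[name] = action
    return plan

sample = """\
DRY-RUN MODE -  Container myapp-db-1  Running
DRY-RUN MODE -  Container myapp-web-1  Recreating
DRY-RUN MODE -  Container myapp-web-1  Recreated
DRY-RUN MODE -  Container myapp-web-1  Starting
DRY-RUN MODE -  Container myapp-web-1  Started
"""
print(planned_actions(sample))
# {'myapp-db-1': 'Running', 'myapp-web-1': 'Started'}
```

A summary like this makes it easy to fail a CI step when a container would be unexpectedly recreated.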
You just see the plan.<\/p>","title":"Docker Compose Tip #54: Preview changes with --dry-run"},{"content":"Every Compose stack gets a project name. It prefixes all resource names such as containers, networks, volumes. Understanding how it works avoids naming conflicts and makes multi-environment setups cleaner.\nDefault behavior By default, the project name is the directory name where your compose.yml lives:\nmy-app\/ compose.yml docker compose up -d # Creates: my-app-web-1, my-app-db-1, my-app_default network Setting the project name Three ways to set it, in order of precedence:\n# 1. CLI flag (highest precedence) docker compose -p myproject up # 2. Environment variable COMPOSE_PROJECT_NAME=myproject docker compose up # 3. In the Compose file itself # compose.yml name: myproject services: web: image: nginx The name key in the Compose file is the recommended approach, it&rsquo;s versioned with your code and everyone on the team gets the same project name.\nWhy it matters Without an explicit name, the project name depends on the directory. This causes problems:\n# Two developers clone to different paths \/home\/alice\/projects\/app\/ \u2192 project name: &#34;app&#34; \/home\/bob\/work\/app\/ \u2192 project name: &#34;app&#34; \u2705 same # But... 
\/home\/alice\/projects\/my-app\/ \u2192 project name: &#34;my-app&#34; \/home\/bob\/work\/app\/ \u2192 project name: &#34;app&#34; \u274c different Different project names mean different volumes \u2014 so docker compose down --volumes on one won&rsquo;t clean up the other&rsquo;s data.\nRunning multiple instances Use -p to run the same stack multiple times on the same host:\n# Staging environment docker compose -p staging up -d # Production environment docker compose -p production up -d # Both running simultaneously with isolated networks and volumes docker compose -p staging ps docker compose -p production ps Changing the working directory Use --project-directory to tell Compose where to resolve relative paths:\n# Run from anywhere, resolve paths relative to .\/deploy docker compose --project-directory .\/deploy up This is useful when your Compose file is not in the project root, or when you want to run Compose from a CI script that&rsquo;s in a different directory.\nListing all projects See all running Compose projects on the host:\ndocker compose ls NAME STATUS CONFIG FILES myapp running(3) \/home\/user\/myapp\/compose.yml staging running(3) \/home\/user\/myapp\/compose.yml production running(3) \/home\/user\/myapp\/compose.yml Pro tip Combine name with variable substitution for flexible naming:\nname: myapp-${ENV:-dev} services: web: image: nginx ENV=staging docker compose up -d # project: myapp-staging ENV=prod docker compose up -d # project: myapp-prod docker compose up -d # project: myapp-dev (default) Further reading Compose specification: name Compose environment variables: COMPOSE_PROJECT_NAME ","permalink":"https:\/\/lours.me\/posts\/compose-tip-053-project-name-workdir\/","summary":"<p>Every Compose stack gets a project name. It prefixes all resource names such as containers, networks, volumes. 
Understanding how it works avoids naming conflicts and makes multi-environment setups cleaner.<\/p>\n<h2 id=\"default-behavior\">Default behavior<\/h2>\n<p>By default, the project name is the directory name where your <code>compose.yml<\/code> lives:<\/p>\n<pre tabindex=\"0\"><code>my-app\/\n  compose.yml\n<\/code><\/pre><div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\">docker compose up -d\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Creates: my-app-web-1, my-app-db-1, my-app_default network<\/span>\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"setting-the-project-name\">Setting the project name<\/h2>\n<p>Three ways to set it, in order of precedence:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># 1. CLI flag (highest precedence)<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose -p myproject up\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># 2. Environment variable<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nv\">COMPOSE_PROJECT_NAME<\/span><span class=\"o\">=<\/span>myproject docker compose up\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># 3. 
In the Compose file itself<\/span>\n<\/span><\/span><\/code><\/pre><\/div><div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"c\"># compose.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">name<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myproject<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">nginx<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>The <code>name<\/code> key in the Compose file is the recommended approach, it&rsquo;s versioned with your code and everyone on the team gets the same project name.<\/p>","title":"Docker Compose Tip #53: Compose project name and working directory"},{"content":"Your development Compose file isn&rsquo;t your CI Compose file. 
A dedicated CI configuration ensures tests run against a clean, seeded database with no leftover state.\nThe development stack Take a typical full-stack project like dockersamples\/sbx-quickstart \u2014 a FastAPI backend with a Next.js frontend and PostgreSQL:\n# compose.yml services: backend: build: .\/backend ports: - &#34;8000:8000&#34; environment: DATABASE_URL: postgresql:\/\/postgres:postgres@db:5432\/devboard depends_on: db: condition: service_healthy frontend: build: .\/frontend ports: - &#34;3000:3000&#34; db: image: postgres:16 environment: POSTGRES_PASSWORD: postgres POSTGRES_DB: devboard volumes: - db-data:\/var\/lib\/postgresql\/data healthcheck: test: [&#34;CMD&#34;, &#34;pg_isready&#34;] interval: 5s retries: 5 volumes: db-data: Adding a CI override Create a compose.ci.yml that adapts the stack for testing:\n# compose.ci.yml services: db: # No persistent volume in CI \u2014 fresh database every run volumes: !override - .\/backend\/tests\/seed.sql:\/docker-entrypoint-initdb.d\/seed.sql:ro backend: # No port exposure needed \u2014 tests run inside the network ports: !reset [] environment: TESTING: &#34;true&#34; frontend: # Not needed for API tests profiles: [&#34;frontend&#34;] tests: build: .\/backend command: pytest tests\/ -v --tb=short environment: DATABASE_URL: postgresql:\/\/postgres:postgres@db:5432\/devboard depends_on: backend: condition: service_started db: condition: service_healthy Key decisions:\nDatabase seeding: mount a SQL file into \/docker-entrypoint-initdb.d\/ \u2014 Postgres runs it automatically on init No data volume: Compose merges volume lists by default, so !override replaces the base file&rsquo;s list \u2014 the db-data volume is not mounted and every run starts fresh Frontend disabled: moved to a profile so it&rsquo;s not started during API tests Test service added: runs pytest against the backend through the Compose network The seed file -- backend\/tests\/seed.sql INSERT INTO users (username, email) VALUES (&#39;testuser1&#39;, &#39;test1@example.com&#39;), (&#39;testuser2&#39;, &#39;test2@example.com&#39;); 
INSERT INTO projects (name, owner_id) VALUES (&#39;Test Project&#39;, 1); Running in CI # Start stack, wait for health, run tests, return test exit code docker compose -f compose.yml -f compose.ci.yml up \\ --build \\ --exit-code-from tests # Clean teardown \u2014 remove volumes for a fresh next run docker compose -f compose.yml -f compose.ci.yml down --volumes The --exit-code-from tests flag makes the command return the exit code of the tests service \u2014 so your CI pipeline fails if tests fail.\nUsing --wait for multi-step workflows If you need to run different test suites sequentially, use --wait (Tip #51) to start the stack first, then run tests separately:\n# Start and wait for everything to be healthy docker compose -f compose.yml -f compose.ci.yml up -d --build --wait --wait-timeout 120 # Run API tests docker compose -f compose.yml -f compose.ci.yml exec backend pytest tests\/api\/ -v # Run integration tests docker compose -f compose.yml -f compose.ci.yml exec backend pytest tests\/integration\/ -v # Clean up docker compose -f compose.yml -f compose.ci.yml down --volumes Pro tip Use --project-name to isolate parallel CI runs on the same host:\ndocker compose -p &#34;ci-${BUILD_ID}&#34; -f compose.yml -f compose.ci.yml up \\ --build --exit-code-from tests docker compose -p &#34;ci-${BUILD_ID}&#34; -f compose.yml -f compose.ci.yml down --volumes Each pipeline run gets its own networks and containers \u2014 no conflicts.\nFurther reading Compose documentation: Multiple Compose files dockersamples\/sbx-quickstart \u2014 the example project used in this post ","permalink":"https:\/\/lours.me\/posts\/compose-tip-052-ci-test-environment\/","summary":"<p>Your development Compose file isn&rsquo;t your CI Compose file. 
A dedicated CI configuration ensures tests run against a clean, seeded database with no leftover state.<\/p>\n<h2 id=\"the-development-stack\">The development stack<\/h2>\n<p>Take a typical full-stack project like <a href=\"https:\/\/github.com\/dockersamples\/sbx-quickstart\">dockersamples\/sbx-quickstart<\/a> \u2014 a FastAPI backend with a Next.js frontend and PostgreSQL:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"c\"># compose.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">backend<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">build<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/backend<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">ports<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;8000:8000&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">DATABASE_URL<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgresql:\/\/postgres:postgres@db:5432\/devboard<\/span><span class=\"w\">\n<\/span><\/span><\/span><span 
class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">depends_on<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">db<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">condition<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">service_healthy<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">frontend<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">build<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/frontend<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">ports<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;3000:3000&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">db<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgres:16<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span 
class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">POSTGRES_PASSWORD<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgres<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">POSTGRES_DB<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">devboard<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">db-data:\/var\/lib\/postgresql\/data<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">healthcheck<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">test<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"p\">[<\/span><span class=\"s2\">&#34;CMD&#34;<\/span><span class=\"p\">,<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;pg_isready&#34;<\/span><span class=\"p\">]<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">interval<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">5s<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">retries<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"m\">5<\/span><span class=\"w\">\n<\/span><\/span><\/span><span 
class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">db-data<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"adding-a-ci-override\">Adding a CI override<\/h2>\n<p>Create a <code>compose.ci.yml<\/code> that adapts the stack for testing:<\/p>","title":"Docker Compose Tip #52: Setting up a CI test environment"},{"content":"docker compose up -d starts services in the background, but it returns immediately \u2014 before services are actually ready. --wait solves this by blocking until all services are healthy.\nThe problem Without --wait, you often end up with fragile sleep-based scripts:\n# Fragile: how long is enough? docker compose up -d sleep 10 npm test The solution docker compose up --wait npm test --wait starts services in detached mode and blocks until every service with a healthcheck reports healthy. If a service fails to become healthy, the command exits with a non-zero status.\nIt requires healthchecks --wait relies on healthchecks to know when services are ready. 
Services without healthchecks are considered ready immediately after starting:\nservices: postgres: image: postgres:16 environment: POSTGRES_PASSWORD: ${DB_PASSWORD} healthcheck: test: [&#34;CMD&#34;, &#34;pg_isready&#34;] interval: 5s timeout: 3s retries: 5 redis: image: redis:7 healthcheck: test: [&#34;CMD&#34;, &#34;redis-cli&#34;, &#34;ping&#34;] interval: 5s timeout: 3s retries: 5 app: image: myapp depends_on: postgres: condition: service_healthy redis: condition: service_healthy healthcheck: test: [&#34;CMD&#34;, &#34;wget&#34;, &#34;-qO-&#34;, &#34;http:\/\/localhost:3000\/health&#34;] interval: 5s timeout: 3s retries: 5 # Blocks until postgres, redis, AND app are all healthy docker compose up --wait Setting a timeout Use --wait-timeout to avoid waiting forever if something goes wrong:\n# Wait up to 60 seconds, then fail docker compose up --wait --wait-timeout 60 This is essential in CI where you don&rsquo;t want a broken service to hang the pipeline indefinitely.\nCI pipeline example A typical CI workflow:\n# Start the full stack and wait for health docker compose up --wait --wait-timeout 120 # Run tests against the healthy stack docker compose exec app npm test # Tear down docker compose down No sleep, no polling, no flaky timing issues. The stack is guaranteed to be healthy before tests run.\nCombining with --exit-code-from For test services that should run and exit, combine with --exit-code-from:\nservices: postgres: image: postgres:16 healthcheck: test: [&#34;CMD&#34;, &#34;pg_isready&#34;] interval: 5s tests: image: myapp-tests depends_on: postgres: condition: service_healthy command: npm test # Start postgres, wait for health, run tests, return test exit code docker compose up --exit-code-from tests echo $? 
# 0 if tests passed, non-zero otherwise Further reading Docker Compose CLI: up Compose specification: healthcheck ","permalink":"https:\/\/lours.me\/posts\/compose-tip-051-up-wait\/","summary":"<p><code>docker compose up -d<\/code> starts services in the background, but it returns immediately \u2014 before services are actually ready. <code>--wait<\/code> solves this by blocking until all services are healthy.<\/p>\n<h2 id=\"the-problem\">The problem<\/h2>\n<p>Without <code>--wait<\/code>, you often end up with fragile sleep-based scripts:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Fragile: how long is enough?<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose up -d\n<\/span><\/span><span class=\"line\"><span class=\"cl\">sleep <span class=\"m\">10<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">npm <span class=\"nb\">test<\/span>\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"the-solution\">The solution<\/h2>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\">docker compose up --wait\n<\/span><\/span><span class=\"line\"><span class=\"cl\">npm <span class=\"nb\">test<\/span>\n<\/span><\/span><\/code><\/pre><\/div><p><code>--wait<\/code> starts services in detached mode and blocks until every service with a healthcheck reports healthy. If a service fails to become healthy, the command exits with a non-zero status.<\/p>","title":"Docker Compose Tip #51: docker compose up --wait for scripting and CI"},{"content":"Running ML models, video processing, or any GPU-accelerated workload? 
Compose lets you reserve GPU devices for specific services.\nBasic GPU access Give a service access to all available GPUs:\nservices: ml-training: image: pytorch\/pytorch deploy: resources: reservations: devices: - driver: nvidia count: all capabilities: [gpu] Limiting GPU count Reserve a specific number of GPUs instead of all:\nservices: inference: image: mymodel:latest deploy: resources: reservations: devices: - driver: nvidia count: 1 capabilities: [gpu] Selecting specific GPUs by ID Target specific GPU devices when you have multiple:\nservices: training: image: pytorch\/pytorch deploy: resources: reservations: devices: - driver: nvidia device_ids: [&#34;0&#34;, &#34;1&#34;] capabilities: [gpu] inference: image: mymodel:latest deploy: resources: reservations: devices: - driver: nvidia device_ids: [&#34;2&#34;] capabilities: [gpu] This lets you dedicate GPUs to different workloads on the same machine.\nCombining GPU with memory limits For ML workloads, you often want to limit both GPU and system memory:\nservices: training: image: pytorch\/pytorch deploy: resources: reservations: memory: 8G devices: - driver: nvidia count: 2 capabilities: [gpu] limits: memory: 16G A complete ML stack services: # Jupyter notebook with GPU notebook: image: jupyter\/pytorch-notebook ports: - &#34;8888:8888&#34; volumes: - .\/notebooks:\/home\/jovyan\/work - model-data:\/home\/jovyan\/models deploy: resources: reservations: devices: - driver: nvidia count: 1 capabilities: [gpu] # Model serving serving: image: mymodel-server:latest expose: - &#34;8080&#34; volumes: - model-data:\/models:ro deploy: resources: reservations: devices: - driver: nvidia count: 1 capabilities: [gpu] # Standard API gateway \u2014 no GPU needed api: image: nginx ports: - &#34;80:80&#34; depends_on: - serving volumes: model-data: Only the services that need GPUs get them \u2014 the API gateway runs as a regular container without any GPU reservation.\nPro tip Check GPU availability inside a container:\n# Verify 
GPU is accessible docker compose exec training nvidia-smi # Check CUDA version docker compose exec training nvcc --version Further reading Compose specification: devices GPU support in Docker ","permalink":"https:\/\/lours.me\/posts\/compose-tip-050-gpu-support\/","summary":"<p>Running ML models, video processing, or any GPU-accelerated workload? Compose lets you reserve GPU devices for specific services.<\/p>\n<h2 id=\"basic-gpu-access\">Basic GPU access<\/h2>\n<p>Give a service access to all available GPUs:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">ml-training<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">pytorch\/pytorch<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">deploy<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">resources<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">reservations<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">          <\/span><span class=\"nt\">devices<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">            <\/span>- 
<span class=\"nt\">driver<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">nvidia<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">              <\/span><span class=\"nt\">count<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">all<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">              <\/span><span class=\"nt\">capabilities<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"p\">[<\/span><span class=\"l\">gpu]<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"limiting-gpu-count\">Limiting GPU count<\/h2>\n<p>Reserve a specific number of GPUs instead of all:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">inference<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">mymodel:latest<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">deploy<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">resources<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">reservations<\/span><span class=\"p\">:<\/span><span 
class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">          <\/span><span class=\"nt\">devices<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">            <\/span>- <span class=\"nt\">driver<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">nvidia<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">              <\/span><span class=\"nt\">count<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"m\">1<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">              <\/span><span class=\"nt\">capabilities<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"p\">[<\/span><span class=\"l\">gpu]<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"selecting-specific-gpus-by-id\">Selecting specific GPUs by ID<\/h2>\n<p>Target specific GPU devices when you have multiple:<\/p>","title":"Docker Compose Tip #50: GPU support with deploy.resources"},{"content":"Compose can orchestrate more than just Linux containers. 
By combining platform and runtime, you can run traditional containers alongside WebAssembly (Wasm) modules in the same stack.\nThe platform key The platform key at service level tells Docker which platform the service targets:\nservices: # Regular Linux container (default) web: image: nginx platform: linux\/amd64 # WebAssembly module api: image: wasmedge\/example-wasi-http platform: wasi\/wasm runtime: io.containerd.wasmedge.v1 Mixing Linux and Wasm services Here&rsquo;s a practical stack where a traditional nginx reverse proxy sits in front of a Wasm-based API:\nservices: # Standard Linux container \u2014 reverse proxy nginx: image: nginx ports: - &#34;80:80&#34; volumes: - .\/nginx.conf:\/etc\/nginx\/nginx.conf:ro depends_on: - api # Wasm module \u2014 lightweight HTTP echo server api: image: wasmedge\/example-wasi-http platform: wasi\/wasm runtime: io.containerd.wasmedge.v1 expose: - &#34;1234&#34; # Standard Linux container \u2014 database postgres: image: postgres:16 environment: POSTGRES_PASSWORD: ${DB_PASSWORD} volumes: - db-data:\/var\/lib\/postgresql\/data volumes: db-data: The nginx and postgres services run as regular Linux containers. The api service runs as a Wasm module using the WasmEdge runtime \u2014 much lighter and with a smaller attack surface. The wasmedge\/example-wasi-http image is a simple HTTP echo server listening on port 1234.\nThe runtime key The runtime key specifies which OCI runtime executes the container. For Wasm workloads, you need a Wasm-compatible runtime:\nservices: # Using Wasmtime app-wasmtime: image: myapp:latest platform: wasi\/wasm runtime: io.containerd.wasmtime.v1 # Using WasmEdge app-wasmedge: image: myapp:latest platform: wasi\/wasm runtime: io.containerd.wasmedge.v1 # Using Spin app-spin: image: myapp:latest platform: wasi\/wasm runtime: io.containerd.spin.v2 Why mix platforms? 
Wasm services offer some advantages over traditional containers:\nCold start in milliseconds \u2014 great for event-driven workloads Smaller images \u2014 Wasm binaries are typically much smaller Sandboxed by default \u2014 no filesystem or network access unless explicitly granted Cross-platform \u2014 the same Wasm binary runs anywhere But not everything can run as Wasm today. Databases, reverse proxies, and many existing tools still need traditional Linux containers. Compose lets you mix both seamlessly.\nPrerequisites To run Wasm workloads with Docker Desktop, enable the Wasm integration in Settings &gt; Features in development &gt; Enable Wasm.\nFurther reading Compose specification: platform Compose specification: runtime Docker and Wasm ","permalink":"https:\/\/lours.me\/posts\/compose-tip-049-mixed-platforms-wasm\/","summary":"<p>Compose can orchestrate more than just Linux containers. By combining <code>platform<\/code> and <code>runtime<\/code>, you can run traditional containers alongside WebAssembly (Wasm) modules in the same stack.<\/p>\n<h2 id=\"the-platform-key\">The platform key<\/h2>\n<p>The <code>platform<\/code> key at service level tells Docker which platform the service targets:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"c\"># Regular Linux container (default)<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span 
class=\"l\">nginx<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">platform<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">linux\/amd64<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"c\"># WebAssembly module<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">api<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">wasmedge\/example-wasi-http<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">runtime<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">io.containerd.wasmedge.v1<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"mixing-linux-and-wasm-services\">Mixing Linux and Wasm services<\/h2>\n<p>Here&rsquo;s a practical stack where a traditional nginx reverse proxy sits in front of a Wasm-based API:<\/p>","title":"Docker Compose Tip #49: Mixed platforms with Linux containers and Wasm"},{"content":"When using dynamic port mapping or multiple services, it&rsquo;s not always obvious which host port maps to which container port. docker compose port tells you exactly.\nBasic usage # Which host port maps to container port 80 on the web service? 
docker compose port web 80 # Output: 0.0.0.0:8080 Why dynamic ports matter When you let Docker assign ports automatically, the host port changes on every docker compose up:\nservices: web: image: nginx ports: - &#34;80&#34; # Dynamic host port, container port 80 docker compose port web 80 # Output: 0.0.0.0:55432 (assigned dynamically) This is common in CI or when running multiple instances of the same project.\nScaled services with --index When you scale a service, each replica gets its own dynamic port. Use --index to query a specific replica:\nservices: web: image: nginx ports: - &#34;80&#34; # Dynamic host port for each replica # Scale to 3 replicas docker compose up -d --scale web=3 # Get the port for each replica docker compose port --index=1 web 80 # Output: 0.0.0.0:55001 docker compose port --index=2 web 80 # Output: 0.0.0.0:55002 docker compose port --index=3 web 80 # Output: 0.0.0.0:55003 Without --index, the command returns the port for the first replica.\nSpecifying protocol By default, docker compose port looks for TCP mappings. 
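As a concrete case where that default matters, here is a minimal sketch of a service publishing the same port over both protocols (the image name and ports are illustrative assumptions, not from this tip):

```yaml
services:
  dns:
    image: example/dns-server   # hypothetical image, for illustration only
    ports:
      - "5353:53/tcp"   # TCP mapping for container port 53
      - "5353:53/udp"   # UDP mapping for the same container port
```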
If a service exposes both TCP and UDP on the same port, use --protocol to pick the right one:\n# TCP (default) docker compose port myservice 53 # UDP docker compose port --protocol=udp myservice 53 Combining with other debug commands docker compose port is one of several useful commands for network debugging:\n# List all port mappings for all services docker compose ps --format &#34;table {{.Name}}\\t{{.Ports}}&#34; # Get port mappings as JSON for a specific service docker compose ps web --format json | jq &#39;.[].Ports&#39; # Check connectivity from one container to another docker compose exec app wget -qO- http:\/\/web:80\/health Further reading Docker Compose CLI: port Compose specification: ports ","permalink":"https:\/\/lours.me\/posts\/compose-tip-048-network-debugging-port\/","summary":"<p>When using dynamic port mapping or multiple services, it&rsquo;s not always obvious which host port maps to which container port. <code>docker compose port<\/code> tells you exactly.<\/p>\n<h2 id=\"basic-usage\">Basic usage<\/h2>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Which host port maps to container port 80 on the web service?<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose port web <span class=\"m\">80<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Output: 0.0.0.0:8080<\/span>\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"why-dynamic-ports-matter\">Why dynamic ports matter<\/h2>\n<p>When you let Docker assign ports automatically, the host port changes on every <code>docker compose up<\/code>:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span 
class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">nginx<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">ports<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;80&#34;<\/span><span class=\"w\">   <\/span><span class=\"c\"># Dynamic host port, container port 80<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\">docker compose port web <span class=\"m\">80<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Output: 0.0.0.0:55432  (assigned dynamically)<\/span>\n<\/span><\/span><\/code><\/pre><\/div><p>This is common in CI or when running multiple instances of the same project.<\/p>","title":"Docker Compose Tip #48: Network debugging with docker compose port"},{"content":"A sidecar is a helper container that runs alongside your main application, adding capabilities without modifying the application itself. 
Compose has specific features to make sidecars work seamlessly.\nShared network namespace with network_mode Use network_mode: service:&lt;name&gt; to share the network stack with another container; the two then share the same IP address and can communicate over localhost:\nservices: app: image: myapp # No ports needed \u2014 proxy handles public traffic proxy: image: nginx network_mode: service:app ports: - &#34;443:443&#34; volumes: - .\/nginx.conf:\/etc\/nginx\/nginx.conf:ro - .\/certs:\/etc\/nginx\/certs:ro Since both containers share the same network namespace, nginx can proxy to localhost:3000 without any DNS resolution. The app doesn&rsquo;t need to know about TLS at all; the proxy sidecar handles it.\nSharing volumes with volumes_from Use volumes_from to mount all volumes from another service; there is no need to declare shared named volumes manually:\nservices: app: image: myapp volumes: - \/var\/log\/app - \/app\/data log-forwarder: image: fluent\/fluent-bit volumes_from: - app:ro depends_on: - app The log-forwarder gets read-only access to all of app&rsquo;s volumes. 
This is simpler than declaring named volumes when the sidecar just needs to see everything the main container writes.\nCombining both for a pod-like pattern Use network_mode and volumes_from together to create a Kubernetes pod-like setup where containers are tightly coupled:\nservices: app: image: myapp ports: - &#34;8080:8080&#34; volumes: - \/var\/log\/app log-shipper: image: fluent\/fluent-bit network_mode: service:app volumes_from: - app:ro depends_on: - app The log shipper shares the app&rsquo;s network (can scrape localhost metrics endpoints) and filesystem (can read log files) \u2014 just like containers in the same Kubernetes pod.\nSidecar with depends_on and healthcheck Ensure sidecars wait for the main service to be ready:\nservices: postgres: image: postgres:16 environment: POSTGRES_PASSWORD: ${DB_PASSWORD} volumes: - db-data:\/var\/lib\/postgresql\/data healthcheck: test: [&#34;CMD&#34;, &#34;pg_isready&#34;] interval: 10s postgres-exporter: image: prometheuscommunity\/postgres-exporter environment: DATA_SOURCE_NAME: &#34;postgresql:\/\/postgres:${DB_PASSWORD}@postgres:5432\/postgres?sslmode=disable&#34; ports: - &#34;9187:9187&#34; depends_on: postgres: condition: service_healthy volumes: db-data: When to use sidecars Sidecars are a good fit when:\nYou can&rsquo;t or don&rsquo;t want to modify the main application image The concern (logging, metrics, debugging) is orthogonal to the app You want to reuse the same sidecar across multiple projects You need to share network or filesystem context tightly between containers Further reading Compose specification: network_mode Compose specification: volumes_from Compose specification: depends_on ","permalink":"https:\/\/lours.me\/posts\/compose-tip-047-sidecar-patterns\/","summary":"<p>A sidecar is a helper container that runs alongside your main application, adding capabilities without modifying the application itself. 
Compose has specific features to make sidecars work seamlessly.<\/p>\n<h2 id=\"shared-network-namespace-with-network_mode\">Shared network namespace with network_mode<\/h2>\n<p>Use <code>network_mode: service:&lt;name&gt;<\/code> to share the network stack with another container, they share the same IP address and can communicate over <code>localhost<\/code>:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"c\"># No ports needed \u2014 proxy handles public traffic<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">proxy<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">nginx<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">network_mode<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">service:app<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span 
class=\"w\">    <\/span><span class=\"nt\">ports<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;443:443&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">.\/nginx.conf:\/etc\/nginx\/nginx.conf:ro<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">.\/certs:\/etc\/nginx\/certs:ro<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Since both containers share the same network namespace, nginx can proxy to <code>localhost:3000<\/code> without any DNS resolution. The app doesn&rsquo;t need to know about TLS at all, the proxy sidecar handles it.<\/p>","title":"Docker Compose Tip #47: Sidecar container patterns"},{"content":"Build args and environment variables both pass values to your containers, but they work at different times and serve different purposes. Mixing them up is a common source of confusion.\nBuild args: build-time only Build args are available during docker build and are not present in the running container:\nservices: app: build: context: . 
args: NODE_VERSION: &#34;20&#34; APP_VERSION: &#34;2.1.0&#34; In the Dockerfile, they&rsquo;re consumed with ARG:\nARG NODE_VERSION=20 FROM node:${NODE_VERSION}-slim ARG APP_VERSION RUN echo &#34;Building version ${APP_VERSION}&#34; After the build, these values are gone \u2014 they don&rsquo;t exist in the running container.\nEnvironment variables: runtime only Environment variables are set in the running container but are not available during the build:\nservices: app: image: myapp environment: DATABASE_URL: postgres:\/\/db:5432\/myapp LOG_LEVEL: info NODE_ENV: production When you need both Sometimes you need a value at both build-time and runtime. Pass it as a build arg, then convert it to an environment variable in the Dockerfile:\nARG NODE_ENV=production ENV NODE_ENV=${NODE_ENV} services: app: build: context: . args: NODE_ENV: production environment: NODE_ENV: production Build args from the host environment Build args can reference host environment variables, just like other Compose values:\nservices: app: build: context: . args: # From host environment or .env file GIT_COMMIT: ${GIT_COMMIT:-unknown} BUILD_DATE: ${BUILD_DATE:-} Where does interpolation fit in? There&rsquo;s a third level that&rsquo;s easy to confuse with the other two: Compose interpolation. When you write ${VAR} in a Compose file, Compose resolves it from your host environment or .env file at parse time, before any build or container starts:\nservices: app: image: myapp:${TAG:-latest} # Interpolation: resolved before anything runs build: args: VERSION: ${VERSION} # Interpolation \u2192 then passed as build arg environment: DATABASE_URL: ${DB_URL} # Interpolation \u2192 then passed as container env var STATIC_VALUE: &#34;hello&#34; # No interpolation, just a runtime value The subtle part: DATABASE_URL: ${DB_URL} involves both interpolation and a container env var. Compose resolves ${DB_URL} from the host, then passes the result to the container. 
The container never sees the ${DB_URL} syntax \u2014 only the resolved value.\nThis means ${HOME} in environment: uses the host&rsquo;s $HOME, not the container&rsquo;s. Use $$HOME if you want the literal string $HOME passed to the container for shell expansion at runtime.\nFor more on interpolation syntax and precedence, see Tip #42.\nSecurity considerations Build args are visible in the image history:\ndocker history myapp Never pass secrets as build args \u2014 they&rsquo;ll be baked into image layers. Use secrets or mounted files instead:\nservices: app: build: context: . secrets: - npm_token environment: # Runtime secrets are fine here API_KEY: ${API_KEY} secrets: npm_token: file: .\/secrets\/npm_token.txt Quick reference\n| | Build args | Environment variables |\n| --- | --- | --- |\n| When | Build time | Runtime |\n| Dockerfile | ARG | ENV |\n| Compose | build.args | environment |\n| In container | No | Yes |\n| In image layers | Yes (visible!) | No |\n| Use for | Base image version, build config | App config, credentials |\nFurther reading Compose specification: build args Compose specification: environment Dockerfile ARG vs ENV ","permalink":"https:\/\/lours.me\/posts\/compose-tip-046-build-args-vs-env\/","summary":"<p>Build args and environment variables both pass values to your containers, but they work at different times and serve different purposes. 
Mixing them up is a common source of confusion.<\/p>\n<h2 id=\"build-args-build-time-only\">Build args: build-time only<\/h2>\n<p>Build args are available during <code>docker build<\/code> and are not present in the running container:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">build<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">context<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">args<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">NODE_VERSION<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;20&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">APP_VERSION<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;2.1.0&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>In the Dockerfile, they&rsquo;re consumed with <code>ARG<\/code>:<\/p>","title":"Docker Compose Tip #46: Build args vs environment variables"},{"content":"Multi-stage Dockerfiles let you define multiple build stages. 
With the target option in Compose, you can choose which stage to build \u2014 giving you different images from the same Dockerfile.\nA multi-stage Dockerfile # Stage 1: dependencies FROM node:20-slim AS deps WORKDIR \/app COPY package*.json .\/ RUN npm ci # Stage 2: development (with dev dependencies and tools) FROM deps AS dev RUN npm install --include=dev COPY . . CMD [&#34;npm&#34;, &#34;run&#34;, &#34;dev&#34;] # Stage 3: build FROM deps AS build COPY . . RUN npm run build # Stage 4: production (minimal) FROM node:20-slim AS production WORKDIR \/app COPY --from=build \/app\/dist .\/dist COPY --from=deps \/app\/node_modules .\/node_modules CMD [&#34;node&#34;, &#34;dist\/index.js&#34;] Targeting stages in Compose Use target to pick which stage to build:\nservices: app: build: context: . target: dev volumes: - .:\/app ports: - &#34;3000:3000&#34; Different targets per environment This is where target really shines \u2014 use override files to build different stages:\n# compose.yml services: app: build: context: . target: production ports: - &#34;3000:3000&#34; # compose.override.yml (local dev) services: app: build: target: dev volumes: - .:\/app Running docker compose up locally builds the dev stage with source mounting. In CI or production, docker compose -f compose.yml up builds the production stage.\nMultiple services from one Dockerfile Use target to build different services from the same Dockerfile:\nservices: api: build: context: . target: production command: [&#34;node&#34;, &#34;dist\/api.js&#34;] worker: build: context: . target: production command: [&#34;node&#34;, &#34;dist\/worker.js&#34;] tests: build: context: . target: dev command: [&#34;npm&#34;, &#34;test&#34;] profiles: [&#34;test&#34;] The api and worker share the same production image, while tests uses the dev stage with test dependencies included.\nCombining target with build args Use build args to further customize stages:\nservices: app: build: context: . 
target: production args: NODE_ENV: production VERSION: ${VERSION:-dev} Pro tip Use docker compose build --progress=plain to see which stages are built and which are cached:\n# See full build output including cache hits docker compose build --progress=plain # Build a specific service docker compose build --progress=plain app Stages that aren&rsquo;t needed for the target are skipped entirely \u2014 multi-stage builds are efficient by design.\nFurther reading Compose specification: build target Multi-stage builds ","permalink":"https:\/\/lours.me\/posts\/compose-tip-045-multi-stage-target\/","summary":"<p>Multi-stage Dockerfiles let you define multiple build stages. With the <code>target<\/code> option in Compose, you can choose which stage to build \u2014 giving you different images from the same Dockerfile.<\/p>\n<h2 id=\"a-multi-stage-dockerfile\">A multi-stage Dockerfile<\/h2>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-dockerfile\" data-lang=\"dockerfile\"><span class=\"line\"><span class=\"cl\"><span class=\"c\"># Stage 1: dependencies<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">FROM<\/span><span class=\"w\"> <\/span><span class=\"s\">node:20-slim<\/span><span class=\"w\"> <\/span><span class=\"k\">AS<\/span><span class=\"w\"> <\/span><span class=\"s\">deps<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">WORKDIR<\/span><span class=\"w\"> <\/span><span class=\"s\">\/app<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">COPY<\/span> package*.json .\/<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">RUN<\/span> npm ci<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span 
class=\"cl\"><span class=\"c\"># Stage 2: development (with dev dependencies and tools)<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">FROM<\/span><span class=\"w\"> <\/span><span class=\"s\">deps<\/span><span class=\"w\"> <\/span><span class=\"k\">AS<\/span><span class=\"w\"> <\/span><span class=\"s\">dev<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">RUN<\/span> npm install --include<span class=\"o\">=<\/span>dev<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">COPY<\/span> . .<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">CMD<\/span> <span class=\"p\">[<\/span><span class=\"s2\">&#34;npm&#34;<\/span><span class=\"p\">,<\/span> <span class=\"s2\">&#34;run&#34;<\/span><span class=\"p\">,<\/span> <span class=\"s2\">&#34;dev&#34;<\/span><span class=\"p\">]<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c\"># Stage 3: build<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">FROM<\/span><span class=\"w\"> <\/span><span class=\"s\">deps<\/span><span class=\"w\"> <\/span><span class=\"k\">AS<\/span><span class=\"w\"> <\/span><span class=\"s\">build<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">COPY<\/span> . 
.<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">RUN<\/span> npm run build<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c\"># Stage 4: production (minimal)<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">FROM<\/span><span class=\"w\"> <\/span><span class=\"s\">node:20-slim<\/span><span class=\"w\"> <\/span><span class=\"k\">AS<\/span><span class=\"w\"> <\/span><span class=\"s\">production<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">WORKDIR<\/span><span class=\"w\"> <\/span><span class=\"s\">\/app<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">COPY<\/span> --from<span class=\"o\">=<\/span>build \/app\/dist .\/dist<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">COPY<\/span> --from<span class=\"o\">=<\/span>deps \/app\/node_modules .\/node_modules<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">CMD<\/span> <span class=\"p\">[<\/span><span class=\"s2\">&#34;node&#34;<\/span><span class=\"p\">,<\/span> <span class=\"s2\">&#34;dist\/index.js&#34;<\/span><span class=\"p\">]<\/span><span class=\"err\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"targeting-stages-in-compose\">Targeting stages in Compose<\/h2>\n<p>Use <code>target<\/code> to pick which stage to build:<\/p>","title":"Docker Compose Tip #45: Multi-stage builds with target"},{"content":"When you run docker compose down or docker compose stop, Compose sends a signal to your containers. 
Understanding which signal is sent and how your application handles it is key to graceful shutdowns.\nDefault behavior By default, Compose sends SIGTERM to the main process (PID 1), waits 10 seconds, then sends SIGKILL:\nservices: app: image: myapp # Default: SIGTERM, 10s grace period, then SIGKILL Changing the stop signal Some applications expect a different signal. Nginx, for example, uses SIGQUIT for graceful shutdown:\nservices: nginx: image: nginx stop_signal: SIGQUIT stop_grace_period: 30s Common signals and their typical use:\nservices: # SIGTERM (default) - most applications app: image: myapp stop_signal: SIGTERM # SIGQUIT - nginx graceful shutdown web: image: nginx stop_signal: SIGQUIT # SIGINT - same as Ctrl+C, useful for dev tools dev: image: node:20 stop_signal: SIGINT # SIGUSR1 - custom reload\/shutdown in some apps custom: image: custom-app stop_signal: SIGUSR1 Adjusting the grace period The stop_grace_period controls how long Compose waits before sending SIGKILL:\nservices: # Quick shutdown for stateless services cache: image: redis stop_grace_period: 5s # More time for database connections to drain api: image: myapp-api stop_grace_period: 30s # Long-running jobs need more time worker: image: myapp-worker stop_grace_period: 120s The PID 1 problem If your container runs a shell script as entrypoint, the shell (PID 1) may not forward signals to your application:\n# Bad: shell doesn&#39;t forward signals ENTRYPOINT \/start.sh # Good: exec replaces shell with your process ENTRYPOINT [&#34;\/start.sh&#34;] Or use init: true in Compose to add a proper init process that handles signal forwarding:\nservices: app: image: myapp init: true # Adds tini as PID 1 stop_signal: SIGTERM stop_grace_period: 30s The init process (tini) becomes PID 1, properly forwards signals to your application, and reaps zombie processes.\nCombining with lifecycle hooks For maximum control, combine stop_signal with pre_stop hooks (Tip #41):\nservices: api: image: myapp-api pre_stop: - 
command: \/bin\/sh -c &#34;curl -sf -X POST http:\/\/localhost:8080\/drain&#34; stop_signal: SIGTERM stop_grace_period: 30s The sequence is: pre_stop runs first, then stop_signal is sent, then stop_grace_period countdown, then SIGKILL if still running.\nPro tip Use docker compose stop -t to override the grace period at runtime:\n# Quick stop with 5 second timeout docker compose stop -t 5 # Patient stop for long-running tasks docker compose stop -t 300 Further reading Compose specification: stop_signal Compose specification: stop_grace_period Compose specification: init ","permalink":"https:\/\/lours.me\/posts\/compose-tip-044-signal-handling\/","summary":"<p>When you run <code>docker compose down<\/code> or <code>docker compose stop<\/code>, Compose sends a signal to your containers. Understanding which signal is sent and how your application handles it is key to graceful shutdowns.<\/p>\n<h2 id=\"default-behavior\">Default behavior<\/h2>\n<p>By default, Compose sends <code>SIGTERM<\/code> to the main process (PID 1), waits 10 seconds, then sends <code>SIGKILL<\/code>:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"c\"># Default: SIGTERM, 10s grace period, then SIGKILL<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 
id=\"changing-the-stop-signal\">Changing the stop signal<\/h2>\n<p>Some applications expect a different signal. Nginx, for example, uses <code>SIGQUIT<\/code> for graceful shutdown:<\/p>","title":"Docker Compose Tip #44: Signal handling in containers"},{"content":"Making a container&rsquo;s root filesystem read-only is one of the simplest and most effective hardening measures. If an attacker gets in, they can&rsquo;t modify binaries or drop malicious files.\nBasic usage services: app: image: myapp read_only: true That&rsquo;s it. The container&rsquo;s filesystem is now immutable. But most applications need to write somewhere \u2014 logs, temp files, caches. That&rsquo;s where tmpfs comes in.\nRead-only with tmpfs for writable directories Combine read_only with tmpfs to allow writes only where needed:\nservices: app: image: myapp read_only: true tmpfs: - \/tmp:size=50M - \/var\/run:size=10M A web server typically needs a few writable paths:\nservices: nginx: image: nginx read_only: true tmpfs: - \/tmp - \/var\/cache\/nginx - \/var\/run ports: - &#34;80:80&#34; volumes: - .\/nginx.conf:\/etc\/nginx\/nginx.conf:ro Common writable paths by application Different applications need different writable directories:\nservices: # Node.js application node-app: image: node:20-slim read_only: true tmpfs: - \/tmp # Python application python-app: image: python:3.12-slim read_only: true tmpfs: - \/tmp - \/root\/.cache # PostgreSQL (data on volume, rest read-only) postgres: image: postgres:16 read_only: true tmpfs: - \/tmp - \/var\/run\/postgresql volumes: - db-data:\/var\/lib\/postgresql\/data volumes: db-data: Full hardening pattern Combine read_only with other security options for defense in depth:\nservices: web: image: dhi.io\/nginx:1.28-alpine3.23 # Docker Hardened Image read_only: true tmpfs: - \/tmp - \/var\/cache\/nginx - \/var\/run cap_drop: - ALL security_opt: - no-new-privileges:true ports: - &#34;8080:8080&#34; volumes: - .\/nginx.conf:\/etc\/nginx\/nginx.conf:ro This 
combines Docker Hardened Images (minimal attack surface, no shell, fewer CVEs), read-only filesystem, capability dropping (Tip #29), and an unprivileged image that runs as non-root by default \u2014 multiple layers of protection.\nDebugging read-only issues When switching to read-only, you might see errors like:\nRead-only file system: &#39;\/var\/log\/app.log&#39; Use docker compose exec to find which paths need to be writable:\n# Check which files the process tries to write docker compose exec app find \/ -writable 2&gt;\/dev\/null # Or run with read_only disabled temporarily and monitor writes docker compose exec app sh -c &#34;inotifywait -mr \/ 2&gt;&amp;1 | grep -i &#39;create\\|modify&#39;&#34; Then add only those paths as tmpfs mounts.\nFurther reading Compose specification: read_only Docker security best practices ","permalink":"https:\/\/lours.me\/posts\/compose-tip-043-read-only-rootfs\/","summary":"<p>Making a container&rsquo;s root filesystem read-only is one of the simplest and most effective hardening measures. 
If an attacker gets in, they can&rsquo;t modify binaries or drop malicious files.<\/p>\n<h2 id=\"basic-usage\">Basic usage<\/h2>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">read_only<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"kc\">true<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>That&rsquo;s it. The container&rsquo;s filesystem is now immutable. But most applications need to write <em>somewhere<\/em> \u2014 logs, temp files, caches. That&rsquo;s where <code>tmpfs<\/code> comes in.<\/p>\n<h2 id=\"read-only-with-tmpfs-for-writable-directories\">Read-only with tmpfs for writable directories<\/h2>\n<p>Combine <code>read_only<\/code> with <code>tmpfs<\/code> to allow writes only where needed:<\/p>","title":"Docker Compose Tip #43: Read-only root filesystems"},{"content":"Docker Compose supports shell-style variable substitution in your Compose files. 
Combined with defaults and error messages, it makes your configurations flexible and safe.\nBasic substitution Reference environment variables or .env file values:\nservices: app: image: myapp:${TAG} environment: DATABASE_URL: postgres:\/\/${DB_USER}:${DB_PASS}@db\/${DB_NAME} # .env TAG=2.1.0 DB_USER=admin DB_PASS=secret DB_NAME=myapp Default values Provide fallback values when a variable is unset or empty:\nservices: app: image: myapp:${TAG:-latest} environment: LOG_LEVEL: ${LOG_LEVEL:-info} PORT: ${PORT:-3000} deploy: replicas: ${REPLICAS:-1} Two syntaxes with a subtle difference:\n# ${VAR:-default} - use default if VAR is unset OR empty image: myapp:${TAG:-latest} # ${VAR-default} - use default only if VAR is unset (empty string is kept) image: myapp:${TAG-latest} Most of the time, :- (with colon) is what you want.\nRequired values with error messages Force a variable to be set, with a clear error if it&rsquo;s missing:\nservices: app: image: myapp:${TAG:?TAG must be set to deploy} environment: DATABASE_URL: ${DATABASE_URL:?Missing DATABASE_URL - check your .env file} API_KEY: ${API_KEY:?API_KEY is required} Running without TAG set will produce:\ninvalid interpolation format for services.app.image. 
required variable TAG is missing a value: TAG must be set to deploy Combining patterns Use defaults for development, require values in production:\n# compose.yml - development defaults services: app: image: myapp:${TAG:-latest} environment: DB_HOST: ${DB_HOST:-localhost} DB_PORT: ${DB_PORT:-5432} LOG_LEVEL: ${LOG_LEVEL:-debug} # compose.prod.yml - strict requirements services: app: image: myapp:${TAG:?TAG is required for production} environment: DB_HOST: ${DB_HOST:?DB_HOST must be set} DB_PORT: ${DB_PORT:-5432} LOG_LEVEL: ${LOG_LEVEL:-warn} Escaping dollar signs If you need a literal $ in your Compose file, use $$:\nservices: app: image: myapp environment: # Literal dollar sign (not interpolated) PRICE: &#34;$$9.99&#34; command: \/bin\/sh -c &#34;echo $$HOME&#34; Variable substitution scope Variables are substituted in the Compose file itself, not inside containers. These are different:\nservices: app: image: myapp environment: # Substituted by Compose at parse time (from .env or host env) DB_HOST: ${DB_HOST:-localhost} # NOT substituted by Compose - passed as-is to the container PATH: &#34;\/usr\/local\/bin:\/usr\/bin&#34; Pro tip Use docker compose config to see the resolved values after substitution:\n# See what Compose resolves docker compose config # Check a specific service docker compose config --format json | jq &#39;.services.app.environment&#39; This is especially useful for debugging when you&rsquo;re not sure which .env file or environment variable is being picked up.\nWhen using variables from multiple sources (.env file, host environment, environment key, env_file), keep in mind that Compose follows a specific precedence order \u2014 host environment variables always override .env file values, for example. 
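The operator semantics above (`:-` vs `-` vs `:?`) mirror POSIX shell parameter expansion, and the behavior can be modeled in a few lines. This is an illustrative sketch only — `interpolate` and its regex are our own, not Compose's actual implementation:

```python
import re

# Illustrative model (not Compose's code) of the interpolation operators:
# ${VAR}, ${VAR:-def}, ${VAR-def}, and ${VAR:?err}.
_PATTERN = re.compile(r"\$\{(\w+)(?::-([^}]*)|-([^}]*)|:\?([^}]*))?\}")

def interpolate(text, env):
    def replace(match):
        name, colon_default, default, error = match.groups()
        value = env.get(name)
        if colon_default is not None:   # ${VAR:-def}: default if unset OR empty
            return value if value else colon_default
        if default is not None:         # ${VAR-def}: default only if unset
            return value if value is not None else default
        if error is not None:           # ${VAR:?err}: fail if unset or empty
            if not value:
                raise ValueError(f"required variable {name} is missing a value: {error}")
            return value
        return value or ""              # plain ${VAR}: empty string if unset
    return _PATTERN.sub(replace, text)

print(interpolate("myapp:${TAG:-latest}", {}))          # default applies when unset
print(interpolate("myapp:${TAG:-latest}", {"TAG": ""})) # :- also replaces empty
print(interpolate("myapp:${TAG-latest}", {"TAG": ""}))  # - keeps the empty string
```

Running the three calls makes the unset/empty distinction concrete: the first two resolve to the default, the third keeps the empty value.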
Check the documentation for the full priority chain.\nFurther reading Compose specification: interpolation Compose environment variables Environment variables precedence ","permalink":"https:\/\/lours.me\/posts\/compose-tip-042-variable-substitution\/","summary":"<p>Docker Compose supports shell-style variable substitution in your Compose files. Combined with defaults and error messages, it makes your configurations flexible and safe.<\/p>\n<h2 id=\"basic-substitution\">Basic substitution<\/h2>\n<p>Reference environment variables or <code>.env<\/code> file values:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:${TAG}<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">DATABASE_URL<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgres:\/\/${DB_USER}:${DB_PASS}@db\/${DB_NAME}<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># .env<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span 
class=\"nv\">TAG<\/span><span class=\"o\">=<\/span>2.1.0\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nv\">DB_USER<\/span><span class=\"o\">=<\/span>admin\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nv\">DB_PASS<\/span><span class=\"o\">=<\/span>secret\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nv\">DB_NAME<\/span><span class=\"o\">=<\/span>myapp\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"default-values\">Default values<\/h2>\n<p>Provide fallback values when a variable is unset or empty:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:${TAG:-latest}<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">LOG_LEVEL<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">${LOG_LEVEL:-info}<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">PORT<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">${PORT:-3000}<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    
<\/span><span class=\"nt\">deploy<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">replicas<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">${REPLICAS:-1}<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Two syntaxes with a subtle difference:<\/p>","title":"Docker Compose Tip #42: Variable substitution and defaults"},{"content":"Docker Compose supports lifecycle hooks that let you run commands at specific points in a container&rsquo;s lifecycle, right after it starts and just before it stops.\npost_start hook Run a command inside the container right after it starts \u2014 perfect for initialization tasks like database migrations:\nservices: api: image: myapp-api post_start: - command: \/app\/migrate.sh depends_on: postgres: condition: service_healthy Or warming up caches so the service is ready to handle traffic:\nservices: web: image: myapp-web post_start: - command: \/bin\/sh -c &#34;curl -sf http:\/\/localhost:3000\/warmup&#34; pre_stop hook Run a command just before the container receives the stop signal \u2014 useful for draining connections gracefully:\nservices: api: image: myapp-api pre_stop: - command: \/bin\/sh -c &#34;curl -sf -X POST http:\/\/localhost:8080\/drain&#34; stop_grace_period: 30s Or notifying other services that this one is going away:\nservices: worker: image: myapp-worker pre_stop: - command: \/bin\/sh -c &#34;curl -sf -X POST http:\/\/api:8080\/workers\/deregister?id=$HOSTNAME&#34; stop_grace_period: 15s Combining both hooks services: api: image: myapp-api post_start: - command: \/bin\/sh -c &#34;curl -sf http:\/\/localhost:8080\/ready || exit 1&#34; pre_stop: - command: \/bin\/sh -c &#34;curl -sf -X POST http:\/\/localhost:8080\/drain&#34; stop_grace_period: 30s healthcheck: test: [&#34;CMD&#34;, &#34;curl&#34;, &#34;-f&#34;, 
&#34;http:\/\/localhost:8080\/health&#34;] interval: 10s Hook behavior A few things to keep in mind:\npost_start runs inside the container, not on the host pre_stop runs before the stop signal is sent to the main process If a post_start hook fails, the container is marked as unhealthy Hooks run sequentially if you define multiple commands: services: app: image: myapp post_start: - command: \/app\/migrate.sh - command: \/app\/seed.sh - command: \/app\/warm-cache.sh Hooks vs entrypoint Don&rsquo;t confuse hooks with entrypoint or command:\nentrypoint\/command: the main process \u2014 it IS the container post_start: runs alongside the main process, after the container starts pre_stop: runs before the main process receives the stop signal Hooks are for side tasks; the main process should still be defined by the image.\nFurther reading Compose specification: post_start Compose specification: pre_stop ","permalink":"https:\/\/lours.me\/posts\/compose-tip-041-lifecycle-hooks\/","summary":"<p>Docker Compose supports lifecycle hooks that let you run commands at specific points in a container&rsquo;s lifecycle, right after it starts and just before it stops.<\/p>\n<h2 id=\"post_start-hook\">post_start hook<\/h2>\n<p>Run a command inside the container right after it starts \u2014 perfect for initialization tasks like database migrations:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">api<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp-api<\/span><span 
class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">post_start<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"nt\">command<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">\/app\/migrate.sh<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">depends_on<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">postgres<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">condition<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">service_healthy<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Or warming up caches so the service is ready to handle traffic:<\/p>","title":"Docker Compose Tip #41: Container lifecycle hooks"},{"content":"Labels are key-value metadata attached to containers. 
They cost nothing at runtime but unlock powerful filtering, organization, and tool integrations.\nAdding labels to services services: api: image: myapp-api labels: com.example.team: &#34;backend&#34; com.example.env: &#34;production&#34; com.example.version: &#34;2.1.0&#34; worker: image: myapp-worker labels: com.example.team: &#34;backend&#34; com.example.env: &#34;production&#34; com.example.role: &#34;async-processing&#34; You can also use the list syntax:\nservices: api: image: myapp-api labels: - &#34;com.example.team=backend&#34; - &#34;com.example.env=production&#34; Filtering with labels Labels become powerful with docker compose ps:\n# Find all containers from the backend team docker compose ps --filter &#34;label=com.example.team=backend&#34; # Find all production containers docker compose ps --filter &#34;label=com.example.env=production&#34; # Combine filters docker compose ps --filter &#34;label=com.example.team=backend&#34; --filter &#34;label=com.example.env=production&#34; Traefik integration Traefik uses labels for automatic routing configuration:\nservices: web: image: myapp labels: traefik.enable: &#34;true&#34; traefik.http.routers.web.rule: &#34;Host(`app.example.com`)&#34; traefik.http.routers.web.tls: &#34;true&#34; traefik.http.services.web.loadbalancer.server.port: &#34;3000&#34; traefik: image: traefik:v3 ports: - &#34;80:80&#34; - &#34;443:443&#34; volumes: - \/var\/run\/docker.sock:\/var\/run\/docker.sock:ro Prometheus integration Prometheus can discover targets using container labels:\nservices: api: image: myapp-api labels: prometheus.scrape: &#34;true&#34; prometheus.port: &#34;9090&#34; prometheus.path: &#34;\/metrics&#34; worker: image: myapp-worker labels: prometheus.scrape: &#34;true&#34; prometheus.port: &#34;9091&#34; Organization patterns Use a consistent naming convention across your team:\nservices: api: image: myapp-api labels: # Ownership com.example.team: &#34;platform&#34; com.example.contact: 
&#34;platform-team@example.com&#34; # Environment com.example.env: &#34;${ENV:-dev}&#34; com.example.version: &#34;${VERSION:-latest}&#34; # Operational com.example.backup: &#34;true&#34; com.example.log-level: &#34;info&#34; Pro tip Labels on containers are different from labels on networks and volumes:\nservices: app: image: myapp labels: app.tier: &#34;frontend&#34; # Container label networks: frontend: labels: network.purpose: &#34;public-facing&#34; # Network label volumes: data: labels: volume.backup: &#34;daily&#34; # Volume label Each resource type has its own labels, and you can filter each independently.\nFurther reading Compose specification: labels Docker object labels ","permalink":"https:\/\/lours.me\/posts\/compose-tip-040-labels\/","summary":"<p>Labels are key-value metadata attached to containers. They cost nothing at runtime but unlock powerful filtering, organization, and tool integrations.<\/p>\n<h2 id=\"adding-labels-to-services\">Adding labels to services<\/h2>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">api<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp-api<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">labels<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">com.example.team<\/span><span class=\"p\">:<\/span><span class=\"w\"> 
<\/span><span class=\"s2\">&#34;backend&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">com.example.env<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;production&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">com.example.version<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;2.1.0&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">worker<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp-worker<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">labels<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">com.example.team<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;backend&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">com.example.env<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;production&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">com.example.role<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span 
class=\"s2\">&#34;async-processing&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>You can also use the list syntax:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">api<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp-api<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">labels<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;com.example.team=backend&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;com.example.env=production&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"filtering-with-labels\">Filtering with labels<\/h2>\n<p>Labels become powerful with <code>docker compose ps<\/code>:<\/p>","title":"Docker Compose Tip #40: Using labels for service organization and monitoring"},{"content":"The real power comes from using all three mechanisms together, each doing what it does best.\nThe scenario A team maintaining a web application with:\nShared infrastructure (database, monitoring) reused across projects Common service configuration (logging, labels) applied to all services Different settings for local development vs CI vs production Project structure my-project\/ 
\u251c\u2500\u2500 compose.yml # Main entry point \u251c\u2500\u2500 compose.override.yml # Local dev overrides \u251c\u2500\u2500 compose.ci.yml # CI-specific overrides \u251c\u2500\u2500 base\/ \u2502 \u2514\u2500\u2500 service-base.yml # Shared service config (extends) \u2514\u2500\u2500 infra\/ \u251c\u2500\u2500 database.yml # Postgres stack (include) \u2514\u2500\u2500 monitoring.yml # Prometheus + Grafana (include) Step 1: Shared service config with extends Define common configuration once:\n# base\/service-base.yml services: base: restart: unless-stopped logging: driver: json-file options: max-size: &#34;10m&#34; max-file: &#34;3&#34; labels: com.company.project: ${COMPOSE_PROJECT_NAME} com.company.env: ${ENV:-dev} Step 2: Modular infrastructure with include Self-contained stacks that can be reused across projects:\n# infra\/database.yml services: postgres: image: postgres:16 environment: POSTGRES_PASSWORD: ${DB_PASSWORD} volumes: - db-data:\/var\/lib\/postgresql\/data healthcheck: test: [&#34;CMD&#34;, &#34;pg_isready&#34;] interval: 10s volumes: db-data: Step 3: Main compose file Bring it all together:\n# compose.yml include: - path: .\/infra\/database.yml - path: .\/infra\/monitoring.yml services: api: extends: file: .\/base\/service-base.yml service: base image: myapp-api:${TAG:-latest} environment: DATABASE_URL: postgres:\/\/postgres:${DB_PASSWORD}@postgres\/myapp depends_on: postgres: condition: service_healthy worker: extends: file: .\/base\/service-base.yml service: base image: myapp-worker:${TAG:-latest} environment: DATABASE_URL: postgres:\/\/postgres:${DB_PASSWORD}@postgres\/myapp Step 4: Environment-specific overrides # compose.override.yml (local dev - auto-loaded) services: api: build: .\/api # Build locally volumes: - .\/api:\/app # Hot reload environment: DEBUG: &#34;true&#34; # compose.ci.yml (CI - explicit: docker compose -f compose.yml -f compose.ci.yml up) services: api: image: myapp-api:${CI_COMMIT_SHA} worker: image: 
myapp-worker:${CI_COMMIT_SHA} The result Each mechanism handles its concern independently:\ninclude: infrastructure stacks are isolated and reusable extends: service config is DRY and consistent Override files: environment differences are explicit and targeted Changing the logging config? Update base\/service-base.yml once. Swapping the database stack? Replace one include line. Adjusting dev ports? Edit compose.override.yml without touching anything else.\nFurther reading Compose documentation: Multiple Compose files Watch: Managing multiple Compose files - a complete walkthrough of these patterns in practice ","permalink":"https:\/\/lours.me\/posts\/compose-tip-039-combining-include-extends-overrides\/","summary":"<p>The real power comes from using all three mechanisms together, each doing what it does best.<\/p>\n<h2 id=\"the-scenario\">The scenario<\/h2>\n<p>A team maintaining a web application with:<\/p>\n<ul>\n<li>Shared infrastructure (database, monitoring) reused across projects<\/li>\n<li>Common service configuration (logging, labels) applied to all services<\/li>\n<li>Different settings for local development vs CI vs production<\/li>\n<\/ul>\n<h2 id=\"project-structure\">Project structure<\/h2>\n<pre tabindex=\"0\"><code>my-project\/\n\u251c\u2500\u2500 compose.yml              # Main entry point\n\u251c\u2500\u2500 compose.override.yml     # Local dev overrides\n\u251c\u2500\u2500 compose.ci.yml           # CI-specific overrides\n\u251c\u2500\u2500 base\/\n\u2502   \u2514\u2500\u2500 service-base.yml     # Shared service config (extends)\n\u2514\u2500\u2500 infra\/\n    \u251c\u2500\u2500 database.yml         # Postgres stack (include)\n    \u2514\u2500\u2500 monitoring.yml       # Prometheus + Grafana (include)\n<\/code><\/pre><h2 id=\"step-1-shared-service-config-with-extends\">Step 1: Shared service config with extends<\/h2>\n<p>Define common configuration once:<\/p>","title":"Docker Compose Tip #39: Combining include, extends, and 
overrides"},{"content":"Now that you understand how include, extends, and override files work, how do you pick the right one? Here&rsquo;s a practical guide.\nUse override files for environment-specific configuration Override files are the right choice when you need to adapt the same stack to different environments or developer setups:\n# compose.yml - base definition, committed to git services: app: image: myapp:${TAG:-latest} environment: NODE_ENV: production # compose.override.yml - local dev tweaks, optionally gitignored services: app: build: . # Build locally instead of pulling image environment: NODE_ENV: development volumes: - .:\/app # Mount source for hot reload Good fit for:\nDev vs prod differences (volumes, build vs image, ports) Local developer customizations that shouldn&rsquo;t be committed CI-specific overrides (no volumes, specific image tags) Use extends for shared service configuration extends shines when multiple services share a common base configuration and you want a single source of truth:\n# base.yml services: service-base: restart: unless-stopped logging: driver: json-file options: max-size: &#34;10m&#34; max-file: &#34;3&#34; labels: com.company.env: ${ENV:-dev} com.company.version: ${VERSION} # compose.yml services: api: extends: file: base.yml service: service-base image: myapp-api worker: extends: file: base.yml service: service-base image: myapp-worker Good fit for:\nCommon restart policies, logging, labels across services Shared resource limits or security options Base image variants (e.g., a debug version extending a production service) Use include for reusable service groups include is the right choice when you have a self-contained group of services that you want to treat as a unit and potentially reuse across multiple projects:\n# compose.yml include: - path: .\/infra\/observability.yml # Prometheus + Grafana stack - path: .\/infra\/database.yml # Postgres + migrations services: app: image: myapp depends_on: - postgres - 
prometheus Good fit for:\nShared infrastructure stacks (monitoring, databases, message queues) Team-maintained service libraries reused across projects Keeping a large compose file split into logical modules Quick decision guide Need to adapt existing services per environment? \u2192 Override files Multiple services sharing the same base config? \u2192 extends Importing a self-contained group of services? \u2192 include Further reading Compose documentation: Multiple Compose files Watch: Managing multiple Compose files - real-world examples for each approach ","permalink":"https:\/\/lours.me\/posts\/compose-tip-038-when-to-use-which\/","summary":"<p>Now that you understand how <code>include<\/code>, <code>extends<\/code>, and override files work, how do you pick the right one? Here&rsquo;s a practical guide.<\/p>\n<h2 id=\"use-override-files-for-environment-specific-configuration\">Use override files for environment-specific configuration<\/h2>\n<p>Override files are the right choice when you need to adapt the same stack to different environments or developer setups:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"c\"># compose.yml - base definition, committed to git<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:${TAG:-latest}<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span 
class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">NODE_ENV<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">production<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c\"># compose.override.yml - local dev tweaks, optionally gitignored<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">build<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.          
<\/span><span class=\"w\"> <\/span><span class=\"c\"># Build locally instead of pulling image<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">NODE_ENV<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">development<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">.:\/app        <\/span><span class=\"w\"> <\/span><span class=\"c\"># Mount source for hot reload<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Good fit for:<\/p>","title":"Docker Compose Tip #38: When to use include vs extends vs overrides"},{"content":"Docker Compose gives you three ways to split and reuse configurations. They look similar but work at different levels and serve different purposes.\nOverride files: project-level merge Override files are merged with your main compose.yml at the project level. 
compose.override.yml is loaded automatically; additional files require the -f flag:\n# compose.override.yml is loaded automatically docker compose up # Explicit merge docker compose -f compose.yml -f compose.prod.yml up Mappings are merged (override wins), arrays are concatenated:\n# compose.yml services: app: image: myapp environment: LOG_LEVEL: debug ports: - &#34;3000:3000&#34; # compose.override.yml services: app: environment: LOG_LEVEL: info # Overrides debug ports: - &#34;9229:9229&#34; # Appended, not replaced Result: app gets LOG_LEVEL=info and both ports exposed.\nextends: service-level inheritance extends lets a service inherit configuration from another service definition, in the same file or another file:\n# base.yml services: base: image: myapp environment: LOG_FORMAT: json METRICS_ENABLED: &#34;true&#34; labels: com.company.team: platform # compose.yml services: api: extends: file: base.yml service: base environment: PORT: &#34;8080&#34; # Added on top of inherited env vars worker: extends: file: base.yml service: base environment: PORT: &#34;8081&#34; api and worker both get the base environment and labels, each adding their own PORT.\nNote: extends does not inherit depends_on, links, or volumes_from \u2014 you need to redeclare those in each service.\ninclude: isolated sub-projects include pulls in another Compose file as a self-contained unit. Unlike override files, the included file is parsed in isolation with its own working directory and its own .env file:\n# compose.yml include: - path: .\/monitoring\/compose.yml - path: .\/database\/compose.yml services: app: image: myapp depends_on: - postgres # Service defined in database\/compose.yml The included file&rsquo;s services are merged into the project, but the included file cannot see or override services in the parent. 
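The override-file merge rules described earlier — mappings merge key-by-key with the override winning, arrays are concatenated — can be sketched as a small recursive function. This is an illustrative model under simplified assumptions, not Compose's implementation:

```python
def merge(base, override):
    # Compose-style override merge (illustrative): nested mappings merge
    # key-by-key with the override winning; lists are concatenated.
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        elif isinstance(value, list) and isinstance(result.get(key), list):
            result[key] = result[key] + value
        else:
            result[key] = value
    return result

base = {"app": {"environment": {"LOG_LEVEL": "debug"}, "ports": ["3000:3000"]}}
override = {"app": {"environment": {"LOG_LEVEL": "info"}, "ports": ["9229:9229"]}}
merged = merge(base, override)
# LOG_LEVEL ends up "info"; ports contains both entries
```

Note that real Compose special-cases a few keys — `command` and `entrypoint`, for example, replace rather than append — so treat this sketch as the general rule, not the full specification.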
It&rsquo;s a one-way import.\nKey differences at a glance Mechanism Scope Context Override files Project (all services) Shared extends Single service Shared include Full sub-project Isolated Further reading Compose documentation: Merge and override Compose documentation: extend Compose documentation: include Watch: Managing multiple Compose files - a deep dive into all three approaches ","permalink":"https:\/\/lours.me\/posts\/compose-tip-037-include-extends-overrides\/","summary":"<p>Docker Compose gives you three ways to split and reuse configurations. They look similar but work at different levels and serve different purposes.<\/p>\n<h2 id=\"override-files-project-level-merge\">Override files: project-level merge<\/h2>\n<p>Override files are merged with your main <code>compose.yml<\/code> at the project level. <code>compose.override.yml<\/code> is loaded automatically; additional files require the <code>-f<\/code> flag:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># compose.override.yml is loaded automatically<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose up\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Explicit merge<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose -f compose.yml -f compose.prod.yml up\n<\/span><\/span><\/code><\/pre><\/div><p>Mappings are merged (override wins), arrays are concatenated:<\/p>","title":"Docker Compose Tip #37: Understanding include, extends, and override files"},{"content":"Need custom DNS resolution in containers? 
Use extra_hosts to add hostname mappings without touching system files!\nBasic extra_hosts usage Add custom host entries to containers:\nservices: app: image: myapp extra_hosts: - &#34;api.local:192.168.1.100&#34; - &#34;db.local:192.168.1.101&#34; - &#34;cache.local:192.168.1.102&#34; Inside the container:\ndocker compose exec app cat \/etc\/hosts # 127.0.0.1 localhost # 192.168.1.100 api.local # 192.168.1.101 db.local # 192.168.1.102 cache.local Dynamic host resolution Use host machine&rsquo;s IP dynamically:\nservices: app: image: myapp extra_hosts: - &#34;host.docker.internal:host-gateway&#34; # Magic value! This maps to:\nOn Linux: Host&rsquo;s IP on default bridge On Mac\/Windows: Special DNS name to host Environment-based hosts Different hosts per environment:\nservices: app: image: myapp extra_hosts: - &#34;api.service:${API_HOST:-127.0.0.1}&#34; - &#34;auth.service:${AUTH_HOST:-127.0.0.1}&#34; - &#34;cdn.service:${CDN_HOST:-127.0.0.1}&#34; .env.development:\nAPI_HOST=localhost AUTH_HOST=localhost CDN_HOST=localhost .env.production:\nAPI_HOST=10.0.1.50 AUTH_HOST=10.0.1.51 CDN_HOST=cdn.example.com Testing against production APIs Route specific domains to local\/mock services:\nservices: app: image: myapp extra_hosts: # Override production APIs - &#34;api.example.com:127.0.0.1&#34; - &#34;auth.example.com:127.0.0.1&#34; ports: - &#34;3000:3000&#34; # Mock API server mock-api: image: mockserver extra_hosts: - &#34;api.example.com:127.0.0.1&#34; ports: - &#34;80:8080&#34; Multiple service aliases Create multiple hostnames for the same IP:\nservices: web: image: nginx extra_hosts: # All pointing to the same service - &#34;app.local:172.20.0.5&#34; - &#34;www.local:172.20.0.5&#34; - &#34;admin.local:172.20.0.5&#34; - &#34;api.local:172.20.0.5&#34; app: image: myapp networks: default: ipv4_address: 172.20.0.5 networks: default: ipam: config: - subnet: 172.20.0.0\/16 Development with external services Connect to services outside Docker:\nservices: frontend: image: 
node:18 working_dir: \/app volumes: - .:\/app extra_hosts: # Local development servers - &#34;backend.local:host-gateway&#34; # Your host machine - &#34;database.local:192.168.1.50&#34; # Another machine - &#34;redis.local:192.168.1.51&#34; # Another machine command: npm run dev environment: API_URL: http:\/\/backend.local:8080 DB_HOST: database.local REDIS_HOST: redis.local Complete example: Microservices testing services: # API Gateway gateway: image: nginx ports: - &#34;80:80&#34; extra_hosts: - &#34;users.service:172.25.0.10&#34; - &#34;orders.service:172.25.0.11&#34; - &#34;inventory.service:172.25.0.12&#34; volumes: - .\/nginx.conf:\/etc\/nginx\/nginx.conf depends_on: - users - orders - inventory # Microservices users: image: users-service extra_hosts: - &#34;auth.provider:${AUTH_SERVER:-host-gateway}&#34; - &#34;email.service:${EMAIL_SERVER:-host-gateway}&#34; networks: default: ipv4_address: 172.25.0.10 orders: image: orders-service extra_hosts: - &#34;users.service:172.25.0.10&#34; - &#34;inventory.service:172.25.0.12&#34; - &#34;payment.gateway:${PAYMENT_HOST:-sandbox.paypal.com}&#34; networks: default: ipv4_address: 172.25.0.11 inventory: image: inventory-service extra_hosts: - &#34;warehouse.api:${WAREHOUSE_HOST:-mock-warehouse}&#34; networks: default: ipv4_address: 172.25.0.12 # Mock external service mock-warehouse: image: mockserver container_name: mock-warehouse networks: default: ipam: config: - subnet: 172.25.0.0\/16 Debugging DNS resolution Test hostname resolution:\nservices: dns-debug: image: alpine extra_hosts: - &#34;test1.local:1.2.3.4&#34; - &#34;test2.local:5.6.7.8&#34; - &#34;host.machine:host-gateway&#34; command: | sh -c &#34; echo &#39;=== \/etc\/hosts ===&#39; cat \/etc\/hosts echo echo &#39;=== DNS Resolution ===&#39; nslookup test1.local || echo &#39;nslookup not available&#39; echo echo &#39;=== Ping Tests ===&#39; ping -c 1 test1.local || true ping -c 1 host.machine || true &#34; YAML anchors for shared hosts Reuse common host 
mappings:\nx-common-hosts: &amp;common-hosts - &#34;auth.local:10.0.0.1&#34; - &#34;cache.local:10.0.0.2&#34; - &#34;db.local:10.0.0.3&#34; services: app1: image: app1 extra_hosts: *common-hosts app2: image: app2 extra_hosts: # YAML merge keys (&lt;&lt;:) only work on mappings, not lists, so repeat shared entries - &#34;auth.local:10.0.0.1&#34; - &#34;cache.local:10.0.0.2&#34; - &#34;db.local:10.0.0.3&#34; - &#34;special.local:10.0.0.4&#34; # Additional host app3: image: app3 extra_hosts: *common-hosts Further reading Docker networking documentation Compose networking ","permalink":"https:\/\/lours.me\/posts\/compose-tip-036-extra-hosts\/","summary":"<p>Need custom DNS resolution in containers? Use <code>extra_hosts<\/code> to add hostname mappings without touching system files!<\/p>\n<h2 id=\"basic-extra_hosts-usage\">Basic extra_hosts usage<\/h2>\n<p>Add custom host entries to containers:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">extra_hosts<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;api.local:192.168.1.100&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;db.local:192.168.1.101&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">
     <\/span>- <span class=\"s2\">&#34;cache.local:192.168.1.102&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Inside the container:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\">docker compose <span class=\"nb\">exec<\/span> app cat \/etc\/hosts\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># 127.0.0.1       localhost<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># 192.168.1.100   api.local<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># 192.168.1.101   db.local<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># 192.168.1.102   cache.local<\/span>\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"dynamic-host-resolution\">Dynamic host resolution<\/h2>\n<p>Use host machine&rsquo;s IP dynamically:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">extra_hosts<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;host.docker.internal:host-gateway&#34;<\/span><span class=\"w\">  
<\/span><span class=\"c\"># Magic value!<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>This maps to:<\/p>","title":"Docker Compose Tip #36: Using extra_hosts for custom DNS entries"},{"content":"Speed up I\/O operations and enhance security by using tmpfs for temporary data. RAM-based storage that vanishes on restart!\nWhat is tmpfs? Tmpfs is a temporary filesystem that resides in memory:\n\u26a1 Ultra-fast (RAM speed) \ud83d\udd12 Secure (data doesn&rsquo;t persist) \ud83e\uddf9 Self-cleaning (cleared on restart) Basic tmpfs usage Simple tmpfs mount:\nservices: app: image: myapp tmpfs: - \/tmp - \/app\/cache - \/var\/run With size limits:\nservices: app: image: myapp tmpfs: - \/tmp:size=100M - \/app\/cache:size=500M - \/var\/run:size=10M Advanced tmpfs options Fine-tuned configuration:\nservices: app: image: myapp tmpfs: - type: tmpfs target: \/tmp tmpfs: size: 100M # Size limit mode: 1770 # File permissions uid: 1000 # User ID gid: 1000 # Group ID Using volumes syntax:\nservices: app: image: myapp volumes: - type: tmpfs target: \/app\/temp tmpfs: size: 200000000 # 200MB in bytes Common use cases 1. Build cache Speed up compilation:\nservices: builder: image: node:18 working_dir: \/app volumes: - .:\/app - type: tmpfs target: \/app\/.cache tmpfs: size: 1G command: npm run build 2. Session storage Fast session management:\nservices: web: image: nginx tmpfs: - \/var\/cache\/nginx:size=100M - \/var\/run:size=10M app: image: myapp tmpfs: - \/app\/sessions:size=500M,mode=1770 environment: SESSION_STORE: \/app\/sessions 3. 
Temporary uploads Process files in memory:\nservices: upload-processor: image: processor tmpfs: - \/tmp\/uploads:size=2G environment: UPLOAD_DIR: \/tmp\/uploads MAX_UPLOAD_SIZE: 100M Read-only root with tmpfs Secure pattern with writable temp areas:\nservices: secure-app: image: myapp read_only: true # Entire filesystem read-only tmpfs: - \/tmp:size=100M # Writable temp - \/var\/run:size=10M - \/app\/cache:size=50M volumes: - app_logs:\/var\/log:rw # Persistent logs Database with tmpfs Speed up test databases:\nservices: # Test database (data doesn&#39;t persist!) test-db: image: postgres:15 environment: POSTGRES_PASSWORD: test tmpfs: - \/var\/lib\/postgresql\/data:size=1G profiles: [&#34;test&#34;] # Production database (persistent) prod-db: image: postgres:15 environment: POSTGRES_PASSWORD_FILE: \/run\/secrets\/db_password volumes: - postgres_data:\/var\/lib\/postgresql\/data profiles: [&#34;prod&#34;] volumes: postgres_data: Performance comparison Test I\/O performance:\nservices: benchmark: image: alpine command: | sh -c &#34; echo &#39;=== Tmpfs Performance ===&#39; time dd if=\/dev\/zero of=\/tmp\/test bs=1M count=100 echo echo &#39;=== Volume Performance ===&#39; time dd if=\/dev\/zero of=\/data\/test bs=1M count=100 &#34; tmpfs: - \/tmp:size=200M volumes: - benchmark_data:\/data volumes: benchmark_data: Complete example: CI runner services: runner: image: gitlab-runner volumes: - \/var\/run\/docker.sock:\/var\/run\/docker.sock - runner_config:\/etc\/gitlab-runner tmpfs: # Build artifacts (temporary) - \/builds:size=5G # Package caches - \/cache:size=2G # Docker layer cache - \/var\/lib\/docker:size=10G # Test environment test-runner: image: node:18 working_dir: \/app volumes: - .:\/app:ro # Source code (read-only) tmpfs: # Dependencies - \/app\/node_modules:size=1G # Test output - \/app\/coverage:size=100M # Build output - \/app\/dist:size=500M command: | sh -c &#34; cp -r \/app \/tmp\/app-copy cd \/tmp\/app-copy npm ci npm test npm run build &#34; 
Monitoring tmpfs usage Check memory usage:\n# Check tmpfs usage in a specific container docker compose exec app df -h | grep tmpfs # Monitor all containers&#39; tmpfs usage docker compose ps -q | xargs -I {} docker exec {} df -h 2&gt;\/dev\/null | grep tmpfs # Check system memory to see tmpfs impact docker stats --no-stream Pro tip Dynamic tmpfs sizing based on available memory:\nservices: app: image: myapp environment: TMPFS_SIZE: ${TMPFS_SIZE:-100M} tmpfs: - \/tmp:size=${TMPFS_SIZE:-100M} deploy: resources: limits: memory: 2G reservations: memory: 1G And set size based on environment:\n# .env.development TMPFS_SIZE=500M # .env.production TMPFS_SIZE=2G # .env.test TMPFS_SIZE=100M Fast, secure, and self-cleaning - tmpfs for the win!\nFurther reading tmpfs documentation Storage drivers overview ","permalink":"https:\/\/lours.me\/posts\/compose-tip-035-tmpfs-storage\/","summary":"<p>Speed up I\/O operations and enhance security by using tmpfs for temporary data. RAM-based storage that vanishes on restart!<\/p>\n<h2 id=\"what-is-tmpfs\">What is tmpfs?<\/h2>\n<p>Tmpfs is a temporary filesystem that resides in memory:<\/p>\n<ul>\n<li>\u26a1 Ultra-fast (RAM speed)<\/li>\n<li>\ud83d\udd12 Secure (data doesn&rsquo;t persist)<\/li>\n<li>\ud83e\uddf9 Self-cleaning (cleared on restart)<\/li>\n<\/ul>\n<h2 id=\"basic-tmpfs-usage\">Basic tmpfs usage<\/h2>\n<p>Simple tmpfs mount:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span 
class=\"l\">myapp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">tmpfs<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">\/tmp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">\/app\/cache<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">\/var\/run<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>With size limits:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">tmpfs<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">\/tmp:size=100M<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">\/app\/cache:size=500M<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span 
class=\"l\">\/var\/run:size=10M<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"advanced-tmpfs-options\">Advanced tmpfs options<\/h2>\n<p>Fine-tuned configuration:<\/p>","title":"Docker Compose Tip #35: Using tmpfs for ephemeral storage"},{"content":"Know the difference between exec and run! Each has its place in your debugging toolkit.\nThe key difference exec: Runs commands in an existing container run: Creates a new container # Exec: enters running container docker compose exec web bash # Run: starts new container docker compose run web bash When to use exec Use exec for debugging running services:\n# Debug a running web server docker compose exec web bash # Check logs inside container docker compose exec web tail -f \/var\/log\/app.log # Run database queries docker compose exec db psql -U postgres # Check process list docker compose exec web ps aux # Test connectivity from inside docker compose exec web curl http:\/\/api:3000\/health Important: Container must be running!\n# This fails if web is stopped docker compose exec web bash # Error: No container found for web_1 When to use run Use run for one-off tasks:\n# Run database migrations docker compose run migrate npm run migrate:up # Run tests docker compose run --rm test npm test # Execute scripts docker compose run --rm app python manage.py createsuperuser # Start interactive session with overrides docker compose run --rm -e DEBUG=true web bash Key differences 1. Container lifecycle # exec: uses the existing container docker compose up -d web docker compose exec web hostname # Prints the running container&#39;s ID # run: creates a new container docker compose run web hostname # Prints a different ID each time 2.
Port mapping services: web: image: nginx ports: - &#34;8080:80&#34; # exec: uses existing port mapping docker compose exec web curl localhost:80 # Works # run: NO port mapping by default docker compose run web curl localhost:80 # Works # But from host: curl localhost:8080 # Doesn&#39;t work! # run with ports: docker compose run --service-ports web # Now ports are mapped 3. Dependencies services: web: image: myapp depends_on: - db - redis # exec: dependencies already running docker compose exec web bash # db and redis are up # run: starts dependencies by default docker compose run web bash # db and redis are started too # skip dependencies: docker compose run --no-deps web bash # db and redis NOT started Useful flags For exec: # Run as different user docker compose exec -u root web bash docker compose exec -u 1000 web bash # Set working directory docker compose exec -w \/app web ls # Set environment variable docker compose exec -e DEBUG=true web npm start # Disable TTY docker compose exec -T web cat \/etc\/hosts &gt; hosts.txt # Run in detached mode docker compose exec -d web long-running-script.sh For run: # Remove container after exit docker compose run --rm web bash # Run with service ports docker compose run --service-ports web # Skip starting dependencies docker compose run --no-deps web bash # Override entrypoint docker compose run --entrypoint \/bin\/sh web # Set name docker compose run --name my-debug-container web bash # Run in detached mode docker compose run -d web background-job.sh Real-world debugging scenarios Scenario 1: Debug production issue # 1. Check running containers docker compose ps # 2. Enter the problematic container docker compose exec web bash # 3. Inside container: check processes ps aux | grep node # 4. Check environment env | grep NODE_ # 5. Test internal connectivity curl http:\/\/api:3000\/health # 6.
Review logs tail -f \/app\/logs\/error.log Scenario 2: Run maintenance tasks # Database backup (new container) docker compose run --rm db pg_dump -U postgres mydb &gt; backup.sql # Clear cache (existing container) docker compose exec redis redis-cli FLUSHALL # Run migrations (new container with cleanup) docker compose run --rm migrate npm run migrate:up # Seed database (new container) docker compose run --rm --env SEED_USERS=100 seeder Choose wisely: exec for running containers, run for fresh starts!\nFurther reading docker compose exec reference docker compose run reference ","permalink":"https:\/\/lours.me\/posts\/compose-tip-034-exec-vs-run\/","summary":"<p>Know the difference between <code>exec<\/code> and <code>run<\/code>! Each has its place in your debugging toolkit.<\/p>\n<h2 id=\"the-key-difference\">The key difference<\/h2>\n<ul>\n<li><strong><code>exec<\/code><\/strong>: Runs commands in an <strong>existing<\/strong> container<\/li>\n<li><strong><code>run<\/code><\/strong>: Creates a <strong>new<\/strong> container<\/li>\n<\/ul>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Exec: enters running container<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose <span class=\"nb\">exec<\/span> web bash\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Run: starts new container<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose run web bash\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"when-to-use-exec\">When to use <code>exec<\/code><\/h2>\n<p>Use <code>exec<\/code> for debugging running services:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Debug a running web 
server<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose <span class=\"nb\">exec<\/span> web bash\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Check logs inside container<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose <span class=\"nb\">exec<\/span> web tail -f \/var\/log\/app.log\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Run database queries<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose <span class=\"nb\">exec<\/span> db psql -U postgres\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Check process list<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose <span class=\"nb\">exec<\/span> web ps aux\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Test connectivity from inside<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose <span class=\"nb\">exec<\/span> web curl http:\/\/api:3000\/health\n<\/span><\/span><\/code><\/pre><\/div><p><strong>Important<\/strong>: Container must be running!<\/p>","title":"Docker Compose Tip #34: Debugging with exec vs run"},{"content":"Take control of your container logs! Configure different logging drivers for better management, rotation, and analysis.\nDefault logging: json-file By default, Docker uses the json-file driver:\nservices: app: image: myapp logging: driver: json-file options: max-size: &#34;10m&#34; # Rotate after 10MB max-file: &#34;3&#34; # Keep 3 rotated files compress: &#34;true&#34; # Compress rotated files Without rotation, logs can fill your disk!\nCommon logging drivers 1. 
Local driver (efficient storage) Optimized for performance and disk usage:\nservices: app: image: myapp logging: driver: local options: max-size: &#34;20m&#34; max-file: &#34;5&#34; compress: &#34;true&#34; 2. Syslog (centralized logging) Send logs to syslog server:\nservices: app: image: myapp logging: driver: syslog options: syslog-address: &#34;tcp:\/\/192.168.1.100:514&#34; syslog-format: &#34;rfc5424&#34; tag: &#34;{{.ImageName}}\/{{.Name}}\/{{.ID}}&#34; 3. Journald (systemd integration) For systemd-based systems:\nservices: app: image: myapp logging: driver: journald options: tag: &#34;compose-{{.Name}}&#34; labels: &#34;env,version&#34; View with: journalctl -u docker.service -f\n4. Fluentd (log aggregation) Forward to Fluentd collector:\nservices: app: image: myapp logging: driver: fluentd options: fluentd-address: &#34;localhost:24224&#34; tag: &#34;app.{{.Name}}&#34; fluentd-async: &#34;true&#34; fluentd-buffer-limit: &#34;1MB&#34; 5. AWS CloudWatch Direct to CloudWatch:\nservices: app: image: myapp logging: driver: awslogs options: awslogs-region: &#34;us-east-1&#34; awslogs-group: &#34;myapp-logs&#34; awslogs-stream: &#34;{{.FullID}}&#34; awslogs-create-group: &#34;true&#34; No logging Disable logging entirely:\nservices: noisy-service: image: chatty-app logging: driver: none Mixed logging strategies Different drivers per service:\nservices: # Critical service - centralized logging api: image: api logging: driver: syslog options: syslog-address: &#34;tcp:\/\/log-server:514&#34; tag: &#34;api\/{{.ID}}&#34; # High-volume service - local with rotation worker: image: worker logging: driver: local options: max-size: &#34;100m&#34; max-file: &#34;10&#34; # Debug service - json for easy reading debug: image: debug-tool logging: driver: json-file options: max-size: &#34;5m&#34; max-file: &#34;2&#34; labels: &#34;service_name,version&#34; env: &#34;NODE_ENV,LOG_LEVEL&#34; # Metrics collector - no logs needed metrics: image: prometheus logging: driver: none Log 
labels and metadata Add metadata to logs:\nservices: app: image: myapp labels: - &#34;com.example.version=1.0&#34; - &#34;com.example.environment=production&#34; environment: - LOG_LEVEL=info logging: driver: json-file options: labels: &#34;com.example.version,com.example.environment&#34; env: &#34;LOG_LEVEL,NODE_ENV&#34; env-regex: &#34;^LOG_&#34; Logs will include these labels and env vars!\nComplete example: ELK stack integration services: # Application with structured logging app: image: myapp depends_on: - elasticsearch logging: driver: json-file options: max-size: &#34;10m&#34; max-file: &#34;5&#34; labels: &#34;service,version,environment&#34; labels: service: &#34;api&#34; version: &#34;2.0&#34; environment: &#34;production&#34; # Log shipper filebeat: image: elastic\/filebeat:8.11.0 volumes: - \/var\/lib\/docker\/containers:\/var\/lib\/docker\/containers:ro - \/var\/run\/docker.sock:\/var\/run\/docker.sock:ro - .\/filebeat.yml:\/usr\/share\/filebeat\/filebeat.yml:ro depends_on: - elasticsearch - kibana # Log storage elasticsearch: image: elasticsearch:8.11.0 environment: - discovery.type=single-node - xpack.security.enabled=false logging: driver: local # Don&#39;t log ES to itself options: max-size: &#34;50m&#34; # Log visualization kibana: image: kibana:8.11.0 ports: - &#34;5601:5601&#34; environment: - ELASTICSEARCH_HOSTS=http:\/\/elasticsearch:9200 logging: driver: local options: max-size: &#34;10m&#34; Global logging configuration Set defaults for all services:\n# docker-compose.yml x-logging: &amp;default-logging driver: json-file options: max-size: &#34;10m&#34; max-file: &#34;3&#34; compress: &#34;true&#34; services: app1: image: app1 logging: *default-logging app2: image: app2 logging: *default-logging app3: image: app3 logging: &lt;&lt;: *default-logging options: max-size: &#34;20m&#34; # Override size for this service Pro tip View logs with filters and formatting:\n# Follow logs with timestamps docker compose logs -f --timestamps # Last 100 lines 
from specific services docker compose logs --tail=100 web worker # Logs since specific time docker compose logs --since=&#34;2024-01-01T10:00:00&#34; # Logs until specific time docker compose logs --until=&#34;2024-01-01T11:00:00&#34; # No log prefix (service names) docker compose logs --no-log-prefix # Export logs for analysis docker compose logs --no-color &gt; logs.txt Proper logging is crucial for production debugging!\nFurther reading Configure logging drivers Logging driver details ","permalink":"https:\/\/lours.me\/posts\/compose-tip-033-logging-drivers\/","summary":"<p>Take control of your container logs! Configure different logging drivers for better management, rotation, and analysis.<\/p>\n<h2 id=\"default-logging-json-file\">Default logging: json-file<\/h2>\n<p>By default, Docker uses the json-file driver:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">logging<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">driver<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">json-file<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span 
class=\"nt\">options<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">max-size<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;10m&#34;<\/span><span class=\"w\">    <\/span><span class=\"c\"># Rotate after 10MB<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">max-file<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;3&#34;<\/span><span class=\"w\">      <\/span><span class=\"c\"># Keep 3 rotated files<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">compress<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;true&#34;<\/span><span class=\"w\">   <\/span><span class=\"c\"># Compress rotated files<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Without rotation, logs can fill your disk!<\/p>\n<h2 id=\"common-logging-drivers\">Common logging drivers<\/h2>\n<h3 id=\"1-local-driver-efficient-storage\">1. Local driver (efficient storage)<\/h3>\n<p>Optimized for performance and disk usage:<\/p>","title":"Docker Compose Tip #33: Using logging drivers and options"},{"content":"Speed up builds and reduce image size by managing build contexts effectively. Don&rsquo;t send unnecessary files to the Docker daemon!\nUnderstanding build context The build context is what gets sent to Docker daemon:\nservices: app: build: . # Current directory is the context # Everything in . gets sent to daemon! Check your context size:\n# See what&#39;s being sent docker build --no-cache . 
2&gt;&amp;1 | grep &#34;Sending build context&#34; # Output: Sending build context to Docker daemon 458.2MB \ud83d\ude31 Note: this message comes from the classic builder; BuildKit prints &#34;transferring context&#34; in its build output instead. Custom build contexts Specify different contexts for different services:\nservices: frontend: build: context: .\/frontend # Only frontend\/ directory dockerfile: Dockerfile backend: build: context: .\/backend # Only backend\/ directory dockerfile: Dockerfile shared-lib: build: context: . # Root for accessing multiple dirs dockerfile: .\/shared\/Dockerfile The power of .dockerignore Create .dockerignore files to exclude unnecessary files:\n# .dockerignore # Version control .git .gitignore # Dependencies node_modules vendor __pycache__ *.pyc # IDE files .idea .vscode *.swp *.swo # OS files .DS_Store Thumbs.db # Build artifacts dist build *.o *.exe # Local env files .env .env.local *.env # Logs *.log logs\/ # Tests test\/ tests\/ coverage\/ .coverage # Documentation docs\/ *.md !README.md # Exception: include README # Development files docker-compose.override.yml Makefile Multiple dockerignore patterns Different contexts can have different .dockerignore files:\nproject\/ \u251c\u2500\u2500 .dockerignore # Root ignore \u251c\u2500\u2500 frontend\/ \u2502 \u251c\u2500\u2500 .dockerignore # Frontend-specific ignore \u2502 \u2514\u2500\u2500 Dockerfile \u251c\u2500\u2500 backend\/ \u2502 \u251c\u2500\u2500 .dockerignore # Backend-specific ignore \u2502 \u2514\u2500\u2500 Dockerfile \u2514\u2500\u2500 docker-compose.yml Each service uses its context&rsquo;s .dockerignore:\nservices: frontend: build: .\/frontend # Uses .\/frontend\/.dockerignore backend: build: .\/backend # Uses .\/backend\/.dockerignore Advanced context with multiple sources Use additional build contexts to pull in files from outside the main context:\nservices: app: build: context: . dockerfile: Dockerfile additional_contexts: - shared=..\/shared-libs - configs=.\/configurations In Dockerfile:\n# Copy from additional contexts COPY --from=shared . \/app\/shared COPY --from=configs . 
\/app\/config Git-based contexts Build directly from Git repositories:\nservices: app: build: https:\/\/github.com\/user\/repo.git#branch specific-commit: build: https:\/\/github.com\/user\/repo.git#v1.0.0 subdirectory: build: https:\/\/github.com\/user\/repo.git#main:subdirectory Reusing build configuration Share one build configuration between services with a YAML anchor:\nx-backend-build: &amp;backend-build context: .\/backend dockerfile: Dockerfile args: BUILD_ENV: production services: api: build: *backend-build worker: build: &lt;&lt;: *backend-build target: worker # Different stage scheduler: build: &lt;&lt;: *backend-build target: scheduler Pro tip Use BuildKit&rsquo;s cache backends and cache mounts to speed up builds:\nservices: app: build: context: . dockerfile: Dockerfile cache_from: - type=local,src=\/tmp\/buildcache cache_to: - type=local,dest=\/tmp\/buildcache,mode=max And in your Dockerfile:\n# syntax=docker\/dockerfile:1 FROM node:18 WORKDIR \/app # Cache package manager downloads RUN --mount=type=cache,target=\/root\/.npm \\ npm set cache \/root\/.npm # Cache dependencies COPY package*.json . RUN --mount=type=cache,target=\/root\/.npm \\ npm ci --only=production # Copy only necessary files (respecting .dockerignore) COPY . . RUN npm run build Smaller contexts = faster builds = happier developers!\nFurther reading Dockerfile best practices BuildKit documentation ","permalink":"https:\/\/lours.me\/posts\/compose-tip-032-build-context-dockerignore\/","summary":"<p>Speed up builds and reduce image size by managing build contexts effectively. 
Don&rsquo;t send unnecessary files to the Docker daemon!<\/p>\n<h2 id=\"understanding-build-context\">Understanding build context<\/h2>\n<p>The build context is what gets sent to Docker daemon:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">build<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">. <\/span><span class=\"w\"> <\/span><span class=\"c\"># Current directory is the context<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"c\"># Everything in . gets sent to daemon!<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Check your context size:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># See what&#39;s being sent<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker build --no-cache . 
2&gt;<span class=\"p\">&amp;<\/span><span class=\"m\">1<\/span> <span class=\"p\">|<\/span> grep <span class=\"s2\">&#34;Sending build context&#34;<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Output: Sending build context to Docker daemon  458.2MB \ud83d\ude31<\/span>\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"custom-build-contexts\">Custom build contexts<\/h2>\n<p>Specify different contexts for different services:<\/p>","title":"Docker Compose Tip #32: Build contexts and dockerignore patterns"},{"content":"Secure your application architecture by isolating services in separate networks. Not every service needs to talk to every other service!\nDefault behavior: All connected By default, all services share the same network:\n# All services can communicate services: web: image: nginx api: image: myapi database: image: postgres Problem: web can directly access database - potential security risk!\nNetwork isolation pattern Create separate networks for different tiers:\nservices: # Frontend tier web: image: nginx networks: - frontend - backend depends_on: - api # Application tier api: image: myapi networks: - backend - database environment: DB_HOST: postgres # Data tier postgres: image: postgres networks: - database environment: POSTGRES_PASSWORD: ${DB_PASSWORD} networks: frontend: name: frontend-network backend: name: backend-network database: name: database-network Now:\nweb can reach api (via backend network) api can reach postgres (via database network) web CANNOT reach postgres directly \u2705 Internal networks Use internal networks to isolate from host network interfaces:\nservices: cache: image: redis networks: - internal worker: image: worker networks: - internal - public networks: internal: internal: true # No connection to host network interfaces public: # Regular network connected to host The internal: true flag creates a network without a connection to the host&rsquo;s network interfaces - it has no default 
gateway for external connectivity. Containers can still reach the internet if they&rsquo;re also connected to other non-internal networks (like the worker service above via the public network).\nService discovery Services can only discover each other on shared networks:\nservices: api1: image: api:v1 networks: - api-network # Can ping: api2 # Cannot ping: db1, db2 api2: image: api:v2 networks: - api-network # Can ping: api1 # Cannot ping: db1, db2 db1: image: postgres networks: - db-network # Can ping: db2 # Cannot ping: api1, api2 db2: image: postgres networks: - db-network # Can ping: db1 # Cannot ping: api1, api2 networks: api-network: db-network: Complete example: Microservices services: # Public-facing services nginx: image: nginx ports: - &#34;80:80&#34; networks: - dmz - application volumes: - .\/nginx.conf:\/etc\/nginx\/nginx.conf:ro # Application services user-service: image: user-service networks: - application - user-db order-service: image: order-service networks: - application - order-db - messaging # Databases (isolated) user-db: image: postgres networks: - user-db volumes: - user-data:\/var\/lib\/postgresql\/data order-db: image: postgres networks: - order-db volumes: - order-data:\/var\/lib\/postgresql\/data # Message queue rabbitmq: image: rabbitmq:management networks: - messaging # Monitoring (observes all) prometheus: image: prom\/prometheus networks: - application - user-db - order-db - messaging volumes: - .\/prometheus.yml:\/etc\/prometheus\/prometheus.yml networks: dmz: name: dmz-network application: name: app-network user-db: name: user-db-network internal: true order-db: name: order-db-network internal: true messaging: name: messaging-network internal: true volumes: user-data: order-data: Pro tip Use docker network inspect to verify isolation:\n# List all networks used by your compose project docker compose ps --format json | jq -r &#39;.[].Networks | keys[]&#39; | sort -u # Inspect which services share a network docker network inspect 
&lt;network-name&gt; --format &#39;{{json .Containers}}&#39; | jq # Quick connectivity test between services docker compose exec web ping api -c 1 # Should work if on same network docker compose exec web ping postgres -c 1 # Should fail if isolated Defense in depth starts with network segmentation!\nFurther reading Docker networking overview Compose networking ","permalink":"https:\/\/lours.me\/posts\/compose-tip-031-network-isolation\/","summary":"<p>Secure your application architecture by isolating services in separate networks. Not every service needs to talk to every other service!<\/p>\n<h2 id=\"default-behavior-all-connected\">Default behavior: All connected<\/h2>\n<p>By default, all services share the same network:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"c\"># All services can communicate<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">nginx<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">api<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapi<\/span><span 
class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">database<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgres<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Problem: <code>web<\/code> can directly access <code>database<\/code> - potential security risk!<\/p>\n<h2 id=\"network-isolation-pattern\">Network isolation pattern<\/h2>\n<p>Create separate networks for different tiers:<\/p>","title":"Docker Compose Tip #31: Network isolation between services"},{"content":"Keep configurations DRY! The include directive enables modular, reusable Compose setups.\nBasic include usage Split configurations into logical modules:\n# compose.yml include: - path: .\/services\/database.yml - path: .\/services\/cache.yml - path: .\/services\/monitoring.yml services: app: image: myapp:latest depends_on: - postgres - redis # services\/database.yml services: postgres: image: postgres:15 volumes: - postgres_data:\/var\/lib\/postgresql\/data volumes: postgres_data: Project-wide organization Structure complex projects:\nproject\/ \u251c\u2500\u2500 compose.yml # Main entry point \u251c\u2500\u2500 common\/ \u2502 \u251c\u2500\u2500 networks.yml # Shared networks \u2502 \u2514\u2500\u2500 volumes.yml # Shared volumes \u251c\u2500\u2500 services\/ \u2502 \u251c\u2500\u2500 frontend.yml # Frontend services \u2502 \u251c\u2500\u2500 backend.yml # Backend services \u2502 \u2514\u2500\u2500 database.yml # Data layer \u2514\u2500\u2500 environments\/ \u251c\u2500\u2500 dev.yml # Development overrides \u2514\u2500\u2500 prod.yml # Production config # compose.yml include: - path: .\/common\/networks.yml - path: 
.\/common\/volumes.yml - path: .\/services\/frontend.yml - path: .\/services\/backend.yml - path: .\/services\/database.yml - path: ${COMPOSE_ENV:-.\/environments\/dev.yml} Conditional includes Include files based on environment:\n# compose.yml include: - path: .\/base.yml - path: .\/monitoring.yml env_file: .env.monitoring # Only if file exists - path: ${EXTRA_SERVICES:-\/dev\/null} required: false # Don&#39;t fail if missing services: app: image: myapp Run with optional services:\n# Basic setup docker compose up # With monitoring touch .env.monitoring docker compose up # With extra services EXTRA_SERVICES=.\/debug.yml docker compose up Team collaboration Share common configurations:\n# team\/shared.yml x-default-logging: &amp;default-logging logging: driver: json-file options: max-size: &#34;10m&#34; max-file: &#34;3&#34; services: shared-db: image: postgres:15 &lt;&lt;: *default-logging volumes: - shared_data:\/var\/lib\/postgresql\/data volumes: shared_data: # compose.yml include: - path: .\/team\/shared.yml services: app: image: myapp depends_on: - shared-db Service libraries Create reusable service definitions:\n# lib\/elasticsearch.yml services: elasticsearch: image: elasticsearch:8.11.0 environment: - discovery.type=single-node - xpack.security.enabled=false volumes: - es_data:\/usr\/share\/elasticsearch\/data volumes: es_data: # lib\/kibana.yml services: kibana: image: kibana:8.11.0 environment: ELASTICSEARCH_HOSTS: http:\/\/elasticsearch:9200 depends_on: - elasticsearch ports: - &#34;5601:5601&#34; profiles: [&#34;monitoring&#34;] # Service-level profile # compose.yml include: - path: .\/lib\/elasticsearch.yml - path: .\/lib\/kibana.yml services: app: image: myapp environment: ES_HOST: elasticsearch Override with includes Layer configurations:\n# base.yml services: app: image: myapp:latest environment: LOG_LEVEL: info # dev-overrides.yml services: app: environment: LOG_LEVEL: debug volumes: - .:\/app # compose.yml include: - path: .\/base.yml - path: 
.\/dev-overrides.yml # Merges with base # Result: app has LOG_LEVEL=debug and volume mount Include with variables Parameterize included paths:\n# compose.yml include: - path: .\/services\/${SERVICE_SET:-standard}.yml - path: .\/configs\/${REGION:-us-east}.yml services: app: image: myapp:${VERSION:-latest} Usage:\n# Default configuration docker compose up # Custom service set and region SERVICE_SET=premium REGION=eu-west docker compose up Pro tip Validate complex include structures:\n#!\/bin\/bash # validate-compose.sh echo &#34;Validating Compose configuration...&#34; # Check all included files exist for file in $(grep -E &#39;^\\s*- path:&#39; compose.yml | awk &#39;{print $3}&#39;); do if [ ! -f &#34;$file&#34; ]; then echo &#34;\u274c Missing include: $file&#34; exit 1 fi echo &#34;\u2713 Found: $file&#34; done # Validate final configuration if docker compose config &gt; \/dev\/null 2&gt;&amp;1; then echo &#34;\u2705 Configuration valid&#34; # Show final service list echo &#34;Services configured:&#34; docker compose config --services | sed &#39;s\/^\/ - \/&#39; else echo &#34;\u274c Configuration invalid&#34; docker compose config exit 1 fi Modular configurations scale with your project!\nFurther reading Compose include specification Compose file merging ","permalink":"https:\/\/lours.me\/posts\/compose-tip-030-include\/","summary":"<p>Keep configurations DRY! 
The <code>include<\/code> directive enables modular, reusable Compose setups.<\/p>\n<h2 id=\"basic-include-usage\">Basic include usage<\/h2>\n<p>Split configurations into logical modules:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"c\"># compose.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">include<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span>- <span class=\"nt\">path<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/services\/database.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span>- <span class=\"nt\">path<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/services\/cache.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span>- <span class=\"nt\">path<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/services\/monitoring.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:latest<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span 
class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">depends_on<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">postgres<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">redis<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"c\"># services\/database.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">postgres<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgres:15<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">postgres_data:\/var\/lib\/postgresql\/data<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">postgres_data<\/span><span 
class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"project-wide-organization\">Project-wide organization<\/h2>\n<p>Structure complex projects:<\/p>\n<pre tabindex=\"0\"><code>project\/\n\u251c\u2500\u2500 compose.yml           # Main entry point\n\u251c\u2500\u2500 common\/\n\u2502   \u251c\u2500\u2500 networks.yml     # Shared networks\n\u2502   \u2514\u2500\u2500 volumes.yml      # Shared volumes\n\u251c\u2500\u2500 services\/\n\u2502   \u251c\u2500\u2500 frontend.yml     # Frontend services\n\u2502   \u251c\u2500\u2500 backend.yml      # Backend services\n\u2502   \u2514\u2500\u2500 database.yml     # Data layer\n\u2514\u2500\u2500 environments\/\n    \u251c\u2500\u2500 dev.yml          # Development overrides\n    \u2514\u2500\u2500 prod.yml         # Production config\n<\/code><\/pre><div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"c\"># compose.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">include<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span>- <span class=\"nt\">path<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/common\/networks.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span>- <span class=\"nt\">path<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/common\/volumes.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span>- <span class=\"nt\">path<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/services\/frontend.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span 
class=\"cl\"><span class=\"w\">  <\/span>- <span class=\"nt\">path<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/services\/backend.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span>- <span class=\"nt\">path<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/services\/database.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span>- <span class=\"nt\">path<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">${COMPOSE_ENV:-.\/environments\/dev.yml}<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"conditional-includes\">Conditional includes<\/h2>\n<p>Include files based on environment:<\/p>","title":"Docker Compose Tip #30: Compose include for modular configurations"},{"content":"Secure containers with principle of least privilege! 
Control exactly what your containers can do.\nUnderstanding capabilities Linux capabilities break down root privileges into distinct units:\nservices: # Drop all capabilities, then add only what&#39;s needed secure-app: image: myapp cap_drop: - ALL cap_add: - NET_BIND_SERVICE # Bind to ports &lt; 1024 - CHOWN # Change file ownership # Default Docker capabilities (for reference) default-app: image: myapp # Implicitly has: CHOWN, DAC_OVERRIDE, FSETID, FOWNER, # MKNOD, NET_RAW, SETGID, SETUID, SETFCAP, SETPCAP, # NET_BIND_SERVICE, SYS_CHROOT, KILL, AUDIT_WRITE Common capability patterns Web server (needs port 80\/443):\nservices: nginx: image: nginx cap_drop: - ALL cap_add: - NET_BIND_SERVICE # Bind to privileged ports - CHOWN # Change file ownership - SETUID # Switch users - SETGID # Switch groups ports: - &#34;80:80&#34; - &#34;443:443&#34; Network tools:\nservices: tcpdump: image: tcpdump cap_drop: - ALL cap_add: - NET_RAW # Raw socket access - NET_ADMIN # Network configuration network_mode: host Time synchronization:\nservices: chrony: image: chrony cap_drop: - ALL cap_add: - SYS_TIME # Set system time Read-only root filesystem Prevent modifications to the container filesystem:\nservices: api: image: api:latest read_only: true tmpfs: - \/tmp # Writable temp directory - \/var\/run # Runtime data volumes: - type: tmpfs target: \/app\/cache tmpfs: size: 100M Security options Additional security controls:\nservices: app: image: myapp security_opt: - no-new-privileges:true # Prevent privilege escalation - apparmor:docker-default # AppArmor profile - seccomp:unconfined # Seccomp profile - label:type:container_t # SELinux label # Custom seccomp profile restricted: image: restricted-app security_opt: - seccomp:.\/security\/seccomp-profile.json Privileged mode (use cautiously) Sometimes needed for system-level tools:\nservices: # Docker-in-Docker dind: image: docker:dind privileged: true # Full host capabilities volumes: - \/var\/lib\/docker # System monitoring monitoring: 
image: sysdig\/sysdig privileged: true volumes: - \/dev:\/host\/dev - \/proc:\/host\/proc:ro - \/sys:\/host\/sys:ro Running as a non-root user Drop root inside the container (note that userns_mode: host opts out of daemon-level user namespace remapping rather than enabling it):\nservices: app: image: myapp user: &#34;1000:1000&#34; # Run as specific user userns_mode: host # Use host user namespace # Or with custom mapping isolated: image: isolated-app user: &#34;5000:5000&#34; cap_drop: - ALL cap_add: - NET_BIND_SERVICE Complete security example Defense in depth approach:\nservices: secure-api: image: api:production # User settings user: &#34;1000:1000&#34; # Capabilities cap_drop: - ALL cap_add: - NET_BIND_SERVICE # Security options security_opt: - no-new-privileges:true - apparmor:docker-default # Filesystem read_only: true tmpfs: - \/tmp:size=10M,mode=1770,uid=1000,gid=1000 # Resource limits deploy: resources: limits: cpus: &#34;1&#34; memory: 256M # Network isolation networks: - internal # Health monitoring healthcheck: test: [&#34;CMD&#34;, &#34;curl&#34;, &#34;-f&#34;, &#34;http:\/\/localhost:8080\/health&#34;] interval: 30s networks: internal: internal: true # No external access Pro tip Audit container capabilities:\n#!\/bin\/bash # audit-capabilities.sh for service in $(docker compose ps --services); do echo &#34;=== Service: $service ===&#34; # Get capabilities container=$(docker compose ps -q $service) if [ -n &#34;$container&#34; ]; then echo &#34;Current capabilities:&#34; docker inspect $container | jq &#39;.[0].HostConfig.CapAdd \/\/ []&#39; echo &#34;Dropped capabilities:&#34; docker inspect $container | jq &#39;.[0].HostConfig.CapDrop \/\/ []&#39; echo &#34;Security options:&#34; docker inspect $container | jq &#39;.[0].HostConfig.SecurityOpt \/\/ []&#39; # Check if running as root user=$(docker inspect $container | jq -r &#39;.[0].Config.User \/\/ &#34;root&#34;&#39;) if [ &#34;$user&#34; = &#34;root&#34; ] || [ &#34;$user&#34; = &#34;&#34; ]; then echo &#34;\u26a0\ufe0f WARNING: Running as root user&#34; else echo &#34;\u2705 Running as user: 
$user&#34; fi fi echo done Minimal privileges, maximum security!\nFurther reading Linux capabilities Docker security ","permalink":"https:\/\/lours.me\/posts\/compose-tip-029-container-capabilities\/","summary":"<p>Secure containers with principle of least privilege! Control exactly what your containers can do.<\/p>\n<h2 id=\"understanding-capabilities\">Understanding capabilities<\/h2>\n<p>Linux capabilities break down root privileges into distinct units:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"c\"># Drop all capabilities, then add only what&#39;s needed<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">secure-app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">cap_drop<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">ALL<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">cap_add<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">NET_BIND_SERVICE <\/span><span class=\"w\"> <\/span><span class=\"c\"># Bind to 
ports &lt; 1024<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">CHOWN            <\/span><span class=\"w\"> <\/span><span class=\"c\"># Change file ownership<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"c\"># Default Docker capabilities (for reference)<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">default-app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"c\"># Implicitly has: CHOWN, DAC_OVERRIDE, FSETID, FOWNER,<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"c\"># MKNOD, NET_RAW, SETGID, SETUID, SETFCAP, SETPCAP,<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"c\"># NET_BIND_SERVICE, SYS_CHROOT, KILL, AUDIT_WRITE<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"common-capability-patterns\">Common capability patterns<\/h2>\n<p><strong>Web server (needs port 80\/443):<\/strong><\/p>","title":"Docker Compose Tip #29: Container capabilities and security options"},{"content":"Stop managing long docker run commands! 
Convert them to maintainable Compose files.\nBasic conversions Common flag mappings:\n# Docker run command docker run -d \\ --name myapp \\ -p 3000:3000 \\ -e NODE_ENV=production \\ -e API_KEY=secret123 \\ -v $(pwd)\/data:\/app\/data \\ -v \/var\/run\/docker.sock:\/var\/run\/docker.sock \\ --restart unless-stopped \\ myapp:latest Becomes:\nservices: myapp: image: myapp:latest container_name: myapp ports: - &#34;3000:3000&#34; environment: NODE_ENV: production API_KEY: secret123 volumes: - .\/data:\/app\/data - \/var\/run\/docker.sock:\/var\/run\/docker.sock restart: unless-stopped Network configurations # Host network docker run --network host nginx # Custom network docker run --network mynet --ip 172.20.0.5 app # Network alias docker run --network mynet --network-alias db postgres Compose equivalent:\nservices: nginx: image: nginx network_mode: host app: image: app networks: mynet: ipv4_address: 172.20.0.5 postgres: image: postgres networks: mynet: aliases: - db networks: mynet: driver: bridge ipam: config: - subnet: 172.20.0.0\/16 Resource limits docker run -d \\ --memory=&#34;2g&#34; \\ --memory-swap=&#34;4g&#34; \\ --cpus=&#34;1.5&#34; \\ --cpu-shares=&#34;512&#34; \\ myapp Becomes:\nservices: myapp: image: myapp deploy: resources: limits: cpus: &#34;1.5&#34; memory: 2G reservations: cpus: &#34;0.5&#34; memory: 1G mem_swappiness: 60 cpu_shares: 512 User and working directory docker run \\ --user 1000:1000 \\ --workdir \/app \\ --entrypoint \/custom-entrypoint.sh \\ myapp npm start Becomes:\nservices: myapp: image: myapp user: &#34;1000:1000&#34; working_dir: \/app entrypoint: \/custom-entrypoint.sh command: npm start Advanced security options docker run \\ --privileged \\ --cap-add SYS_ADMIN \\ --cap-drop ALL \\ --security-opt apparmor=unconfined \\ --read-only \\ --tmpfs \/tmp:size=100M \\ myapp Becomes:\nservices: myapp: image: myapp privileged: true cap_add: - SYS_ADMIN cap_drop: - ALL security_opt: - apparmor:unconfined read_only: true tmpfs: - 
\/tmp:size=100M Complex real-world example Converting a database with initialization:\ndocker run -d \\ --name postgres \\ -e POSTGRES_PASSWORD=secret \\ -e POSTGRES_DB=mydb \\ -e POSTGRES_USER=admin \\ -v postgres_data:\/var\/lib\/postgresql\/data \\ -v $(pwd)\/init.sql:\/docker-entrypoint-initdb.d\/init.sql \\ -p 5432:5432 \\ --health-cmd &#34;pg_isready -U admin&#34; \\ --health-interval 10s \\ --health-timeout 5s \\ --health-retries 5 \\ --restart always \\ --log-driver json-file \\ --log-opt max-size=10m \\ --log-opt max-file=3 \\ postgres:15 Becomes:\nservices: postgres: image: postgres:15 container_name: postgres environment: POSTGRES_PASSWORD: secret POSTGRES_DB: mydb POSTGRES_USER: admin volumes: - postgres_data:\/var\/lib\/postgresql\/data - .\/init.sql:\/docker-entrypoint-initdb.d\/init.sql ports: - &#34;5432:5432&#34; healthcheck: test: [&#34;CMD-SHELL&#34;, &#34;pg_isready -U admin&#34;] interval: 10s timeout: 5s retries: 5 restart: always logging: driver: json-file options: max-size: &#34;10m&#34; max-file: &#34;3&#34; volumes: postgres_data: Common gotchas # WRONG: Using command for everything services: app: image: ubuntu command: \/bin\/bash -c &#34;apt update &amp;&amp; apt install -y curl &amp;&amp; curl http:\/\/example.com&#34; # RIGHT: Use entrypoint for shell, command for args services: app: image: ubuntu entrypoint: [&#34;\/bin\/bash&#34;, &#34;-c&#34;] command: [&#34;apt update &amp;&amp; apt install -y curl &amp;&amp; curl http:\/\/example.com&#34;] # BETTER: Use custom image services: app: build: . 
command: [&#34;curl&#34;, &#34;http:\/\/example.com&#34;] Pro tip Use Docker Compose&rsquo;s built-in conversion tool:\n# Convert a running container to Compose format docker compose alpha generate [container-name-or-id] # Example: Convert a running container docker run -d --name myapp -p 3000:3000 -e NODE_ENV=production myapp:latest docker compose alpha generate myapp &gt; compose.yml # Generate from multiple containers docker compose alpha generate web db cache &gt; stack.yml # With project name docker compose alpha generate --name myproject web db &gt; compose.yml This experimental command automatically converts running containers to Compose format!\nClean, maintainable, and version-controlled!\nFurther reading Compose file reference Docker run reference ","permalink":"https:\/\/lours.me\/posts\/compose-tip-028-docker-run-to-compose\/","summary":"<p>Stop managing long docker run commands! Convert them to maintainable Compose files.<\/p>\n<h2 id=\"basic-conversions\">Basic conversions<\/h2>\n<p>Common flag mappings:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Docker run command<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker run -d <span class=\"se\">\\\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\">  --name myapp <span class=\"se\">\\\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\">  -p 3000:3000 <span class=\"se\">\\\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\">  -e <span class=\"nv\">NODE_ENV<\/span><span class=\"o\">=<\/span>production <span class=\"se\">\\\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\">  -e <span class=\"nv\">API_KEY<\/span><span class=\"o\">=<\/span>secret123 <span class=\"se\">\\\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\">  -v <span class=\"k\">$(<\/span><span class=\"nb\">pwd<\/span><span 
class=\"k\">)<\/span>\/data:\/app\/data <span class=\"se\">\\\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\">  -v \/var\/run\/docker.sock:\/var\/run\/docker.sock <span class=\"se\">\\\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\">  --restart unless-stopped <span class=\"se\">\\\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\">  myapp:latest\n<\/span><\/span><\/code><\/pre><\/div><p>Becomes:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">myapp<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:latest<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">container_name<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">ports<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;3000:3000&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">NODE_ENV<\/span><span 
class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">production<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">API_KEY<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">secret123<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">.\/data:\/app\/data<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">\/var\/run\/docker.sock:\/var\/run\/docker.sock<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">restart<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">unless-stopped<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"network-configurations\">Network configurations<\/h2>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Host network<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker run --network host nginx\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Custom network<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker run --network mynet --ip 172.20.0.5 app\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Network alias<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker run 
--network mynet --network-alias db postgres\n<\/span><\/span><\/code><\/pre><\/div><p>Compose equivalent:<\/p>","title":"Docker Compose Tip #28: Converting docker run commands to Compose"},{"content":"Extension fields aren&rsquo;t just for YAML reusability - they&rsquo;re powerful metadata carriers that tools can leverage for platform-specific configurations!\nExtension fields as metadata Any key starting with x- is ignored by Compose but preserved in the configuration:\n# Top-level metadata x-project-version: &#34;2.1.0&#34; x-team: &#34;platform-engineering&#34; x-environment: &#34;production&#34; x-region: &#34;us-east-1&#34; services: api: image: myapi:latest # Service-level metadata x-tier: &#34;frontend&#34; x-cost-center: &#34;engineering&#34; x-sla: &#34;99.9&#34; x-owner: &#34;api-team@company.com&#34; Compose Bridge and Kubernetes integration Extension fields can provide hints for Kubernetes deployment:\n# Kubernetes-specific metadata x-kubernetes: namespace: production ingress-class: nginx storage-class: fast-ssd services: web: image: webapp:v2 x-kubernetes: replicas: 3 node-selector: zone: us-east-1a annotations: prometheus.io\/scrape: &#34;true&#34; prometheus.io\/port: &#34;8080&#34; x-deploy: update-strategy: &#34;RollingUpdate&#34; max-surge: 1 max-unavailable: 0 Platform-specific configurations Different deployment platforms can read their own extension fields:\n# Multi-platform metadata services: database: image: postgres:15 # AWS-specific x-aws: instance-type: &#34;db.r5.large&#34; backup-retention: 7 multi-az: true # Azure-specific x-azure: sku: &#34;GP_Gen5_4&#34; backup-redundancy: &#34;Geo&#34; # GCP-specific x-gcp: machine-type: &#34;db-n1-standard-4&#34; backup-location: &#34;us-central1&#34; high-availability: true Tool integration examples CI\/CD pipelines:\nservices: app: build: . 
x-ci: test-command: &#34;npm test&#34; coverage-threshold: 80 deploy-branch: &#34;main&#34; rollback-on-failure: true Monitoring and observability:\nservices: api: image: api:latest x-monitoring: alert-threshold-cpu: 80 alert-threshold-memory: 90 dashboard-url: &#34;https:\/\/grafana.company.com\/d\/api-metrics&#34; slo-target: 99.95 Cost tracking:\nservices: worker: image: worker:latest x-cost: center: &#34;CC-1234&#34; project: &#34;data-processing&#34; environment: &#34;production&#34; estimated-monthly: 450 Using extension fields programmatically Read and process metadata in your tools:\n#!\/bin\/bash # extract-metadata.sh # Get service owner docker compose config | yq &#39;.services.api[&#34;x-owner&#34;]&#39; # List all services with their tier docker compose config | yq &#39;.services | to_entries | .[] | select(.value[&#34;x-tier&#34;]) | {service: .key, tier: .value[&#34;x-tier&#34;]}&#39; # Extract Kubernetes annotations docker compose config | yq &#39;.services.web[&#34;x-kubernetes&#34;].annotations&#39; Compose Bridge example When using Compose Bridge to deploy to Kubernetes:\nx-default-resources: &amp;resources limits: cpu: &#34;1&#34; memory: &#34;512Mi&#34; requests: cpu: &#34;0.5&#34; memory: &#34;256Mi&#34; services: frontend: image: frontend:v1 x-kubernetes: service-type: &#34;LoadBalancer&#34; ingress: enabled: true host: &#34;app.example.com&#34; tls: true deploy: resources: &lt;&lt;: *resources backend: image: backend:v1 x-kubernetes: service-type: &#34;ClusterIP&#34; pod-annotations: linkerd.io\/inject: enabled deploy: replicas: 3 Validation and schemas Define schemas for your extension fields:\n# compose-schema.yml x-schema: required-fields: - x-owner - x-environment environments: - development - staging - production services: app: image: app:latest x-owner: &#34;platform-team&#34; x-environment: &#34;production&#34; x-compliance: gdpr: true pci-dss: false sox: true Pro tip: Automated documentation Generate documentation from extension 
fields:\n#!\/usr\/bin\/env python3 # generate-docs.py import yaml import json def extract_service_metadata(compose_file): with open(compose_file, &#39;r&#39;) as f: config = yaml.safe_load(f) docs = { &#34;project&#34;: { &#34;version&#34;: config.get(&#39;x-project-version&#39;, &#39;unknown&#39;), &#34;team&#34;: config.get(&#39;x-team&#39;, &#39;unknown&#39;), &#34;environment&#34;: config.get(&#39;x-environment&#39;, &#39;unknown&#39;) }, &#34;services&#34;: {} } for name, service in config.get(&#39;services&#39;, {}).items(): metadata = {k: v for k, v in service.items() if k.startswith(&#39;x-&#39;)} if metadata: docs[&#39;services&#39;][name] = metadata return docs # Generate markdown documentation metadata = extract_service_metadata(&#39;compose.yml&#39;) print(f&#34;# Service Catalog\\n&#34;) print(f&#34;**Version:** {metadata[&#39;project&#39;][&#39;version&#39;]}&#34;) print(f&#34;**Team:** {metadata[&#39;project&#39;][&#39;team&#39;]}&#34;) print(f&#34;**Environment:** {metadata[&#39;project&#39;][&#39;environment&#39;]}\\n&#34;) for service, data in metadata[&#39;services&#39;].items(): print(f&#34;## {service}&#34;) for key, value in data.items(): print(f&#34;- **{key[2:]}:** {value}&#34;) Extension fields: Your bridge between Compose and the wider ecosystem!\nFurther reading Compose specification - Extension Compose Bridge documentation ","permalink":"https:\/\/lours.me\/posts\/compose-tip-027-extension-metadata\/","summary":"<p>Extension fields aren&rsquo;t just for YAML reusability - they&rsquo;re powerful metadata carriers that tools can leverage for platform-specific configurations!<\/p>\n<h2 id=\"extension-fields-as-metadata\">Extension fields as metadata<\/h2>\n<p>Any key starting with <code>x-<\/code> is ignored by Compose but preserved in the configuration:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"c\"># Top-level 
metadata<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">x-project-version<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;2.1.0&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">x-team<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;platform-engineering&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">x-environment<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;production&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">x-region<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;us-east-1&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">api<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapi:latest<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"c\"># Service-level metadata<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">x-tier<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;frontend&#34;<\/span><span 
class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">x-cost-center<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;engineering&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">x-sla<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;99.9&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">x-owner<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;api-team@company.com&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"compose-bridge-and-kubernetes-integration\">Compose Bridge and Kubernetes integration<\/h2>\n<p>Extension fields can provide hints for Kubernetes deployment:<\/p>","title":"Docker Compose Tip #27: Extension fields as metadata for tools and platforms"},{"content":"Keep your services running! 
Restart policies ensure containers recover from crashes automatically.\nAvailable restart policies Docker Compose offers four restart options:\nservices: # Never restart (default) dev-tool: image: debug-tools restart: &#34;no&#34; # Restart only on failure (non-zero exit) api: image: api:latest restart: on-failure # Always restart unless manually stopped web: image: nginx restart: unless-stopped # Always restart, even after Docker daemon restarts database: image: postgres:15 restart: always Choosing the right policy Development services:\nservices: # One-off tasks migrator: image: migrate\/migrate restart: &#34;no&#34; command: -path=\/migrations -database=$DB_URL up # Development tools adminer: image: adminer restart: unless-stopped # Survives crashes, not Docker restarts Production services:\nservices: # Critical services app: image: myapp:prod restart: always redis: image: redis:alpine restart: always # Less critical metrics: image: prom\/node-exporter restart: unless-stopped Restart with limits Control restart behavior with on-failure:\nservices: worker: image: worker:latest restart: on-failure:5 # Max 5 restart attempts flaky-service: image: unstable-api restart: on-failure:3 healthcheck: test: [&#34;CMD&#34;, &#34;curl&#34;, &#34;-f&#34;, &#34;http:\/\/localhost\/health&#34;] interval: 30s retries: 3 The counter resets after 10 minutes of successful running.\nRestart and depends_on Restart policies work independently of dependencies:\nservices: db: image: postgres restart: always app: image: myapp restart: always depends_on: - db # App restarts even if db is down Better approach with health checks:\nservices: db: image: postgres restart: always healthcheck: test: [&#34;CMD-SHELL&#34;, &#34;pg_isready -U postgres&#34;] interval: 10s timeout: 5s retries: 5 app: image: myapp restart: always depends_on: db: condition: service_healthy Testing restart behavior Simulate failures to test policies:\n# Force container to exit with error docker compose exec app kill 
-TERM 1 # Check restart count docker compose ps # NAME STATUS RESTARTS # app-1 Up 2 seconds 1 # View restart events docker compose events --since 5m | grep restart # Force immediate restart docker compose restart app Common patterns Database with init scripts:\nservices: postgres: image: postgres:15 restart: always environment: POSTGRES_DB: mydb volumes: - .\/init.sql:\/docker-entrypoint-initdb.d\/init.sql # Restarts don&#39;t re-run init scripts Queue workers:\nservices: worker: image: worker restart: on-failure:10 deploy: replicas: 3 # Each replica has independent restart counter Choose policies that match your availability requirements!\nFurther reading Restart policies documentation Docker restart policies ","permalink":"https:\/\/lours.me\/posts\/compose-tip-026-restart-policies\/","summary":"<p>Keep your services running! Restart policies ensure containers recover from crashes automatically.<\/p>\n<h2 id=\"available-restart-policies\">Available restart policies<\/h2>\n<p>Docker Compose offers four restart options:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"c\"># Never restart (default)<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">dev-tool<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">debug-tools<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">restart<\/span><span 
class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;no&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"c\"># Restart only on failure (non-zero exit)<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">api<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">api:latest<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">restart<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"kc\">on<\/span>-<span class=\"l\">failure<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"c\"># Always restart unless manually stopped<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">nginx<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">restart<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">unless-stopped<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span 
class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"c\"># Always restart, even after Docker daemon restarts<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">database<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgres:15<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">restart<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">always<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"choosing-the-right-policy\">Choosing the right policy<\/h2>\n<p><strong>Development services:<\/strong><\/p>","title":"Docker Compose Tip #26: Using restart policies effectively"},{"content":"Track everything happening in your Compose stack! 
Events provide real-time insights into container lifecycle changes.\nBasic event monitoring Watch events as they happen:\n# Stream all events docker compose events # JSON format for parsing docker compose events --json # Specific services only docker compose events web worker # Since a specific time docker compose events --since &#34;2026-02-06T10:00:00&#34; Event types Common events you&rsquo;ll see:\ncontainer create # Container created container start # Container started container stop # Stop initiated container die # Container exited container destroy # Container removed health_status # Health check changed network connect # Network attached network disconnect # Network detached Processing events Parse with jq:\n# Watch for container deaths docker compose events --json | \\ jq &#39;select(.action==&#34;die&#34;)&#39; # Filter by service docker compose events --json | \\ jq &#39;select(.service==&#34;web&#34;)&#39; # Extract specific fields docker compose events --json | \\ jq &#39;{time, service, action, attributes}&#39; Automation examples Auto-restart on failure:\n#!\/bin\/bash docker compose events --json | while read event; do action=$(echo $event | jq -r &#39;.action&#39;) service=$(echo $event | jq -r &#39;.service&#39;) if [ &#34;$action&#34; = &#34;die&#34; ]; then exit_code=$(echo $event | jq -r &#39;.attributes.exitCode&#39;) if [ &#34;$exit_code&#34; != &#34;0&#34; ]; then echo &#34;Service $service died with code $exit_code&#34; docker compose restart $service fi fi done Health monitoring:\ndocker compose events --json | \\ jq &#39;select(.action==&#34;health_status&#34;)&#39; | \\ while read event; do service=$(echo $event | jq -r &#39;.service&#39;) health=$(echo $event | jq -r &#39;.attributes.health_status&#39;) if [ &#34;$health&#34; = &#34;unhealthy&#34; ]; then notify-slack &#34;Service $service is unhealthy!&#34; fi done Logging events Save events for analysis:\n# Log to file docker compose events --json &gt;&gt; compose-events.log &amp; # 
Rotate logs daily docker compose events --json | \\ rotatelogs -l compose-events-%Y%m%d.log 86400 &amp; # Send to syslog docker compose events --json | \\ logger -t docker-compose-events Debugging with events Track service startup sequence:\n# See startup order docker compose events --json | \\ jq &#39;select(.action==&#34;start&#34;) | {time: .time, service: .service}&#39; # Measure startup time docker compose events --json | \\ jq &#39;select(.action==&#34;start&#34; or .action==&#34;die&#34;) | {service, action, time}&#39; | \\ awk &#39;\/start\/{start[$2]=$4} \/die\/{if(start[$2]) print $2, $4-start[$2], &#34;seconds&#34;}&#39; Filtering events Target specific scenarios:\n# Only container events docker compose events --json | \\ jq &#39;select(.scope==&#34;container&#34;)&#39; # Exclude health checks docker compose events --json | \\ jq &#39;select(.action != &#34;health_status&#34;)&#39; # Errors only docker compose events --json | \\ jq &#39;select(.attributes.exitCode != &#34;0&#34;)&#39; Pro tip Create a monitoring dashboard:\n#!\/bin\/bash # compose-monitor.sh clear echo &#34;=== Compose Stack Monitor ===&#34; docker compose events --json | while read event; do time=$(echo $event | jq -r &#39;.time&#39; | xargs -I {} date -d @{}) service=$(echo $event | jq -r &#39;.service&#39;) action=$(echo $event | jq -r &#39;.action&#39;) case $action in start) color=&#34;\\033[32m&#34; ;; # Green die) color=&#34;\\033[31m&#34; ;; # Red *) color=&#34;\\033[33m&#34; ;; # Yellow esac printf &#34;${color}[%s] %-15s %s\\033[0m\\n&#34; &#34;$time&#34; &#34;$service&#34; &#34;$action&#34; done Example output:\n=== Compose Stack Monitor === [Thu Feb 6 10:15:23] web start # Green [Thu Feb 6 10:15:24] database start # Green [Thu Feb 6 10:15:25] web health_status # Yellow [Thu Feb 6 10:15:28] worker start # Green [Thu Feb 6 10:16:45] worker die # Red [Thu Feb 6 10:16:46] worker stop # Yellow [Thu Feb 6 10:16:48] worker create # Yellow [Thu Feb 6 10:16:49] worker start # Green 
Real-time, color-coded visibility into your stack&rsquo;s behavior!\nFurther reading Docker events documentation Compose CLI reference ","permalink":"https:\/\/lours.me\/posts\/compose-tip-025-events\/","summary":"<p>Track everything happening in your Compose stack! Events provide real-time insights into container lifecycle changes.<\/p>\n<h2 id=\"basic-event-monitoring\">Basic event monitoring<\/h2>\n<p>Watch events as they happen:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Stream all events<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose events\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># JSON format for parsing<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose events --json\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Specific services only<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose events web worker\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Since a specific time<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose events --since <span class=\"s2\">&#34;2026-02-06T10:00:00&#34;<\/span>\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"event-types\">Event types<\/h2>\n<p>Common events you&rsquo;ll see:<\/p>\n<pre tabindex=\"0\"><code>container create    # Container created\ncontainer start     # Container started\ncontainer stop      # Stop initiated\ncontainer die       # Container exited\ncontainer destroy   # Container removed\nhealth_status       # Health check changed\nnetwork connect     # Network attached\nnetwork disconnect  # 
Network detached\n<\/code><\/pre><h2 id=\"processing-events\">Processing events<\/h2>\n<p><strong>Parse with jq:<\/strong><\/p>","title":"Docker Compose Tip #25: Using docker compose events for monitoring"},{"content":"Keep your Compose stack flexible! Profiles let you include or exclude services based on your current needs.\nBasic profiles Define optional services with profiles:\nservices: app: image: myapp:latest ports: - &#34;3000:3000&#34; # No profile - always starts debug: image: debug-tools profiles: - debug # Only starts with --profile debug test-db: image: postgres:15 profiles: - test environment: POSTGRES_DB: test_db Starting with profiles Choose which services to include:\n# Start only core services (no profiles) docker compose up # Include debug tools docker compose --profile debug up # Run tests with test database docker compose --profile test up # Multiple profiles docker compose --profile debug --profile test up Common use cases Development tools:\nservices: app: image: node:20 volumes: - .:\/app adminer: image: adminer profiles: [&#34;debug&#34;, &#34;dev&#34;] ports: - &#34;8080:8080&#34; mailhog: image: mailhog\/mailhog profiles: [&#34;dev&#34;] ports: - &#34;8025:8025&#34; Testing services:\nservices: tests: image: test-runner profiles: [&#34;test&#34;] depends_on: - app - test-db command: pytest test-db: image: postgres:15 profiles: [&#34;test&#34;] environment: POSTGRES_DB: test Monitoring stack Enable monitoring on demand:\nservices: app: image: myapp labels: - &#34;prometheus.io\/scrape=true&#34; prometheus: image: prom\/prometheus profiles: [&#34;monitoring&#34;] volumes: - .\/prometheus.yml:\/etc\/prometheus\/prometheus.yml ports: - &#34;9090:9090&#34; grafana: image: grafana\/grafana profiles: [&#34;monitoring&#34;] ports: - &#34;3001:3000&#34; depends_on: - prometheus node-exporter: image: prom\/node-exporter profiles: [&#34;monitoring&#34;, &#34;metrics&#34;] ports: - &#34;9100:9100&#34; Usage:\n# Dev without monitoring docker compose up 
# Full monitoring stack docker compose --profile monitoring up # Just metrics collection docker compose --profile metrics up Profile combinations Mix profiles for different scenarios:\nservices: api: image: api:latest frontend: image: frontend:latest profiles: [&#34;full&#34;, &#34;ui&#34;] backend-tools: image: debug-tools profiles: [&#34;debug&#34;, &#34;full&#34;] load-test: image: k6 profiles: [&#34;test&#34;, &#34;performance&#34;] # API only docker compose up # Full stack docker compose --profile full up # Performance testing docker compose --profile performance up Environment-based profiles Use environment variables to control profiles:\n# .env COMPOSE_PROFILES=dev,debug # Or via command line export COMPOSE_PROFILES=production,monitoring docker compose up Pro tip View active services for each profile:\n# See what would start with a specific profile docker compose --profile debug config --services # Check all profiles at once docker compose --profile=&#34;*&#34; config --services # Check each profile one by one for profile in dev test debug monitoring; do echo &#34;Profile: $profile&#34; docker compose --profile $profile config --services done Profiles keep your stack lean and flexible!\nFurther reading Using profiles with Compose Compose specification - profiles ","permalink":"https:\/\/lours.me\/posts\/compose-tip-024-profiles\/","summary":"<p>Keep your Compose stack flexible! 
Profiles let you include or exclude services based on your current needs.<\/p>\n<h2 id=\"basic-profiles\">Basic profiles<\/h2>\n<p>Define optional services with profiles:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:latest<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">ports<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;3000:3000&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"c\"># No profile - always starts<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">debug<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">debug-tools<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">profiles<\/span><span class=\"p\">:<\/span><span 
class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">debug<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"c\"># Only starts with --profile debug<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">test-db<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgres:15<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">profiles<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">test<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">POSTGRES_DB<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">test_db<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"starting-with-profiles\">Starting with profiles<\/h2>\n<p>Choose which services to include:<\/p>","title":"Docker Compose Tip #24: Using profiles to organize optional services"},{"content":"Build once, run everywhere! 
Create images that work on ARM Macs, Intel servers, and Raspberry Pi with a single build command.\nConfigure multi-arch builder Docker Desktop handles this by default. For other Docker installations, set up buildx:\n# Only needed if not using Docker Desktop # Create and use a new builder docker buildx create --name multiarch --use # Verify available platforms docker buildx ls Configure platforms Specify target architectures in your compose file:\nservices: app: build: context: . platforms: - linux\/amd64 # Intel\/AMD 64-bit - linux\/arm64 # ARM 64-bit (M1\/M2 Macs, AWS Graviton) - linux\/arm\/v7 # ARM 32-bit (Raspberry Pi) image: myapp:latest Build and push Build for all platforms and push to registry:\n# Build for all platforms docker compose build # Build and push to registry docker compose build --push # Specific service docker compose build --push app Platform-specific Dockerfiles Handle platform differences in your Dockerfile:\nFROM --platform=$BUILDPLATFORM node:20 AS builder ARG TARGETPLATFORM ARG BUILDPLATFORM RUN echo &#34;Building on $BUILDPLATFORM for $TARGETPLATFORM&#34; # Platform-specific commands RUN if [ &#34;$TARGETPLATFORM&#34; = &#34;linux\/arm\/v7&#34; ]; then \\ echo &#34;ARM v7 specific setup&#34;; \\ fi WORKDIR \/app COPY package*.json .\/ RUN npm ci FROM node:20-alpine COPY --from=builder \/app \/app CMD [&#34;node&#34;, &#34;app.js&#34;] Development workflow Different platforms for dev and production:\nservices: app: build: context: . platforms: - ${DOCKER_DEFAULT_PLATFORM:-linux\/amd64} # Production build app-prod: build: context: . 
platforms: - linux\/amd64 - linux\/arm64 profiles: [&#34;prod&#34;] Local development:\n# Build for current platform only docker compose build app # Production multi-platform build docker compose --profile prod build --push app-prod Check image platforms Verify multi-platform support:\n# Inspect manifest docker buildx imagetools inspect myapp:latest # Output shows: # MediaType: application\/vnd.docker.distribution.manifest.list.v2+json # Manifests: # linux\/amd64 # linux\/arm64 # linux\/arm\/v7 CI\/CD integration GitHub Actions example:\n- name: Set up QEMU uses: docker\/setup-qemu-action@v3 - name: Set up Docker Buildx uses: docker\/setup-buildx-action@v3 - name: Build and push run: | docker compose build --push Performance tips Building for multiple platforms takes longer:\nservices: # Development - single platform dev: build: context: . platforms: - linux\/arm64 # Just for M1 Mac profiles: [&#34;dev&#34;] # CI\/CD - all platforms prod: build: context: . cache_from: - type=registry,ref=myapp:buildcache cache_to: - type=registry,ref=myapp:buildcache platforms: - linux\/amd64 - linux\/arm64 Common platform combinations # Modern cloud (AWS, GCP, Azure) platforms: - linux\/amd64 - linux\/arm64 # IoT and edge platforms: - linux\/arm64 - linux\/arm\/v7 - linux\/arm\/v6 # Maximum compatibility platforms: - linux\/amd64 - linux\/arm64 - linux\/arm\/v7 - linux\/386 Pro tip Docker automatically selects the correct variant of multi-arch images:\n# This automatically uses the right platform variant FROM node:20-alpine # You can also use platform variables in your Dockerfile ARG TARGETPLATFORM RUN echo &#34;Building for $TARGETPLATFORM&#34; For platform-specific optimization, build separately:\n# Build only for ARM64 with specific optimizations docker compose build --platform linux\/arm64 # Build only for AMD64 docker compose build --platform linux\/amd64 Further reading Docker buildx documentation Multi-platform images 
","permalink":"https:\/\/lours.me\/posts\/compose-tip-023-multi-platform\/","summary":"<p>Build once, run everywhere! Create images that work on ARM Macs, Intel servers, and Raspberry Pi with a single build command.<\/p>\n<h2 id=\"configure-multi-arch-builder\">Configure multi-arch builder<\/h2>\n<p>Docker Desktop handles this by default. For other Docker installations, set up buildx:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Only needed if not using Docker Desktop<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Create and use a new builder<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker buildx create --name multiarch --use\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Verify available platforms<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker buildx ls\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"configure-platforms\">Configure platforms<\/h2>\n<p>Specify target architectures in your compose file:<\/p>","title":"Docker Compose Tip #23: Multi-platform builds with platforms"},{"content":"Stop hardcoding passwords! 
Docker Compose secrets provide a secure way to handle sensitive data.\nBasic secret setup Define secrets and use them in services:\nsecrets: db_password: file: .\/secrets\/db_password.txt api_key: file: .\/secrets\/api_key.txt services: app: image: myapp:latest secrets: - db_password - api_key environment: DB_PASSWORD_FILE: \/run\/secrets\/db_password API_KEY_FILE: \/run\/secrets\/api_key Secrets appear as files in \/run\/secrets\/ inside containers.\nReading secrets in your app Node.js example:\nconst fs = require(&#39;fs&#39;); function getSecret(name) { try { return fs.readFileSync(`\/run\/secrets\/${name}`, &#39;utf8&#39;).trim(); } catch (err) { return process.env[name]; \/\/ Fallback for dev } } const dbPassword = getSecret(&#39;db_password&#39;); Python example:\nimport os def get_secret(name): try: with open(f&#39;\/run\/secrets\/{name}&#39;) as f: return f.read().strip() except FileNotFoundError: return os.environ.get(name) # Fallback Environment variables as secrets For development, use environment variables:\nsecrets: db_password: environment: DB_PASSWORD api_key: environment: API_KEY services: app: image: myapp secrets: - db_password - api_key Run with:\nDB_PASSWORD=secret API_KEY=key123 docker compose up Multiple environments Use different secret sources per environment:\n# compose.yml (base) services: app: image: myapp secrets: - db_password # compose.dev.yml secrets: db_password: environment: DB_PASSWORD # compose.prod.yml secrets: db_password: file: \/secure\/vault\/db_password Secret permissions Control access within containers:\nservices: app: image: myapp secrets: - source: db_password target: database_password # Rename in container uid: &#39;1000&#39; gid: &#39;1000&#39; mode: 0400 # Read-only for owner External secrets Use secrets from Docker Swarm or external sources:\nsecrets: db_password: external: true name: prod_db_password Common patterns Database connection:\nservices: postgres: image: postgres:15 secrets: - postgres_password environment:
POSTGRES_PASSWORD_FILE: \/run\/secrets\/postgres_password API keys:\nservices: api: image: api:latest secrets: - stripe_key - jwt_secret command: &gt; sh -c &#34; export STRIPE_KEY=$$(cat \/run\/secrets\/stripe_key) &amp;&amp; export JWT_SECRET=$$(cat \/run\/secrets\/jwt_secret) &amp;&amp; npm start&#34; Pro tip Never commit secrets. Always use .gitignore \ud83d\ude05.\nFurther reading Compose secrets specification Docker secrets management ","permalink":"https:\/\/lours.me\/posts\/compose-tip-022-secrets\/","summary":"<p>Stop hardcoding passwords! Docker Compose secrets provide a secure way to handle sensitive data.<\/p>\n<h2 id=\"basic-secret-setup\">Basic secret setup<\/h2>\n<p>Define secrets and use them in services:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">secrets<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">db_password<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">file<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/secrets\/db_password.txt<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">api_key<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">file<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/secrets\/api_key.txt<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span 
class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:latest<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">secrets<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">db_password<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">api_key<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">DB_PASSWORD_FILE<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">\/run\/secrets\/db_password<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">API_KEY_FILE<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">\/run\/secrets\/api_key<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Secrets appear as files in <code>\/run\/secrets\/<\/code> inside containers.<\/p>\n<h2 id=\"reading-secrets-in-your-app\">Reading secrets in your app<\/h2>\n<p><strong>Node.js example:<\/strong><\/p>","title":"Docker Compose Tip #22: Using secrets in Compose 
files"},{"content":"Choose the right networking mode for your containers. Understand when isolation matters and when performance is key.\nBridge mode (default) The default and most secure option - containers get their own network namespace:\nservices: web: image: nginx ports: - &#34;8080:80&#34; # Port mapping required networks: - app_network db: image: postgres:15 networks: - app_network networks: app_network: driver: bridge Containers can communicate using service names (web, db) within the network.\nHost mode Container shares the host&rsquo;s network stack - no network isolation:\nservices: monitoring: image: prom\/node-exporter network_mode: host # No port mapping needed - uses host ports directly The container can access all host network interfaces directly.\nKey differences Feature Bridge Host Port mapping Required (8080:80) Not needed Network isolation Yes No Container DNS Service names work Use localhost\/IPs Performance Small overhead Native speed Security Better isolation Less secure When to use each Use Bridge for:\nservices: # Application services api: networks: [app] # Databases postgres: networks: [app] # Web servers nginx: networks: [app] Use Host for:\nservices: # System monitoring node-exporter: network_mode: host # Network tools tcpdump: network_mode: host # Performance-critical game-server: network_mode: host Security considerations Bridge mode provides better security:\nservices: # Isolated database database: image: postgres networks: - backend # Not exposed to host network # Only web is exposed web: image: nginx networks: - backend ports: - &#34;443:443&#34; # Controlled exposure Host mode risks:\nContainer can access all host ports Can see all network traffic No network-level isolation Mixing modes You can mix both in one project:\nservices: app: image: myapp networks: - isolated ports: - &#34;3000:3000&#34; monitoring: image: netdata\/netdata network_mode: host # Can monitor host system and services on host ports networks: isolated:
driver: bridge Pro tip Test network isolation:\n# Bridge mode - can&#39;t access host services directly docker compose exec web curl localhost:5432 # Fails # Host mode - full access docker compose exec monitoring curl localhost:5432 # Works Choose bridge for security, host for system-level tools.\nFurther reading Docker networking overview Bridge network driver ","permalink":"https:\/\/lours.me\/posts\/compose-tip-021-bridge-vs-host\/","summary":"<p>Choose the right networking mode for your containers. Understand when isolation matters and when performance is key.<\/p>\n<h2 id=\"bridge-mode-default\">Bridge mode (default)<\/h2>\n<p>The default and most secure option - containers get their own network namespace:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">nginx<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">ports<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;8080:80&#34;<\/span><span class=\"w\">  <\/span><span class=\"c\"># Port mapping required<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">networks<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span 
class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">app_network<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">db<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgres:15<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">networks<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">app_network<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">networks<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app_network<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">driver<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">bridge<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Containers can communicate using service names (web, db) within the network.<\/p>","title":"Docker Compose Tip #21: Understanding bridge vs host networking modes"},{"content":"Stop scrolling through endless output. 
Master docker compose logs options to find issues fast and monitor services effectively.\nBasic commands # All service logs docker compose logs # Single service docker compose logs web # Multiple services docker compose logs web worker Follow logs in real-time Watch logs as they happen:\n# Follow all services docker compose logs -f # Follow specific service docker compose logs -f api # Start fresh and follow docker compose logs -f --since 1m Tail recent logs Get last N lines:\n# Last 100 lines per service docker compose logs --tail 100 # Last 50 lines of api service docker compose logs --tail 50 api # Just show last line docker compose logs --tail 1 Filter by time Focus on recent issues:\n# Last 5 minutes docker compose logs --since 5m # Last hour docker compose logs --since 1h # Since specific time docker compose logs --since &#34;2024-01-30T10:00:00&#34; # Between times docker compose logs --since &#34;2024-01-30T10:00:00&#34; --until &#34;2024-01-30T11:00:00&#34; Add helpful context Include timestamps and service details:\n# Add timestamps docker compose logs -t # No service name prefix (cleaner output) docker compose logs --no-log-prefix # Combine for debugging docker compose logs -t --tail 50 -f api Search logs effectively Find specific errors or patterns:\n# Search for errors docker compose logs | grep -i error # Find specific request ID docker compose logs api | grep &#34;req-12345&#34; # Count occurrences docker compose logs | grep -c &#34;connection refused&#34; # Show context around matches docker compose logs | grep -B 2 -A 2 &#34;panic&#34; Monitor multiple services Split terminal approach:\n# Terminal 1: Frontend logs docker compose logs -f frontend # Terminal 2: Backend logs docker compose logs -f api # Terminal 3: Database logs docker compose logs -f postgres Save logs for analysis # Save all logs docker compose logs &gt; logs.txt # Save with timestamps docker compose logs -t &gt; logs-$(date +%Y%m%d).txt # Service-specific logs docker compose logs 
api &gt; api-debug.log Common debugging patterns Application won&rsquo;t start:\ndocker compose logs --tail 100 app | grep -E &#34;error|fatal|panic&#34; Connection issues:\ndocker compose logs --since 5m | grep -i &#34;connection\\|refused\\|timeout&#34; Memory problems:\ndocker compose logs | grep -i &#34;memory\\|oom\\|heap&#34; Pro tip Create log aliases for common tasks:\n# Add to ~\/.bashrc or ~\/.zshrc alias dcl=&#39;docker compose logs&#39; alias dclf=&#39;docker compose logs -f&#39; alias dclt=&#39;docker compose logs --tail 100&#39; alias dcle=&#39;docker compose logs | grep -i error&#39; # Usage dcl api # Quick logs dclf web # Follow web logs dcle # Find all errors Further reading Docker Compose logs reference Docker logging drivers ","permalink":"https:\/\/lours.me\/posts\/compose-tip-020-docker-compose-logs\/","summary":"<p>Stop scrolling through endless output. Master <code>docker compose logs<\/code> options to find issues fast and monitor services effectively.<\/p>\n<h2 id=\"basic-commands\">Basic commands<\/h2>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># All service logs<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose logs\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Single service<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose logs web\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Multiple services<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose logs web worker\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"follow-logs-in-real-time\">Follow logs in real-time<\/h2>\n<p>Watch logs as they happen:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" 
class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Follow all services<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose logs -f\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Follow specific service<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose logs -f api\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Start fresh and follow<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose logs -f --since 1m\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"tail-recent-logs\">Tail recent logs<\/h2>\n<p>Get last N lines:<\/p>","title":"Docker Compose Tip #20: Using docker compose logs effectively"},{"content":"Keep production and development configs separate. Docker Compose automatically merges compose.override.yml for local development tweaks.\nThe magic Compose automatically loads two files:\ncompose.yml (base configuration) compose.override.yml (local overrides) # These are equivalent: docker compose up docker compose -f compose.yml -f compose.override.yml up Basic setup compose.yml (production-ready):\nservices: web: image: myapp:latest ports: - &#34;80:80&#34; environment: NODE_ENV: production LOG_LEVEL: warn compose.override.yml (developer-friendly):\nservices: web: build: . 
# Build locally instead of using image ports: - &#34;3000:80&#34; # Different port for development volumes: - .:\/app # Mount source code environment: NODE_ENV: development LOG_LEVEL: debug DEBUG: &#34;true&#34; Real development example compose.yml:\nservices: frontend: image: frontend:${VERSION:-latest} depends_on: - api api: image: api:${VERSION:-latest} environment: DATABASE_URL: ${DATABASE_URL} depends_on: - postgres postgres: image: postgres:15 environment: POSTGRES_PASSWORD: ${DB_PASSWORD} compose.override.yml:\nservices: frontend: build: .\/frontend volumes: - .\/frontend:\/app - \/app\/node_modules command: npm run dev ports: - &#34;3000:3000&#34; api: build: .\/api volumes: - .\/api:\/app environment: DATABASE_URL: postgres:\/\/postgres:localpass@postgres\/devdb FLASK_DEBUG: &#34;1&#34; ports: - &#34;5000:5000&#34; postgres: environment: POSTGRES_PASSWORD: localpass ports: - &#34;5432:5432&#34; volumes: - postgres_dev:\/var\/lib\/postgresql\/data volumes: postgres_dev: Exclude override in production Deploy without override file:\n# Production deployment - override.yml ignored docker compose -f compose.yml up -d # Or explicitly with production override docker compose -f compose.yml -f compose.prod.yml up -d Multiple override files Chain multiple configurations:\n# Base + override + additional testing setup docker compose \\ -f compose.yml \\ -f compose.override.yml \\ -f compose.test.yml \\ up Check merged configuration See the final result:\n# View merged configuration (both files) docker compose config # View with explicit files docker compose -f compose.yml -f compose.override.yml config # Production config without override docker compose -f compose.yml config # Save merged config docker compose config &gt; composed.yml Common patterns Enable debugging tools:\n# compose.override.yml services: web: command: npm run dev environment: DEBUG: &#34;*&#34; ports: - &#34;9229:9229&#34; # Node debugger Add development services:\n# compose.override.yml services: 
mailhog: # Email testing image: mailhog\/mailhog ports: - &#34;8025:8025&#34; Pro tip Add compose.override.yml to .gitignore for personal settings:\necho &#34;compose.override.yml&#34; &gt;&gt; .gitignore # Provide a template cp compose.override.yml compose.override.yml.example git add compose.override.yml.example Developers copy the example and customize locally without affecting others.\nFurther reading Compose file merging Override and extend ","permalink":"https:\/\/lours.me\/posts\/compose-tip-019-override-files\/","summary":"<p>Keep production and development configs separate. Docker Compose automatically merges <code>compose.override.yml<\/code> for local development tweaks.<\/p>\n<h2 id=\"the-magic\">The magic<\/h2>\n<p>Compose automatically loads two files:<\/p>\n<ol>\n<li><code>compose.yml<\/code> (base configuration)<\/li>\n<li><code>compose.override.yml<\/code> (local overrides)<\/li>\n<\/ol>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># These are equivalent:<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose up\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose -f compose.yml -f compose.override.yml up\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"basic-setup\">Basic setup<\/h2>\n<p><strong>compose.yml<\/strong> (production-ready):<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span 
class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:latest<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">ports<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;80:80&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">NODE_ENV<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">production<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">LOG_LEVEL<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">warn<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p><strong>compose.override.yml<\/strong> (developer-friendly):<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">build<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">. 
<\/span><span class=\"w\"> <\/span><span class=\"c\"># Build locally instead of using image<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">ports<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;3000:80&#34;<\/span><span class=\"w\">  <\/span><span class=\"c\"># Different port for development<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">.:\/app <\/span><span class=\"w\"> <\/span><span class=\"c\"># Mount source code<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">NODE_ENV<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">development<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">LOG_LEVEL<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">debug<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">DEBUG<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;true&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"real-development-example\">Real development 
example<\/h2>\n<p><strong>compose.yml<\/strong>:<\/p>","title":"Docker Compose Tip #19: Override files for local development"},{"content":"Give your containers time to clean up. Configure grace periods to ensure database connections close, transactions complete, and data saves properly.\nThe problem By default, Docker gives containers 10 seconds to stop before forcefully killing them:\nservices: app: image: myapp:latest # Container gets SIGTERM, then SIGKILL after 10s This can interrupt long-running operations and corrupt data.\nThe solution Use stop_grace_period to extend shutdown time:\nservices: worker: image: myworker:latest stop_grace_period: 2m # 2 minutes to finish current job stop_signal: SIGTERM # Signal to send first (default) Real-world examples Different services need different grace periods:\nservices: # Web server - quick shutdown nginx: image: nginx stop_grace_period: 30s # API - finish active requests api: image: api:latest stop_grace_period: 45s environment: SHUTDOWN_TIMEOUT: 40 # App-level timeout # Background worker - complete current job worker: image: worker:latest stop_grace_period: 5m environment: WORKER_SHUTDOWN_TIMEOUT: 290 # Slightly less than grace period # Database - flush and close properly postgres: image: postgres:15 stop_grace_period: 2m command: postgres -c max_wal_size=2GB Handle signals properly Your application must respond to SIGTERM:\n\/\/ Node.js example process.on(&#39;SIGTERM&#39;, async () =&gt; { console.log(&#39;SIGTERM received, shutting down gracefully...&#39;); server.close(() =&gt; { console.log(&#39;HTTP server closed&#39;); }); \/\/ Close database connections await db.close(); \/\/ Finish current jobs await jobQueue.shutdown(); process.exit(0); }); Test graceful shutdown Verify your grace period works:\n# Start services docker compose up -d # Trigger long operation in container docker compose exec worker trigger-long-job # Stop with timing time docker compose stop worker # Should take close to your grace period # 
real 2m3.456s Compose commands respect grace period All these commands honor stop_grace_period:\ndocker compose stop docker compose down docker compose restart docker compose rm -s # Stop first Common patterns Quick web services:\nstop_grace_period: 30s # Finish HTTP requests Job processors:\nstop_grace_period: 5m # Complete current job Databases:\nstop_grace_period: 2m # Flush buffers, close connections Message consumers:\nstop_grace_period: 1m # Process remaining messages Pro tip For critical data operations, combine grace period with health checks:\nservices: processor: image: processor:latest stop_grace_period: 5m healthcheck: test: [&#34;CMD&#34;, &#34;pgrep&#34;, &#34;-x&#34;, &#34;processor&#34;] interval: 30s retries: 10 # Keep checking during shutdown This ensures the container stays healthy during graceful shutdown.\nFurther reading Compose stop_grace_period Docker stop documentation ","permalink":"https:\/\/lours.me\/posts\/compose-tip-018-graceful-shutdown\/","summary":"<p>Give your containers time to clean up. 
Configure grace periods to ensure database connections close, transactions complete, and data saves properly.<\/p>\n<h2 id=\"the-problem\">The problem<\/h2>\n<p>By default, Docker gives containers 10 seconds to stop before forcefully killing them:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:latest<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"c\"># Container gets SIGTERM, then SIGKILL after 10s<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>This can interrupt long-running operations and corrupt data.<\/p>\n<h2 id=\"the-solution\">The solution<\/h2>\n<p>Use <code>stop_grace_period<\/code> to extend shutdown time:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">worker<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myworker:latest<\/span><span class=\"w\">\n<\/span><\/span><\/span><span 
class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">stop_grace_period<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">2m <\/span><span class=\"w\"> <\/span><span class=\"c\"># 2 minutes to finish current job<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">stop_signal<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">SIGTERM  <\/span><span class=\"w\"> <\/span><span class=\"c\"># Signal to send first (default)<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"real-world-examples\">Real-world examples<\/h2>\n<p>Different services need different grace periods:<\/p>","title":"Docker Compose Tip #18: Graceful shutdown with stop_grace_period"},{"content":"Stop copy-pasting the same configuration. YAML anchors let you define once and reuse everywhere in your Compose files.\nThe basics Define an anchor with &amp; and reference it with *:\nservices: web: &amp;default-app image: myapp:latest environment: NODE_ENV: production LOG_LEVEL: info networks: - app-network worker: &lt;&lt;: *default-app # Inherit all settings from web command: npm run worker The worker service inherits everything from web, then overrides the command.\nCommon logging configuration Share logging setup across all services:\nx-logging: &amp;default-logging logging: driver: &#34;json-file&#34; options: max-size: &#34;10m&#34; max-file: &#34;3&#34; services: web: image: nginx &lt;&lt;: *default-logging api: image: myapi:latest &lt;&lt;: *default-logging worker: image: myworker:latest &lt;&lt;: *default-logging Shared environment variables Perfect for microservices with common config:\nx-common-variables: &amp;common-variables REDIS_URL: redis:\/\/redis:6379 POSTGRES_HOST: postgres POSTGRES_PORT: 5432 LOG_LEVEL: ${LOG_LEVEL:-info} services: api: image: api:latest environment: &lt;&lt;: 
*common-variables SERVICE_NAME: api PORT: 8080 worker: image: worker:latest environment: &lt;&lt;: *common-variables SERVICE_NAME: worker WORKER_CONCURRENCY: 10 Network and volume patterns Reuse complex configurations:\nx-app-service: &amp;app-defaults networks: - frontend - backend volumes: - .\/shared:\/app\/shared:ro - logs:\/app\/logs restart: unless-stopped deploy: resources: limits: memory: 512M services: web: &lt;&lt;: *app-defaults image: web:latest ports: - &#34;3000:3000&#34; api: &lt;&lt;: *app-defaults image: api:latest ports: - &#34;8080:8080&#34; networks: frontend: backend: volumes: logs: Build configuration reuse Share build settings across services:\nx-build-args: &amp;build-args NODE_VERSION: &#34;20&#34; NPM_TOKEN: ${NPM_TOKEN} services: app: build: context: .\/app args: &lt;&lt;: *build-args worker: build: context: .\/worker args: &lt;&lt;: *build-args WORKER_MODE: &#34;true&#34; View expanded configuration Check how anchors expand:\ndocker compose config This shows the final configuration with all anchors resolved.\nPro tip Use the x- prefix for anchor-only blocks - Compose ignores top-level keys starting with x-:\nx-healthcheck: &amp;healthcheck healthcheck: test: [&#34;CMD&#34;, &#34;curl&#34;, &#34;-f&#34;, &#34;http:\/\/localhost\/health&#34;] interval: 30s timeout: 3s retries: 3 services: web: image: web:latest &lt;&lt;: *healthcheck The x-healthcheck block exists only for the anchor, not as a service.\nFurther reading YAML anchors specification Compose extension fields ","permalink":"https:\/\/lours.me\/posts\/compose-tip-017-yaml-anchors\/","summary":"<p>Stop copy-pasting the same configuration. 
YAML anchors let you define once and reuse everywhere in your Compose files.<\/p>\n<h2 id=\"the-basics\">The basics<\/h2>\n<p>Define an anchor with <code>&amp;<\/code> and reference it with <code>*<\/code>:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"cp\">&amp;default-app<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:latest<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">NODE_ENV<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">production<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">LOG_LEVEL<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">info<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">networks<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">app-network<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span 
class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">worker<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">&lt;&lt;<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"cp\">*default-app<\/span><span class=\"w\">  <\/span><span class=\"c\"># Inherit all settings from web<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">command<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">npm run worker<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>The <code>worker<\/code> service inherits everything from <code>web<\/code>, then overrides the command.<\/p>","title":"Docker Compose Tip #17: YAML anchors to reduce duplication"},{"content":"Prevent containers from consuming all available resources. 
Set CPU and memory limits to ensure stable multi-service deployments.\nThe basics Resource limits protect your system from runaway containers:\nservices: api: image: node:20 deploy: resources: limits: cpus: &#39;0.5&#39; # Half a CPU core memory: 512M # 512 megabytes reservations: cpus: &#39;0.25&#39; # Minimum guaranteed memory: 256M The container can use up to 512MB memory and 50% of one CPU core.\nReal-world example Production stack with proper resource allocation:\nservices: nginx: image: nginx:alpine deploy: resources: limits: cpus: &#39;0.5&#39; memory: 256M reservations: memory: 128M app: image: myapp:latest deploy: resources: limits: cpus: &#39;2.0&#39; # 2 full cores memory: 2G reservations: cpus: &#39;1.0&#39; memory: 1G postgres: image: postgres:15 deploy: resources: limits: cpus: &#39;1.0&#39; memory: 1G reservations: memory: 512M Monitor resource usage Check actual resource consumption:\n# Real-time resource usage for all services docker compose stats # Monitor specific service docker compose stats app # One-time snapshot docker compose stats --no-stream Output shows:\nNAME CPU % MEM USAGE \/ LIMIT MEM % app 45.2% 892MiB \/ 2GiB 43.5% nginx 0.1% 12MiB \/ 256MiB 4.7% postgres 12.3% 467MiB \/ 1GiB 45.6% Development vs production Use environment variables for different environments:\nservices: app: image: myapp:latest deploy: resources: limits: cpus: &#39;${CPU_LIMIT:-2.0}&#39; memory: &#39;${MEMORY_LIMIT:-2G}&#39; # Development (relaxed limits) CPU_LIMIT=4.0 MEMORY_LIMIT=4G docker compose up # Production (strict limits) CPU_LIMIT=1.0 MEMORY_LIMIT=1G docker compose up Common issues Container killed (exit code 137):\n# Out of memory - increase limit deploy: resources: limits: memory: 1G # Was 512M Slow performance:\n# CPU throttling - increase CPU limit deploy: resources: limits: cpus: &#39;2.0&#39; # Was 0.5 Pro tip Test your limits under load before production:\n# Stress test with limited resources docker compose up -d docker exec app stress --cpu 4 --vm 
2 --vm-bytes 256M --timeout 30s # Check if limits hold docker compose stats --no-stream This ensures your limits are realistic for actual workload.\nFurther reading Deploy specification Runtime options with Memory, CPUs ","permalink":"https:\/\/lours.me\/posts\/compose-tip-016-resource-limits\/","summary":"<p>Prevent containers from consuming all available resources. Set CPU and memory limits to ensure stable multi-service deployments.<\/p>\n<h2 id=\"the-basics\">The basics<\/h2>\n<p>Resource limits protect your system from runaway containers:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">api<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">node:20<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">deploy<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">resources<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">limits<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">          <\/span><span class=\"nt\">cpus<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s1\">&#39;0.5&#39;<\/span><span class=\"w\">     <\/span><span class=\"c\"># 
Half a CPU core<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">          <\/span><span class=\"nt\">memory<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">512M   <\/span><span class=\"w\"> <\/span><span class=\"c\"># 512 megabytes<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">reservations<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">          <\/span><span class=\"nt\">cpus<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s1\">&#39;0.25&#39;<\/span><span class=\"w\">    <\/span><span class=\"c\"># Minimum guaranteed<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">          <\/span><span class=\"nt\">memory<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">256M<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>The container can use up to 512MB memory and 50% of one CPU core.<\/p>","title":"Docker Compose Tip #16: Setting resource limits with deploy.resources"},{"content":"Deploy with zero downtime using Traefik&rsquo;s dynamic routing. 
Switch traffic between blue and green deployments by updating environment variables, with automatic health checks.\nThe setup Traefik automatically discovers services and routes traffic based on labels:\n# compose.yml services: traefik: image: traefik:v3.0 command: - &#34;--api.insecure=true&#34; - &#34;--providers.docker=true&#34; - &#34;--providers.docker.exposedbydefault=false&#34; ports: - &#34;80:80&#34; - &#34;8080:8080&#34; # Traefik dashboard volumes: - \/var\/run\/docker.sock:\/var\/run\/docker.sock:ro networks: - web app-blue: image: myapp:${BLUE_VERSION:-v1.0} labels: - &#34;traefik.enable=${BLUE_ENABLED:-true}&#34; - &#34;traefik.http.routers.app-blue.rule=Host(`app.localhost`)&#34; - &#34;traefik.http.routers.app-blue.priority=1&#34; - &#34;traefik.http.services.app-blue.loadbalancer.server.port=3000&#34; networks: - web environment: VERSION: blue app-green: image: myapp:${GREEN_VERSION:-v2.0} labels: - &#34;traefik.enable=${GREEN_ENABLED:-false}&#34; # Start disabled - &#34;traefik.http.routers.app-green.rule=Host(`app.localhost`)&#34; - &#34;traefik.http.routers.app-green.priority=2&#34; # Higher priority when enabled - &#34;traefik.http.services.app-green.loadbalancer.server.port=3000&#34; networks: - web environment: VERSION: green networks: web: driver: bridge Deployment workflow Switch traffic by recreating containers with updated labels:\n# 1. Deploy with blue active docker compose up -d # 2. Update green to new version GREEN_VERSION=v2.0 docker compose up -d app-green # 3. Switch traffic to green (recreate with new labels) BLUE_ENABLED=false GREEN_ENABLED=true docker compose up -d # Traefik detects the change and switches routing instantly! 
For this to work, update your compose file:\nservices: app-blue: labels: - &#34;traefik.enable=${BLUE_ENABLED:-true}&#34; app-green: labels: - &#34;traefik.enable=${GREEN_ENABLED:-false}&#34; Weighted canary deployment Gradually shift traffic from blue to green:\nservices: app-blue: labels: - &#34;traefik.enable=true&#34; - &#34;traefik.http.services.app.loadbalancer.server.port=3000&#34; - &#34;traefik.http.services.app.loadbalancer.sticky=true&#34; - &#34;traefik.http.services.app.loadbalancer.weight=90&#34; # 90% traffic app-green: labels: - &#34;traefik.enable=true&#34; - &#34;traefik.http.services.app.loadbalancer.server.port=3000&#34; - &#34;traefik.http.services.app.loadbalancer.weight=10&#34; # 10% traffic Adjust weights to gradually migrate:\n# Shift to 50\/50 BLUE_WEIGHT=50 GREEN_WEIGHT=50 docker compose up -d # Full migration to green BLUE_WEIGHT=0 GREEN_WEIGHT=100 docker compose up -d Update your compose file to use variables:\nservices: app-blue: labels: - &#34;traefik.http.services.app.loadbalancer.weight=${BLUE_WEIGHT:-90}&#34; app-green: labels: - &#34;traefik.http.services.app.loadbalancer.weight=${GREEN_WEIGHT:-10}&#34; Health-check based routing Traefik only routes to healthy services:\nservices: app-green: image: myapp:v2.0 healthcheck: test: [&#34;CMD&#34;, &#34;curl&#34;, &#34;-f&#34;, &#34;http:\/\/localhost:3000\/health&#34;] interval: 10s timeout: 3s retries: 3 labels: - &#34;traefik.enable=true&#34; - &#34;traefik.http.services.app-green.loadbalancer.healthcheck.path=\/health&#34; - &#34;traefik.http.services.app-green.loadbalancer.healthcheck.interval=10s&#34; Instant rollback # Revert to blue immediately BLUE_ENABLED=true GREEN_ENABLED=false docker compose up -d # Containers recreate quickly and Traefik switches routing! Pro tip Monitor deployments via Traefik dashboard at http:\/\/localhost:8080. 
You can see:\nActive services and their health Current routing rules Real-time traffic distribution Response times and error rates Further reading Traefik Docker provider Docker Compose profiles ","permalink":"https:\/\/lours.me\/posts\/compose-tip-015-blue-green-deployments\/","summary":"<p>Deploy with zero downtime using Traefik&rsquo;s dynamic routing. Switch traffic between blue and green deployments by updating environment variables, with automatic health checks.<\/p>\n<h2 id=\"the-setup\">The setup<\/h2>\n<p>Traefik automatically discovers services and routes traffic based on labels:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"c\"># compose.yml<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">traefik<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">traefik:v3.0<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">command<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;--api.insecure=true&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;--providers.docker=true&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- 
<span class=\"s2\">&#34;--providers.docker.exposedbydefault=false&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">ports<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;80:80&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;8080:8080&#34;<\/span><span class=\"w\">  <\/span><span class=\"c\"># Traefik dashboard<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">\/var\/run\/docker.sock:\/var\/run\/docker.sock:ro<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">networks<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">web<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app-blue<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:${BLUE_VERSION:-v1.0}<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span 
class=\"nt\">labels<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;traefik.enable=${BLUE_ENABLED:-true}&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;traefik.http.routers.app-blue.rule=Host(`app.localhost`)&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;traefik.http.routers.app-blue.priority=1&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;traefik.http.services.app-blue.loadbalancer.server.port=3000&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">networks<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">web<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">VERSION<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">blue<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app-green<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span 
class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:${GREEN_VERSION:-v2.0}<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">labels<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;traefik.enable=${GREEN_ENABLED:-false}&#34;<\/span><span class=\"w\">  <\/span><span class=\"c\"># Start disabled<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;traefik.http.routers.app-green.rule=Host(`app.localhost`)&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;traefik.http.routers.app-green.priority=2&#34;<\/span><span class=\"w\">  <\/span><span class=\"c\"># Higher priority when enabled<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;traefik.http.services.app-green.loadbalancer.server.port=3000&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">networks<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">web<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">VERSION<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span 
class=\"l\">green<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">networks<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">driver<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">bridge<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"deployment-workflow\">Deployment workflow<\/h2>\n<p>Switch traffic by recreating containers with updated labels:<\/p>","title":"Docker Compose Tip #15: Blue-green deployments with Traefik"},{"content":"Running containers as root is a security risk. Configure your services to use non-root users for defense in depth.\nThe problem By default, many containers run as root:\nservices: app: image: nginx # Runs as root user (uid 0) - security risk! 
If compromised, attackers have root privileges inside the container.\nThe solution Set the user in compose.yml:\nservices: app: image: node:20 user: &#34;1000:1000&#34; # Run as uid:gid 1000 working_dir: \/app volumes: - .\/app:\/app Or use the image&rsquo;s built-in user:\nservices: nginx: image: nginx:alpine user: &#34;nginx&#34; # Use nginx user from image Creating users in Dockerfile Best practice: create a dedicated user in your image:\nFROM node:20-alpine # Create app user and group RUN addgroup -g 1001 -S appuser &amp;&amp; \\ adduser -u 1001 -S appuser -G appuser # Create app directory with correct ownership RUN mkdir -p \/app &amp;&amp; chown -R appuser:appuser \/app # Switch to non-root user USER appuser WORKDIR \/app COPY --chown=appuser:appuser package*.json .\/ RUN npm ci --only=production COPY --chown=appuser:appuser . . CMD [&#34;node&#34;, &#34;server.js&#34;] Use it in compose.yml:\nservices: api: build: . # Already runs as appuser from Dockerfile ports: - &#34;3000:3000&#34; Handling file permissions When using volumes with non-root users:\nservices: app: image: node:20 user: &#34;1000:1000&#34; volumes: - .\/data:\/data # Must be writable by uid 1000 # Fix permissions on startup entrypoint: | sh -c &#39;chown -R 1000:1000 \/data 2&gt;\/dev\/null || true &amp;&amp; npm start&#39; Better approach - use init container:\nservices: # Fix permissions before app starts init-permissions: image: busybox user: root volumes: - .\/data:\/data command: chown -R 1000:1000 \/data app: image: node:20 user: &#34;1000:1000&#34; volumes: - .\/data:\/data depends_on: init-permissions: condition: service_completed_successfully Common issues and solutions Port binding below 1024:\nservices: web: image: nginx user: &#34;nginx&#34; ports: - &#34;8080:8080&#34; # Use high ports for non-root # Configure nginx to listen on 8080 instead of 80 Reading secrets:\nservices: app: image: myapp user: &#34;1000:1000&#34; secrets: - source: db_password uid: &#34;1000&#34; # Make 
secret readable by user mode: 0400 secrets: db_password: file: .\/secrets\/db_password.txt Verify user Check which user is running:\ndocker compose exec app whoami # Should output: appuser (not root) docker compose exec app id # uid=1000(appuser) gid=1000(appuser) Further reading Docker security best practices USER instruction in Dockerfile ","permalink":"https:\/\/lours.me\/posts\/compose-tip-014-non-root-users\/","summary":"<p>Running containers as root is a security risk. Configure your services to use non-root users for defense in depth.<\/p>\n<h2 id=\"the-problem\">The problem<\/h2>\n<p>By default, many containers run as root:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">nginx<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"c\"># Runs as root user (uid 0) - security risk!<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>If compromised, attackers have root privileges inside the container.<\/p>\n<h2 id=\"the-solution\">The solution<\/h2>\n<p>Set the user in compose.yml:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span 
class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">node:20<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">user<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;1000:1000&#34;<\/span><span class=\"w\">  <\/span><span class=\"c\"># Run as uid:gid 1000<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">working_dir<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">\/app<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">.\/app:\/app<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Or use the image&rsquo;s built-in user:<\/p>","title":"Docker Compose Tip #14: Running containers as non-root users"},{"content":"Need your frontend project to talk to a backend in another Compose project? 
External networks let you connect containers across different stacks.\nThe problem Two separate Compose projects need to communicate:\nfrontend\/compose.yml - React app backend\/compose.yml - API service By default, each creates its own isolated network.\nThe solution Create a shared external network:\n# Create the network once docker network create shared-network Then reference it in both projects:\nbackend\/compose.yml:\nservices: api: image: myapi:latest networks: - shared # Connect to external network - default # Keep internal network too networks: shared: external: true name: shared-network frontend\/compose.yml:\nservices: web: image: myfrontend:latest environment: API_URL: http:\/\/api:8080 # Use service name! networks: - shared networks: shared: external: true name: shared-network Now web can reach api by name across projects!\nReal microservices example # shared\/compose.yml services: postgres: image: postgres:15 networks: [backbone] redis: image: redis:7-alpine networks: [backbone] networks: backbone: external: true name: company-backbone --- # api\/compose.yml services: api: image: company\/api:latest networks: - backbone # Access shared services - internal # Private network environment: DATABASE_URL: postgres:\/\/postgres:5432\/api REDIS_URL: redis:\/\/redis:6379 networks: backbone: external: true name: company-backbone internal: {} # Project-specific Service discovery Services on the same external network can reach each other by name:\n# From frontend container docker compose -f frontend\/compose.yml exec web sh $ curl http:\/\/api:8080\/health # Works! Hybrid networking Keep sensitive services isolated:\nservices: public-api: networks: - shared # External access - internal # Internal only database: networks: - internal # Not on shared network! networks: shared: external: true name: shared-network internal: # Project-specific, isolated Troubleshooting # &#34;Network not found&#34; error? 
Create it first: docker network create shared-network # Can&#39;t connect? Verify both services are on same network: docker network inspect shared-network Pro tip Create the external network first, or your stack won&rsquo;t start:\n# Always create before using docker network create shared-network docker compose up -d Further reading Docker networking overview Compose networking ","permalink":"https:\/\/lours.me\/posts\/compose-tip-013-external-networks\/","summary":"<p>Need your frontend project to talk to a backend in another Compose project? External networks let you connect containers across different stacks.<\/p>\n<h2 id=\"the-problem\">The problem<\/h2>\n<p>Two separate Compose projects need to communicate:<\/p>\n<ul>\n<li><code>frontend\/compose.yml<\/code> - React app<\/li>\n<li><code>backend\/compose.yml<\/code> - API service<\/li>\n<\/ul>\n<p>By default, each creates its own isolated network.<\/p>\n<h2 id=\"the-solution\">The solution<\/h2>\n<p>Create a shared external network:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Create the network once<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker network create shared-network\n<\/span><\/span><\/code><\/pre><\/div><p>Then reference it in both projects:<\/p>\n<p><strong>backend\/compose.yml:<\/strong><\/p>","title":"Docker Compose Tip #13: Using external networks to connect multiple projects"},{"content":"One Dockerfile, multiple environments. Use target to build only the stage you need - faster builds, smaller images, cleaner separation.\nThe basics Multi-stage Dockerfile:\n# Development stage FROM node:20-alpine AS development WORKDIR \/app COPY package*.json .\/ RUN npm install COPY . . 
CMD [&#34;npm&#34;, &#34;run&#34;, &#34;dev&#34;] # Production stage FROM node:20-alpine AS production WORKDIR \/app COPY package*.json .\/ RUN npm ci --only=production COPY . . RUN npm run build CMD [&#34;npm&#34;, &#34;start&#34;] Target specific stages in compose.yml:\nservices: app-dev: build: context: . target: development # Stops at development stage volumes: - .:\/app app-prod: build: context: . target: production # Builds to production stage Real example Go application with test and production stages:\n# Dockerfile FROM golang:1.21-alpine AS base WORKDIR \/app COPY go.* .\/ RUN go mod download FROM base AS test COPY . . RUN go test -v .\/... FROM base AS builder COPY . . RUN CGO_ENABLED=0 go build -o server FROM alpine:3.19 AS production RUN apk --no-cache add ca-certificates COPY --from=builder \/app\/server \/server CMD [&#34;\/server&#34;] # compose.yml services: app: build: context: . target: ${BUILD_TARGET:-production} ports: - &#34;8080:8080&#34; test: build: context: . target: test profiles: [&#34;test&#34;] Running different targets # Development BUILD_TARGET=development docker compose up # Run tests docker compose --profile test build # Production BUILD_TARGET=production docker compose build Size comparison # Development image (includes all tools) docker compose build --no-cache app-dev # Image size: 450MB # Production image (optimized) docker compose build --no-cache app-prod # Image size: 12MB That&rsquo;s 37x smaller for production!\nCI pattern Run tests without building production:\nservices: ci-test: build: context: . 
target: test # CI fails fast if tests don&#39;t pass docker compose build ci-test || exit 1 Pro tip Use environment variables for flexible target selection:\n# Development by default export BUILD_TARGET=development # Switch to production for deployment BUILD_TARGET=production docker compose up -d One compose file, multiple environments!\nFurther reading Docker multi-stage builds Compose build specification ","permalink":"https:\/\/lours.me\/posts\/compose-tip-012-target-build-stages\/","summary":"<p>One Dockerfile, multiple environments. Use <code>target<\/code> to build only the stage you need - faster builds, smaller images, cleaner separation.<\/p>\n<h2 id=\"the-basics\">The basics<\/h2>\n<p>Multi-stage Dockerfile:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-dockerfile\" data-lang=\"dockerfile\"><span class=\"line\"><span class=\"cl\"><span class=\"c\"># Development stage<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">FROM<\/span><span class=\"w\"> <\/span><span class=\"s\">node:20-alpine<\/span><span class=\"w\"> <\/span><span class=\"k\">AS<\/span><span class=\"w\"> <\/span><span class=\"s\">development<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">WORKDIR<\/span><span class=\"w\"> <\/span><span class=\"s\">\/app<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">COPY<\/span> package*.json .\/<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">RUN<\/span> npm install<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">COPY<\/span> . 
.<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">CMD<\/span> <span class=\"p\">[<\/span><span class=\"s2\">&#34;npm&#34;<\/span><span class=\"p\">,<\/span> <span class=\"s2\">&#34;run&#34;<\/span><span class=\"p\">,<\/span> <span class=\"s2\">&#34;dev&#34;<\/span><span class=\"p\">]<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c\"># Production stage<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">FROM<\/span><span class=\"w\"> <\/span><span class=\"s\">node:20-alpine<\/span><span class=\"w\"> <\/span><span class=\"k\">AS<\/span><span class=\"w\"> <\/span><span class=\"s\">production<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">WORKDIR<\/span><span class=\"w\"> <\/span><span class=\"s\">\/app<\/span><span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">COPY<\/span> package*.json .\/<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">RUN<\/span> npm ci --only<span class=\"o\">=<\/span>production<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">COPY<\/span> . 
.<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">RUN<\/span> npm run build<span class=\"err\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"k\">CMD<\/span> <span class=\"p\">[<\/span><span class=\"s2\">&#34;npm&#34;<\/span><span class=\"p\">,<\/span> <span class=\"s2\">&#34;start&#34;<\/span><span class=\"p\">]<\/span><span class=\"err\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Target specific stages in compose.yml:<\/p>","title":"Docker Compose Tip #12: Using target to specify build stages"},{"content":"Stop manually restarting containers when code changes. Docker Compose Watch automatically syncs files and reloads services - zero interruption development.\nThe basics Enable watch mode with:\ndocker compose up --watch # If you don&#39;t want mixed logs, run watch in its own terminal; the stack must already be started in a separate process docker compose watch Then configure watching in your compose.yml:\nservices: web: image: node:20 command: npm start develop: watch: - path: .\/src target: \/app\/src action: sync - path: package.json action: rebuild Files in .\/src sync instantly. 
Changes to package.json trigger a rebuild.\nWatch actions explained Three actions control what happens when files change:\nsync - Updates files in the container instantly:\nwatch: - path: .\/src target: \/app\/src action: sync rebuild - Rebuilds image and restarts container:\nwatch: - path: .\/Dockerfile action: rebuild sync+restart - Syncs files then restarts container:\nwatch: - path: .\/config target: \/app\/config action: sync+restart Real development example Full stack app with hot reloading:\nservices: frontend: build: .\/frontend ports: - &#34;3000:3000&#34; develop: watch: # React\/Vue\/Angular source files - instant sync - path: .\/frontend\/src target: \/app\/src action: sync # Config changes need restart - path: .\/frontend\/.env target: \/app\/.env action: sync+restart # Dependencies need rebuild - path: .\/frontend\/package*.json action: rebuild backend: build: .\/backend ports: - &#34;8080:8080&#34; develop: watch: # Python\/Node source - sync for hot reload - path: .\/backend\/app target: \/app action: sync ignore: - __pycache__\/ - &#34;*.pyc&#34; # Requirements change = rebuild - path: .\/backend\/requirements.txt action: rebuild Ignore patterns Exclude files that shouldn&rsquo;t trigger updates:\nwatch: - path: .\/src target: \/app\/src action: sync ignore: - node_modules\/ - &#34;*.test.js&#34; - &#34;.git\/&#34; - &#34;**\/*.log&#34; Check watch status See what&rsquo;s being watched:\ndocker compose watch --no-up Output shows all watched paths:\nweb Watching: - .\/src \u2192 \/app\/src (sync) - package.json (rebuild) backend Watching: - .\/backend\/app \u2192 \/app (sync) Common patterns Frontend development (React\/Vue\/Angular):\ndevelop: watch: - path: .\/src target: \/app\/src action: sync # Webpack\/Vite handles reload Backend with nodemon\/Flask debug:\ndevelop: watch: - path: .\/app target: \/app action: sync # App framework handles reload Static sites (Hugo\/Jekyll):\ndevelop: watch: - path: .\/content action: sync+restart # Regenerate 
site Further reading Docker Compose Watch documentation Compose specification - develop section Related: Restarting single services ","permalink":"https:\/\/lours.me\/posts\/compose-tip-011-docker-compose-watch\/","summary":"<p>Stop manually restarting containers when code changes. Docker Compose Watch automatically syncs files and reloads services - zero interruption development.<\/p>\n<h2 id=\"the-basics\">The basics<\/h2>\n<p>Enable watch mode with:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\">docker compose up --watch\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># If you don&#39;t want mixed logs, you can run it in a dedicated process, you need to have your stack started on its own process<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose watch\n<\/span><\/span><\/code><\/pre><\/div><p>Then configure watching in your <code>compose.yml<\/code>:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">node:20<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">command<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">npm start<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    
<\/span><span class=\"nt\">develop<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">watch<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span>- <span class=\"nt\">path<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.\/src<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">          <\/span><span class=\"nt\">target<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">\/app\/src<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">          <\/span><span class=\"nt\">action<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">sync<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span>- <span class=\"nt\">path<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">package.json<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">          <\/span><span class=\"nt\">action<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">rebuild<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Files in <code>.\/src<\/code> sync instantly. Changes to <code>package.json<\/code> trigger a rebuild.<\/p>","title":"Docker Compose Tip #11: Mastering docker compose up --watch for hot reload"},{"content":"Zombie processes in your containers? Slow shutdowns? Your app shouldn&rsquo;t run as PID 1. 
Here&rsquo;s the simple fix.\nThe problem When your app runs as PID 1, it has special responsibilities:\nHandle system signals (SIGTERM, SIGINT) Reap zombie processes Forward signals to child processes Most apps (especially Node.js, Python) don&rsquo;t handle these well.\nThe solution Add init: true to your service:\nservices: app: image: node:20 init: true # Adds Tini as PID 1 command: node server.js Docker automatically injects a tiny init system (Tini) that handles PID 1 responsibilities properly.\nWhat it fixes Before (without init):\nservices: app: image: node:20 command: node server.js Problems:\ndocker compose stop takes 10 seconds (waiting for SIGKILL) Zombie processes accumulate Ctrl+C doesn&rsquo;t stop the container cleanly After (with init):\nservices: app: image: node:20 init: true command: node server.js Fixed:\nInstant graceful shutdown No zombie processes Signals handled correctly Real example Our Node.js API that spawns child processes:\nservices: api: image: node:20-alpine init: true command: node index.js stop_grace_period: 5s # Now actually works! worker: image: python:3.11-slim init: true command: python worker.py # Even shells benefit debug: image: alpine init: true command: sh -c &#34;while true; do echo working; sleep 10; done&#34; Verify it&rsquo;s working Check process tree:\ndocker compose exec app ps aux Without init:\nPID USER COMMAND 1 node node server.js # App is PID 1 - problematic! 
15 node \/usr\/bin\/worker # Child process With init:\nPID USER COMMAND 1 root \/sbin\/docker-init # Tini is PID 1 7 node node server.js # App is child of init 22 node \/usr\/bin\/worker # Grandchild process Test graceful shutdown # Time how long stop takes time docker compose stop # Without init: ~10 seconds # With init: ~1 second When you need it most Essential for:\nNode.js apps - Doesn&rsquo;t handle SIGTERM by default Python scripts - Poor signal handling Shell scripts - No zombie reaping Apps spawning subprocesses - Prevents zombie accumulation Kubernetes - Critical for pod termination Production impact In our production Kubernetes clusters:\n95% faster pod terminations Zero zombie processes after 30 days uptime Clean connection draining during deployments Pro tip Some images include their own init:\nservices: # These handle PID 1 properly already nginx: image: nginx:alpine # Has its own signal handling postgres: image: postgres:15 # Database handles signals well # These need init node-app: image: node:20 init: true # Add init for Node.js python-app: image: python:3.11 init: true # Add init for Python One line of config prevents entire classes of production issues. Always use init: true for interpreted languages.\nFurther reading Tini - A tiny but valid init Docker run --init documentation ","permalink":"https:\/\/lours.me\/posts\/compose-tip-010-init-pid1\/","summary":"<p>Zombie processes in your containers? Slow shutdowns? Your app shouldn&rsquo;t run as PID 1. 
Here&rsquo;s the simple fix.<\/p>\n<h2 id=\"the-problem\">The problem<\/h2>\n<p>When your app runs as PID 1, it has special responsibilities:<\/p>\n<ul>\n<li>Handle system signals (SIGTERM, SIGINT)<\/li>\n<li>Reap zombie processes<\/li>\n<li>Forward signals to child processes<\/li>\n<\/ul>\n<p>Most apps (especially Node.js, Python) don&rsquo;t handle these well.<\/p>\n<h2 id=\"the-solution\">The solution<\/h2>\n<p>Add <code>init: true<\/code> to your service:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">node:20<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">init<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"kc\">true<\/span><span class=\"w\">  <\/span><span class=\"c\"># Adds Tini as PID 1<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">command<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">node server.js<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Docker automatically injects a tiny init system (Tini) that handles PID 1 responsibilities properly.<\/p>","title":"Docker Compose Tip #10: Using init for proper PID 1 handling"},{"content":"Package your entire Docker Compose application as an OCI artifact and share it through any container 
registry. No more complex installation instructions.\nThe basics Publish your Compose configuration as an OCI artifact:\n# Publish your compose.yml to a registry docker compose publish myusername\/myapp:v1.0 # Users run it directly with oci:\/\/ prefix docker compose -f oci:\/\/docker.io\/myusername\/myapp:v1.0 up The compose.yml (and any included files) are stored as an OCI artifact alongside your container images.\nRequirements Docker Compose 2.34.0 or later OCI-compliant registry (Docker Hub, GitHub Container Registry, etc.) Publishing your application # compose.yml name: voting-app services: vote: image: mycompany\/vote:latest ports: - &#34;5000:80&#34; depends_on: - redis redis: image: redis:7-alpine worker: image: mycompany\/worker:latest depends_on: - redis - db db: image: postgres:15 environment: POSTGRES_PASSWORD: postgres volumes: - db-data:\/var\/lib\/postgresql\/data volumes: db-data: Publish it:\n# Simple publish docker compose publish mycompany\/voting-app:v1.0 # With multiple compose files docker compose -f compose.yml -f compose.prod.yml \\ publish mycompany\/voting-app:prod Advanced publishing options Pin images to specific digests for reproducibility:\n# Lock all image versions docker compose publish \\ --resolve-image-digests \\ mycompany\/voting-app:v1.0 Include environment for fully self-contained app:\n# Bundle environment variables docker compose publish \\ --with-env \\ mycompany\/voting-app:v1.0 Consuming published applications Users run your app with one command:\n# Pull and run from registry docker compose -f oci:\/\/docker.io\/mycompany\/voting-app:v1.0 up -d # Or from GitHub Container Registry docker compose -f oci:\/\/ghcr.io\/mycompany\/voting-app:latest up # Check what&#39;s running docker compose -f oci:\/\/docker.io\/mycompany\/voting-app:v1.0 ps Real example We distribute sample apps at Docker this way:\n# Publish our example voting app docker compose publish dockersamples\/example-voting-app:latest # Users run it instantly docker 
compose -f oci:\/\/docker.io\/dockersamples\/example-voting-app:latest up No git clone, no README setup, just run.\nLimitations Cannot publish applications with:\nBind mounts in services (volumes are OK) Services with only build section (need image specified) Local files in include directives (remote includes work) Security considerations Compose prompts for confirmation when running OCI artifacts with:\nVariable interpolation Environment variables Remote includes Use -y flag to skip prompts in automation:\ndocker compose -f oci:\/\/myregistry\/app:v1 up -y Version management # Publish different versions docker compose publish mycompany\/app:v2.0 docker compose publish mycompany\/app:v2.1 docker compose publish mycompany\/app:latest # Users choose version docker compose -f oci:\/\/docker.io\/mycompany\/app:v2.0 up Pro tip Perfect for distributing internal tools:\n# Publish development environment docker compose publish internal-registry.company.com\/devenv:latest # Developers run with one command docker compose -f oci:\/\/internal-registry.company.com\/devenv:latest up # Update? Just publish new version docker compose publish internal-registry.company.com\/devenv:v2 OCI artifacts transform how we share Compose applications - from complex READMEs to single commands.\nFurther reading Docker Compose OCI artifact documentation OCI Image Specification ","permalink":"https:\/\/lours.me\/posts\/compose-tip-009-oci-artifacts\/","summary":"<p>Package your entire Docker Compose application as an OCI artifact and share it through any container registry. 
No more complex installation instructions.<\/p>\n<h2 id=\"the-basics\">The basics<\/h2>\n<p>Publish your Compose configuration as an OCI artifact:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Publish your compose.yml to a registry<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose publish myusername\/myapp:v1.0\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Users run it directly with oci:\/\/ prefix<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose -f oci:\/\/docker.io\/myusername\/myapp:v1.0 up\n<\/span><\/span><\/code><\/pre><\/div><p>The compose.yml (and any included files) are stored as an OCI artifact alongside your container images.<\/p>","title":"Docker Compose Tip #9: Publishing Compose applications as OCI artifacts"},{"content":"Docker Hardened Images (DHI) maximize security by removing shells and package managers. But how do you add healthchecks? Use a secure sidecar with shared network namespace.\nThe problem Your hardened Node.js application:\nservices: app: image: dhi.io\/node:25-debian13-sfw-ent-dev healthcheck: test: [&#34;CMD&#34;, &#34;curl&#34;, &#34;-f&#34;, &#34;http:\/\/localhost:3000\/health&#34;] # FAILS: No curl in hardened image! The solution: Network namespace sidecar Use a hardened curl image that shares the app&rsquo;s network:\nservices: app: image: dhi.io\/node:25-debian13-sfw-ent-dev ports: - &#34;3000:3000&#34; environment: NODE_ENV: production app-health: image: dhi.io\/curl:8-debian13-dev entrypoint: [&#34;sleep&#34;, &#34;infinity&#34;] network_mode: &#34;service:app&#34; # Shares app&#39;s network namespace! 
healthcheck: test: [&#34;CMD&#34;, &#34;curl&#34;, &#34;-f&#34;, &#34;http:\/\/localhost:3000\/health&#34;] interval: 30s timeout: 3s retries: 3 start_period: 10s The network_mode: &quot;service:app&quot; allows the sidecar to access localhost:3000 directly - they share the same network stack!\nHow it works Main app runs with DHI Node.js image (no shell, minimal attack surface) Sidecar runs with DHI curl image sharing the app&rsquo;s network namespace Sidecar can reach app on localhost (same network stack) Other services depend on the app itself or app-health Real production example A real example with a secure Node.js image:\nservices: # API using hardened Node.js api: image: dhi.io\/node:25-debian13-sfw-ent-dev ports: - &#34;8080:8080&#34; environment: NODE_ENV: production PORT: 8080 # Hardened curl sidecar for healthchecks api-health: image: dhi.io\/curl:8-debian13-dev network_mode: &#34;service:api&#34; # Critical: shares api&#39;s network healthcheck: test: [&#34;CMD&#34;, &#34;curl&#34;, &#34;-f&#34;, &#34;http:\/\/localhost:8080\/healthz&#34;] interval: 30s timeout: 5s retries: 3 start_period: 45s entrypoint: [&#34;sleep&#34;, &#34;infinity&#34;] deploy: resources: limits: memory: 32M # Minimal resource usage # Worker depends on API being healthy worker: image: dhi.io\/node:25-debian13-sfw-ent-dev depends_on: api-health: condition: service_healthy environment: API_URL: http:\/\/api:8080 # Uses service name for cross-container Verify health status docker compose ps Output shows both containers:\nNAME STATUS PORTS api Up 5 minutes 0.0.0.0:8080-&gt;8080\/tcp api-health Up 5 minutes (healthy) worker Up 4 minutes Check the network namespace:\n# Both containers share the same network docker compose exec api-health curl http:\/\/localhost:8080\/healthz # Works! 
They share the network stack Multiple services pattern Scale this pattern for multiple services:\nservices: frontend: image: dhi.io\/node:25-debian13-sfw-ent-dev ports: - &#34;3000:3000&#34; frontend-health: image: dhi.io\/curl:8-debian13-dev network_mode: &#34;service:frontend&#34; healthcheck: test: [&#34;CMD&#34;, &#34;curl&#34;, &#34;-f&#34;, &#34;http:\/\/localhost:3000\/&#34;] backend: image: dhi.io\/node:25-debian13-sfw-ent-dev ports: - &#34;8080:8080&#34; backend-health: image: dhi.io\/curl:8-debian13-dev network_mode: &#34;service:backend&#34; healthcheck: test: [&#34;CMD&#34;, &#34;curl&#34;, &#34;-f&#34;, &#34;http:\/\/localhost:8080\/api\/health&#34;] # Services that need everything healthy e2e-tests: image: test-runner depends_on: frontend-health: condition: service_healthy backend-health: condition: service_healthy Why network_mode matters With network_mode: &quot;service:app&quot;:\nSidecar sees the same network as app Can use localhost to reach app&rsquo;s ports No inter-container networking needed Perfect isolation from other services Without it:\nWould need to use service names Requires app to bind to 0.0.0.0 (not just localhost) Less secure network isolation Security benefits Zero shell access in production containers No package managers to exploit Minimal attack surface - only required binaries Network isolation - sidecar only sees app&rsquo;s network DHI throughout - even healthcheck containers are hardened Maximum security with full observability - DHI sidecars with shared networking.\nFurther reading Docker Hardened Images Docker Compose networking Related tip: Service dependencies with health checks ","permalink":"https:\/\/lours.me\/posts\/compose-tip-008-dhi-healthcheck\/","summary":"<p>Docker Hardened Images (DHI) maximize security by removing shells and package managers. But how do you add healthchecks? 
Use a secure sidecar with shared network namespace.<\/p>\n<h2 id=\"the-problem\">The problem<\/h2>\n<p>Your hardened Node.js application:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">dhi.io\/node:25-debian13-sfw-ent-dev<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">healthcheck<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">test<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"p\">[<\/span><span class=\"s2\">&#34;CMD&#34;<\/span><span class=\"p\">,<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;curl&#34;<\/span><span class=\"p\">,<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;-f&#34;<\/span><span class=\"p\">,<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;http:\/\/localhost:3000\/health&#34;<\/span><span class=\"p\">]<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"c\"># FAILS: No curl in hardened image!<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"the-solution-network-namespace-sidecar\">The solution: Network namespace sidecar<\/h2>\n<p>Use a hardened curl image that shares the app&rsquo;s network:<\/p>\n<div 
class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">dhi.io\/node:25-debian13-sfw-ent-dev<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">ports<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;3000:3000&#34;<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">NODE_ENV<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">production<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app-health<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">dhi.io\/curl:8-debian13-dev<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span 
class=\"w\">    <\/span><span class=\"nt\">entrypoint<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"p\">[<\/span><span class=\"s2\">&#34;sleep&#34;<\/span><span class=\"p\">,<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;infinity&#34;<\/span><span class=\"p\">]<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">network_mode<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;service:app&#34;<\/span><span class=\"w\">  <\/span><span class=\"c\"># Shares app&#39;s network namespace!<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">healthcheck<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">test<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"p\">[<\/span><span class=\"s2\">&#34;CMD&#34;<\/span><span class=\"p\">,<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;curl&#34;<\/span><span class=\"p\">,<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;-f&#34;<\/span><span class=\"p\">,<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;http:\/\/localhost:3000\/health&#34;<\/span><span class=\"p\">]<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">interval<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">30s<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">timeout<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">3s<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span 
class=\"w\">      <\/span><span class=\"nt\">retries<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"m\">3<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">start_period<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">10s<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>The <code>network_mode: &quot;service:app&quot;<\/code> allows the sidecar to access <code>localhost:3000<\/code> directly - they share the same network stack!<\/p>","title":"Docker Compose Tip #8: Healthchecks with Docker Hardened Images"},{"content":"Stop doing docker compose down &amp;&amp; docker compose up for every code change. Docker Compose lets you restart individual services while keeping the rest running.\nThe solution Restart just what changed:\n# Restart only the web service docker compose up -d web # Your database, cache, and queue keep running! This simple command saves minutes per restart. 
Your database keeps its data, Redis maintains its cache, message queues preserve their state.\nCommon patterns Basic restart after code changes:\n# Make your changes, then: docker compose up -d api Force recreate when config changed:\n# When you&#39;ve changed environment variables or volumes docker compose up -d --force-recreate web Rebuild and restart for local builds:\n# After changing code in a service you build docker compose up -d --build api Pull latest image and restart:\ndocker compose pull web docker compose up -d web Real production example Here&rsquo;s our typical development workflow:\nservices: api: build: .\/api volumes: - .\/api:\/app # Code mounted for development depends_on: - postgres - redis postgres: image: postgres:15 volumes: - pgdata:\/var\/lib\/postgresql\/data # Persistent data redis: image: redis:7-alpine Development session:\n# Start everything docker compose up -d # Make API changes, restart just the API docker compose up -d api # Database and Redis stay running with all your test data! Check what&rsquo;s running See the impact:\n# Before restart docker compose ps Output:\nNAME STATUS PORTS myapp-api-1 Up 2 hours 0.0.0.0:3000-&gt;3000\/tcp myapp-postgres-1 Up 2 hours 5432\/tcp myapp-redis-1 Up 2 hours 6379\/tcp After docker compose up -d api:\nNAME STATUS PORTS myapp-api-1 Up 5 seconds 0.0.0.0:3000-&gt;3000\/tcp myapp-postgres-1 Up 2 hours 5432\/tcp # Still running! myapp-redis-1 Up 2 hours 6379\/tcp # Still running! 
Multiple services at once Restart several services together:\ndocker compose up -d web api worker Common pitfall Dependencies won&rsquo;t start automatically with --no-deps:\n# This won&#39;t start postgres if it&#39;s not running docker compose up -d --no-deps web # This ensures dependencies are running docker compose up -d web Pro tip During development, an even faster solution exists - use docker compose up --watch for automatic hot reloading:\n# Instead of manually restarting services docker compose up --watch # Files change \u2192 services automatically reload! This enables hot reloading when your code changes. We&rsquo;ll cover this powerful feature in detail in an upcoming post.\nPerformance impact In our Docker Desktop development:\nFull restart (down &amp;&amp; up): ~45 seconds, loses all state Single service restart: ~3 seconds, preserves everything That&rsquo;s 15x faster, plus no data loss or cache warming.\nStop restarting everything. Restart what changed. Your development speed will thank you.\nFurther reading Docker Compose up reference Docker Compose Watch Related tip: Service dependencies with health checks ","permalink":"https:\/\/lours.me\/posts\/compose-tip-007-restart-single\/","summary":"<p>Stop doing <code>docker compose down &amp;&amp; docker compose up<\/code> for every code change. 
Docker Compose lets you restart individual services while keeping the rest running.<\/p>\n<h2 id=\"the-solution\">The solution<\/h2>\n<p>Restart just what changed:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Restart only the web service<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose up -d web\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Your database, cache, and queue keep running!<\/span>\n<\/span><\/span><\/code><\/pre><\/div><p>This simple command saves minutes per restart. Your database keeps its data, Redis maintains its cache, message queues preserve their state.<\/p>","title":"Docker Compose Tip #7: Restarting single services without stopping the stack"},{"content":"Hardcoding IP addresses in your containers? Docker Compose provides automatic DNS-based service discovery. Each service can reach another using just the service name.\nHow it works Docker Compose creates a default network and registers each container with an internal DNS server. The DNS name matches the service name in your compose.yml.\nservices: web: image: nginx environment: # Just use the service name! API_URL: http:\/\/api:3000 DB_HOST: postgres api: image: myapi environment: DATABASE_URL: postgres:\/\/user:pass@postgres:5432\/mydb postgres: image: postgres:15 No configuration needed. 
The web service connects to api using http:\/\/api:3000, and api connects to postgres using the hostname postgres.\nVerify DNS resolution Check what&rsquo;s actually happening:\n# See what IP &#39;postgres&#39; resolves to docker compose exec web nslookup postgres Output:\nServer: 127.0.0.11 Address: 127.0.0.11#53 Name: postgres Address: 172.18.0.3 Test connectivity:\ndocker compose exec web ping -c 2 api Multiple instances When scaling, DNS returns all container IPs:\nservices: worker: image: myworker deploy: replicas: 3 docker compose up -d --scale worker=3 docker compose exec web nslookup worker # Returns 3 IP addresses for round-robin Common gotchas Using container names instead of service names:\nservices: web: container_name: my-web-container # Don&#39;t use this name for connections! # ... Always use the service name (web), not the container name.\nServices on different networks: Services must be on the same network to resolve each other. By default, Compose creates one network for all services.\nReal production example Here&rsquo;s how we use this at Docker:\nservices: api: image: docker\/api:latest environment: # Service names for all connections CACHE_URL: redis:\/\/cache:6379 SEARCH_URL: http:\/\/search:9200 METRICS_URL: http:\/\/metrics:9090 cache: image: redis:7-alpine search: image: elasticsearch:8.11 metrics: image: prom\/prometheus Each service finds the others by name. When containers restart and get new IPs, the DNS automatically updates.\nPro tip For debugging network issues between services:\n# Run a debug container on the same network docker compose run --rm alpine sh # Inside the container: apk add curl bind-tools nslookup api curl http:\/\/api:3000\/health Service discovery through DNS eliminates configuration complexity. 
No more managing IP addresses or host files - Compose handles it all automatically.\nFurther reading Docker Compose Networking Compose Specification - Networks ","permalink":"https:\/\/lours.me\/posts\/compose-tip-006-service-discovery\/","summary":"<p>Hardcoding IP addresses in your containers? Docker Compose provides automatic DNS-based service discovery. Each service can reach another using just the service name.<\/p>\n<h2 id=\"how-it-works\">How it works<\/h2>\n<p>Docker Compose creates a default network and registers each container with an internal DNS server. The DNS name matches the service name in your <code>compose.yml<\/code>.<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">nginx<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"c\"># Just use the service name!<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">API_URL<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">http:\/\/api:3000<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span 
class=\"nt\">DB_HOST<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgres<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">api<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapi<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">DATABASE_URL<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgres:\/\/user:pass@postgres:5432\/mydb<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">postgres<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgres:15<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>No configuration needed. The <code>web<\/code> service connects to <code>api<\/code> using <code>http:\/\/api:3000<\/code>, and <code>api<\/code> connects to <code>postgres<\/code> using the hostname <code>postgres<\/code>.<\/p>","title":"Docker Compose Tip #6: Service discovery and internal DNS"},{"content":"AI tools work better when they understand the setup. 
Here&rsquo;s how to document Compose files effectively.\nAdd context with comments Comments help AI understand what each service does:\nservices: # Primary web application serving React frontend # Handles user authentication and API gateway web: image: myapp:latest ports: - &#34;3000:3000&#34; # Public facing port - update in .env for production environment: # Connection string to PostgreSQL - format: postgresql:\/\/user:pass@host:5432\/db DATABASE_URL: ${DATABASE_URL} # JWT secret for auth - must be at least 256 bits JWT_SECRET: ${JWT_SECRET} depends_on: db: condition: service_healthy # Development only - remove for production volumes: - .\/src:\/app\/src # Hot reload for development # PostgreSQL 15 database with PostGIS extension # Stores user data and geographic information db: image: postgis\/postgis:15-3.3 environment: POSTGRES_DB: myapp POSTGRES_PASSWORD: ${DB_PASSWORD} # Never commit actual password volumes: # Initial schema and seed data - .\/init.sql:\/docker-entrypoint-initdb.d\/01-init.sql # Persistent data storage - postgres_data:\/var\/lib\/postgresql\/data volumes: postgres_data: # Named volume for database persistence across container restarts File headers For bigger projects, add a header:\n# Application: E-commerce Platform # Environment: Development # Required: Docker 24.0+, Compose v2.20+ # # Services: # - web: Next.js frontend (port 3000) # - api: Node.js backend (port 4000) # - db: PostgreSQL database # - redis: Session cache # # Quick Start: # 1. cp .env.example .env # 2. docker compose up -d # 3. Visit http:\/\/localhost:3000 name: ecommerce-dev services: # ... 
services Document environment variables services: api: environment: # Required: External service credentials STRIPE_API_KEY: ${STRIPE_API_KEY:?Missing STRIPE_API_KEY} # Optional: Defaults provided LOG_LEVEL: ${LOG_LEVEL:-info} # Options: debug, info, warn, error # Feature flags ENABLE_BETA_FEATURES: ${ENABLE_BETA:-false} # Set to true for beta testing Useful patterns Describe relationships:\nworker: # Processes background jobs from Redis queue # Depends on: api (for job creation), redis (for queue) Explain your choices:\nnginx: image: nginx:alpine # Alpine: 5MB vs 142MB regular Mark the tricky parts:\nvolumes: - .\/data:\/data # WARNING: Check ownership (1000:1000) What AI can do with this When your files are documented, AI tools can:\nWrite health checks that make sense Spot security issues Generate CI\/CD configs Create test setups Suggest performance improvements Example prompt &ldquo;Generate a production version of this Compose file with security improvements&rdquo;\nThe AI uses your comments to understand what needs protecting and what to remove.\nExtra tip Create an AI-CONTEXT.md file:\n# Project Context for AI ## Architecture - Microservices with REST APIs - PostgreSQL for persistent data - Redis for caching - nginx for reverse proxy ## Conventions - Port 3xxx for frontend services - Port 4xxx for backend services - All services run as non-root Then reference it:\n# See AI-CONTEXT.md for project details services: # ... ","permalink":"https:\/\/lours.me\/posts\/compose-tip-005-ai-documentation\/","summary":"<p>AI tools work better when they understand the setup. 
Here&rsquo;s how to document Compose files effectively.<\/p>\n<h2 id=\"add-context-with-comments\">Add context with comments<\/h2>\n<p>Comments help AI understand what each service does:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"c\"># Primary web application serving React frontend<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"c\"># Handles user authentication and API gateway<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:latest<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">ports<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"s2\">&#34;3000:3000&#34;<\/span><span class=\"w\">  <\/span><span class=\"c\"># Public facing port - update in .env for production<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"c\"># Connection string to PostgreSQL - format: 
postgresql:\/\/user:pass@host:5432\/db<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">DATABASE_URL<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">${DATABASE_URL}<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"c\"># JWT secret for auth - must be at least 256 bits<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">JWT_SECRET<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">${JWT_SECRET}<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">depends_on<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">db<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">condition<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">service_healthy<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"c\"># Development only - remove for production<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">.\/src:\/app\/src <\/span><span class=\"w\"> <\/span><span class=\"c\"># Hot reload for development<\/span><span class=\"w\">\n<\/span><\/span><\/span><span 
class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"c\"># PostgreSQL 15 database with PostGIS extension<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"c\"># Stores user data and geographic information<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">db<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgis\/postgis:15-3.3<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">POSTGRES_DB<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">POSTGRES_PASSWORD<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">${DB_PASSWORD} <\/span><span class=\"w\"> <\/span><span class=\"c\"># Never commit actual password<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"c\"># Initial schema and seed data<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span 
class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">.\/init.sql:\/docker-entrypoint-initdb.d\/01-init.sql<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"c\"># Persistent data storage<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">postgres_data:\/var\/lib\/postgresql\/data<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">postgres_data<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"c\"># Named volume for database persistence across container restarts<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"file-headers\">File headers<\/h2>\n<p>For bigger projects, add a header:<\/p>","title":"Docker Compose Tip #5: Writing Compose files for AI tools"},{"content":"Need to access private Git repositories during build? Here&rsquo;s how to do it securely with SSH.\nThe setup Enable SSH forwarding in your compose.yml:\nservices: app: build: context: . ssh: - default # Uses your default SSH agent Or use specific keys for different services:\nservices: app: build: context: . 
ssh: - github=\/home\/user\/.ssh\/github_key # Custom key for GitHub - gitlab=\/home\/user\/.ssh\/gitlab_key # Different key for GitLab The Dockerfile Use BuildKit&rsquo;s SSH mount to clone private repos:\n# syntax=docker\/dockerfile:1 FROM node:20 # Using default SSH key RUN --mount=type=ssh \\ git clone git@github.com:mycompany\/private-lib.git \/tmp\/lib &amp;&amp; \\ cd \/tmp\/lib &amp;&amp; npm install &amp;&amp; npm run build &amp;&amp; \\ cp -r dist \/app\/vendor\/ # Using a specific key ID RUN --mount=type=ssh,id=github \\ git clone git@github.com:mycompany\/private-package.git \/tmp\/package # Different key for GitLab RUN --mount=type=ssh,id=gitlab \\ git clone git@gitlab.com:mycompany\/internal-tool.git \/tmp\/tool Building with SSH Make sure your SSH agent is running:\n# Start SSH agent if needed eval $(ssh-agent) ssh-add ~\/.ssh\/id_rsa # Build with SSH forwarding docker compose build --ssh default CI\/CD setup For GitHub Actions or similar:\nservices: app: build: context: . ssh: - default=${{ secrets.SSH_KEY }} Security notes SSH keys are never stored in the image They&rsquo;re only available during the RUN command with --mount=type=ssh No secrets leak into your final container BuildKit handles the SSH agent forwarding securely Common issues &ldquo;Could not read from remote repository&rdquo;\nMake sure the host is in known_hosts:\nRUN --mount=type=ssh \\ mkdir -p ~\/.ssh &amp;&amp; \\ ssh-keyscan github.com &gt;&gt; ~\/.ssh\/known_hosts &amp;&amp; \\ git clone git@github.com:mycompany\/repo.git &ldquo;SSH agent not available&rdquo;\nOn macOS, the SSH agent should work automatically. On Linux:\ndocker compose build --ssh default=$SSH_AUTH_SOCK Why this matters No more:\nCopying SSH keys into images (security risk!) 
Building everything publicly Complex workarounds with access tokens Just secure, straightforward access to private dependencies during build.\n","permalink":"https:\/\/lours.me\/posts\/compose-tip-004-ssh-build\/","summary":"<p>Need to access private Git repositories during build? Here&rsquo;s how to do it securely with SSH.<\/p>\n<h2 id=\"the-setup\">The setup<\/h2>\n<p>Enable SSH forwarding in your compose.yml:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">build<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">context<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">ssh<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span>- <span class=\"l\">default <\/span><span class=\"w\"> <\/span><span class=\"c\"># Uses your default SSH agent<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Or use specific keys for different services:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span 
class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">build<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">context<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">.<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">ssh<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span>- <span class=\"l\">github=\/home\/user\/.ssh\/github_key <\/span><span class=\"w\"> <\/span><span class=\"c\"># Custom key for GitHub<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span>- <span class=\"l\">gitlab=\/home\/user\/.ssh\/gitlab_key <\/span><span class=\"w\"> <\/span><span class=\"c\"># Different key for GitLab<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"the-dockerfile\">The Dockerfile<\/h2>\n<p>Use BuildKit&rsquo;s SSH mount to clone private repos:<\/p>","title":"Docker Compose Tip #4: Using SSH keys during build"},{"content":"&ldquo;Connection refused&rdquo; errors? The app starts before the database is ready. 
Here&rsquo;s the fix.\nWhat doesn&rsquo;t work This only waits for the container to start, not for it to be ready:\nservices: app: depends_on: - db # Container starts, but database isn&#39;t ready yet What actually works Add health checks:\nservices: db: image: postgres:16 environment: POSTGRES_PASSWORD: secret healthcheck: test: [&#34;CMD-SHELL&#34;, &#34;pg_isready -U postgres&#34;] interval: 10s timeout: 5s retries: 5 start_period: 10s app: image: myapp depends_on: db: condition: service_healthy # Now it actually waits for the database Common health checks PostgreSQL:\nhealthcheck: test: [&#34;CMD-SHELL&#34;, &#34;pg_isready -U ${POSTGRES_USER:-postgres}&#34;] MySQL:\nhealthcheck: test: [&#34;CMD&#34;, &#34;mysqladmin&#34;, &#34;ping&#34;, &#34;-h&#34;, &#34;localhost&#34;] Redis:\nhealthcheck: test: [&#34;CMD&#34;, &#34;redis-cli&#34;, &#34;ping&#34;] HTTP Service:\nhealthcheck: test: [&#34;CMD&#34;, &#34;curl&#34;, &#34;-f&#34;, &#34;http:\/\/localhost:8080\/health&#34;] # Or wget if curl isn&#39;t available test: [&#34;CMD-SHELL&#34;, &#34;wget --quiet --tries=1 --spider http:\/\/localhost:8080\/health || exit 1&#34;] What the options mean interval: How often to run the check timeout: How long to wait for a response retries: Failures before marking unhealthy start_period: Grace period for slow services Multiple dependencies services: app: depends_on: db: condition: service_healthy redis: condition: service_started # Mix different conditions migration: condition: service_completed_successfully # Waits for migrations to finish Development tip Add restart logic for local dev:\nservices: app: depends_on: db: condition: service_healthy restart: true # Restarts app when db restarts The app will reconnect when the database restarts.\nDebugging # Check health status docker compose ps # See health check output docker inspect --format=&#39;{{json .State.Health}}&#39; &lt;container_name&gt; | jq 
","permalink":"https:\/\/lours.me\/posts\/compose-tip-003-depends-on-healthcheck\/","summary":"<p>&ldquo;Connection refused&rdquo; errors? The app starts before the database is ready. Here&rsquo;s the fix.<\/p>\n<h2 id=\"what-doesnt-work\">What doesn&rsquo;t work<\/h2>\n<p>This only waits for the container to start, not for it to be ready:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">depends_on<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">db <\/span><span class=\"w\"> <\/span><span class=\"c\"># Container starts, but database isn&#39;t ready yet<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"what-actually-works\">What actually works<\/h2>\n<p>Add health checks:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">db<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">postgres:16<\/span><span class=\"w\">\n<\/span><\/span><\/span><span 
class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">POSTGRES_PASSWORD<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">secret<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">healthcheck<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">test<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"p\">[<\/span><span class=\"s2\">&#34;CMD-SHELL&#34;<\/span><span class=\"p\">,<\/span><span class=\"w\"> <\/span><span class=\"s2\">&#34;pg_isready -U postgres&#34;<\/span><span class=\"p\">]<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">interval<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">10s<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">timeout<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">5s<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">retries<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"m\">5<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">start_period<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">10s<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span 
class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">app<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">depends_on<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">db<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">        <\/span><span class=\"nt\">condition<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">service_healthy <\/span><span class=\"w\"> <\/span><span class=\"c\"># Now it actually waits for the database<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><h2 id=\"common-health-checks\">Common health checks<\/h2>\n<p><strong>PostgreSQL:<\/strong><\/p>","title":"Docker Compose Tip #3: Service dependencies with health checks"},{"content":"Same compose.yml, different environments. 
Here&rsquo;s the cleanest approach.\nBasic setup Create different env files for each environment:\n.env.dev\nDATABASE_URL=postgresql:\/\/localhost:5432\/dev_db API_KEY=dev_key_12345 LOG_LEVEL=debug REPLICAS=1 .env.prod\nDATABASE_URL=postgresql:\/\/prod-db.example.com:5432\/prod_db API_KEY=${SECURE_API_KEY} # From CI\/CD secrets LOG_LEVEL=error REPLICAS=3 How to use them # Development docker compose --env-file .env.dev up # Production docker compose --env-file .env.prod up # Override specific vars API_KEY=test_key docker compose --env-file .env.dev up Layering configs You can use multiple env files:\n# Base + environment-specific docker compose \\ --env-file .env.base \\ --env-file .env.prod \\ up Note: Later files override earlier ones.\nRecommended project structure This works well:\nproject\/ \u251c\u2500\u2500 compose.yml \u251c\u2500\u2500 .env # Git-ignored, local overrides \u251c\u2500\u2500 .env.example # Committed, template for team \u2514\u2500\u2500 environments\/ \u251c\u2500\u2500 .env.dev # Development defaults \u251c\u2500\u2500 .env.staging # Staging config \u2514\u2500\u2500 .env.prod # Production (maybe in CI\/CD) Debugging # See what Compose is using docker compose --env-file .env.prod config # Check specific variable docker compose run --rm web printenv DATABASE_URL Git strategy What I ignore:\n.env .env.local .env.*.local What I commit:\n.env.example .env.dev New team members just run cp .env.example .env and they&rsquo;re ready.\n","permalink":"https:\/\/lours.me\/posts\/compose-tip-002-env-files\/","summary":"<p>Same compose.yml, different environments. 
Here&rsquo;s the cleanest approach.<\/p>\n<h2 id=\"basic-setup\">Basic setup<\/h2>\n<p>Create different env files for each environment:<\/p>\n<p><strong>.env.dev<\/strong><\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"nv\">DATABASE_URL<\/span><span class=\"o\">=<\/span>postgresql:\/\/localhost:5432\/dev_db\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nv\">API_KEY<\/span><span class=\"o\">=<\/span>dev_key_12345\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nv\">LOG_LEVEL<\/span><span class=\"o\">=<\/span>debug\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nv\">REPLICAS<\/span><span class=\"o\">=<\/span><span class=\"m\">1<\/span>\n<\/span><\/span><\/code><\/pre><\/div><p><strong>.env.prod<\/strong><\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"nv\">DATABASE_URL<\/span><span class=\"o\">=<\/span>postgresql:\/\/prod-db.example.com:5432\/prod_db\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nv\">API_KEY<\/span><span class=\"o\">=<\/span><span class=\"si\">${<\/span><span class=\"nv\">SECURE_API_KEY<\/span><span class=\"si\">}<\/span>  <span class=\"c1\"># From CI\/CD secrets<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nv\">LOG_LEVEL<\/span><span class=\"o\">=<\/span>error\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nv\">REPLICAS<\/span><span class=\"o\">=<\/span><span class=\"m\">3<\/span>\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"how-to-use-them\">How to use them<\/h2>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># 
Development<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose --env-file .env.dev up\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Production<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose --env-file .env.prod up\n<\/span><\/span><span class=\"line\"><span class=\"cl\">\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Override specific vars<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"nv\">API_KEY<\/span><span class=\"o\">=<\/span>test_key docker compose --env-file .env.dev up\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"layering-configs\">Layering configs<\/h2>\n<p>You can use multiple env files:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\"><span class=\"c1\"># Base + environment-specific<\/span>\n<\/span><\/span><span class=\"line\"><span class=\"cl\">docker compose <span class=\"se\">\\\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\">  --env-file .env.base <span class=\"se\">\\\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\">  --env-file .env.prod <span class=\"se\">\\\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\">  up\n<\/span><\/span><\/code><\/pre><\/div><p>Note: Later files override earlier ones.<\/p>","title":"Docker Compose Tip #2: Using --env-file for different environments"},{"content":"When your Compose setup gets complex, docker compose config becomes your best debugging tool. 
Especially with profiles.\nThe basics docker compose config This shows you the actual configuration that Docker Compose will run:\nEnvironment variables are replaced with their values Relative paths become absolute Default values are applied Multiple compose files are merged YAML anchors are resolved What you write:\nservices: web: image: myapp:${VERSION:-latest} volumes: - .\/data:\/app\/data environment: DATABASE_URL: ${DATABASE_URL} What docker compose config shows you:\nservices: web: image: myapp:1.2.3 # VERSION was set to 1.2.3 volumes: - \/home\/user\/project\/data:\/app\/data # Absolute path environment: DATABASE_URL: postgresql:\/\/localhost:5432\/mydb # Actual value Understanding variable resolution See exactly how your variables expand:\n# Show resolved values docker compose config # Keep variables as-is docker compose config --no-interpolate # Check what services will run docker compose config --services Complex merge scenarios When using multiple files and overrides:\ndocker compose -f compose.yml -f compose.dev.yml -f compose.override.yml config This shows the final merged result. It&rsquo;s invaluable for debugging why a service isn&rsquo;t configured as expected.\nCI\/CD validation # Validate all profiles for profile in dev staging prod; do docker compose --profile $profile config --quiet || exit 1 done This catches issues where profiles break due to missing dependencies or circular references.\nThe real power: debugging profiles Profiles can be tricky. 
Services get pulled in through dependencies even without having the profile:\nservices: web: image: nginx profiles: [&#34;frontend&#34;] depends_on: - api api: image: myapi profiles: [&#34;frontend&#34;] depends_on: - db db: image: postgres # No profile - always runs cache: image: redis profiles: [&#34;backend&#34;] worker: image: worker profiles: [&#34;backend&#34;] depends_on: - db - cache What happens with --profile backend?\ndocker compose --profile backend config --services You get: db, cache, AND worker. But here&rsquo;s the trick - db runs even without the profile because it has no profile defined. Services without profiles are always started.\nEven trickier - dependencies across profiles will fail:\nservices: test-runner: image: test-runner profiles: [&#34;test&#34;] depends_on: - web - db web: image: nginx profiles: [&#34;frontend&#34;] db: image: postgres # No profile - always runs Running docker compose --profile test config errors out:\nservice &#34;test-runner&#34; depends on undefined service &#34;web&#34;: invalid compose project The web service isn&rsquo;t available because the frontend profile isn&rsquo;t active. You need BOTH profiles:\ndocker compose --profile test --profile frontend config --services # Now you get: db, web, test-runner This is why docker compose config is invaluable - it catches these dependency issues before runtime.\n","permalink":"https:\/\/lours.me\/posts\/compose-tip-001-validate-config\/","summary":"<p>When your Compose setup gets complex, <code>docker compose config<\/code> becomes your best debugging tool. 
Especially with profiles.<\/p>\n<h2 id=\"the-basics\">The basics<\/h2>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\">docker compose config\n<\/span><\/span><\/code><\/pre><\/div><p>This shows you the <strong>actual configuration<\/strong> that Docker Compose will run:<\/p>\n<ul>\n<li>Environment variables are replaced with their values<\/li>\n<li>Relative paths become absolute<\/li>\n<li>Default values are applied<\/li>\n<li>Multiple compose files are merged<\/li>\n<li>YAML anchors are resolved<\/li>\n<\/ul>\n<p>What you write:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-yaml\" data-lang=\"yaml\"><span class=\"line\"><span class=\"cl\"><span class=\"nt\">services<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">  <\/span><span class=\"nt\">web<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">image<\/span><span class=\"p\">:<\/span><span class=\"w\"> <\/span><span class=\"l\">myapp:${VERSION:-latest}<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">volumes<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span>- <span class=\"l\">.\/data:\/app\/data<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">    <\/span><span class=\"nt\">environment<\/span><span class=\"p\">:<\/span><span class=\"w\">\n<\/span><\/span><\/span><span class=\"line\"><span class=\"cl\"><span class=\"w\">      <\/span><span class=\"nt\">DATABASE_URL<\/span><span class=\"p\">:<\/span><span class=\"w\"> 
<\/span><span class=\"l\">${DATABASE_URL}<\/span><span class=\"w\">\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>What <code>docker compose config<\/code> shows you:<\/p>","title":"Docker Compose Tip #1: Debug your configuration with config"}]