A professional, production-ready MySQL backup solution for Octeth using Percona XtraBackup. Designed for zero-downtime hot backups of large databases (2GB+) with intelligent retention policies and cloud storage integration (AWS S3, Google Cloud Storage, and Cloudflare R2).
- Zero-Downtime Hot Backups: Uses Percona XtraBackup for hot backups while MySQL stays online
- Smart Retention Policy: Daily (7 days) + Weekly (4 weeks) + Monthly (6 months)
- Cloud Storage: Local filesystem + AWS S3, Google Cloud Storage, or Cloudflare R2
- Production-Ready: Comprehensive error handling, logging, and notifications
- Fast & Efficient: 70-80% less CPU usage compared to mysqldump
- Parallel Compression: Supports pigz for faster compression
- Easy Restore: Simple restore process with verification
- Automated Cleanup: Automatic retention policy enforcement
Traditional mysqldump becomes impractical for large databases:
- mysqldump on 2GB+ database: Hours of runtime, high CPU load, table locks
- XtraBackup on 2GB+ database: 5-15 minutes, minimal CPU, no downtime
XtraBackup copies InnoDB data files directly and uses transaction logs to maintain consistency, making it ideal for production systems.
octeth-backup-tools/
├── bin/
│ ├── octeth-backup.sh # Main backup script
│ ├── octeth-restore.sh # Restore script
│ ├── octeth-cleanup.sh # Retention policy cleanup
│ └── octeth-test-storage.sh # Cloud storage connectivity test
├── config/
│ ├── .env.example # Environment configuration template
│ └── backup.conf.example # Backup configuration template
├── install.sh # Installation script
└── README.md # This file
# Clone or copy this repository
cd octeth-backup-tools
# Run installation (installs dependencies, sets up config, and cron)
sudo ./install.sh
# Or run with wizard for guided setup
sudo ./install.sh --wizard
The installer will:
- Check/install Percona XtraBackup 8.0
- Install compression tools (pigz recommended)
- Optionally install AWS CLI for S3 backups
- Create configuration files
- Set up cron jobs for automated backups
Edit config/.env with your MySQL credentials:
# MySQL Connection
MYSQL_HOST=oempro_mysql
MYSQL_ROOT_PASSWORD=your_root_password
MYSQL_DATABASE=oempro
# MySQL data directory on HOST (required for XtraBackup)
# For Octeth Docker: /opt/oempro/_dockerfiles/mysql/data_v8
MYSQL_DATA_DIR=/opt/oempro/_dockerfiles/mysql/data_v8
# Backup Storage
BACKUP_DIR=/var/backups/octeth
# Cloud Storage (optional - choose s3, gcs, r2, or none)
CLOUD_STORAGE_PROVIDER=s3
# S3 Settings (if using AWS S3)
S3_BUCKET=my-octeth-backups
S3_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_key
AWS_SECRET_ACCESS_KEY=your_secret
# GCS Settings (if using Google Cloud Storage)
# GCS_BUCKET=my-octeth-backups
# GCS_PROJECT_ID=my-project-id
# GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json
# R2 Settings (if using Cloudflare R2)
# R2_BUCKET=my-octeth-backups
# R2_ACCOUNT_ID=your-account-id
# R2_ACCESS_KEY_ID=your_key
R2_SECRET_ACCESS_KEY=your_secret
# Run your first backup manually
./bin/octeth-backup.sh
# Check backup was created
./bin/octeth-restore.sh --list
# Manual backup
./bin/octeth-backup.sh
# Backup runs automatically via cron (default: daily at 2 AM)
The backup script automatically:
- Checks disk space and MySQL connectivity
- Determines backup type (daily/weekly/monthly based on date)
- Performs hot backup with XtraBackup
- Compresses and creates checksum
- Uploads to cloud storage (if enabled)
- Sends notifications
- Logs everything
# List available local backups
./bin/octeth-restore.sh --list
# List cloud backups (S3, GCS, or R2 based on config)
./bin/octeth-restore.sh --list-cloud
# Restore from local backup
./bin/octeth-restore.sh --file /var/backups/octeth/daily/octeth-backup-2025-01-15_02-00-00.tar.gz
# Restore from cloud (uses CLOUD_STORAGE_PROVIDER from config)
./bin/octeth-restore.sh --cloud octeth-backup-2025-01-15_02-00-00.tar.gz daily
# Restore from S3 (specific)
./bin/octeth-restore.sh --s3 octeth-backup-2025-01-15_02-00-00.tar.gz daily
# Restore from GCS (specific)
./bin/octeth-restore.sh --gcs octeth-backup-2025-01-15_02-00-00.tar.gz daily
# Force restore (skip checksum verification)
./bin/octeth-restore.sh --file backup.tar.gz --force
# Skip confirmation prompt
./bin/octeth-restore.sh --file backup.tar.gz --yes
Warning: Restore operations will:
- Stop the MySQL container
- Backup current data (safety backup)
- Replace MySQL data with backup
- Restart MySQL
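The checksum verification that `--force` skips is a plain SHA-256 round-trip. A minimal sketch, using a dummy file as a stand-in for a real archive (file names are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the checksum round-trip: the .sha256 sidecar is written at
# backup time and verified before restore. The archive here is a stand-in.
set -euo pipefail
cd "$(mktemp -d)"

echo "dummy backup data" > backup.tar.gz         # stand-in for a real archive
sha256sum backup.tar.gz > backup.tar.gz.sha256   # created alongside the backup
sha256sum -c backup.tar.gz.sha256                # restore aborts if this fails
```

If the archive is corrupted in transit, `sha256sum -c` exits non-zero and the restore stops before touching your data.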
# Show backup statistics
./bin/octeth-cleanup.sh --stats
# Dry run (see what would be deleted)
./bin/octeth-cleanup.sh --dry-run
# Perform cleanup
./bin/octeth-cleanup.sh
# Verbose output
./bin/octeth-cleanup.sh --verbose
Cleanup runs automatically after backups (via cron) and enforces the retention policy:
- Daily: Keep last 7 backups
- Weekly: Keep last 4 Sunday backups
- Monthly: Keep last 6 first-of-month backups
MYSQL_HOST=oempro_mysql # MySQL container name
MYSQL_PORT=3306 # MySQL port
MYSQL_ROOT_PASSWORD= # Root password (required)
MYSQL_DATABASE=oempro # Database name
MYSQL_USERNAME=oempro # MySQL user
MYSQL_PASSWORD= # MySQL password
MYSQL_DATA_DIR= # MySQL data directory on HOST (required)
# For Octeth: /opt/oempro/_dockerfiles/mysql/data_v8
BACKUP_DIR=/var/backups/octeth # Local backup directory
TEMP_DIR=/var/backups/octeth/tmp # Temporary directory (CRITICAL: needs DB size + 20% free space)
# WARNING: Do NOT use /tmp - often too small!
MAX_DISK_USAGE=85 # Maximum disk usage % (abort if exceeded)
MIN_FREE_SPACE_GB=10 # Minimum free space required
IMPORTANT: TEMP_DIR must have enough space for the full uncompressed database backup. The script calculates required space as: database size + 20% buffer + 5GB. Using /tmp will likely cause "No space left on device" errors for databases larger than a few GB.
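The space rule above can be sketched as a quick pre-check. `required_gb` is an illustrative helper, not the script's actual code, and all values are in whole GB:

```shell
#!/usr/bin/env bash
# Sketch of the TEMP_DIR space pre-check: DB size + 20% buffer + 5GB.
required_gb() {
  local db_gb=$1
  echo $(( db_gb + db_gb / 5 + 5 ))   # size + 20% + 5GB headroom
}

required_gb 10   # → 17  (a 10GB database needs ~17GB free)
```

To compare against actual free space, pair this with something like `df --output=avail -BG "$TEMP_DIR"` (GNU df).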
COMPRESSION_TOOL=auto # auto, pigz, or gzip
COMPRESSION_LEVEL=6 # 1-9 (6 recommended)
PARALLEL_THREADS=auto # auto or number
CLOUD_STORAGE_PROVIDER=none # s3, gcs, r2, or none
S3_BUCKET=my-octeth-backups # S3 bucket name
S3_REGION=us-east-1 # S3 region
S3_PREFIX=octeth # S3 path prefix
S3_STORAGE_CLASS=STANDARD_IA # S3 storage class
AWS_ACCESS_KEY_ID= # AWS credentials (leave empty for IAM role)
AWS_SECRET_ACCESS_KEY=
S3_UPLOAD_TOOL=awscli # awscli or rclone
RCLONE_REMOTE=s3 # rclone remote name (if using rclone)
GCS_BUCKET=my-octeth-backups # GCS bucket name
GCS_PROJECT_ID= # GCS project ID (optional, auto-detected if not set)
GCS_PREFIX=octeth # GCS path prefix
GCS_STORAGE_CLASS=NEARLINE # STANDARD, NEARLINE, COLDLINE, ARCHIVE
GCS_UPLOAD_TOOL=gsutil # gsutil or rclone
GCS_RCLONE_REMOTE=gcs # rclone remote name (if using rclone)
GOOGLE_APPLICATION_CREDENTIALS= # Path to credentials JSON (optional)
R2_BUCKET=my-octeth-backups # R2 bucket name
R2_ACCOUNT_ID= # R2 account ID (required, from Cloudflare dashboard)
R2_PREFIX=octeth # R2 path prefix
R2_STORAGE_CLASS=STANDARD # Not applicable for R2, included for consistency
R2_ACCESS_KEY_ID= # R2 API token credentials
R2_SECRET_ACCESS_KEY=
R2_UPLOAD_TOOL=awscli # awscli or rclone
R2_RCLONE_REMOTE=r2 # rclone remote name (if using rclone)
RETENTION_DAILY=7 # Keep last 7 daily backups
RETENTION_WEEKLY=4 # Keep last 4 weekly backups
RETENTION_MONTHLY=6 # Keep last 6 monthly backups
EMAIL_NOTIFICATIONS=false # Enable email notifications
[email protected] # Recipient emails (comma-separated)
[email protected] # Sender email
SMTP_HOST=smtp.gmail.com # SMTP server
SMTP_PORT=587 # SMTP port
SMTP_USERNAME= # SMTP username
SMTP_PASSWORD= # SMTP password
NOTIFY_ON_FAILURE_ONLY=true # Only notify on failures
WEBHOOK_ENABLED=false # Enable webhook notifications
WEBHOOK_URL= # Webhook URL
VERIFY_BACKUP=true # Verify backup after creation
LOCK_FILE=/tmp/octeth-backup.lock # Lock file path
LOG_FILE=/var/log/octeth-backup.log # Log file path
LOG_RETENTION_DAYS=30 # Keep logs for N days
DOCKER_CMD=docker # Docker command (use "sudo docker" if needed)
BACKUP_TIMEOUT=120 # Backup timeout in minutes
The tool automatically determines the backup type based on the current date:
| Type | When | Retention | Storage Path |
|---|---|---|---|
| Monthly | 1st of month | 6 months | /var/backups/octeth/monthly/ |
| Weekly | Sunday | 4 weeks | /var/backups/octeth/weekly/ |
| Daily | All other days | 7 days | /var/backups/octeth/daily/ |
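The selection in the table above can be sketched as a small function. `backup_type` is an illustrative helper, not the script's internals; at runtime the inputs would come from `date +%d` and `date +%w`:

```shell
#!/usr/bin/env bash
# Sketch of the date-based backup-type selection:
# 1st of the month wins, then Sunday, then daily.
backup_type() {
  local day_of_month=$1 weekday=$2   # weekday: 0=Sunday .. 6=Saturday
  if [ "$day_of_month" -eq 1 ]; then
    echo monthly
  elif [ "$weekday" -eq 0 ]; then
    echo weekly
  else
    echo daily
  fi
}

backup_type 1 3   # → monthly
backup_type 15 0  # → weekly
backup_type 16 3  # → daily
```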
If backups run daily at 2 AM:
- Day 1-6: Daily backups only
- Day 7 (Sunday): Creates weekly backup (also kept as daily)
- Day 1 of Month (Sunday): Creates monthly backup (also kept as weekly and daily)
Cleanup runs automatically and removes:
- Daily backups older than 7 days
- Weekly backups older than 4 weeks
- Monthly backups older than 6 months
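The core of each cleanup pass is "keep the newest N files". Because the timestamps in the file names sort chronologically, a minimal sketch (directory layout illustrative, not the cleanup script's actual code) looks like:

```shell
#!/usr/bin/env bash
# Minimal "keep the newest N" retention sketch: sort names newest-first,
# delete everything past the first $keep entries.
set -euo pipefail
dir=$(mktemp -d)
for i in 1 2 3 4 5 6 7 8 9; do
  touch "$dir/octeth-backup-2025-01-0${i}.tar.gz"
done

keep=7
ls -1 "$dir" | sort -r | tail -n +"$((keep + 1))" | while read -r f; do
  rm -- "$dir/$f"
done

ls -1 "$dir" | wc -l   # → 7 (the two oldest were removed)
```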
# Install AWS CLI (done by install.sh)
# Configure in .env or use IAM instance role
# Configure in .env
CLOUD_STORAGE_PROVIDER=s3
S3_BUCKET=my-octeth-backups
S3_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_key
AWS_SECRET_ACCESS_KEY=your_secret
# Test S3 access
aws s3 ls s3://my-octeth-backups/
Choose based on your recovery time requirements:
- STANDARD: Frequent access, highest cost
- STANDARD_IA (recommended): Infrequent access, 30-day minimum
- GLACIER_IR: Archive, minutes retrieval, lowest cost
- DEEP_ARCHIVE: Long-term, 12-hour retrieval
# Install Google Cloud SDK
# Ubuntu/Debian:
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt-get update && sudo apt-get install google-cloud-sdk
# Authenticate
gcloud auth login
gcloud config set project YOUR_PROJECT_ID
# Configure in .env
CLOUD_STORAGE_PROVIDER=gcs
GCS_BUCKET=my-octeth-backups
GCS_PROJECT_ID=my-project-id
GCS_UPLOAD_TOOL=gsutil
# Test GCS access
gsutil ls gs://my-octeth-backups/
# Create service account in GCP Console
# Download JSON key file
# Configure in .env
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
# Grant permissions to service account:
# - Storage Object Admin (for bucket)
# - Storage Legacy Bucket Reader (for listing)
Choose based on your access patterns and cost requirements:
- STANDARD: Frequent access, highest performance
- NEARLINE (recommended): Access < 1/month, 30-day minimum
- COLDLINE: Access < 1/quarter, 90-day minimum
- ARCHIVE: Long-term, 365-day minimum, lowest cost
Cloudflare R2 is an S3-compatible object storage with zero egress fees, making it ideal for backups.
# Install AWS CLI (done by install.sh)
# R2 uses S3-compatible API with custom endpoint
# Configure in .env
CLOUD_STORAGE_PROVIDER=r2
R2_BUCKET=my-octeth-backups
R2_ACCOUNT_ID=your-account-id # Found in Cloudflare dashboard
R2_ACCESS_KEY_ID=your_r2_key
R2_SECRET_ACCESS_KEY=your_r2_secret
R2_UPLOAD_TOOL=awscli
# Test R2 access
aws s3 ls s3://my-octeth-backups/ --endpoint-url https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com
- Log in to Cloudflare Dashboard
- Go to R2 > Overview
- Create a new R2 bucket (if needed)
- Go to "Manage R2 API Tokens"
- Create API token with:
- Permissions: Object Read & Write
- Bucket: Specify your backup bucket or all buckets
- Copy the Access Key ID and Secret Access Key
- Note your Account ID from the R2 Overview page
- Zero Egress Fees: No charges for data retrieval
- S3-Compatible: Works with AWS CLI and tools
- Global Performance: Automatic geographic distribution
- Cost-Effective: ~$0.015/GB/month storage
# Configure rclone for R2
rclone config
# Choose: Amazon S3 or S3-compatible
# Provider: Any S3-compatible
# Endpoint: https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com
# Enter Access Key ID and Secret Access Key
# Configure in .env
R2_UPLOAD_TOOL=rclone
R2_RCLONE_REMOTE=r2 # Name you gave in rclone config
# Test access
rclone ls r2:my-octeth-backups/
# Install rclone
curl https://rclone.org/install.sh | sudo bash
# Configure rclone for S3
rclone config
# Choose: Amazon S3 or S3-compatible
# Configure rclone for GCS
rclone config
# Choose: Google Cloud Storage
# Set in .env
# For S3:
CLOUD_STORAGE_PROVIDER=s3
S3_UPLOAD_TOOL=rclone
RCLONE_REMOTE=s3
# For GCS:
CLOUD_STORAGE_PROVIDER=gcs
GCS_UPLOAD_TOOL=rclone
GCS_RCLONE_REMOTE=gcs
Configure SMTP in .env to receive email notifications:
EMAIL_NOTIFICATIONS=true
[email protected],[email protected]
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
[email protected]
SMTP_PASSWORD=your_app_password
NOTIFY_ON_FAILURE_ONLY=true
For Gmail, use an App Password.
Send backup status to monitoring systems:
WEBHOOK_ENABLED=true
WEBHOOK_URL=https://hooks.slack.com/services/YOUR/WEBHOOK/URL
Webhook payload:
{
"status": "success",
"message": "Octeth backup completed",
"timestamp": "2025-01-15T02:00:00Z",
"backup_size": "2.4GB"
}
All operations are logged to /var/log/octeth-backup.log:
# View recent logs
tail -f /var/log/octeth-backup.log
# Search for errors
grep ERROR /var/log/octeth-backup.log
# View backup history
grep "Backup completed" /var/log/octeth-backup.log
Logs are automatically rotated and cleaned up after 30 days.
The installer creates cron jobs automatically:
# Backup at 2 AM daily
0 2 * * * /path/to/octeth-backup-tools/bin/octeth-backup.sh
# Cleanup at 2:30 AM daily
30 2 * * * /path/to/octeth-backup-tools/bin/octeth-cleanup.sh
# Edit cron
crontab -e
# Examples:
0 2 * * * # Daily at 2 AM
0 */6 * * * # Every 6 hours
0 3 * * 0 # Weekly on Sunday at 3 AM
0 1 1 * * # Monthly on 1st at 1 AM
For a 2GB database:
- Backup time: 5-15 minutes
- CPU usage: Low (file copy operations)
- I/O usage: Moderate (reading data files)
- Downtime: Zero
For larger databases:
- 10GB: ~30-45 minutes
- 50GB: 2-3 hours
- 100GB+: 4-6 hours
- Use pigz: Install pigz for 3-4x faster compression
  sudo apt-get install pigz
- Adjust compression level: Lower = faster, higher = smaller
  COMPRESSION_LEVEL=3 # Fast, larger files
  COMPRESSION_LEVEL=9 # Slow, smaller files
- Tune parallel threads: Match your CPU cores
  PARALLEL_THREADS=8 # For an 8-core CPU
- Run during off-peak hours: Minimize impact on production
  0 2 * * * # 2 AM is typical
- Monitor disk space: Ensure adequate free space
  # Rule of thumb: keep 2-3x the database size free
  # For a 10GB database, keep 20-30GB free
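The first and third tips are what `COMPRESSION_TOOL=auto` and `PARALLEL_THREADS=auto` imply. A sketch of that resolution (the function name is illustrative, not the script's actual code):

```shell
#!/usr/bin/env bash
# Sketch of auto-detection: prefer pigz across all cores,
# fall back to single-threaded gzip.
pick_compressor() {
  if command -v pigz >/dev/null 2>&1; then
    echo "pigz -p $(nproc)"   # parallel gzip, one thread per core
  else
    echo "gzip"               # single-threaded fallback
  fi
}

pick_compressor
```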
Calculate required storage:
Daily retention: 7 × backup_size
Weekly retention: 4 × backup_size
Monthly retention: 6 × backup_size
Total: ~17 × backup_size
Example for 5GB compressed backups:
- Local: ~85GB
- S3: ~85GB (with lifecycle policies)
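The retention math above, written out as arithmetic with the default retention counts:

```shell
#!/usr/bin/env bash
# Worked example of the storage estimate: (7 daily + 4 weekly + 6 monthly)
# copies of a 5GB compressed backup.
daily=7 weekly=4 monthly=6
backup_size_gb=5
echo $(( (daily + weekly + monthly) * backup_size_gb ))   # → 85 (GB)
```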
- Use S3 Intelligent-Tiering or Lifecycle Policies
  # Move old backups to cheaper storage automatically
  # S3 Console → Bucket → Management → Lifecycle rules
- Keep fewer monthly backups locally
  RETENTION_MONTHLY=3 # Keep only 3 months locally; use S3 for longer-term retention
- Compress more aggressively
  COMPRESSION_LEVEL=9 # Smaller files, slightly slower
# Install XtraBackup manually
sudo ./install.sh --deps-only
# Or follow manual installation:
# Ubuntu/Debian:
wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
sudo apt-get update
sudo percona-release enable-only tools release
sudo apt-get install percona-xtrabackup-80
# Check MySQL container is running
docker ps | grep mysql
# Check MySQL credentials in .env
docker exec oempro_mysql mysql -uroot -p'your_password' -e "SHOW DATABASES;"
# Check Docker command
# If you need sudo for docker, set in .env:
DOCKER_CMD="sudo docker"
This is a critical error that occurs when XtraBackup runs out of disk space during a backup. It can also cause MySQL to crash or become unresponsive.
Symptoms:
xtrabackup: Error writing file ... (OS errno 28 - No space left on device)
Cause: The TEMP_DIR (default: /tmp/octeth-backup) doesn't have enough space for the uncompressed database backup.
Solution:
- Change TEMP_DIR location (recommended):
  # Edit config/.env
  TEMP_DIR=/var/backups/octeth/tmp # Use the same disk as backups
- Check space requirements:
  # Check database size
  du -sh /opt/oempro/_dockerfiles/mysql/data_v8
  # Check available space in the temp directory
  df -h /var/backups
  # Rule: TEMP_DIR needs DB size + 20% + 5GB free
  # Example: a 10GB database needs ~17GB free in TEMP_DIR
- Clean up the temp directory:
  # Remove any stale temp files
  rm -rf /var/backups/octeth/tmp/*
- If MySQL crashed during backup:
  # Restart the MySQL container
  docker restart oempro_mysql
  # Verify MySQL is healthy
  docker logs oempro_mysql
  docker exec oempro_mysql mysql -uroot -p'password' -e "SHOW STATUS LIKE 'Uptime';"
Prevention: Always ensure TEMP_DIR has sufficient space before running backups. The script now checks this automatically and will abort with a clear error if space is insufficient.
# Test AWS CLI access
aws s3 ls s3://your-bucket-name/
# Check credentials in .env
# Verify IAM permissions:
# - s3:PutObject
# - s3:GetObject
# - s3:ListBucket
# - s3:DeleteObject
# Restore with --force to skip checksum
./bin/octeth-restore.sh --file backup.tar.gz --force
# Or verify backup manually
sha256sum -c backup.tar.gz.sha256
# Remove stale lock file
rm -f /tmp/octeth-backup.lock
# Check for running backup processes
ps aux | grep octeth-backup
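A stale lock file usually means the script died mid-run. One common pattern that avoids stale locks entirely is `flock`, where the kernel releases the lock the moment the holding process exits. This is a sketch of that pattern, not necessarily the script's actual locking mechanism:

```shell
#!/usr/bin/env bash
# Sketch: flock-based guard against overlapping backup runs.
# Unlike a bare lock file, the lock cannot go stale: it is released
# by the kernel when the process exits, even on a crash.
LOCK_FILE="${LOCK_FILE:-/tmp/octeth-backup.lock}"

exec 9>"$LOCK_FILE"       # open fd 9 on the lock file
if ! flock -n 9; then     # non-blocking: fail fast if already locked
  echo "Another backup is already running; exiting." >&2
  exit 1
fi
# ... backup work here; the lock is released automatically on exit ...
echo "lock acquired"
```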
- Protect configuration files
  chmod 600 config/.env
  # Never commit .env to git (it's in .gitignore)
- Use secure S3 access
  - Prefer IAM instance roles over hardcoded credentials
  - Use least-privilege IAM policies
  - Enable S3 bucket encryption
- Restrict file permissions
  chmod 700 bin/*.sh
  chmod 600 config/.env
- Monitor backup logs
  # Check for suspicious activity
  grep -i "error\|fail\|warning" /var/log/octeth-backup.log
- Test restores regularly
  # Verify backups are valid
  # Restore to a test environment monthly
This tool integrates seamlessly with Octeth:
- Automatic MySQL detection: Connects to the oempro_mysql container
- Respects Docker network: Uses the existing Octeth Docker network
- No service disruption: Zero downtime backups
- Compatible with all Octeth versions: Works with v5.7.1+
You can also integrate with Octeth's CLI:
# Add to octeth CLI (optional)
cd /path/to/oempro
ln -s /path/to/octeth-backup-tools/bin/octeth-backup.sh cli/backup.sh
# Then run via Octeth CLI
./cli/octeth.sh backup
# Run backup manually
./bin/octeth-backup.sh
# Verify backup exists
./bin/octeth-restore.sh --list
# Check logs
tail -50 /var/log/octeth-backup.log
# DANGER: Only test restore in development/staging!
# This will replace your database!
# List backups
./bin/octeth-restore.sh --list
# Restore
./bin/octeth-restore.sh --file /var/backups/octeth/daily/octeth-backup-YYYY-MM-DD.tar.gz --yes
# Dry run to see what would be deleted
./bin/octeth-cleanup.sh --dry-run
# View statistics
./bin/octeth-cleanup.sh --stats
The octeth-test-storage.sh tool tests connectivity to your configured cloud storage provider (AWS S3, Google Cloud Storage, or Cloudflare R2). It verifies credentials, bucket access, and read/write/delete permissions.
# Test configured cloud storage
./bin/octeth-test-storage.sh
# Verbose output (detailed logging)
./bin/octeth-test-storage.sh -v
# Quiet mode (for scripting)
./bin/octeth-test-storage.sh -q && echo "Storage ready"
What it tests:
- ✓ Upload tool installation (AWS CLI, gsutil, or rclone)
- ✓ Authentication and credentials
- ✓ Bucket exists and is accessible
- ✓ Write permissions (uploads test file)
- ✓ Read permissions (downloads test file)
- ✓ Delete permissions (removes test file)
- ✓ Storage class validity
Example output:
========================================
Octeth Storage Connectivity Test
========================================
Testing cloud storage provider: s3
[✓] AWS CLI found: aws-cli/2.15.30
[✓] AWS credentials configured
[✓] Bucket accessible: s3://my-octeth-backups/octeth/
[✓] Write test passed (uploaded 245 bytes)
[✓] Read test passed (downloaded 245 bytes)
[✓] Delete test passed
[✓] Storage class valid: STANDARD_IA
========================================
All tests passed! ✓
========================================
Exit codes:
- 0: All tests passed
- 1: One or more tests failed
- 2: Configuration error (missing .env or invalid provider)
- 3: Tool not installed (aws/gsutil/rclone)
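In a monitoring job, these exit codes map naturally onto a case statement. In this sketch `check_storage` is a stub standing in for `./bin/octeth-test-storage.sh -q`:

```shell
#!/usr/bin/env bash
# Sketch of acting on the documented exit codes in a monitoring job.
# check_storage is a stub; in real use, call octeth-test-storage.sh -q.
check_storage() { return "${1:-0}"; }

code=0
check_storage 2 || code=$?
case $code in
  0) echo "storage OK" ;;
  1) echo "storage test failed" ;;
  2) echo "configuration error" ;;
  3) echo "upload tool not installed" ;;
esac   # → prints: configuration error
```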
Use cases:
- After initial setup to verify cloud configuration
- Before running first backup to catch credential issues
- In automated monitoring (cron job every 6 hours)
- During troubleshooting of backup failures
When restoring XtraBackup backups from a Linux production server to macOS for local development, you'll encounter a lower_case_table_names incompatibility:
Different lower_case_table_names settings for server ('2') and data dictionary ('0').
Data Dictionary initialization failed.
Why this happens: Linux uses case-sensitive filesystems (lower_case_table_names=0), while macOS uses case-insensitive filesystems (lower_case_table_names=2). MySQL stores this setting in the data dictionary and refuses to start if there's a mismatch.
Docker named volumes use a Linux filesystem inside Docker Desktop's VM, preserving Linux behavior.
Create a file in your Oempro project directory that overrides the MySQL volume:
# docker-compose.override.yml (Mac-only, add to .gitignore)
services:
mysql:
volumes:
- ./_dockerfiles/mysql/log_v8:/var/log/mysql
- ./_dockerfiles/mysql/conf.d:/etc/mysql/conf.d
- oempro_mysql_data:/var/lib/mysql # Named volume instead of bind mount
volumes:
oempro_mysql_data:
Add to .gitignore:
echo "docker-compose.override.yml" >> .gitignore
# Download backup from production server to ~/tmp/
scp user@production:/var/backups/octeth/daily/octeth-backup-YYYY-MM-DD.tar.gz ~/tmp/
# Stop and remove all containers that reference the volume
docker compose down
# Delete the previously created volume if it exists
# NOTE: This will fail silently if containers still reference it.
# Always run 'docker compose down' first to remove all containers.
docker volume rm oempro_mysql_data
docker volume rm oempro_oempro_mysql_data
# Verify the volume was actually removed
docker volume inspect oempro_mysql_data
docker volume inspect oempro_oempro_mysql_data
# Extract backup into the Docker volume
docker compose run --rm \
-v ~/tmp:/backup \
--entrypoint bash mysql -c \
"cd /var/lib/mysql && tar -xzf /backup/octeth-backup-YYYY-MM-DD.tar.gz --strip-components=1"
docker compose run --rm --entrypoint bash mysql -c \
"rm -f /var/lib/mysql/xtrabackup_* /var/lib/mysql/backup-my.cnf /var/lib/mysql/mysql.sock && \
chown -R mysql:mysql /var/lib/mysql"
docker compose up -d mysql
# Verify it's running
docker compose ps mysql
- Use production passwords: The restored database has production MySQL credentials, not your local .env passwords
- Volume persists: The named volume persists between container restarts. To reset, run:
  docker compose down
  docker volume rm oempro_oempro_mysql_data
- First-time setup: Docker Compose automatically creates the volume on first run
- Keep override file local: Don't commit docker-compose.override.yml to git - it's Mac-specific
If you prefer logical backups that work across platforms:
# On production (creates SQL dump)
docker exec oempro_mysql mysqldump -u root -p'password' --all-databases > dump.sql
# On macOS (restore SQL dump)
docker exec -i oempro_mysql mysql -u root -p'password' < dump.sql
Note: mysqldump is slower and causes brief locks, but produces platform-independent backups.
Q: Will backups cause downtime?
A: No. XtraBackup performs hot backups with zero downtime. MySQL stays online and applications continue running.
Q: How long does a backup take?
A: For a 2GB database: 5-15 minutes. Larger databases scale roughly linearly.
Q: Can I back up more often than daily?
A: Yes. Modify the cron schedule. XtraBackup is efficient enough for hourly backups if needed.
Q: What happens if a backup fails?
A: The script logs the error, sends notifications (if configured), and exits without affecting your database.
Q: Can I restore a backup on a different server?
A: Yes. Copy the backup file to the new server and run the restore script.
Q: How much local disk space do I need?
A: Plan for ~17x your compressed backup size for local retention (7 daily + 4 weekly + 6 monthly).
Q: What if a local backup is lost or corrupted?
A: If S3 is enabled, download from S3. Otherwise, the backup is gone. Enable S3 for redundancy.
Q: Are backups encrypted?
A: Not built-in currently. You can add GPG encryption by modifying the backup script, or use S3 server-side encryption.
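The GPG route mentioned above can be sketched as symmetric encryption of the finished archive. File names and the passphrase file are illustrative; in production, manage the passphrase with proper secrets tooling rather than a plain file:

```shell
#!/usr/bin/env bash
# Sketch: symmetric GPG round-trip for a finished backup archive.
# The archive here is a stand-in; paths are illustrative.
set -euo pipefail
cd "$(mktemp -d)"
echo "dummy archive" > backup.tar.gz
echo "s3cret" > pass.txt
chmod 600 pass.txt

# Encrypt after compression (produces backup.tar.gz.gpg)
gpg --batch --yes --pinentry-mode loopback --passphrase-file pass.txt \
    --symmetric --cipher-algo AES256 backup.tar.gz

# Decrypt during restore
gpg --batch --yes --pinentry-mode loopback --passphrase-file pass.txt \
    --output restored.tar.gz --decrypt backup.tar.gz.gpg

cmp backup.tar.gz restored.tar.gz && echo "round-trip OK"
```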
MIT License - See LICENSE file for details
For issues, questions, or contributions:
- GitHub Issues: [Your Repository URL]
- Documentation: This README
- Octeth Support: [email protected]
- Initial release
- Percona XtraBackup 8.0 integration
- Zero-downtime hot backups
- Smart retention policy (Daily 7 + Weekly 4 + Monthly 6)
- S3 support (AWS CLI and rclone)
- Email and webhook notifications
- Automated installation and cron setup
- Comprehensive restore functionality
- Production-ready logging and error handling
Made for Octeth - Professional email marketing platform