
PostgreSQL and MariaDB Backup with Point-in-Time Recovery

File-level backups are useless for live databases. If you tar a running PostgreSQL data directory, you get an inconsistent pile of files that no sane restore will accept. Database backups need their own tools, and more importantly, they need point-in-time recovery. When someone runs DELETE FROM orders WHERE 1=1 at 3:47 PM, you need to restore to 3:46 PM, not to last night’s backup.


Both PostgreSQL and MariaDB ship with built-in mechanisms for continuous archiving and point-in-time recovery (PITR). PostgreSQL uses Write-Ahead Logs (WAL), while MariaDB relies on binary logs. The concept is identical: take a base backup, then replay transaction logs up to a specific moment. This guide demonstrates actual data corruption and recovery on both engines, with every command tested on a live system. For file-level backups of non-database directories, see the Restic backup guide or the rsync with systemd timers article.

We load sample data, take a base backup, insert more records, simulate a catastrophic DROP TABLE, then recover to the exact second before the disaster. Every command is copy-pasteable and tested on Rocky Linux 10.1 with SELinux enforcing. If you manage BorgBackup for filesystem snapshots, think of PITR as the database equivalent of incremental snapshots with per-second granularity.

Verified working: March 2026 on Rocky Linux 10.1 with PostgreSQL 17.9 and MariaDB 11.4.10, SELinux enforcing

Prerequisites

  • Rocky Linux 10, AlmaLinux 10, RHEL 10, Ubuntu 24.04, or Debian 13
  • PostgreSQL 17 installed and running (tested with 17.9). See install guides for Rocky Linux / AlmaLinux or Debian / Ubuntu
  • MariaDB 11.4 installed and running (tested with 11.4.10). See install guides for Rocky Linux / AlmaLinux or Ubuntu / Debian
  • Root or sudo access
  • Separate backup storage: local disk, mounted volume, or NFS share
  • At least 2x the database size in free disk space for backups and WAL/binlog archives

Part 1: PostgreSQL Point-in-Time Recovery

PostgreSQL’s PITR relies on two components: a base backup (a snapshot of the entire data directory) and WAL segments (the transaction log files generated after that snapshot). By archiving WAL segments to a safe location, you can replay them on top of a base backup to reach any point in time.
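Before the hands-on walkthrough, the idea can be sketched with a toy model: start from the state captured by the base backup and apply timestamped changes in order, stopping at the recovery target. This is an illustration of the concept only, not PostgreSQL internals:

```shell
# Toy model of PITR: base backup state plus an ordered, timestamped change log,
# replayed up to -- but not past -- the recovery target time.
base_orders=2000

cat > /tmp/pitr_demo.log <<'EOF'
2026-03-31T19:43:00 +300
2026-03-31T19:44:52 -2300
EOF

target="2026-03-31T19:44:17"
orders=$base_orders
while read -r ts delta; do
  # ISO 8601 timestamps compare correctly as strings; stop at the target.
  if [ "$ts" \> "$target" ]; then break; fi
  orders=$(( orders + delta ))
done < /tmp/pitr_demo.log

echo "recovered order count: $orders"   # 2300: post-backup inserts kept, the drop excluded
```

The second log entry stands in for the disastrous DROP TABLE; because its timestamp is past the target, replay stops before it.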

Load Sample Data

Create a test database with realistic tables. We use 500 users and 2000 orders so the recovery results are easy to verify.

Switch to the postgres user and create the database:

sudo -u postgres createdb appdb

Now create the tables and load the sample data:

sudo -u postgres psql appdb <<'SQLEOF'
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(100) NOT NULL,
    email VARCHAR(200) NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    product VARCHAR(200) NOT NULL,
    amount DECIMAL(10,2) NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

INSERT INTO users (username, email)
SELECT
    'user_' || generate_series,
    'user_' || generate_series || '@example.com'
FROM generate_series(1, 500);

INSERT INTO orders (user_id, product, amount)
SELECT
    (random() * 499 + 1)::int,
    'Product-' || generate_series,
    (random() * 500 + 10)::numeric(10,2)
FROM generate_series(1, 2000);
SQLEOF

Verify the row counts match what we expect:

sudo -u postgres psql appdb -c "SELECT 'users' AS table_name, count(*) FROM users UNION ALL SELECT 'orders', count(*) FROM orders;"

The output confirms both tables are loaded:

 table_name | count
------------+-------
 users      |   500
 orders     |  2000
(2 rows)

Configure WAL Archiving

WAL archiving copies completed WAL segments to a safe directory. Without this, PostgreSQL recycles old WAL files and you lose the ability to replay past a certain point. The official PostgreSQL continuous archiving documentation covers the theory in depth.

Create the backup directories first:

sudo mkdir -p /backup/postgresql/base /backup/postgresql/wal_archive
sudo chown -R postgres:postgres /backup/postgresql

On RHEL-family systems with SELinux enforcing, set the correct file context so PostgreSQL can write to the backup directory:

sudo semanage fcontext -a -t postgresql_db_t "/backup/postgresql(/.*)?"
sudo restorecon -Rv /backup/postgresql

The restorecon output confirms the context was applied:

Relabeled /backup/postgresql from unconfined_u:object_r:default_t:s0 to unconfined_u:object_r:postgresql_db_t:s0
Relabeled /backup/postgresql/base from unconfined_u:object_r:default_t:s0 to unconfined_u:object_r:postgresql_db_t:s0
Relabeled /backup/postgresql/wal_archive from unconfined_u:object_r:default_t:s0 to unconfined_u:object_r:postgresql_db_t:s0

Open the PostgreSQL configuration file:

sudo vi /var/lib/pgsql/17/data/postgresql.conf

Add or modify these settings. On Ubuntu/Debian, the path is /etc/postgresql/17/main/postgresql.conf instead:

wal_level = replica
archive_mode = on
archive_command = 'cp %p /backup/postgresql/wal_archive/%f'
archive_timeout = 60

Here is what each setting does:

  • wal_level = replica has been the default since PostgreSQL 10 and provides enough WAL detail for PITR. Setting it explicitly ensures archiving works even if a previous config changed it
  • archive_mode = on tells PostgreSQL to call the archive command after each WAL segment is complete
  • archive_command copies the WAL file (%p is the full path, %f is the filename) to our archive directory
  • archive_timeout = 60 forces a WAL switch every 60 seconds even if the segment is not full. Use 300 or higher in production to reduce I/O overhead
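One hardening step worth adding: the PostgreSQL documentation recommends an archive_command that refuses to overwrite an existing archived segment. The behavior of the test ! -f ... && cp pattern is easy to demonstrate with plain files (the paths here are throwaway stand-ins for %p and %f):

```shell
# Throwaway stand-ins for %p (the completed WAL file) and %f (its filename).
rm -rf /tmp/wal_demo
mkdir -p /tmp/wal_demo/archive
echo "segment-v1" > /tmp/wal_demo/000000010000000000000001

# First attempt: destination absent, so the copy happens.
if test ! -f /tmp/wal_demo/archive/000000010000000000000001; then
  cp /tmp/wal_demo/000000010000000000000001 /tmp/wal_demo/archive/
fi

# Second attempt with a changed source: destination exists, so the copy is skipped.
echo "segment-v2" > /tmp/wal_demo/000000010000000000000001
if test ! -f /tmp/wal_demo/archive/000000010000000000000001; then
  cp /tmp/wal_demo/000000010000000000000001 /tmp/wal_demo/archive/
fi

cat /tmp/wal_demo/archive/000000010000000000000001   # still segment-v1
```

In postgresql.conf this pattern reads archive_command = 'test ! -f /backup/postgresql/wal_archive/%f && cp %p /backup/postgresql/wal_archive/%f'. A non-zero exit makes PostgreSQL retry the segment later, so an already-archived file blocks re-archiving instead of being silently replaced.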

Restart PostgreSQL to apply the changes:

sudo systemctl restart postgresql-17

Confirm WAL archiving is active:

sudo -u postgres psql -c "SHOW wal_level; SHOW archive_mode; SHOW archive_command;"

All three settings should reflect the new values:

 wal_level
-----------
 replica
(1 row)

 archive_mode
--------------
 on
(1 row)

              archive_command
--------------------------------------------
 cp %p /backup/postgresql/wal_archive/%f
(1 row)

Take a Base Backup

The base backup captures the entire PostgreSQL data directory at a single point in time. Combined with archived WAL segments, it forms the foundation for PITR.

Run pg_basebackup with tar format and gzip compression:

sudo -u postgres pg_basebackup -D /backup/postgresql/base -Ft -z -Xs -P

The -Ft flag creates tar archives, -z compresses with gzip, -Xs streams WAL files into the backup, and -P shows progress. The output shows the backup completed quickly:

31184/31184 kB (100%), 1/1 tablespace

Check the backup files:

ls -lh /backup/postgresql/base/

Three files are present: the backup manifest, the compressed data directory, and the WAL archive:

total 4.1M
-rw-------. 1 postgres postgres 176K Mar 31 19:41 backup_manifest
-rw-------. 1 postgres postgres 4.0M Mar 31 19:41 base.tar.gz
-rw-------. 1 postgres postgres  17K Mar 31 19:41 pg_wal.tar.gz

A 4 MB compressed backup for our test database. Production databases will be much larger, but the process is identical.

Simulate Data Changes and Corruption

This is the critical part. We insert 300 more orders after the base backup, then destroy the table. A successful PITR must recover both the original 2000 orders and these 300 post-backup orders.

Insert the additional records:

sudo -u postgres psql appdb -c "
INSERT INTO orders (user_id, product, amount)
SELECT
    (random() * 499 + 1)::int,
    'PostBackup-Product-' || generate_series,
    (random() * 200 + 5)::numeric(10,2)
FROM generate_series(1, 300);
"

Confirm the total is now 2300 orders:

sudo -u postgres psql appdb -c "SELECT count(*) FROM orders;"

The count reflects all records including the 300 we just added:

 count
-------
  2300
(1 row)

Record a safe recovery timestamp. This is the moment we want to restore to (before the disaster). In a real incident, you would check application logs or database logs for the exact time of the bad query.

sudo -u postgres psql -c "SELECT now();"

Note the timestamp. In our test, this was 2026-03-31 19:44:17.
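If you are scripting an incident response, you can also capture the safe point from the shell in the exact format recovery_target_time expects. This assumes the client clock matches the server's; when in doubt, the SELECT now() output above is authoritative:

```shell
# Capture a recovery target in the format postgresql.conf expects.
SAFE_TS=$(date '+%Y-%m-%d %H:%M:%S')
echo "recovery_target_time = '${SAFE_TS}'"
```

Paste the emitted line into the recovery configuration later instead of retyping the timestamp by hand.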

Force a WAL segment switch so the latest changes are archived:

sudo -u postgres psql -c "SELECT pg_switch_wal();"

Wait a moment for the archive command to copy the segment, then verify WAL files are in the archive:

ls -la /backup/postgresql/wal_archive/

The archive should contain multiple WAL segments:

total 65548
drwxr-xr-x. 2 postgres postgres     4096 Mar 31 19:44 .
drwxr-xr-x. 4 postgres postgres     4096 Mar 31 19:41 ..
-rw-------. 1 postgres postgres 16777216 Mar 31 19:41 000000010000000000000001
-rw-------. 1 postgres postgres 16777216 Mar 31 19:42 000000010000000000000002
-rw-------. 1 postgres postgres 16777216 Mar 31 19:43 000000010000000000000003
-rw-------. 1 postgres postgres 16777216 Mar 31 19:44 000000010000000000000004

Now simulate the disaster. Drop the orders table:

sudo -u postgres psql appdb -c "DROP TABLE orders CASCADE;"

Verify the destruction is complete:

sudo -u postgres psql appdb -c "\dt"

Only the users table remains:

        List of relations
 Schema | Name  | Type  |  Owner
--------+-------+-------+----------
 public | users | table | postgres
(1 row)

The orders table, all 2300 rows, and the foreign key relationship are gone. Time to recover.

Recover to the Exact Second

PITR recovery follows a strict sequence: stop the server, replace the data directory with the base backup, configure recovery parameters, and start the server. PostgreSQL replays archived WAL segments until it reaches the target time, then promotes itself to a normal running state.

Stop PostgreSQL:

sudo systemctl stop postgresql-17

Move the corrupted data directory out of the way. Do not delete it yet in case you need to investigate:

sudo mv /var/lib/pgsql/17/data /var/lib/pgsql/17/data.corrupted

Create a fresh data directory and extract the base backup into it:

sudo mkdir -p /var/lib/pgsql/17/data
sudo chown postgres:postgres /var/lib/pgsql/17/data
sudo chmod 700 /var/lib/pgsql/17/data
sudo -u postgres tar xzf /backup/postgresql/base/base.tar.gz -C /var/lib/pgsql/17/data/
sudo -u postgres tar xzf /backup/postgresql/base/pg_wal.tar.gz -C /var/lib/pgsql/17/data/pg_wal/

Create the recovery.signal file. This tells PostgreSQL to enter recovery mode on startup:

sudo -u postgres touch /var/lib/pgsql/17/data/recovery.signal

Open the PostgreSQL configuration to add recovery parameters:

sudo vi /var/lib/pgsql/17/data/postgresql.conf

Add these three lines at the end of the file. Replace the timestamp with your safe recovery time:

restore_command = 'cp /backup/postgresql/wal_archive/%f %p'
recovery_target_time = '2026-03-31 19:44:17'
recovery_target_action = 'promote'

The restore_command tells PostgreSQL where to find archived WAL segments. The recovery_target_time is the exact second to stop replaying. The recovery_target_action = promote makes the server writable again once recovery completes, instead of pausing and waiting for manual intervention.

On RHEL/Rocky with SELinux enforcing, restore the correct file contexts on the new data directory:

sudo restorecon -Rv /var/lib/pgsql/17/data/

Start PostgreSQL. It enters recovery mode automatically:

sudo systemctl start postgresql-17

Check the server log for recovery progress (the log path may differ on your system):

sudo tail -n 20 /var/lib/pgsql/17/data/log/postgresql-*.log

You should see lines indicating WAL replay and recovery completion:

LOG:  starting point-in-time recovery to 2026-03-31 19:44:17+00
LOG:  restored log file "000000010000000000000001" from archive
LOG:  restored log file "000000010000000000000002" from archive
LOG:  restored log file "000000010000000000000003" from archive
LOG:  recovery stopping before commit of transaction 748, time 2026-03-31 19:44:52
LOG:  redo done at 0/4000108
LOG:  selected new timeline ID: 2
LOG:  archive recovery complete
LOG:  database system is ready to accept connections

Now verify the data. Check both tables and the post-backup orders:

sudo -u postgres psql appdb -c "SELECT 'users' AS table_name, count(*) FROM users UNION ALL SELECT 'orders', count(*) FROM orders;"

Both tables are back with all rows intact:

 table_name | count
------------+-------
 users      |   500
 orders     |  2300
(2 rows)

The 300 orders inserted after the base backup are recovered. Confirm by checking for the PostBackup products specifically:

sudo -u postgres psql appdb -c "SELECT count(*) FROM orders WHERE product LIKE 'PostBackup-Product-%';"

All 300 post-backup records are present:

 count
-------
   300
(1 row)

Verify the server has promoted out of recovery mode and is accepting writes:

sudo -u postgres psql -c "SELECT pg_is_in_recovery();"

A result of f (false) confirms the server is fully promoted:

 pg_is_in_recovery
-------------------
 f
(1 row)

PostgreSQL PITR is complete. Every record, including those created after the base backup, has been recovered to the exact second before the DROP TABLE.

Clean up the corrupted data directory once you are satisfied with the recovery:

sudo rm -rf /var/lib/pgsql/17/data.corrupted

Part 2: MariaDB Point-in-Time Recovery

MariaDB uses binary logs (binlogs) instead of WAL. The approach is the same: take a full backup with mariadb-backup, then replay binary log events up to a target timestamp. The MariaDB Backup documentation covers the tool’s full capabilities.

Enable Binary Logging

Binary logging may already be enabled on your system. To ensure the correct settings for PITR, create a dedicated configuration file.

Open a new config file for backup settings. On Ubuntu/Debian, create it under /etc/mysql/mariadb.conf.d/ instead:

sudo vi /etc/my.cnf.d/backup.cnf

Add the following configuration:

[mysqld]
log_bin = /var/lib/mysql/mariadb-bin
binlog_format = ROW
expire_logs_days = 14
server_id = 1

The ROW format logs actual row changes rather than SQL statements, which makes binlog replay deterministic. The expire_logs_days setting prevents binlogs from consuming all available disk space. An explicit server_id is good hygiene once binary logging is on, and required if the server ever participates in replication.
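A side note on retention units: newer MariaDB releases also expose binlog_expire_logs_seconds, which expresses the same retention as expire_logs_days in seconds (days × 86400). If you prefer the seconds-based variable, the conversion for this guide's 14 days is:

```shell
# 14 days of binlog retention, expressed for binlog_expire_logs_seconds.
echo $(( 14 * 24 * 60 * 60 ))   # 1209600
```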

Restart MariaDB to apply:

sudo systemctl restart mariadb

Verify binary logging is active:

sudo mariadb -e "SHOW VARIABLES LIKE 'log_bin'; SHOW VARIABLES LIKE 'binlog_format';"

Both variables should reflect the new settings:

+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+

+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | ROW   |
+---------------+-------+

Create the backup directory:

sudo mkdir -p /backup/mariadb
sudo chown mysql:mysql /backup/mariadb

Set the SELinux context for the backup directory on RHEL-family systems:

sudo semanage fcontext -a -t mysqld_db_t "/backup/mariadb(/.*)?"
sudo restorecon -Rv /backup/mariadb

Now load the same sample data we used for PostgreSQL. Create the database and tables:

sudo mariadb -e "CREATE DATABASE IF NOT EXISTS appdb;"

Create the schema and insert records:

sudo mariadb appdb -e "
CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(100) NOT NULL,
    email VARCHAR(200) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE orders (
    id INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT,
    product VARCHAR(200) NOT NULL,
    amount DECIMAL(10,2) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (user_id) REFERENCES users(id)
);"

MariaDB does not have generate_series, but its built-in SEQUENCE storage engine provides virtual tables like seq_1_to_500 that fill the same role. The simplest approach for bulk inserts:

sudo mariadb appdb -e "
INSERT INTO users (username, email)
SELECT CONCAT('user_', seq), CONCAT('user_', seq, '@example.com')
FROM seq_1_to_500;

INSERT INTO orders (user_id, product, amount)
SELECT
    FLOOR(1 + RAND() * 500),
    CONCAT('Product-', seq),
    ROUND(10 + RAND() * 490, 2)
FROM seq_1_to_2000;"

Verify the data loaded correctly:

sudo mariadb appdb -e "SELECT 'users' AS table_name, COUNT(*) AS cnt FROM users UNION ALL SELECT 'orders', COUNT(*) FROM orders;"

Both tables have the expected row counts:

+------------+------+
| table_name | cnt  |
+------------+------+
| users      |  500 |
| orders     | 2000 |
+------------+------+

Take a Full Backup

The mariadb-backup tool (also known by its legacy name mariabackup) creates a hot backup without locking tables for the duration of the copy.

Run the backup:

sudo mariadb-backup --backup --target-dir=/backup/mariadb/full

Key lines from the output confirm success:

[00] 2026-03-31 19:45:01 Connecting to MariaDB server host: localhost
[00] 2026-03-31 19:45:01 Using server version 11.4.10-MariaDB
[00] 2026-03-31 19:45:01 mariadb-backup version 11.4.10-MariaDB
[00] 2026-03-31 19:45:02 Executing BACKUP STAGE END
[00] 2026-03-31 19:45:02 All tables unlocked
[00] 2026-03-31 19:45:02 completed OK!

The backup must be prepared before it can be used for a restore. This step applies any uncommitted transactions from the backup’s redo log:

sudo mariadb-backup --prepare --target-dir=/backup/mariadb/full

Check the total backup size:

du -sh /backup/mariadb/full

For our test database:

48M	/backup/mariadb/full

MariaDB’s backup is larger than PostgreSQL’s compressed tar because mariadb-backup copies the raw data files without compression by default. You can pipe the output through gzip or use --stream=xbstream with compression for production backups.

Simulate Data Changes and Corruption

Same pattern as PostgreSQL. Insert 300 more orders, record a safe timestamp, then drop the table.

sudo mariadb appdb -e "
INSERT INTO orders (user_id, product, amount)
SELECT
    FLOOR(1 + RAND() * 500),
    CONCAT('AfterBackup-Item-', seq),
    ROUND(5 + RAND() * 195, 2)
FROM seq_1_to_300;"

Confirm the order count is now 2300:

sudo mariadb appdb -e "SELECT COUNT(*) AS total_orders FROM orders;"

All 2300 records are present:

+--------------+
| total_orders |
+--------------+
|         2300 |
+--------------+

Flush binary logs to start a new binlog file (this makes it easier to identify which file contains the post-backup transactions):

sudo mariadb -e "FLUSH BINARY LOGS;"

Record the current timestamp as your safe recovery point:

sudo mariadb -e "SELECT NOW();"

In our test, this was 2026-03-31 19:46:22. Note this down carefully.

Check which binary log files exist:

sudo mariadb -e "SHOW BINARY LOGS;"

You should see multiple binlog files:

+--------------------+-----------+
| Log_name           | File_size |
+--------------------+-----------+
| mariadb-bin.000001 |    124890 |
| mariadb-bin.000002 |     98432 |
| mariadb-bin.000003 |       342 |
+--------------------+-----------+

Now destroy the orders table:

sudo mariadb appdb -e "DROP TABLE orders;"

Verify the table is gone:

sudo mariadb appdb -e "SHOW TABLES;"

Only the users table remains:

+-----------------+
| Tables_in_appdb |
+-----------------+
| users           |
+-----------------+

Recover to the Exact Second

MariaDB PITR is a two-phase process. First, restore the base backup to get back to the backup’s point in time. Then replay binary logs up to the target timestamp to recover all changes made between the backup and the disaster.

Stop MariaDB:

sudo systemctl stop mariadb

Move the current (corrupted) data directory aside:

sudo mv /var/lib/mysql /var/lib/mysql.corrupted

Restore the base backup using --copy-back:

sudo mariadb-backup --copy-back --target-dir=/backup/mariadb/full

Fix ownership on the restored data directory. MariaDB requires the mysql user to own all files:

sudo chown -R mysql:mysql /var/lib/mysql

Restore SELinux contexts:

sudo restorecon -Rv /var/lib/mysql

Start MariaDB with the base backup restored:

sudo systemctl start mariadb

Check the order count. At this point, you should see only the 2000 orders from the backup, not the 300 added afterward:

sudo mariadb appdb -e "SELECT COUNT(*) AS orders_from_backup FROM orders;"

The base backup state is confirmed:

+--------------------+
| orders_from_backup |
+--------------------+
|               2000 |
+--------------------+

Now replay the binary logs to recover the 300 missing orders. The binlog files are in the old data directory we moved aside. Do not replay them from the very beginning: every transaction up to the backup point is already in the restored data, and re-applying it would fail on duplicate keys and existing tables. mariadb-backup records the binlog file and position matching the backup in xtrabackup_binlog_info inside the backup directory:

cat /backup/mariadb/full/xtrabackup_binlog_info

Replay from that file and position with mariadb-binlog, using --stop-datetime as the safe recovery point. Substitute the file name and position from the output above:

sudo mariadb-binlog --start-position=<position> \
  --stop-datetime="2026-03-31 19:46:22" \
  /var/lib/mysql.corrupted/<binlog file> | sudo mariadb

If xtrabackup_binlog_info names an older file than the newest binlog, list the later files after it on the same command line; --start-position applies only to the first file. The --stop-datetime flag tells the binlog reader to stop replaying events at the specified time, so everything after that timestamp (including the DROP TABLE) is ignored. On older MariaDB versions, you may see the command as mysqlbinlog, which is the legacy name for the same tool.
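The stop-at-timestamp behavior is simple to picture: the binlog reader walks events in order and stops emitting SQL once an event's timestamp reaches the cutoff. A toy illustration over fake dump-style lines (the real tool compares full datetimes, not just the time-of-day field used here):

```shell
# Fake, simplified binlog dump lines: "#<date> <time> <event>".
cat > /tmp/binlog_demo.txt <<'EOF'
#260331 19:45:30 Query INSERT INTO orders VALUES (...)
#260331 19:46:10 Query INSERT INTO orders VALUES (...)
#260331 19:46:40 Query DROP TABLE orders
EOF

# Emit events strictly before the stop time; everything from the cutoff on
# (including the DROP) is skipped, which is what --stop-datetime does.
awk -v stop="19:46:22" '$2 >= stop { exit } { print }' /tmp/binlog_demo.txt
```

Only the two INSERT lines survive the filter.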

Verify the full recovery. Check the total order count:

sudo mariadb appdb -e "SELECT COUNT(*) AS total_orders FROM orders;"

All 2300 orders are back:

+--------------+
| total_orders |
+--------------+
|         2300 |
+--------------+

Confirm the post-backup records specifically:

sudo mariadb appdb -e "SELECT COUNT(*) AS post_backup_orders FROM orders WHERE product LIKE 'AfterBackup-Item-%';"

All 300 post-backup orders recovered from binlog replay:

+--------------------+
| post_backup_orders |
+--------------------+
|                300 |
+--------------------+

MariaDB PITR is complete. The base backup restored 2000 orders, and the binary log replay added the remaining 300. Clean up the old data directory:

sudo rm -rf /var/lib/mysql.corrupted

Automate Daily Backups

Manual backups are fine for testing. Production systems need automated, scheduled backups. Systemd timers are a reliable alternative to cron, with built-in logging and persistence across reboots.

PostgreSQL Backup Service and Timer

Create the backup script first:

sudo vi /usr/local/bin/pg-backup.sh

Add the following script content:

#!/bin/bash
BACKUP_DIR="/backup/postgresql/base"
DATE=$(date +%Y%m%d_%H%M%S)
DEST="${BACKUP_DIR}/${DATE}"

mkdir -p "$DEST"
pg_basebackup -D "$DEST" -Ft -z -Xs -U postgres

# Keep only the last 7 daily backups
find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} \;

Make it executable:

sudo chmod +x /usr/local/bin/pg-backup.sh
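The retention line in the script is worth testing against throwaway directories before trusting it with rm -rf. The demo below includes -mindepth 1 so the parent directory itself can never match, and uses GNU touch -d to back-date a fake old backup:

```shell
# Simulate a backup directory with one stale and one fresh daily backup.
rm -rf /tmp/retention_demo
mkdir -p /tmp/retention_demo/20260320_010000 /tmp/retention_demo/20260331_010000
touch -d "10 days ago" /tmp/retention_demo/20260320_010000
touch -d "1 hour ago"  /tmp/retention_demo/20260331_010000

# Dry run first: -print instead of -exec rm -rf shows what would be deleted.
find /tmp/retention_demo -mindepth 1 -maxdepth 1 -type d -mtime +7 -print

# The real deletion: only the back-dated directory goes.
find /tmp/retention_demo -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} \;
ls /tmp/retention_demo
```

Running the dry-run variant against the real backup directory is a cheap way to audit what the next cleanup pass will remove.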

Create the systemd service unit:

sudo vi /etc/systemd/system/pg-backup.service

Define the service to run as the postgres user with low I/O priority:

[Unit]
Description=PostgreSQL Base Backup
After=postgresql-17.service
Requires=postgresql-17.service

[Service]
Type=oneshot
User=postgres
Nice=10
IOSchedulingClass=idle
ExecStart=/usr/local/bin/pg-backup.sh

Create the timer unit:

sudo vi /etc/systemd/system/pg-backup.timer

Schedule it to run daily at 01:00 with persistence enabled:

[Unit]
Description=Daily PostgreSQL Backup Timer

[Timer]
OnCalendar=*-*-* 01:00:00
Persistent=true

[Install]
WantedBy=timers.target

The Persistent=true setting ensures the backup runs at the next opportunity if the system was powered off during the scheduled time.

Enable and start the timer:

sudo systemctl daemon-reload
sudo systemctl enable --now pg-backup.timer

MariaDB Backup Service and Timer

Create the MariaDB backup script:

sudo vi /usr/local/bin/mariadb-backup.sh

Add the script content:

#!/bin/bash
BACKUP_DIR="/backup/mariadb"
DATE=$(date +%Y%m%d_%H%M%S)
DEST="${BACKUP_DIR}/${DATE}"

mkdir -p "$DEST"
mariadb-backup --backup --target-dir="$DEST"
mariadb-backup --prepare --target-dir="$DEST"

# Keep only the last 7 daily backups
find "$BACKUP_DIR" -maxdepth 1 -type d -name "20*" -mtime +7 -exec rm -rf {} \;

Make it executable:

sudo chmod +x /usr/local/bin/mariadb-backup.sh

Create the systemd service:

sudo vi /etc/systemd/system/mariadb-backup.service

Define it to run as root (mariadb-backup needs filesystem access to the data directory):

[Unit]
Description=MariaDB Full Backup
After=mariadb.service
Requires=mariadb.service

[Service]
Type=oneshot
Nice=10
IOSchedulingClass=idle
ExecStart=/usr/local/bin/mariadb-backup.sh

Create the timer:

sudo vi /etc/systemd/system/mariadb-backup.timer

Schedule at 02:00 daily, one hour after the PostgreSQL backup to avoid I/O contention if both run on the same host:

[Unit]
Description=Daily MariaDB Backup Timer

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target

Enable and start:

sudo systemctl daemon-reload
sudo systemctl enable --now mariadb-backup.timer

Verify both timers are active:

systemctl list-timers pg-backup.timer mariadb-backup.timer

You should see both listed with their next trigger times:

NEXT                         LEFT          LAST PASSED UNIT                  ACTIVATES
Tue 2026-04-01 01:00:00 UTC  5h 14min left n/a  n/a    pg-backup.timer      pg-backup.service
Tue 2026-04-01 02:00:00 UTC  6h 14min left n/a  n/a    mariadb-backup.timer mariadb-backup.service

2 timers listed.

Production Considerations

The walkthrough above uses aggressive settings to make the demo fast. Production environments need different tuning.

archive_timeout: We used 60 seconds for testing. In production, set this to 300 (5 minutes) or 600 (10 minutes). Lower values generate more WAL files and consume more disk space, but reduce the maximum data loss window. A 5-minute timeout means you lose at most 5 minutes of transactions if both the server and the WAL archive disk fail simultaneously.
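The trade-off is easy to put numbers on. With archive_timeout forcing a switch, even an idle server emits one 16 MB segment (the default size) per interval, so the worst-case archive growth per day is:

```shell
# Worst-case WAL archive growth per day for a given archive_timeout,
# assuming the default 16 MB WAL segment size.
for timeout in 60 300; do
  switches=$(( 24 * 60 * 60 / timeout ))
  echo "archive_timeout=${timeout}s -> up to ${switches} segments/day = $(( switches * 16 )) MB/day"
done
```

At 60 seconds that is up to 23040 MB (about 22.5 GiB) of archived WAL per day on a completely idle server; at 300 seconds it drops to 4608 MB.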

Backup storage location: Keep backups on a physically separate disk or a remote server. A backup sitting on the same disk as the data directory dies with it. NFS mounts, S3-compatible object storage, or a dedicated backup server over rsync/scp are all viable options.

Test restores regularly. Schedule a monthly restore test on a staging server. A backup you have never tested is not a backup. Automate the restore verification if possible: restore, run a query, check row counts, tear down.

Monitor WAL archive lag for PostgreSQL. If the archive command fails silently, WAL segments pile up in pg_wal/ and eventually fill the disk, crashing the server. Monitor the last_archived_wal and last_failed_wal columns from pg_stat_archiver. For a full monitoring stack, see monitoring PostgreSQL with Prometheus and Grafana.
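As a cross-check that needs no server connection, lag can also be estimated from WAL filenames alone, for example the newest file in pg_wal/ versus the newest file in the archive directory. The helper below assumes the default 16 MB segment size (names are timeline + log + seg, 8 hex digits each, with 0x100 segments per "log" value):

```shell
# Compare two WAL segment filenames numerically to estimate lag in segments.
wal_segno() {
  echo $(( 16#${1:8:8} * 16#100 + 16#${1:16:8} ))
}

current="00000001000000000000000A"    # e.g. newest file in pg_wal/
archived="000000010000000000000004"   # e.g. last_archived_wal from pg_stat_archiver

echo "archive lag: $(( $(wal_segno "$current") - $(wal_segno "$archived") )) segments"
```

A lag that only ever grows is the signature of a silently failing archive_command.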

MariaDB binlog disk usage: Binary logs grow fast on write-heavy databases. Set expire_logs_days to match your backup retention. If you keep 7 days of backups, 14 days of binlogs gives you adequate overlap. Monitor disk usage in /var/lib/mysql/ for binlog files, and set up MariaDB monitoring with Prometheus and Grafana to catch problems before they escalate.

Encryption and permissions: Backup files contain all your data. Set directory permissions to 700, owned by the database user. For offsite storage, encrypt at rest. PostgreSQL’s pg_basebackup does not encrypt natively, but you can pipe through gpg or use an encrypted filesystem. The same applies to MariaDB: mariadb-backup does not provide stream encryption (the XtraBackup --encrypt options were dropped in the MariaDB fork), so encrypt with gpg or at the filesystem layer there as well.

Quick Reference

Side-by-side comparison of the PITR mechanisms in both database engines:

+---------------------+----------------------------------------+----------------------------------------+
| Item                | PostgreSQL 17                          | MariaDB 11.4                           |
+---------------------+----------------------------------------+----------------------------------------+
| Backup tool         | pg_basebackup                          | mariadb-backup                         |
| Transaction log     | WAL (Write-Ahead Log)                  | Binary log (binlog)                    |
| Archive config      | archive_mode, archive_command          | log_bin, binlog_format                 |
| Recovery signal     | recovery.signal file                   | mariadb-binlog --stop-datetime         |
| Recovery config     | restore_command, recovery_target_time  | --stop-datetime flag on binlog replay  |
| Backup size (test)  | 4.0 MB (compressed)                    | 48 MB (uncompressed)                   |
| Prepare step needed | No                                     | Yes (--prepare)                        |
| SELinux context     | postgresql_db_t                        | mysqld_db_t                            |
| Recovery verified   | 2300 orders restored                   | 2300 orders restored                   |
+---------------------+----------------------------------------+----------------------------------------+

Both engines recovered all 2300 orders, including the 300 records created after the base backup. The key difference is workflow: PostgreSQL handles WAL replay internally during startup, while MariaDB requires a separate binlog replay step after the base restore. Pick the one that matches your stack (our PostgreSQL vs MariaDB benchmark can help with that decision) and build it into your automation from day one. Recovering a production database at 3 AM is stressful enough without reading documentation for the first time.
