This post has results for Postgres versions 12.22 through 18.1 with the Insert Benchmark on a small server.
Postgres continues to be boring in a good way. It is hard to find performance regressions.
tl;dr for a cached workload
- performance has mostly been stable from Postgres 12 through 18
- create index has been ~10% faster since Postgres 15
- throughput for the write-only steps has been ~10% less since Postgres 15
- throughput for the point-query steps (qp*) has been ~20% better since Postgres 13
The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM. Storage is one NVMe device for the database using ext4 with discard enabled. The OS is Ubuntu 24.04. More details on it are here.
For Postgres 18 there are three configurations:
- conf.diff.cx10b_c8r32
- uses io_method='sync' to match Postgres 17 behavior
- conf.diff.cx10c_c8r32
- uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
- conf.diff.cx10d_c8r32
- uses io_method='io_uring' to do async IO via io_uring
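As a sketch, the relevant lines in those config diffs would look like the following. io_method and io_workers are real Postgres 18 settings, but everything else in the files is omitted here:

```
# conf.diff.cx10b_c8r32: synchronous IO, matches Postgres 17 behavior
io_method = 'sync'

# conf.diff.cx10c_c8r32: async IO via a pool of IO worker processes
io_method = 'worker'
io_workers = 16

# conf.diff.cx10d_c8r32: async IO via io_uring
# (requires Linux and a Postgres built with --with-liburing)
io_method = 'io_uring'
```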
Benchmarks are run for two workloads, where X, Y and Z are the number of rows inserted (and deleted) by the l.i0, l.i1 and l.i2 steps described below:
- cached - the values for X, Y, Z are 30M, 40M, 10M and the database fits in memory
- IO-bound - the values for X, Y, Z are 800M, 4M, 1M and the database is larger than memory
- l.i0
- insert X rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client. A SQL sketch of the write steps appears after this list.
- l.x
- create 3 secondary indexes per table. There is one connection per client.
- l.i1
- use 2 connections/client. One inserts Y rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
- l.i2
- like l.i1 but each transaction modifies 5 rows (small transactions) and Z rows are inserted and deleted per table.
- Wait for S seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of S is a function of the table size.
- qr100
- use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes (see the query sketch after this list). If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. This step is frequently not IO-bound for the IO-bound workload.
- qp100
- like qr100 except uses point queries on the PK index
- qr500
- like qr100 but the insert and delete rates are increased from 100/s to 500/s
- qp500
- like qp100 but the insert and delete rates are increased from 100/s to 500/s
- qr1000
- like qr100 but the insert and delete rates are increased from 100/s to 1000/s
- qp1000
- like qp100 but the insert and delete rates are increased from 100/s to 1000/s
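To make the write steps concrete, here is a minimal SQL sketch of the shape of the work that l.i0, l.x, l.i1 and l.i2 do. The table and column names are hypothetical (the real Insert Benchmark schema differs), and the rate limiting and concurrent insert/delete connections are not shown:

```sql
-- l.i0: load X rows in PK order; the table has a PK but no secondary indexes yet
CREATE TABLE ib (
  pk bigint PRIMARY KEY,
  k1 bigint,
  k2 bigint,
  k3 bigint,
  payload text
);
INSERT INTO ib
SELECT g, g % 100, g % 1000, g % 10000, 'filler'
FROM generate_series(1, 30000000) g;  -- X = 30M for the cached workload

-- l.x: create 3 secondary indexes per table
CREATE INDEX ib_k1 ON ib (k1, pk);
CREATE INDEX ib_k2 ON ib (k2, pk);
CREATE INDEX ib_k3 ON ib (k3, pk);

-- l.i1: one connection inserts while another deletes at the same rate,
-- 50 rows per transaction (l.i2 is the same shape with 5 rows per transaction)
BEGIN;
INSERT INTO ib
SELECT g, g % 100, g % 1000, g % 10000, 'filler'
FROM generate_series(30000001, 30000050) g;
COMMIT;

BEGIN;
DELETE FROM ib WHERE pk BETWEEN 1 AND 50;
COMMIT;
```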
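And a similar sketch for the read-write steps, again with hypothetical names. Each qr*/qp* step pairs the query connection shown here with rate-limited insert and delete connections that reuse the transactions from the write sketch above:

```sql
-- qp*: point queries on the PK index
SELECT * FROM ib WHERE pk = 123456;

-- qr*: short range queries that the secondary index ib_k1 can cover,
-- so they may run as index-only scans
SELECT k1, pk
FROM ib
WHERE k1 >= 10 AND k1 < 20
ORDER BY k1
LIMIT 100;
```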
Relative QPS is (throughput for a given version / throughput for the base version, which is Postgres 12.22 here). When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. For example, a relative QPS of 1.10 means throughput is 10% better than the base version. The Q in relative QPS measures:
- insert/s for l.i0, l.i1, l.i2
- indexed rows/s for l.x
- range queries/s for qr100, qr500, qr1000
- point queries/s for qp100, qp500, qp1000
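As a sketch of that arithmetic, assuming per-step throughput were loaded into a hypothetical table named ib_results, the relative QPS for 18.1 could be computed like this:

```sql
-- hypothetical table: average throughput per (version, benchmark step)
CREATE TABLE ib_results (version text, step text, qps numeric);

-- relative QPS: throughput for a version divided by the 12.22 base
SELECT r.step, round(r.qps / b.qps, 2) AS relative_qps
FROM ib_results r
JOIN ib_results b USING (step)
WHERE r.version = '18.1'
  AND b.version = '12.22';
```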
I focus on the latest versions. Throughput for 18.1 is close to or better than 12.22 on all benchmark steps except the write-only steps (l.i1, l.i2), which are ~11% slower. This is mostly great news because it means that Postgres has avoided introducing much new CPU overhead as the DBMS improves. There is some noise from the l.i2 benchmark step and that doesn't surprise me because it is likely variance from two issues: vacuum and get_actual_variable_range.
- throughput for the load step (l.i0) is 1% less in 18.1 vs 12.22
- throughput for the index step (l.x) is 13% better in 18.1 vs 12.22
- throughput for the write-only steps (l.i1, l.i2) is 11% and 12% less in 18.1 vs 12.22
- throughput for the range-query steps (qr*) is 2%, 3% and 3% less in 18.1 vs 12.22
- throughput for the point-query steps (qp*) is 22%, 23% and 23% better in 18.1 vs 12.22