
Wednesday, December 10, 2025

The insert benchmark on a small server : Postgres 12.22 through 18.1

This has results for Postgres versions 12.22 through 18.1 with the Insert Benchmark on a small server.

Postgres continues to be boring in a good way. It is hard to find performance regressions.

tl;dr for a cached workload

  • performance has been stable from Postgres 12 through 18
tl;dr for an IO-bound workload
  • performance has mostly been stable
  • create index has been ~10% faster since Postgres 15
  • throughput for the write-only steps has been ~10% less since Postgres 15
  • throughput for the point-query steps (qp*) has been ~20% better since Postgres 13
Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions 12.22, 13.22, 13.23, 14.19, 14.20, 15.14, 15.15, 16.10, 16.11, 17.6, 17.7, 18.0 and 18.1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM. Storage is one NVMe device for the database using ext-4 with discard enabled. The OS is Ubuntu 24.04. More details on it are here.

For versions prior to 18, the config file is named conf.diff.cx10a_c8r32. The files are as similar as possible across versions and are here for versions 12, 13, 14, 15, 16 and 17.

For Postgres 18 I used 3 variations, which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
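As a sanity check for which variation is active, the io_method and io_workers settings can be read back at runtime with SHOW. This is a minimal sketch in Python using psycopg2; the connection parameters are hypothetical:

```python
# Minimal sketch: read back the Postgres 18 async IO settings at runtime.
# The connection parameters below are hypothetical; adjust as needed.
import psycopg2

conn = psycopg2.connect(dbname="ib", user="postgres", host="localhost")
with conn.cursor() as cur:
    cur.execute("SHOW io_method")    # 'sync', 'worker' or 'io_uring'
    print("io_method  =", cur.fetchone()[0])
    cur.execute("SHOW io_workers")   # only used when io_method='worker'
    print("io_workers =", cur.fetchone()[0])
conn.close()
```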
The Benchmark

The benchmark is explained here and is run with 1 client and 1 table. I repeated it with two workloads:
  • cached - the values for X, Y, Z are 30M, 40M, 10M
  • IO-bound - the values for X, Y, Z are 800M, 4M, 1M
The point query (qp100, qp500, qp1000) and range query (qr100, qr500, qr1000) steps are run for 1800 seconds each.

The benchmark steps are:

  • l.i0
    • insert X rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts Y rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate. This insert+delete pattern is sketched after this list.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and Z rows are inserted and deleted per table.
    • Wait for S seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of S is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. This step is frequently not IO-bound for the IO-bound workload.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
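To make the write steps concrete, here is a minimal sketch of the l.i1 pattern: one connection inserts while a second deletes at the same rate, 50 rows per transaction. The table name and columns are hypothetical and the real benchmark client is more elaborate:

```python
# Minimal sketch of the l.i1 write pattern: paired inserts and deletes,
# 50 rows per transaction. Table t(id, val) is hypothetical.
import psycopg2

ROWS_PER_TXN = 50
ins = psycopg2.connect(dbname="ib")
dlt = psycopg2.connect(dbname="ib")

def one_round(next_id, min_id):
    # one big transaction of 50 inserts
    with ins.cursor() as cur:
        cur.executemany("INSERT INTO t (id, val) VALUES (%s, %s)",
                        [(next_id + i, "x") for i in range(ROWS_PER_TXN)])
    ins.commit()
    # delete the same number of the oldest rows so table size stays fixed
    with dlt.cursor() as cur:
        cur.execute("DELETE FROM t WHERE id >= %s AND id < %s",
                    (min_id, min_id + ROWS_PER_TXN))
    dlt.commit()
    return next_id + ROWS_PER_TXN, min_id + ROWS_PER_TXN
```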
Results: overview

The performance reports are here for the cached and IO-bound workloads.
The summary sections from the performance reports have 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 12.22.

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
This statement doesn't apply to this blog post, but I keep it here for copy/paste into future posts. Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.
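As a worked example, this is a minimal sketch of the relative QPS computation and the color thresholds above; the QPS values are hypothetical:

```python
# Minimal sketch of relative QPS and the red/grey/green buckets above.
# The QPS values are hypothetical sample data.
def color(rel_qps):
    if rel_qps <= 0.95:
        return "red"    # possible regression
    if rel_qps >= 1.05:
        return "green"  # improvement
    return "grey"       # within noise

qps = {"12.22": 1000.0, "15.14": 980.0, "18.1": 1070.0}
base = qps["12.22"]
for version, value in qps.items():
    rel = value / base  # QPS for $me / QPS for $base
    print(f"{version}: {rel:.2f} ({color(rel)})")
```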

Results: cached

The performance summaries are here for all versions and latest versions.

I focus on the latest versions. Throughput for 18.1 is within 2% of 12.22, with the exception of the l.i2 benchmark step. This is great news because it means that Postgres has avoided introducing new CPU overhead as the DBMS improves. There is some noise from the l.i2 benchmark step and that doesn't surprise me because it is likely variance from two issues -- vacuum and get_actual_variable_range.

Results: IO-bound

The performance summaries are here for all versions and latest versions.

I focus on the latest versions.
  • throughput for the load step (l.i0) is 1% less in 18.1 vs 12.22
  • throughput for the index step (l.x) is 13% better in 18.1 vs 12.22
  • throughput for the write-only steps (l.i1, l.i2) is 11% and 12% less in 18.1 vs 12.22
  • throughput for the range-query steps (qr*) is 2%, 3% and 3% less in 18.1 vs 12.22
  • throughput for the point-query steps (qp*) is 22%, 23% and 23% better in 18.1 vs 12.22
The improvements for the index step arrived in Postgres 15.

The regressions for the write-only steps arrived in Postgres 15 and are likely from two issues -- vacuum and get_actual_variable_range.

The improvements for the point-query steps arrived in Postgres 13.













    Saturday, November 29, 2025

    Using sysbench to measure how Postgres performance changes over time, November 2025 edition

    This has results for the sysbench benchmark on a small and big server for Postgres versions 12 through 18. Once again, Postgres is boring because I search for perf regressions and can't find any here. Results from MySQL are here and MySQL is not boring.

    While I don't show the results here, I don't see regressions when comparing the latest point releases with their predecessors -- 13.22 vs 13.23, 14.19 vs 14.20, 15.14 vs 15.15, 16.10 vs 16.11, 17.6 vs 17.7 and 18.0 vs 18.1.

    tl;dr

    • a few small regressions
    • many more small improvements
    • for write-heavy tests at high-concurrency there are many large improvements starting in PG 17

    Builds, configuration and hardware

    I compiled Postgres from source for versions 12.22, 13.22, 13.23, 14.19, 14.20, 15.14, 15.15, 16.10, 16.11, 17.6, 17.7, 18.0 and 18.1.

    I used two servers:
    • small
      • an ASUS ExpertCenter PN53 with AMD Ryzen 7735HS CPU, 32G of RAM, 8 cores with AMD SMT disabled, Ubuntu 24.04 and an NVMe device with ext4 and discard enabled.
    • big
      • an ax162s from Hetzner with an AMD EPYC 9454P 48-Core Processor with SMT disabled
      • 2 Intel D7-P5520 NVMe storage devices with RAID 1 (3.8T each) using ext4
      • 128G RAM
      • Ubuntu 22.04 running the non-HWE kernel (5.15.0-118-generic)
    Configuration files for the small server
    • Configuration files are here for Postgres versions 12, 13, 14, 15, 16 and 17.
    • For Postgres 18 I used io_method=sync and the configuration file is here.
    Configuration files for the big server
    • Configuration files are here for Postgres versions 12, 13, 14, 15, 16 and 17.
    • For Postgres 18 I used io_method=sync and the configuration file is here.
    Benchmark

    I used sysbench and my usage is explained here. I now run 32 of the 42 microbenchmarks listed in that blog post. Most test only one type of SQL statement. Benchmarks are run with the database cached by Postgres.

    The read-heavy microbenchmarks are run for 600 seconds and the write-heavy for 900 seconds. On the small server the benchmark is run with 1 client and 1 table with 50M rows. On the big server the benchmark is run with 12 clients and 8 tables with 10M rows per table. 

    The purpose is to search for regressions from new CPU overhead and mutex contention. I use the small server with low concurrency to find regressions from new CPU overheads and then larger servers with high concurrency to find regressions from new CPU overheads and mutex contention.

    Results

    The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

    I provide charts below with relative QPS. The relative QPS is the following:
    (QPS for some version) / (QPS for Postgres 12.22)
    When the relative QPS is > 1 then some version is faster than Postgres 12.22. When it is < 1 then there might be a regression. When the relative QPS is 1.2 then some version is about 20% faster than Postgres 12.22.

    Values from iostat and vmstat divided by QPS are here for the small server and the big server. These can help to explain why something is faster or slower because they show how much HW is used per request, including CPU overhead per operation (cpu/o) and context switches per operation (cs/o), which are often a proxy for mutex contention.

    The spreadsheet and charts are here and in some cases are easier to read than the charts below. Converting the Google Sheets charts to PNG files does the wrong thing for some of the test names listed at the bottom of the charts below.

    Results: point queries

    This is from the small server.
    • a large improvement arrived in Postgres 17 for the hot-points test
    • otherwise results have been stable from 12.22 through 18.1
    This is from the big server.
    • a large improvement arrived in Postgres 17 for the hot-points test
    • otherwise results have been stable from 12.22 through 18.1
    Results: range queries without aggregation

    This is from the small server.
    • there are small improvements for the scan test
    • otherwise results have been stable from 12.22 through 18.1
    This is from the big server.
    • there are small improvements for the scan test
    • otherwise results have been stable from 12.22 through 18.1
    Results: range queries with aggregation

    This is from the small server.
    • there are small improvements for a few tests
    • otherwise results have been stable from 12.22 through 18.1
    This is from the big server.
    • there might be small regressions for a few tests
    • otherwise results have been stable from 12.22 through 18.1
    Results: writes

    This is from the small server.
    • there are small improvements for most tests
    • otherwise results have been stable from 12.22 through 18.1
    This is from the big server.
    • there are large improvements for half of the tests
    • otherwise results have been stable from 12.22 through 18.1
    From vmstat results for update-index the per-operation CPU overhead and context switch rate are much smaller starting in Postgres 17.7. The CPU overhead is about 70% of what it was in 16.11 and the context switch rate is about 50% of the rate for 16.11. Note that context switch rates are often a proxy for mutex contention.

    Sunday, October 5, 2025

    Measuring scaleup for Postgres 18.0 with sysbench

    This post has results to measure scaleup for Postgres 18.0 on a 48-core server.

    tl;dr

    • Postgres continues to be boring (in a good way)
    • Results are mostly excellent
    • A few of the range query tests have a scaleup that is less than great but I need time to debug

    Builds, Configuration & Hardware

    The server has an AMD EPYC 9454P 48-Core Processor with AMD SMT disabled, 128G of RAM and SW RAID 0 with 2 NVMe devices. The OS is Ubuntu 22.04.

    I compiled Postgres 18.0 from source and the configuration file is here.

    Benchmark

    I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres. Each microbenchmark is run for 300 seconds.

    The benchmark is run with 1, 2, 4, 8, 12, 16, 20, 24, 32, 40 and 48 clients. The purpose is to determine how well Postgres scales up. All tests use 8 tables with 10M rows per table.

    Results

    The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

    I still use relative QPS here, but in a different way. The relative QPS here is:
    (QPS at X clients) / (QPS at 1 client)

    The goal is to determine scaleup efficiency for Postgres. When the relative QPS at X clients is a value near X, then things are great. But sometimes things aren't great and the relative QPS is much less than X. One issue is data contention for some of the write-heavy microbenchmarks. Another issue is mutex and rw-lock contention.
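    As a worked example, this is a minimal sketch of the scaleup computation; the QPS values are hypothetical:

```python
# Minimal sketch of scaleup: relative QPS at X clients vs 1 client, plus
# efficiency as a fraction of perfect linear scaleup. Values are hypothetical.
qps_by_clients = {1: 2000.0, 8: 15200.0, 24: 42500.0, 48: 65100.0}

base = qps_by_clients[1]
for clients, qps in sorted(qps_by_clients.items()):
    scaleup = qps / base            # (QPS at X clients) / (QPS at 1 client)
    efficiency = scaleup / clients  # 1.0 means perfect scaleup
    print(f"{clients:2d} clients: scaleup {scaleup:5.2f}, "
          f"efficiency {efficiency:4.2f}")
```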

    Perf debugging via vmstat and iostat

    I use normalized results from vmstat and iostat to help explain why things aren't as fast as expected. By normalized I mean I divide the average values from vmstat and iostat by QPS to see things like how much CPU is used per query or how many context switches occur per write. And note that a high context switch rate is often a sign of mutex contention.
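    A minimal sketch of that normalization, with hypothetical vmstat averages:

```python
# Minimal sketch: divide average vmstat/iostat counters by QPS to get
# per-operation costs like cpu/o and cs/o. The sample values are hypothetical.
def per_operation(averages, qps):
    return {name + "/o": value / qps for name, value in averages.items()}

vmstat_avg = {"cpu": 38.0, "cs": 91000.0}  # avg CPU% and context switches/s
print(per_operation(vmstat_avg, qps=52000.0))
# a cs/o value that grows with client count often points at mutex contention
```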

    Those results are here but can be difficult to read.

    Charts: point queries

    The spreadsheet with all of the results is here.

    While results aren't perfect, they are excellent. Perfect results would be to get a scaleup of 48 at 48 clients and here the result is between 40 and 42 in most tests. The worst-case is for hot-points where the scaleup is 32.57 at 48 clients. Note that the hot-points test has the most data contention of the point-query tests, as all queries fetch the same rows.

    From the vmstat metrics (see here) I don't see an increase in mutex contention (more context switches, see the cs/o column) but I do see an increase in CPU (cpu/o). When compared to a test that has better scaleup, like points-covered-pk, I also don't see an increase in mutex contention and do see an increase in CPU overhead (see cpu/o), but the CPU increase is smaller (see here).

    Charts: range queries without aggregation

    The spreadsheet with all of the results is here.

    The results again are great, but not perfect. The worst case is for range-notcovered-pk where the scaleup is 32.92 at 48 clients. The best case is for scan where the scaleup is 46.56 at 48 clients.

    From the vmstat metrics for range-notcovered-pk I don't see any obvious problems. The CPU overhead (cpu/o, CPU per query) increases by 1.08 (about 8%) from 1 to 48 clients while the context switches per query (cs/o) decreases (see here).

    Charts: range queries with aggregation

    The spreadsheet with all of the results is here.

    Results for range queries with aggregation are worse than for range queries without aggregation. I hope to try and explain that later. A perfect result is scaleup equal to 48. Here, 3 of 8 tests have scaleup less than 30, 4 have scaleup between 30 and 40, and the best case is read-only_range=10 with a scaleup of 43.35.

    The worst-case was read-only-count with a scaleup of 21.38. From the vmstat metrics I see that the CPU overhead (cpu/o, CPU per query) increases by 2.08x at 48 clients vs 1 client while context switches per query (cs/o) decrease (see here). I am curious about that CPU increase as it isn't as bad for the other range query tests, for example see here where it is no larger than 1.54x. The query for read-only-count is here.
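    The real query is linked above; as a rough illustration of the shape I assume it has, a sysbench-style count over a short range of ids looks like this (table name hypothetical):

```python
# Rough illustration of an aggregating range query in the read-only-count
# style: COUNT over a short id range. Table sbtest1 is hypothetical.
import random
import psycopg2

conn = psycopg2.connect(dbname="sbtest")
with conn.cursor() as cur:
    lo = random.randint(1, 10_000_000 - 1000)
    cur.execute("SELECT COUNT(*) FROM sbtest1 WHERE id BETWEEN %s AND %s",
                (lo, lo + 999))  # range=1000, matching the test name
    print(cur.fetchone()[0])
conn.close()
```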

    Later I hope to explain why read-only-count, read-only-simple and read-only-sum don't do better.

    Charts: writes

    The spreadsheet with all of the results is here.

    The worst-case is update-one where scaleup is 2.86 at 48 clients. The bad result is expected as having many concurrent clients update the same row is an anti-pattern with Postgres. The scaleup for Postgres on that test is a lot worse than for MySQL where it was ~8 with InnoDB. But I am not here for Postgres vs InnoDB arguments.
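    To make the anti-pattern concrete, this is a minimal sketch of update-one style contention: every client updates the same row, so transactions serialize on that row lock. The table is hypothetical:

```python
# Minimal sketch of the update-one anti-pattern: 48 clients all update the
# same row, so commits serialize on its row lock. Table sbtest1 hypothetical.
import threading
import psycopg2

def worker(n_txns):
    conn = psycopg2.connect(dbname="sbtest")
    for _ in range(n_txns):
        with conn.cursor() as cur:
            cur.execute("UPDATE sbtest1 SET k = k + 1 WHERE id = 1")
        conn.commit()  # each commit waits behind the other clients
    conn.close()

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(48)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```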

    Excluding the tests that mix reads and writes (read-write-*) the scaleup is between 13 and 21. This is far from great but isn't horrible. I run with fsync-on-commit disabled which highlights problems but is less realistic. So for now I am happy with these results.



    Monday, September 29, 2025

    Postgres 18.0 vs sysbench on a 24-core, 2-socket server

    This post has results from sysbench run at higher concurrency for Postgres versions 12 through 18 on a server with 24 cores and 2 sockets. My previous post had results for sysbench run with low concurrency. The goal is to search for regressions from new CPU overhead and mutex contention.

    tl;dr, from Postgres 17.6 to 18.0

    • For most microbenchmarks Postgres 18.0 is between 1% and 3% slower than 17.6
    • The root cause might be new CPU overhead. It will take more time to gain confidence in results like this. On other servers with sysbench run at low concurrency I only see regressions for some of the range-query microbenchmarks. Here I see them for point-query and writes.

    tl;dr, from Postgres 12.22 through 18.0

    • For point queries Postgres 18.0 is usually about 5% faster than 12.22
    • For range queries Postgres 18.0 is usually as fast as 12.22
    • For writes Postgres 18.0 is much faster than 12.22

    Builds, configuration and hardware

    I compiled Postgres from source for versions 12.22, 13.22, 14.19, 15.14, 16.10, 17.6, and 18.0.

    The server is a SuperMicro SuperWorkstation 7049A-T with 2 sockets, 12 cores/socket, 64G RAM. The CPU is Intel Xeon Silver 4214R CPU @ 2.40GHz. It runs Ubuntu 24.04. Storage is a 1TB m.2 NVMe device with ext-4 and discard enabled.

    Prior to 18.0, the configuration file was named conf.diff.cx10a_c24r64 and is here for 12.22, 13.22, 14.19, 15.14, 16.10 and 17.6.

    For 18.0 I tried 3 configuration files:
    • x10b with io_method=sync
    • x10c with io_method=worker
    • x10d with io_method=io_uring

    Benchmark

    I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.

    The read-heavy microbenchmarks run for 600 seconds and the write-heavy for 900 seconds.

    The benchmark is run with 16 clients and 8 tables with 10M rows per table. The purpose is to search for regressions from new CPU overhead and mutex contention.

    Results

    The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

    I provide charts below with relative QPS. The relative QPS is the following:
    (QPS for some version) / (QPS for base version)
    When the relative QPS is > 1 then some version is faster than the base version. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided here. These can help to explain why something is faster or slower because they show how much HW is used per request.

    I present results for:
    • versions 12 through 18 using 12.22 as the base version
    • versions 17.6 and 18.0 using 17.6 as the base version
    Results: Postgres 17.6 and 18.0

    Results per microbenchmark from vmstat and iostat are here.

    For point queries, 18.0 often gets between 1% and 3% less QPS than 17.6 and the root cause might be new CPU overhead. See the cpu/o column (CPU per query) in the vmstat metrics here for the random-points microbenchmarks.

    For range queries, 18.0 often gets between 1% and 3% less QPS than 17.6 and the root cause might be new CPU overhead. See the cpu/o column (CPU per query) in the vmstat metrics here for the read-only_range=X microbenchmarks.

    For writes, 18.0 often gets between 1% and 2% less QPS than 17.6 and the root cause might be new CPU overhead. I ignore the write-heavy microbenchmarks that also do queries as the regressions for them might be from the queries. See the cpu/o column (CPU per query) in the vmstat metrics here for the update-index microbenchmark.

    Relative to: 17.6
    col-1 : 18.0 with the x10b config
    col-2 : 18.0 with the x10c config
    col-3 : 18.0 with the x10d config

    col-1   col-2   col-3   point queries
    1.00    0.99    1.00    hot-points_range=100
    0.99    0.98    1.00    point-query_range=100
    0.98    0.99    0.99    points-covered-pk_range=100
    0.99    0.99    0.98    points-covered-si_range=100
    0.97    0.99    0.98    points-notcovered-pk_range=100
    0.98    0.99    0.97    points-notcovered-si_range=100
    0.98    0.99    0.98    random-points_range=1000
    0.97    0.99    0.98    random-points_range=100
    0.99    0.99    0.98    random-points_range=10

    col-1   col-2   col-3   range queries without aggregation
    0.98    0.98    0.99    range-covered-pk_range=100
    0.98    0.98    0.98    range-covered-si_range=100
    0.98    0.99    0.98    range-notcovered-pk_range=100
    1.00    1.02    0.99    range-notcovered-si_range=100
    1.01    1.01    1.01    scan_range=100

    col-1   col-2   col-3   range queries with aggregation
    0.99    1.00    0.98    read-only-count_range=1000
    0.98    0.98    0.98    read-only-distinct_range=1000
    0.97    0.97    0.96    read-only-order_range=1000
    0.97    0.98    0.97    read-only_range=10000
    0.98    0.99    0.98    read-only_range=100
    0.99    0.99    0.99    read-only_range=10
    0.98    0.99    0.99    read-only-simple_range=1000
    0.98    1.00    0.98    read-only-sum_range=1000

    col-1   col-2   col-3   writes
    0.99    0.99    0.99    delete_range=100
    0.99    0.99    0.99    insert_range=100
    0.98    0.98    0.98    read-write_range=100
    0.99    1.00    0.99    read-write_range=10
    0.99    0.98    0.97    update-index_range=100
    0.99    0.99    1.00    update-inlist_range=100
    1.00    0.97    0.99    update-nonindex_range=100
    0.97    1.00    0.98    update-one_range=100
    1.00    0.99    1.01    update-zipf_range=100
    0.98    0.98    0.97    write-only_range=10000

    Results: Postgres 12 to 18

    For the Postgres 18.0 results in col-6, the result is in green when relative QPS is >= 1.05 and in yellow when relative QPS is <= 0.98. Yellow indicates a possible regression.

    Results per microbenchmark from vmstat and iostat are here.

    Relative to: 12.22
    col-1 : 13.22
    col-2 : 14.19
    col-3 : 15.14
    col-4 : 16.10
    col-5 : 17.6
    col-6 : 18.0 with the x10b config

    col-1   col-2   col-3   col-4   col-5   col-6   point queries
    0.98    0.96    0.99    0.98    2.13    2.13    hot-points_range=100
    1.00    1.02    1.01    1.02    1.03    1.01    point-query_range=100
    0.99    1.05    1.05    1.08    1.07    1.05    points-covered-pk_range=100
    0.99    1.08    1.05    1.07    1.07    1.05    points-covered-si_range=100
    0.99    1.04    1.05    1.06    1.07    1.05    points-notcovered-pk_range=100
    0.99    1.05    1.04    1.05    1.06    1.04    points-notcovered-si_range=100
    0.98    1.03    1.04    1.06    1.06    1.04    random-points_range=1000
    0.98    1.04    1.05    1.07    1.07    1.05    random-points_range=100
    0.99    1.02    1.03    1.05    1.05    1.04    random-points_range=10

    col-1   col-2   col-3   col-4   col-5   col-6   range queries without aggregation
    1.02    1.04    1.03    1.04    1.03    1.01    range-covered-pk_range=100
    1.05    1.07    1.06    1.06    1.06    1.05    range-covered-si_range=100
    0.99    1.00    1.00    1.00    1.01    0.98    range-notcovered-pk_range=100
    0.97    0.99    1.00    1.01    1.01    1.01    range-notcovered-si_range=100
    0.86    1.06    1.08    1.17    1.18    1.20    scan_range=100

    col-1   col-2   col-3   col-4   col-5   col-6   range queries with aggregation
    0.98    0.97    0.97    1.00    0.98    0.97    read-only-count_range=1000
    0.99    0.99    1.02    1.02    1.01    0.99    read-only-distinct_range=1000
    1.00    0.99    1.02    1.05    1.05    1.02    read-only-order_range=1000
    0.99    0.99    1.04    1.07    1.09    1.06    read-only_range=10000
    0.99    1.00    1.00    1.01    1.02    0.99    read-only_range=100
    1.00    1.00    1.00    1.01    1.01    1.00    read-only_range=10
    0.99    0.99    1.00    1.01    1.01    0.99    read-only-simple_range=1000
    0.98    0.99    0.99    1.00    1.00    0.98    read-only-sum_range=1000

    col-1   col-2   col-3   col-4   col-5   col-6   writes
    0.98    1.09    1.09    1.04    1.29    1.27    delete_range=100
    0.99    1.03    1.02    1.03    1.08    1.07    insert_range=100
    1.00    1.03    1.04    1.05    1.07    1.05    read-write_range=100
    1.01    1.09    1.09    1.09    1.15    1.14    read-write_range=10
    1.00    1.04    1.03    0.86    1.44    1.42    update-index_range=100
    1.01    1.11    1.11    1.12    1.13    1.12    update-inlist_range=100
    0.99    1.04    1.06    1.05    1.25    1.25    update-nonindex_range=100
    1.05    0.92    0.90    0.84    1.18    1.15    update-one_range=100
    0.98    1.04    1.03    1.01    1.26    1.26    update-zipf_range=100
    1.02    1.05    1.10    1.09    1.21    1.18    write-only_range=10000

    Thursday, September 11, 2025

    Postgres 18rc1 vs sysbench

    This post has results for Postgres 18rc1 vs sysbench on small and large servers. Results for Postgres 18beta3 are here for a small and large server.

    tl;dr

    • Postgres 18 looks great
    • I continue to see small CPU regressions in Postgres 18 for range queries that don't do aggregation on low-concurrency workloads. I have yet to explain that. 
    • The throughput for the scan microbenchmark has more variance with Postgres 18. I assume this is related to more or less work getting done by vacuum but I have yet to debug the root cause.

    Builds, configuration and hardware

    I compiled Postgres from source for versions 17.6, 18 beta3 and 18 rc1.

    The servers are:
    • small
      • an ASUS ExpertCenter PN53 with AMD Ryzen 7735HS CPU, 32G of RAM, 8 cores with AMD SMT disabled, Ubuntu 24.04 and an NVMe device with ext4 and discard enabled.
    • large32
      • Dell Precision 7865 Tower Workstation with 1 socket, 128G RAM, AMD Ryzen Threadripper PRO 5975WX with 32 Cores and AMD SMT disabled, Ubuntu 24.04 and an NVMe device with ext4 and discard enabled.
    • large48
      • an ax162s from Hetzner with an AMD EPYC 9454P 48-Core Processor with SMT disabled
      • 2 Intel D7-P5520 NVMe storage devices with RAID 1 (3.8T each) using ext4
      • 128G RAM
      • Ubuntu 22.04 running the non-HWE kernel (5.15.0-118-generic)
    All configurations use synchronous IO, which is the only option prior to Postgres 18; for Postgres 18 the config file sets io_method=sync.

    Configuration files:

    Benchmark

    I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.

    For all servers the read-heavy microbenchmarks run for 600 seconds and the write-heavy for 900 seconds.

    The number of tables and rows per table was:
    • small server - 1 table, 50M rows
    • large servers - 8 tables, 10M rows per table
    The number of clients (amount of concurrency) was:
    • small server - 1
    • large32 server - 24
    • large48 server - 40
    Results

    The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

    I provide charts below with relative QPS. The relative QPS is the following:
    (QPS for some version) / (QPS for Postgres 17.6)
    When the relative QPS is > 1 then some version is faster than PG 17.6.  When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided here. These can help to explain why something is faster or slower because it shows how much HW is used per request.

    The numbers highlighted in yellow below might be from a small regression for range queries that don't do aggregation. But note that this does not reproduce for the full table scan microbenchmark (scan). I am not certain it is a regression as this might be from non-deterministic CPU overheads for read-heavy workloads that are run after vacuum. I hope to look at CPU flamegraphs soon.

    Results: small server

    I continue to see small (~3%) regressions in throughput for range queries without aggregation across Postgres 18 beta1, beta2, beta3 and rc1. But I have yet to debug this and am not certain it is a regression. I am also skeptical about the great results for scan. I suspect that I have more work to do to make the benchmark less subject to variance from MVCC GC (vacuum here). I also struggle with that on RocksDB (compaction), but not on InnoDB (purge).

    Relative to: Postgres 17.6
    col-1 : 18beta3
    col-2 : 18rc1

    col-1   col-2   point queries
    1.01    0.98    hot-points_range=100
    1.01    1.00    point-query_range=100
    1.02    1.02    points-covered-pk_range=100
    0.99    1.01    points-covered-si_range=100
    1.00    0.99    points-notcovered-pk_range=100
    1.00    0.99    points-notcovered-si_range=100
    1.01    1.00    random-points_range=1000
    1.01    0.99    random-points_range=100
    1.01    1.00    random-points_range=10

    col-1   col-2   range queries without aggregation
    0.97    0.96    range-covered-pk_range=100
    0.97    0.97    range-covered-si_range=100
    0.99    0.99    range-notcovered-pk_range=100
    0.99    0.99    range-notcovered-si_range=100
    1.35    1.36    scan_range=100

    col-1   col-2   range queries with aggregation
    1.02    1.03    read-only-count_range=1000
    1.00    1.00    read-only-distinct_range=1000
    0.99    0.99    read-only-order_range=1000
    1.00    1.00    read-only_range=10000
    1.00    0.99    read-only_range=100
    0.99    0.98    read-only_range=10
    1.01    1.01    read-only-simple_range=1000
    1.02    1.00    read-only-sum_range=1000

    col-1   col-2   writes
    0.99    0.99    delete_range=100
    0.99    1.01    insert_range=100
    0.99    0.99    read-write_range=100
    0.99    0.99    read-write_range=10
    0.98    0.98    update-index_range=100
    1.00    0.99    update-inlist_range=100
    0.98    0.98    update-nonindex_range=100
    0.98    0.97    update-one_range=100
    0.98    0.97    update-zipf_range=100
    0.99    0.98    write-only_range=10000

    Results: large32 server

    I don't see small regressions in throughput for range queries without aggregation across Postgres 18 beta1, beta2, beta3 and rc1. I have only seen that on the low concurrency (small server) results.

    The improvements on the scan microbenchmark come from using less CPU. But I am skeptical about the improvements. I might have more work to do to make the benchmark less subject to variance from MVCC GC (vacuum here). I also struggle with that on RocksDB (compaction), but not on InnoDB (purge).

    Relative to: Postgres 17.6
    col-1 : Postgres 18rc1

    col-1   point queries
    1.01    hot-points_range=100
    1.01    point-query_range=100
    1.01    points-covered-pk_range=100
    1.01    points-covered-si_range=100
    1.00    points-notcovered-pk_range=100
    1.00    points-notcovered-si_range=100
    1.01    random-points_range=1000
    1.00    random-points_range=100
    1.01    random-points_range=10

    col-1   range queries without aggregation
    0.99    range-covered-pk_range=100
    0.99    range-covered-si_range=100
    0.99    range-notcovered-pk_range=100
    0.99    range-notcovered-si_range=100
    1.12    scan_range=100

    col-1   range queries with aggregation
    1.00    read-only-count_range=1000
    1.02    read-only-distinct_range=1000
    1.01    read-only-order_range=1000
    1.03    read-only_range=10000
    1.00    read-only_range=100
    1.00    read-only_range=10
    1.00    read-only-simple_range=1000
    1.00    read-only-sum_range=1000

    col-1   writes
    1.01    delete_range=100
    1.00    insert_range=100
    1.00    read-write_range=100
    1.00    read-write_range=10
    1.00    update-index_range=100
    1.00    update-inlist_range=100
    1.00    update-nonindex_range=100
    0.99    update-one_range=100
    1.00    update-zipf_range=100
    1.00    write-only_range=10000

    Results: large48 server

    I don't see small regressions in throughput for range queries without aggregation across Postgres 18 beta1, beta2, beta3 and rc1. I have only seen that on the low concurrency (small server) results.

    I am skeptical about the regression I see here for scan. That comes from using ~10% more CPU per query. I might have more work to do to make the benchmark less subject to variance from MVCC GC (vacuum here). I also struggle with that on RocksDB (compaction), but not on InnoDB (purge).

    I have not seen the large improvements for the insert and delete microbenchmarks in previous tests on that large server. I assume this is another case where I need to figure out how to reduce variance when I run the benchmark.

    Relative to: Postgres 17.6
    col-1 : Postgres 18beta3
    col-2 : Postgres 18rc1

    col-1   col-2   point queries
    0.99    0.99    hot-points_range=100
    0.99    0.99    point-query_range=100
    1.00    0.99    points-covered-pk_range=100
    0.99    1.02    points-covered-si_range=100
    1.00    0.99    points-notcovered-pk_range=100
    0.99    1.01    points-notcovered-si_range=100
    1.00    0.99    random-points_range=1000
    1.00    0.99    random-points_range=100
    1.00    1.00    random-points_range=10

    col-1   col-2   range queries without aggregation
    0.99    0.99    range-covered-pk_range=100
    0.98    0.99    range-covered-si_range=100
    0.99    0.99    range-notcovered-pk_range=100
    1.01    1.01    range-notcovered-si_range=100
    0.91    0.91    scan_range=100

    col-1   col-2   range queries with aggregation
    1.04    1.03    read-only-count_range=1000
    1.02    1.01    read-only-distinct_range=1000
    1.01    1.00    read-only-order_range=1000
    1.06    1.06    read-only_range=10000
    0.98    0.97    read-only_range=100
    0.99    0.99    read-only_range=10
    1.02    1.02    read-only-simple_range=1000
    1.03    1.03    read-only-sum_range=1000

    col-1   col-2   writes
    1.46    1.49    delete_range=100
    1.32    1.32    insert_range=100
    0.99    1.00    read-write_range=100
    0.98    1.00    read-write_range=10
    0.99    1.00    update-index_range=100
    0.95    1.03    update-inlist_range=100
    1.00    1.02    update-nonindex_range=100
    0.96    1.04    update-one_range=100
    1.00    1.01    update-zipf_range=100
    1.00    1.00    write-only_range=10000




    Wednesday, June 11, 2025

    Postgres 18 beta1: small server, IO-bound Insert Benchmark (v2)

    This is my second attempt at IO-bound Insert Benchmark results with a small server. The first attempt is here and has been deprecated because sloppy programming by me meant the benchmark client was creating too many connections, and that hurt results in some cases for Postgres 18 beta1.

    There might be regressions from 17.5 to 18 beta1

    • QPS decreases by ~5% and CPU increases by ~5% on the l.i2 (write-only) step
    • QPS decreases by <= 2% and CPU increases by ~2% on the qr* (range query) steps
    There might be regressions from 14.0 to 18 beta1
    • QPS decreases by ~6% and ~18% on the write-heavy steps (l.i1, l.i2)

    Builds, configuration and hardware

    I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions 14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

    The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04. More details on it are here.

    For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 I used 3 variations, which are here:
    • conf.diff.cx10b_c8r32
      • uses io_method='sync' to match Postgres 17 behavior
    • conf.diff.cx10c_c8r32
      • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
    • conf.diff.cx10d_c8r32
      • uses io_method='io_uring' to do async IO via io_uring
    The Benchmark

    The benchmark is explained here and is run with 1 client and 1 table with 800M rows. I provide two performance reports:
    • one to compare Postgres 14.0 through 18 beta1, all using synchronous IO
    • one to compare Postgres 17.5 with 18 beta1 using 3 configurations for 18 beta1 -- one for each of io_method= sync, workers and io_uring.
    The benchmark steps are:

    • l.i0
      • insert 800M rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
    • l.x
      • create 3 secondary indexes per table. There is one connection per client.
    • l.i1
      • use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
    • l.i2
      • like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
      • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
    • qr100
      • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
    • qp100
      • like qr100 except uses point queries on the PK index
    • qr500
      • like qr100 but the insert and delete rates are increased from 100/s to 500/s
    • qp500
      • like qp100 but the insert and delete rates are increased from 100/s to 500/s
    • qr1000
      • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
    • qp1000
      • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
    Results: overview

    The performance report is here for Postgres 14 through 18 and here for Postgres 18 configurations.

    The summary sections (here, here and here) have 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for the benchmark steps. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

    Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 17.5.

    When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
    • insert/s for l.i0, l.i1, l.i2
    • indexed rows/s for l.x
    • range queries/s for qr100, qr500, qr1000
    • point queries/s for qp100, qp500, qp1000
    Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.

    Results: Postgres 14.0 through 18 beta1

    The performance summary is here.

    See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 14.0 is the base version and that is compared with more recent Postgres versions. The results here are similar to what I reported prior to fixing the too many connections problem in the benchmark client.

    For 14.0 through 18 beta1, QPS on ...
    • the initial load (l.i0)
      • Performance is stable across versions
      • 18 beta1 and 17.5 have similar performance
      • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.99)
    • create index (l.x)
      • ~10% faster starting in 15.0
      • 18 beta1 and 17.5 have similar performance
      • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.11, 1.12)
    • first write-only step (l.i1)
      • Performance decreases ~7% from version 16.9 to 17.0. CPU overhead (see cpupq here) increases by ~5% in 17.0.
      • 18 beta1 and 17.5 have similar performance
      • rQPS for (17.5, 18 beta1 with io_method=sync) is (0.93, 0.94)
    • second write-only step (l.i2)
      • Performance decreases ~6% in 15.0, ~8% in 17.0 and then ~5% in 18.0. CPU overhead (see cpupq here) increases ~5%, ~6% and ~5% in 15.0, 17.0 and 18beta1. Of all benchmark steps, this has the largest perf regression from 14.0 through 18 beta1 which is ~20%.
      • 18 beta1 is ~4% slower than 17.5
      • rQPS for (17.5, 18 beta1 with io_method=sync) is (0.86, 0.82)
    • range query steps (qr100, qr500, qr1000)
      • 18 beta1 and 17.5 have similar performance, but 18 beta1 is slightly slower
      • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.99) for qr100, (0.97, 0.98) for qr500 and (0.97, 0.95) for qr1000. The issue is new CPU overhead, see cpupq here.
    • point query steps (qp100, qp500, qp1000)
      • 18 beta1 and 17.5 have similar performance but 18 beta1 is slightly slower
      • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.98) for qp100, (0.99, 0.98) for qp500 and (0.97, 0.96) for qp1000. The issue is new CPU overhead, see cpupq here.
    Results: Postgres 17.5 vs 18 beta1

    The performance summary is here.

    See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 17.5 is the base version and that is compared with results from 18 beta1 using the three configurations explained above:
    • x10b with io_method=sync
    • x10c with io_method=worker and io_workers=16
    • x10d with io_method=io_uring
    The summary is:
    • initial load step (l.i0)
      • rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
    • create index step (l.x)
      • rQPS for (x10b, x10c, x10d) was (1.01, 1.02, 1.02)
    • first write-heavy step (l.i1)
      • for l.i1 the rQPS for (x10b, x10c, x10d) was (1.00, 0.99, 1.01)
    • second write-heavy step (l.i2)
      • for l.i2 the rQPS for (x10b, x10c, x10d) was (0.96, 0.93, 0.94)
      • CPU overhead (see cpupq here) increases by ~5% in 18 beta1
    • range query steps (qr100, qr500, qr1000)
      • for qr100 the rQPS for (x10b, x10c, x10d) was (1.00, 0.99, 0.99)
      • for qr500 the rQPS for (x10b, x10c, x10d) was (1.00, 0.97, 0.99)
      • for qr1000 the rQPS for (x10b, x10c, x10d) was (0.99, 0.98, 0.97)
      • CPU overhead (see cpupq here, here and here) increases by ~2% in 18 beta1
    • point query steps (qp100, qp500, qp1000)
      • for qp100 the rQPS for (x10b, x10c, x10d) was (0.98, 0.99, 0.99)
      • for qp500 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
      • for qp1000 the rQPS for (x10b, x10c, x10d) was (0.99, 0.99, 0.99)










    Sunday, June 8, 2025

    Postgres 18 beta1: small server, CPU-bound Insert Benchmark (v2)

    This is my second attempt at CPU-bound Insert Benchmark results with a small server. The first attempt is here and has been deprecated because sloppy programming by me meant the benchmark client was creating too many connections, and that hurt results in some cases for Postgres 18 beta1.

    tl;dr

    • Performance between 17.5 and 18 beta1 is mostly similar on read-heavy steps
    • 18 beta1 might have small regressions from new CPU overheads on write-heavy steps

    Builds, configuration and hardware

    I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions 14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

    The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04 -- I used 22.04 prior to that. More details on it are here.

    For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 I used 3 variations, which are here:
    • conf.diff.cx10b_c8r32
      • uses io_method='sync' to match Postgres 17 behavior
    • conf.diff.cx10c_c8r32
      • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
    • conf.diff.cx10d_c8r32
      • uses io_method='io_uring' to do async IO via io_uring
    The Benchmark

    The benchmark is explained here and is run with 1 client and 1 table with 20M rows. I provide two performance reports:
    • one to compare Postgres 14.0 through 18 beta1, all using synchronous IO
    • one to compare Postgres 17.5 with 18 beta1 using 3 configurations for 18 beta1 -- one for each of io_method= sync, workers and io_uring.
    The benchmark steps are:

    • l.i0
      • insert 20 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
    • l.x
      • create 3 secondary indexes per table. There is one connection per client.
    • l.i1
      • use 2 connections/client. One inserts 40M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
    • l.i2
      • like l.i1 but each transaction modifies 5 rows (small transactions) and 10M rows are inserted and deleted per table.
      • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
    • qr100
      • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
    • qp100
      • like qr100 except uses point queries on the PK index
    • qr500
      • like qr100 but the insert and delete rates are increased from 100/s to 500/s
    • qp500
      • like qp100 but the insert and delete rates are increased from 100/s to 500/s
    • qr1000
      • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
    • qp1000
      • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
    Results: overview

    The performance report is here for Postgres 14 through 18 and here for Postgres 18 configurations.

    The summary sections (here and here) have 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

    Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 17.5.

    When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
    • insert/s for l.i0, l.i1, l.i2
    • indexed rows/s for l.x
    • range queries/s for qr100, qr500, qr1000
    • point queries/s for qp100, qp500, qp1000
    Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.

    Results: Postgres 14.0 through 18 beta1

    The performance summary is here.

    See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 14.0 is the base version and that is compared with more recent Postgres versions.

    For 14.0 through 18 beta1, QPS on ...
    • l.i0 (the initial load)
      • Slightly faster starting in 15.0
      • Throughput was ~4% faster starting in 15.0 and that drops to ~2% in 18 beta1
      • 18 beta1 and 17.5 have similar performance
    • l.x (create index) 
      • Faster starting in 15.0
      • Throughput is between 9% and 17% faster in 15.0 through 18 beta1
      • 18 beta1 and 17.5 have similar performance
    • l.i1 (write-only)
      • Slower starting in 15.0
      • It is ~3% slower in 15.0 and that increases to between 6% and 10% in 18 beta1
      • 18 beta1 and 17.5 have similar performance
    • l.i2 (write-only)
      • Slower starting in 15.13 with a big drop in 17.0
      • 18 beta1 with io_method= sync and io_uring is worse than 17.5. It isn't clear but one problem might be more CPU/operation (see cpupq here)
    • qr100, qr500, qr1000 (range query)
      • Stable from 14.0 through 18 beta1
    • qp100, qp500, qp1000 (point query) 
      • Stable from 14.0 through 18 beta1
    Results: Postgres 17.5 vs 18 beta1

    The performance summary is here

    See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 17.5 is the base version and that is compared with results from 18 beta1 using the three configurations explained above:
    • x10b with io_method=sync
    • x10c with io_method=worker and io_workers=16
    • x10d with io_method=io_uring
    The summary of the summary is:
    • initial load step (l.i0)
      • 18 beta1 is 1% to 3% slower than 17.5
      • This step is short running so I don't have a strong opinion on the change
    • create index step (l.x)
      • 18 beta1 is 0% to 2% faster than 17.5
      • This step is short running so I don't have a strong opinion on the change
    • write-heavy step (l.i1)
      • 18 beta1 with io_method= sync and workers has similar perf as 17.5
      • 18 beta1 with io_method=io_uring is ~4% slower than 17.5. The problem might be more CPU/operation, see cpupq here
    • write-heavy step (l.i2)
      • 18 beta1 with io_method=workers is ~2% faster than 17.5
      • 18 beta1 with io_method= sync and io_uring is 6% and 8% slower than 17.5. The problem might be more CPU/operation, see cpupq here
    • range query steps (qr100, qr500, qr1000)
      • 18 beta1 and 17.5 have similar performance
    • point query steps (qp100, qp500, qp1000)
      • 18 beta1 and 17.5 have similar performance
    The summary is:
    • initial load step (l.i0)
      • rQPS for (x10b, x10c, x10d) was (0.98, 0.99, 0.97)
    • create index step (l.x)
      • rQPS for (x10b, x10c, x10d) was (1.00, 1.02, 1.00)
    • write-heavy steps (l.i1, l.i2)
      • for l.i1 the rQPS for (x10b, x10c, x10d) was (1.01, 1.00, 0.96)
      • for l.i2 the rQPS for (x10b, x10c, x10d) was (0.94, 1.02, 0.92)
    • range query steps (qr100, qr500, qr1000)
      • for qr100 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
      • for qr500 the rQPS for (x10b, x10c, x10d) was (0.99, 1.01, 1.00)
      • for qr1000 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
    • point query steps (qp100, qp500, qp1000)
      • for qp100 the rQPS for (x10b, x10c, x10d) was (1.00, 1.00, 1.00)
      • for qp500 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
      • for qp1000 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 0.98)
