Add 100k/1m uniform random benchmarks #8
Conversation
I would suggest a benchmark::DoNotOptimize(delaunator) statement in the benchmark loops.
I know, it's unlikely that the optimizer is clever enough to figure out that it can throw away the whole code, but then again ... it's header only and compilers are only getting better.
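A sketch of what that could look like. Since `benchmark::DoNotOptimize` lives in Google Benchmark, this uses a hand-rolled stand-in with the same effect (an empty `asm` that marks the value as observed, a GNU extension), and the actual `delaunator::Delaunator` call is left as a commented placeholder:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for benchmark::DoNotOptimize: the empty asm takes the value's
// address as an input, so the compiler must treat the value as observed and
// cannot dead-code-eliminate the work that produced it under -O2.
template <typename T>
inline void do_not_optimize(T const& value) {
    asm volatile("" : : "r"(&value) : "memory");
}

// Hypothetical benchmark loop; returns the iteration count so a harness
// can verify it actually ran.
std::size_t run_benchmark(const std::vector<double>& coords, std::size_t iters) {
    std::size_t done = 0;
    for (std::size_t i = 0; i < iters; ++i) {
        // delaunator::Delaunator d(coords);  // the real call under test
        // do_not_optimize(d.triangles);      // keep the triangulation "live"
        double stand_in = coords[i % coords.size()];  // placeholder work
        do_not_optimize(stand_in);
        ++done;
    }
    return done;
}
```

With Google Benchmark itself, the equivalent is simply `benchmark::DoNotOptimize(d)` inside the timed loop.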
@delfrrr can you also fix the misleading statement in the README about being 6x faster than JS, or at least explain how you got this result?
@mourner will do soon; I need to dig into the results. I agree that they may not be precise; also note the version I was testing against.
@delfrrr the script doesn't have any warmup, which is especially important for JS because it's JIT-optimized. It might have also measured a dataset that's too small. |
I modified the C++ and JS benchmarks to make them more similar; I think there is ~30ms of overhead in the JS version, which makes it much slower on small point counts. On 1M points the difference is ~10%; and since time doesn't grow linearly with the number of points, comparing percentages across input sizes isn't meaningful. @mourner btw, what is the estimated complexity of the algorithm itself?
@delfrrr it should be O(n log n) on average. The 30ms JS "overhead" is likely just JIT warmup: subsequent runs on the same number of points should be much faster.

@delfrrr Added some more benchmarks (100k and 1m random) in this PR.
I was curious about this statement, and it looks like it's misleading: the benchmarks show only about a 20-30% improvement, not 6x. And I get similar perf numbers to the C++ version in my Rust port: https://github.com/mourner/delaunator-rs