
LuaMark


LuaMark is a lightweight, portable microbenchmarking library for Lua. It measures execution time and memory usage with sensible defaults and optional high-precision clocks.

Features

  • Measure time and memory with optional precision via Chronos, LuaPosix, or LuaSocket
  • Statistics: median with 95% confidence intervals
  • Standalone Timer: luamark.Timer() for ad-hoc profiling outside benchmarks
  • Ready to use with sensible defaults

Requirements

Installation

Install LuaMark using LuaRocks:

luarocks install luamark

Or include luamark.lua directly in your project.

Usage

API Overview

Function        Input               Returns   params support
--------------  ------------------  --------  --------------
timeit          single function     Stats     No
memit           single function     Stats     No
compare_time    table of functions  Result[]  Yes
compare_memory  table of functions  Result[]  Yes

params lets you run benchmarks across parameter combinations (e.g., different input sizes).

Single Function

Measure execution time:

local luamark = require("luamark")

local function factorial(n)
   if n == 0 then return 1 end
   return n * factorial(n - 1)
end

local stats = luamark.timeit(function()
   factorial(10)
end)

print(stats)
-- Output: 250ns ± 0ns

Measure memory allocation:

local luamark = require("luamark")

local stats = luamark.memit(function()
   local t = {}
   for i = 1, 100 do t[i] = i end
end)

print(stats)
-- Output: 2.05kB ± 0B

Comparing Functions

Basic comparison:

local luamark = require("luamark")

local results = luamark.compare_time({
   loop = function()
      local s = ""
      for i = 1, 100 do s = s .. tostring(i) end
   end,
   table_concat = function()
      local t = {}
      for i = 1, 100 do t[i] = tostring(i) end
      table.concat(t)
   end,
})

print(luamark.render(results))

With parameters and setup:

local luamark = require("luamark")

local results = luamark.compare_time({
   loop = function(ctx, p)
      local s = ""
      for i = 1, #ctx.data do s = s .. ctx.data[i] end
   end,
   table_concat = function(ctx, p)
      table.concat(ctx.data)
   end,
}, {
   params = { n = { 100, 1000 } },
   setup = function(p)
      local data = {}
      for i = 1, p.n do data[i] = tostring(i) end
      return { data = data }
   end,
})

print(luamark.render(results))
n=100
    Name      Rank     Relative        Median      Ops
------------  ----  ---------------  ----------  --------
table_concat     1  █            1x         1us  959.7k/s
loop             2  ████████ ↓5.36x  6us ± 21ns  179.1k/s

n=1000
    Name      Rank      Relative        Median       Ops
------------  ----  ----------------  -----------  -------
table_concat     1  █             1x  10us ± 31ns  98.4k/s
loop             2  ████████ ↓13.07x  133us ± 2us   7.5k/s

Results with overlapping confidence intervals share the same rank with an ≈ prefix (e.g., ≈1, ≈1, 3), meaning they are statistically indistinguishable.

Per-Iteration Setup with Spec Hooks

When benchmarking functions that mutate data, use Spec.before for per-iteration setup:

local luamark = require("luamark")

local results = luamark.compare_time({
   table_sort = {
      fn = function(ctx, p)
         table.sort(ctx.data)
      end,
      before = function(ctx, p)
         -- Copy data before each iteration (sort mutates it)
         ctx.data = {}
         for i = 1, #ctx.source do
            ctx.data[i] = ctx.source[i]
         end
         return ctx
      end,
   },
}, {
   params = { n = {100, 1000} },
   setup = function(p)
      local source = {}
      for i = 1, p.n do
         source[i] = math.random(p.n * 10)
      end
      return { source = source }
   end,
})

Standalone Timer

Use luamark.Timer() for ad-hoc profiling outside benchmarks:

local luamark = require("luamark")

local timer = luamark.Timer()
timer.start()
local sum = 0
for i = 1, 1e6 do sum = sum + i end
local elapsed = timer.stop()

print(luamark.humanize_time(elapsed))  -- "4.25ms"

Technical Details

Configuration

LuaMark provides two configuration options:

  • rounds: Target sample count (default: 100)
  • time: Target duration in seconds (default: 1)

Benchmarks run until either target is met: rounds samples collected or time seconds elapsed. For very fast functions, LuaMark caps rounds at 100 to ensure a consistent sample count.

Modify these settings directly:

local luamark = require("luamark")

-- Increase minimum rounds for more statistical reliability
luamark.rounds = 1000

-- Run benchmarks for at least 2 seconds
luamark.time = 2

Iterations and Rounds

LuaMark runs your code multiple times per round (iterations), then repeats for multiple rounds. It computes statistics across rounds, handling clock granularity and filtering system noise.
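To illustrate the idea (this is a simplified sketch, not LuaMark's actual implementation, and bench_sketch is a hypothetical name): each round runs the function many times and records the average per-call time, then the median is taken across rounds to filter out noisy samples.

```lua
-- Illustrative sketch of rounds × iterations scheduling.
-- LuaMark's real scheduler also handles clock granularity and warmup.
local function bench_sketch(fn, rounds, iterations)
   local samples = {}
   for r = 1, rounds do
      local t0 = os.clock()
      for _ = 1, iterations do fn() end
      -- Per-call time for this round: total elapsed / iterations
      samples[r] = (os.clock() - t0) / iterations
   end
   -- Median across rounds resists outliers caused by system noise
   table.sort(samples)
   local mid = math.floor(rounds / 2)
   if rounds % 2 == 1 then
      return samples[mid + 1]
   end
   return (samples[mid] + samples[mid + 1]) / 2
end
```

Averaging within a round keeps each sample above the clock's resolution; taking the median across rounds discards occasional slow samples from GC pauses or OS scheduling.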

Clock Precision

LuaMark selects the best available clock:

Priority  Module               Precision    Notes
--------  -------------------  -----------  -----------------------
1         chronos              nanosecond   recommended
2         luaposix             nanosecond   not available on macOS
3         love.timer           microsecond  auto-detected in Love2D
4         luasocket            millisecond
5         os.clock (built-in)  varies       fallback
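Clock selection is automatic: LuaMark uses the highest-priority module that loads. For nanosecond precision, it should be enough to install the optional chronos rock; no code changes are needed.

```shell
# Optional: install the high-precision clock; LuaMark detects it automatically.
luarocks install chronos
```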

API Documentation

For full API details, see the API Documentation. See CHANGELOG.md for release history.

Contributing

Contributions welcome: bug fixes, documentation, new features.

License

LuaMark uses the MIT License.
