```mermaid
flowchart LR
  A[Log Call] --> B{Level Check}
  B -->|Enabled| C[Build Entry]
  B -->|Disabled| X[Skip]
  C --> D[Add Context]
  D --> E[Dispatch]
  E --> F[Console]
  E --> G[JSON File]
  E --> H[Custom Handler]
  style A fill:#2E8B57,stroke:#1a5235,color:#fff
  style X fill:#CD5C5C,stroke:#8b3d3d,color:#fff
```
### Features

| Feature | Description |
|---|---|
| RFC 5424 Levels | Emergency → Trace (9 levels) |
| Structured Fields | Key-value pairs with every log |
| Context Propagation | Inherit fields in nested calls |
| Lazy Evaluation | Avoid string construction when disabled |
| Sampling | Log only N% of high-volume messages |
| Multiple Handlers | Console, JSON, File, Custom |
### Usage

```gleam
// Quick setup (one import!)
log.configure_console(log.debug_level)

// Structured logging
log.info("User logged in", [#("user_id", "42"), #("ip", "192.168.1.1")])

// Context propagation
log.with_context([#("request_id", "abc123")], fn() {
  log.debug("Processing...")  // inherits request_id
})

// Lazy evaluation - avoid string construction when disabled
log.debug_lazy(fn() { "Heavy: " <> expensive_to_string(data) }, [])

// Sampling for high-volume logs (1% of messages)
log.sampled(log.trace_level, 0.01, "Hot path", [])
```
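Lazy evaluation and sampling rest on two simple techniques: hiding message construction behind a closure that is only invoked when the level is enabled, and drawing a random number per call to keep roughly N% of messages. A minimal Python sketch of the idea (function names here are illustrative, not the library's API):

```python
import random

def debug_lazy(enabled, build_message):
    """Only invoke the (possibly expensive) message builder when the level is enabled."""
    return build_message() if enabled else None

def sampled(rate, message):
    """Keep roughly rate * 100 percent of messages; drop the rest."""
    return message if random.random() < rate else None

# The builder closure is never called when the level is disabled,
# so the expensive string is never constructed.
debug_lazy(False, lambda: "Heavy: " + str(list(range(1_000_000))))
```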
### Handlers

```gleam
log.configure_console(log.info_level)                             // Console only
log.configure_json("app.jsonl", log.debug_level)                  // JSON file
log.configure_full(log.debug_level, "app.jsonl", log.info_level)  // Both
```
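The `.jsonl` extension refers to the JSON Lines convention: one self-contained JSON object per line, which log shippers can tail and parse incrementally. A hedged Python sketch of such a handler (field names like `ts` and `msg` are assumptions, not the library's actual schema):

```python
import datetime
import json

def json_handler(path, level, message, fields):
    """Append one JSON object per line, JSON Lines style."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "level": level,
        "msg": message,
        **dict(fields),  # structured key-value pairs
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```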
## Metrics
```mermaid
flowchart TB
  subgraph Types
    C[Counter]
    G[Gauge]
    H[Histogram]
  end
  subgraph Operations
    C --> INC[inc / inc_by]
    G --> SET[set / add]
    H --> OBS[observe / time]
  end
  subgraph Storage
    INC --> ETS[(ETS)]
    SET --> ETS
    OBS --> ETS
  end
  subgraph Export
    ETS --> PROM[to_prometheus]
    ETS --> BEAM[beam_memory]
  end
  style C fill:#4169E1,stroke:#2d4a9e,color:#fff
  style G fill:#4169E1,stroke:#2d4a9e,color:#fff
  style H fill:#4169E1,stroke:#2d4a9e,color:#fff
```
### Metric Types

| Type | Use Case | Operations |
|---|---|---|
| Counter | Requests, errors, events | `inc()`, `inc_by(n)` |
| Gauge | Connections, queue size | `set(v)`, `add(v)`, `inc()`, `dec()` |
| Histogram | Latency, response sizes | `observe(v)`, `time(fn)` |
### Usage

```gleam
// Counter (monotonically increasing)
let requests = metrics.counter("http_requests_total")
metrics.inc(requests)
metrics.inc_by(requests, 5)

// Gauge (can go up or down)
let connections = metrics.gauge("active_connections")
metrics.set(connections, 42.0)
metrics.gauge_inc(connections)

// Histogram (distribution)
let latency = metrics.histogram("latency_ms", [10.0, 50.0, 100.0, 500.0])
metrics.observe(latency, 75.5)

// Time a function automatically
let result = metrics.time_ms(latency, fn() { do_work() })

// BEAM memory tracking
let mem = metrics.beam_memory()
// → BeamMemory(total, processes, system, atom, binary, ets)

// Export Prometheus format
io.println(metrics.to_prometheus())
```
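The histogram above takes explicit bucket bounds, which matches the Prometheus model: each observation falls into one bucket, and the exposition format reports cumulative counts per `le` (less-or-equal) bound plus `_sum` and `_count` series. A rough Python sketch of that mechanism (illustrative only, not the library's implementation):

```python
class Histogram:
    """Bucketed histogram exported in Prometheus cumulative-bucket form."""

    def __init__(self, name, bounds):
        self.name = name
        self.bounds = sorted(bounds)
        self.counts = [0] * (len(bounds) + 1)  # last slot catches values above all bounds (+Inf)
        self.total = 0.0
        self.n = 0

    def observe(self, value):
        self.total += value
        self.n += 1
        for i, bound in enumerate(self.bounds):
            if value <= bound:
                self.counts[i] += 1
                return
        self.counts[-1] += 1

    def to_prometheus(self):
        lines, cumulative = [], 0
        for bound, count in zip(self.bounds + [float("inf")], self.counts):
            cumulative += count  # Prometheus buckets are cumulative
            le = "+Inf" if bound == float("inf") else str(bound)
            lines.append(f'{self.name}_bucket{{le="{le}"}} {cumulative}')
        lines.append(f"{self.name}_sum {self.total}")
        lines.append(f"{self.name}_count {self.n}")
        return "\n".join(lines)
```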
## Benchmarking
```mermaid
flowchart LR
  A[Function] --> B[Warmup]
  B --> C[Collect Samples]
  C --> D[Calculate Stats]
  D --> E[Results]
  E --> F[Print]
  E --> G[to_json]
  E --> H[to_markdown]
  E --> I[Compare]
  style A fill:#CD5C5C,stroke:#8b3d3d,color:#fff
  style E fill:#2E8B57,stroke:#1a5235,color:#fff
```
### Statistics

Each benchmark calculates:

| Stat | Description |
|---|---|
| `mean` | Average duration |
| `stddev` | Standard deviation |
| `min`/`max` | Range |
| `p50` | Median (50th percentile) |
| `p95` | 95th percentile |
| `p99` | 99th percentile |
| `ips` | Iterations per second |
| `ci_95` | 95% confidence interval |
### Usage

```gleam
// Simple benchmark
bench.run("fib_recursive", fn() { fib(30) })
|> bench.print()

// Compare implementations
let slow = bench.run("v1", fn() { algo_v1() })
let fast = bench.run("v2", fn() { algo_v2() })
bench.compare(slow, fast)
|> bench.print_comparison()
// → v1 vs v2: 2.3x faster

// Export results
bench.to_json(result)         // JSON object
bench.to_json_string(result)  // JSON string
bench.to_markdown(result)     // | Name | Mean | p50 | p99 | IPS |
```
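The percentile and confidence-interval stats in the table above follow standard definitions: the p-th percentile by rank in the sorted samples, and `ci_95` as mean ± 1.96 · stddev / √n under a normal approximation. A hedged Python sketch of those formulas (the library may use a different interpolation method):

```python
import math

def percentile(samples, p):
    """Nearest-rank p-th percentile of a non-empty sample list."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

def ci_95(samples):
    """95% confidence interval for the mean (normal approximation, sample stddev)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    half = 1.96 * math.sqrt(var / n)
    return (mean - half, mean + half)
```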
## Build

```shell
make test     # Run 32 tests
make bench    # Run benchmarks
make log      # Run log example
make metrics  # Run metrics example
make docs     # Generate documentation
```