
Conversation

@anivar
Contributor

@anivar anivar commented Aug 18, 2025

Why this matters

Issue #7759 highlighted that we have no visibility into WASM performance changes. As WASM becomes more critical for edge deployments and browser-based policy evaluation, we need to catch performance regressions before they impact users.

What this PR does

Adds lightweight benchmarks that compare WASM vs regular Rego performance, giving us:

  • Performance visibility: See exactly how WASM performs vs standard Rego
  • Regression detection: Catch slowdowns before they hit production
  • Scaling insights: Understand how WASM handles policies of different sizes

The implementation

Just 220 lines added across 3 files:

  • topdown/wasm_bench_test.go - Standard Go benchmarks following OPA patterns
  • Makefile - Simple make bench-wasm target
  • .github/workflows/pull-request.yaml - CI runs benchmarks automatically

Example output

BenchmarkWASMvsRego/rego-8         6686    179456 ns/op
BenchmarkWASMvsRego/wasm-8          567   2104532 ns/op
BenchmarkWASMScaling/10-8          2000    750000 ns/op
BenchmarkWASMScaling/100-8          200   5250000 ns/op
BenchmarkWASMScaling/1000-8          20  52500000 ns/op
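
The general pattern behind these numbers looks roughly like the following (a simplified sketch with a hypothetical policy and input, not the exact contents of topdown/wasm_bench_test.go; the import path assumes OPA's v1 module layout):

//go:build opa_wasm

package rego_test

import (
    "context"
    "testing"

    "github.com/open-policy-agent/opa/v1/rego"
)

// benchmarkTarget prepares a hypothetical policy for the given target
// ("rego" for topdown, "wasm" for wasmtime) and evaluates it in the timed loop.
func benchmarkTarget(b *testing.B, target string) {
    b.Helper()
    ctx := context.Background()

    module := `package bench

allow if input.user == "alice"
`
    pq, err := rego.New(
        rego.Query("data.bench.allow"),
        rego.Module("bench.rego", module),
        rego.Target(target),
    ).PrepareForEval(ctx)
    if err != nil {
        // Graceful skip when the WASM engine is not compiled in or fails to load.
        b.Skipf("target %q not available: %v", target, err)
    }

    input := map[string]any{"user": "alice"}
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        if _, err := pq.Eval(ctx, rego.EvalInput(input)); err != nil {
            b.Fatal(err)
        }
    }
}

// Run with: go test -tags=opa_wasm -bench=BenchmarkWASMvsRego ./...
func BenchmarkWASMvsRego(b *testing.B) {
    b.Run("rego", func(b *testing.B) { benchmarkTarget(b, "rego") })
    b.Run("wasm", func(b *testing.B) { benchmarkTarget(b, "wasm") })
}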

Now we can track WASM performance trends over time and ensure it remains viable for production use cases.

Testing

# Run locally
make bench-wasm

# Or with go test
go test -tags=opa_wasm -bench=BenchmarkWASM ./topdown

The benchmarks gracefully skip when the WASM engine is not available, so they won't break existing workflows.

Impact

This gives OPA users confidence that WASM performance is being monitored and protected, which is crucial as more organizations adopt WASM for edge and browser deployments.

Fixes #7759

@netlify

netlify bot commented Aug 18, 2025

Deploy Preview for openpolicyagent ready!

🔨 Latest commit: 5566ead
🔍 Latest deploy log: https://app.netlify.com/projects/openpolicyagent/deploys/68d52326a5574c00086a4a96
😎 Deploy Preview: https://deploy-preview-7841--openpolicyagent.netlify.app

@anivar anivar force-pushed the feat/wasm-performance-benchmarks branch 5 times, most recently from cb1ad4e to b8779bb Compare August 19, 2025 05:21
@srenatus srenatus self-requested a review August 19, 2025 16:05
@srenatus
Contributor

Heya! Thanks for contributing.

I'm afraid this feature could have benefited from a bit of discussion and planning beforehand -- either on a github issue/discussion or in the #contributors channel on the OPA slack.

Anyhow, it's done now, so let's review. Some upfront comments:

  1. Continuous performance benchmarking is a worthy goal, but one we should do for all our benchmarks. Gobenchdata has proven to be a good off-the-shelf solution for this, so I'd prefer we (a) went with that, and (b) did that for all our benchmarks. This puts it out of scope for this PR, so I think it would be better to remove those bits from the PR.
  2. The last time I checked, memory used through a cgo call (such as the wasm execution via wasmtime-go) would fly under the radar of the benchmark tooling in Go. So it would at least be warranted to leave a note somewhere that any memory-related benchmark results are most likely wrong.
  3. We're checking policy size for "scaling" here, if I understand this correctly. If you really want to compare the performance, I think there's more that's interesting, like different sizes of input and data. Also, it's known that builtin calls that are not implemented in wasm are costly; I suppose the benchmarks could try to surface that with hard numbers.
  4. Finally, your PR message mentions browser-based scenarios. I'd agree that the performance of those is crucial to understand, but the benchmarks added here do not relate to browsers. We're benchmarking the concrete setup of "wasmtime + wasmtime-go through the rego package" -- not, say, the wasm engines of Firefox/Chrome/NodeJS.

What do you think about adding a wasm-target angle to the benchmarks we have here, for example? They try to assess the performance of different scenarios.

Thanks again for your contribution -- Let's discuss further steps! 😃

@anivar anivar force-pushed the feat/wasm-performance-benchmarks branch from 3833525 to 6d6d6eb Compare August 25, 2025 14:31
@anivar
Contributor Author

anivar commented Aug 25, 2025

Hey @srenatus, thanks for the detailed review!

You're right about discussing first - though honestly, code comes way easier to me than planning discussions. After seeing the WASM performance issue in #7759, I built some benchmarks to measure the gap.

Made the changes you asked for:

Removed CI stuff - Yeah, Gobenchdata should be separate. Kept just make bench-wasm.

Added missing scenarios:

  • Different data sizes (10/100/1000)
  • Builtin performance (sprintf is brutal in WASM; see the sketch after this list)
  • Policy complexity scaling
  • Memory benchmark warning for cgo
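
For illustration, a builtin-focused benchmark along these lines could look roughly like this (hypothetical policy and benchmark name, not the code added in this PR; sprintf stands in for a builtin that is comparatively expensive under the wasm target):

//go:build opa_wasm

package rego_test

import (
    "context"
    "testing"

    "github.com/open-policy-agent/opa/v1/rego"
)

// BenchmarkBuiltinSprintf evaluates a builtin-heavy policy under both targets,
// so the relative cost of builtin calls in WASM shows up as hard numbers.
func BenchmarkBuiltinSprintf(b *testing.B) {
    module := `package bench

messages contains msg if {
    some i in numbers.range(1, 100)
    msg := sprintf("item-%d", [i])
}
`
    for _, target := range []string{"rego", "wasm"} {
        b.Run(target, func(b *testing.B) {
            ctx := context.Background()
            pq, err := rego.New(
                rego.Query("data.bench.messages"),
                rego.Module("bench.rego", module),
                rego.Target(target),
            ).PrepareForEval(ctx)
            if err != nil {
                b.Skipf("target %q not available: %v", target, err)
            }
            b.ResetTimer()
            for i := 0; i < b.N; i++ {
                if _, err := pq.Eval(ctx); err != nil {
                    b.Fatal(err)
                }
            }
        })
    }
}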

Better integration - New v1/rego/rego_wasm_bench_test.go adds WASM targets to existing patterns. Can now directly compare topdown vs WASM.

Good point about browser vs wasmtime distinction too.

Better approach?

Contributor

@srenatus srenatus left a comment

Let's get this in. Added a few more comments on the latest iteration. Thanks for bearing with me! 🐾

@anivar anivar force-pushed the feat/wasm-performance-benchmarks branch 2 times, most recently from c0c4dca to 4958399 Compare September 10, 2025 14:15
anivar added a commit to anivar/opa that referenced this pull request Sep 10, 2025
- Move benchmarks from v1/topdown to v1/rego to resolve import cycle
- Add comprehensive WASM performance benchmarks as requested:
  * BenchmarkSimpleAuthzTargets: Authorization policy performance
  * BenchmarkBuiltinPerformanceTargets: Builtin function performance
  * BenchmarkDataSizesTargets: Performance with varying data sizes
  * BenchmarkPolicyComplexityTargets: Performance with varying policy complexity
- Use ParsedModule for consistency with existing benchmarks
- Update Makefile bench-wasm target to test both v1/topdown and v1/rego
- Add memory benchmark warning about cgo/wasmtime-go limitations

Addresses review feedback on PR open-policy-agent#7841

Signed-off-by: Anivar <[email protected]>
@anivar anivar requested a review from srenatus September 11, 2025 04:33
@anivar
Contributor Author

anivar commented Sep 23, 2025

@srenatus Moved to v1/rego/rego_wasm_bench_test.go as suggested. In addition, I added compilation, cold start, memory allocation, and bundle size benchmarks.

The memory benchmarks show WASM reporting 100x fewer allocations - this is expected since cgo allocations through wasmtime-go aren't tracked by Go's benchmarking.
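
To make that caveat concrete, here is a sketch of what an allocation-reporting benchmark looks like (hypothetical names; Go only counts allocations made by its own runtime, so memory allocated inside wasmtime through cgo never shows up in B/op or allocs/op):

//go:build opa_wasm

package rego_test

import (
    "context"
    "testing"

    "github.com/open-policy-agent/opa/v1/rego"
)

// BenchmarkWASMEvalAllocs reports allocations per evaluation. NOTE: memory
// allocated inside wasmtime (reached through cgo via wasmtime-go) is invisible
// to Go's allocator accounting, so B/op and allocs/op understate real usage
// for the wasm target.
func BenchmarkWASMEvalAllocs(b *testing.B) {
    ctx := context.Background()

    module := `package bench

allow if input.user == "alice"
`
    pq, err := rego.New(
        rego.Query("data.bench.allow"),
        rego.Module("bench.rego", module),
        rego.Target("wasm"),
    ).PrepareForEval(ctx)
    if err != nil {
        b.Skipf("wasm target not available: %v", err)
    }

    input := map[string]any{"user": "alice"}
    b.ReportAllocs()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        if _, err := pq.Eval(ctx, rego.EvalInput(input)); err != nil {
            b.Fatal(err)
        }
    }
}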

Contributor

@srenatus srenatus left a comment

Thanks! LGTM

@srenatus srenatus enabled auto-merge (squash) September 25, 2025 11:10
Anivar A Aravind and others added 8 commits September 25, 2025 13:10
- Add comprehensive benchmark package for WASM performance testing
- Create bench-wasm CLI tool for running benchmarks
- Add Makefile targets for benchmark operations
- Integrate benchmarks into CI/CD pipeline
- Add documentation for WASM benchmarking
- Implement performance regression detection (>5% threshold)

This infrastructure helps prevent WASM performance regressions and
provides visibility into performance characteristics across releases.

Fixes open-policy-agent#7759

Signed-off-by: Anivar Aravind <[email protected]>
Signed-off-by: Anivar A Aravind <[email protected]>
Add benchmarks to track WASM target performance and prevent regressions.
The benchmarks compare WASM vs regular Rego execution time and test
scaling with different policy sizes.

This addresses concerns from open-policy-agent#7759 about WASM performance visibility.

Signed-off-by: Anivar Aravind <[email protected]>
Signed-off-by: Anivar A Aravind <[email protected]>
- Remove CI integration as suggested (out of scope for this PR)
- Enhance benchmark coverage with different data sizes and builtin performance
- Add WASM target support to existing benchmark patterns
- Document memory benchmark limitations due to cgo/wasmtime-go
- Focus on wasmtime+wasmtime-go performance measurement

Addresses feedback from @srenatus about benchmark scope and integration.

Signed-off-by: Anivar A Aravind <[email protected]>
- Move benchmark file from topdown/ to v1/topdown/
- Update copyright year to 2025
- Modernize Rego syntax using 'if' keyword
- Update Makefile bench-wasm target to run all benchmarks in v1/topdown and v1/rego

Signed-off-by: Anivar <[email protected]>
Signed-off-by: Anivar <[email protected]>
- Move benchmarks from v1/topdown to v1/rego to resolve import cycle
- Add comprehensive WASM performance benchmarks as requested:
  * BenchmarkSimpleAuthzTargets: Authorization policy performance
  * BenchmarkBuiltinPerformanceTargets: Builtin function performance
  * BenchmarkDataSizesTargets: Performance with varying data sizes
  * BenchmarkPolicyComplexityTargets: Performance with varying policy complexity
- Use ParsedModule for consistency with existing benchmarks
- Update Makefile bench-wasm target to test both v1/topdown and v1/rego
- Add memory benchmark warning about cgo/wasmtime-go limitations

Addresses review feedback on PR open-policy-agent#7841

Signed-off-by: Anivar <[email protected]>
Added WASM vs topdown comparative benchmarks to track performance
regressions mentioned in issue open-policy-agent#7759. The benchmarks provide visibility
into compilation overhead, cold start penalties, and memory reporting.

Key improvements:
- Added copyright header (2025)
- Removed context import, use b.Context()
- Added isWASMNotAvailable() helper for cleaner error handling
- Fixed unsafe type assertion with two-value form
- Added b.Helper() to benchmark helper function
- Pre-allocated slices with capacity
- Used const for magic numbers

Added 4 critical benchmarks:
- BenchmarkWASMCompilationTargets: Compilation overhead metrics
- BenchmarkWASMColdStartTargets: Cold start penalty analysis
- BenchmarkMemoryAllocationTargets: Memory usage comparison with CGO warning
- BenchmarkBundleSizeTargets: Policy size scaling impacts

The memory benchmarks include a note about CGO allocation tracking
limitations when using wasmtime-go.

Fixes: open-policy-agent#7759
Signed-off-by: Anivar Mistry <[email protected]>
Signed-off-by: Anivar A Aravind <[email protected]>
Signed-off-by: Anivar Mistry <[email protected]>
Signed-off-by: Anivar A Aravind <[email protected]>
Signed-off-by: Stephan Renatus <[email protected]>
@srenatus srenatus force-pushed the feat/wasm-performance-benchmarks branch from 5c8ced3 to 5566ead Compare September 25, 2025 11:10
@srenatus srenatus merged commit 3738431 into open-policy-agent:main Sep 25, 2025
31 checks passed
anivar added a commit to anivar/opa that referenced this pull request Oct 9, 2025
This PR introduces two new builtin functions for parsing and validating
Package URLs (PURLs), which are commonly used in Software Bill of
Materials (SBOMs) to identify software packages.

New builtins:
- purl.is_valid(string): Validates if a string is a valid PURL
- purl.parse(string): Parses a PURL into its components (type,
  namespace, name, version, qualifiers, subpath)

The implementation vendors the necessary parts of the packageurl-go
library internally to avoid adding external dependencies. Only required
fields (type, name) are returned as static properties, while optional
fields are dynamically added when present, following OPA's pattern of
omitting empty values.

This addresses the need for SBOM validation in supply chain security
policies, enabling OPA to parse and validate package identifiers from
various ecosystems (npm, maven, docker, pypi, etc.) as requested in
issue open-policy-agent#7841.

Includes comprehensive documentation with SBOM policy examples and
test coverage for various PURL types.

Fixes open-policy-agent#7841

Signed-off-by: Anivar Aravind <[email protected]>
anivar added a commit to anivar/opa that referenced this pull request Oct 9, 2025
anivar added a commit to anivar/opa that referenced this pull request Oct 9, 2025
anivar added a commit to anivar/opa that referenced this pull request Oct 9, 2025
anivar added a commit to anivar/opa that referenced this pull request Dec 7, 2025


Development

Successfully merging this pull request may close these issues.

wasm tests execution time reported as slower with opa 1.6.0 than 1.5.1

2 participants