Project Tracking: Performance Benchmarking SIG #1617

@cartersocha

Description

As adoption of OpenTelemetry grows and larger enterprises deepen their usage of project components, end users keep asking about OpenTelemetry's performance impact. End user performance varies with the quirks of each environment, but without a project performance standard and a historical data record, no one knows whether the numbers they are seeing are abnormal or expected. Additionally, there is no comprehensive documentation on tuning project components or on the performance trade-offs available to users, which leaves users reliant on vendor support.

Project maintainers need to be able to track the current state of their components and to catch performance regressions before making new releases. Customers need a general sense of the potential OpenTelemetry performance impact, and confidence that OpenTelemetry takes performance and customer resources seriously. Performance tracking and quantification is a general need that should be addressed by a project-wide effort and automated tooling that minimizes repo owner effort while providing valuable new data points for all project stakeholders.

Project Board

SIG Charter

charter

Deliverables

  • Evaluate the current performance benchmarking specification, propose an updated benchmarking standard that can apply across project components, and make the requisite specification updates. The benchmarking standard should provide relevant information for maintainers and end users.
  • Develop automated tooling that can be used across project repos to report current performance numbers and track changes as new features / PRs are merged (a sketch of one possible regression check follows this list).
  • Write performance tuning documentation for the project website that helps customers make actionable decisions when faced with performance trade-offs or when debugging poor component performance (see the SDK tuning sketch after the scope note below).
  • Provide ongoing maintenance as needed on automated tooling and own the underlying assets.
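
As a starting point for discussion, here is a minimal sketch of what a per-repo regression gate could look like: compare a fresh benchmark result against a committed baseline and fail CI when throughput drops past a tolerance. The file layout, JSON shape, and 10% tolerance here are assumptions for illustration, not an agreed SIG format.

```python
# Hypothetical regression gate: compare a new benchmark run against a
# committed baseline and exit nonzero if any metric regressed.
import json
import sys

TOLERANCE = 0.10  # flag throughput drops of more than 10% vs. the baseline


def check(baseline_path: str, current_path: str) -> int:
    with open(baseline_path) as f:
        baseline = json.load(f)  # e.g. {"spans_per_sec": 120000.0, ...}
    with open(current_path) as f:
        current = json.load(f)

    failures = []
    for metric, base_value in baseline.items():
        new_value = current.get(metric)
        if new_value is None:
            continue  # metric removed or renamed; ignored in this sketch
        # This sketch assumes higher is better (throughput-style metrics);
        # a real tool would record the direction per metric.
        if new_value < base_value * (1 - TOLERANCE):
            failures.append(f"{metric}: {base_value:.1f} -> {new_value:.1f}")

    for line in failures:
        print("regression:", line)
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(check(sys.argv[1], sys.argv[2]))
```

A real tool would also capture environment metadata and keep per-release history so the project has the public data record described above.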

Initial implementation scope would be the core Collector components (main repo) and the JavaScript / Java / Python SDKs and their core components. Contrib repos and instrumentation are out of scope.
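
On the SDK side, the tuning documentation would cover trade-offs like batching. A minimal sketch using the Python SDK's BatchSpanProcessor; the values shown are illustrative, not recommendations:

```python
# Sketch of the kind of knob the tuning docs would explain, using the
# Python SDK's batch span processor. Values are illustrative only.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        ConsoleSpanExporter(),
        max_queue_size=2048,         # larger queue: fewer dropped spans, more memory
        schedule_delay_millis=5000,  # longer delay: bigger batches, higher latency
        max_export_batch_size=512,   # spans sent per export call
    )
)
trace.set_tracer_provider(provider)
```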

Staffing / Help Wanted

Anyone with an opinion on performance standards and testing is welcome.

Language maintainers or approvers are especially needed, as they will be tasked with implementing the changes and following through on the process.

Required staffing

  • Lead: TBD
  • @jpkrohling: domain expert
  • @cartersocha: contributor
  • @mwear: Collector SIG
  • @codeboten: Collector SIG implementation
  • @ocelotl: Python SIG
  • @martinkuba: JavaScript SIG
  • @tylerbenson: Java SIG
  • @sbaum1994: contributor

  • @jpkrohling: TC/GC sponsor
  • @alolita: TC/GC sponsor

Need: more performance domain experts
Need: maintainers or approvers from several language SIGs to participate

Meeting Times

TBD

Timeline

Initial scope covers the Collector and the three SDKs listed above. Deliverables should be ready by KubeCon NA (November 6, 2023).

Labels

TBD

Linked Issues and PRs

https://opentelemetry.io/docs/collector/benchmarks/
cncf/cluster#245
cncf/cluster#182
https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/performance-benchmark.md
https://opentelemetry.io/docs/specs/otel/performance-benchmark/
