
Add histogram collection benchmark#4912

Merged
jack-berg merged 2 commits into open-telemetry:main from jack-berg:exp-histogram-benchmark
Nov 30, 2022

Conversation

@jack-berg
Member

@jsuereth has noticed that the collection of exponential histograms has high memory churn. After looking at the code, there are a few places where we make unnecessary copies of bucket count arrays. The first step to improving this is to benchmark the current state.
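The copy pattern in question can be sketched without the SDK. This is a hypothetical illustration (class and method names are mine, not the SDK's): a delta-temporality histogram that allocates a fresh bucket-count array on every collection, which is exactly the kind of per-collect churn the benchmark is meant to surface.

```java
import java.util.Arrays;

// Hypothetical sketch, not the SDK's actual implementation: each collect()
// copies the bucket-count array, so every collection cycle allocates.
class CopyingHistogram {
  private final long[] counts = new long[160]; // e.g. 160 exponential buckets

  void record(int bucketIndex) {
    counts[bucketIndex]++;
  }

  long[] collect() {
    long[] snapshot = Arrays.copyOf(counts, counts.length); // allocation per collect
    Arrays.fill(counts, 0); // delta temporality: reset after reading
    return snapshot;
  }
}

public class Demo {
  public static void main(String[] args) {
    CopyingHistogram h = new CopyingHistogram();
    h.record(3);
    h.record(3);
    System.out.println(h.collect()[3]); // prints 2: counts captured in the snapshot
    System.out.println(h.collect()[3]); // prints 0: delta reset between collects
  }
}
```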

The benchmark does the following:

  • Sets up an SdkMeterProvider with a reader configured to use either delta or cumulative temporality, and either explicit or exponential bucket histograms
  • For each iteration of the benchmark, records 10k measurements to 100 different attribute sets, then collects the metrics
  • The attribute sets are allocated beforehand, so there should be very little memory allocated while recording measurements; the vast majority of allocation should be generated by collection
  • Repeats this record-and-collect cycle 100 times
  • Executed for the following combinations of temporality and aggregation: 1. cumulative + explicit bucket 2. delta + explicit bucket 3. cumulative + exponential bucket 4. delta + exponential bucket

This should make it very clear where implementation details of the histogram aggregation and temporality cause excessive memory allocation.
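The record-and-collect cycle above can be sketched as a self-contained simulation. This is an assumption-laden stand-in, not the real HistogramCollectBenchmark (which uses JMH and the actual SdkMeterProvider); all names here are illustrative, and it only mirrors the loop structure: pre-allocated attribute sets, 10k recordings per cycle, a per-attribute-set array copy on collect, repeated 100 times.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// SDK-free sketch of the benchmark cycle (hypothetical names throughout).
public class RecordAndCollectSketch {
  static final int ATTRIBUTE_SETS = 100;
  static final int MEASUREMENTS = 10_000;
  static final int BUCKETS = 20;

  // Bucket counts per attribute set, allocated up front (like the
  // pre-allocated attribute sets described above).
  static final long[][] counts = new long[ATTRIBUTE_SETS][BUCKETS];

  static void record(int attrIndex, double value) {
    int bucket = Math.min(BUCKETS - 1, (int) value % BUCKETS);
    counts[attrIndex][bucket]++;
  }

  static Map<Integer, long[]> collect(boolean delta) {
    Map<Integer, long[]> snapshot = new HashMap<>();
    for (int i = 0; i < ATTRIBUTE_SETS; i++) {
      snapshot.put(i, counts[i].clone()); // one copy per attribute set, per collect
      if (delta) Arrays.fill(counts[i], 0); // delta resets between collects
    }
    return snapshot;
  }

  public static void main(String[] args) {
    for (int cycle = 0; cycle < 100; cycle++) { // 100 record-and-collect cycles
      for (int m = 0; m < MEASUREMENTS; m++) {
        record(m % ATTRIBUTE_SETS, m);
      }
      collect(true); // almost all allocation happens here, not during record()
    }
    System.out.println("done");
  }
}
```

With 100 attribute sets and 100 cycles, the collect path performs 10,000 array copies while the record path allocates nothing, which is the asymmetry the gc.alloc.rate.norm rows below are measuring.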

Here are the results on my machine:

Benchmark                                                                      (aggregationGenerator)  (aggregationTemporality)  Mode  Cnt           Score          Error   Units
HistogramCollectBenchmark.recordAndCollect                                  EXPLICIT_BUCKET_HISTOGRAM                     DELTA    ss    5  4036622200.200 ± 73759470.900   ns/op
HistogramCollectBenchmark.recordAndCollect:·gc.alloc.rate                   EXPLICIT_BUCKET_HISTOGRAM                     DELTA    ss    5           5.640 ±        0.095  MB/sec
HistogramCollectBenchmark.recordAndCollect:·gc.alloc.rate.norm              EXPLICIT_BUCKET_HISTOGRAM                     DELTA    ss    5    26876296.000 ±    41961.756    B/op
HistogramCollectBenchmark.recordAndCollect:·gc.churn.G1_Eden_Space          EXPLICIT_BUCKET_HISTOGRAM                     DELTA    ss    5           2.100 ±       18.079  MB/sec
HistogramCollectBenchmark.recordAndCollect:·gc.churn.G1_Eden_Space.norm     EXPLICIT_BUCKET_HISTOGRAM                     DELTA    ss    5    10066329.600 ± 86674133.674    B/op
HistogramCollectBenchmark.recordAndCollect:·gc.count                        EXPLICIT_BUCKET_HISTOGRAM                     DELTA    ss    5           1.000                 counts
HistogramCollectBenchmark.recordAndCollect:·gc.time                         EXPLICIT_BUCKET_HISTOGRAM                     DELTA    ss    5          22.000                     ms
HistogramCollectBenchmark.recordAndCollect                                  EXPLICIT_BUCKET_HISTOGRAM                CUMULATIVE    ss    5  4030401033.400 ± 31096644.588   ns/op
HistogramCollectBenchmark.recordAndCollect:·gc.alloc.rate                   EXPLICIT_BUCKET_HISTOGRAM                CUMULATIVE    ss    5           6.699 ±        0.145  MB/sec
HistogramCollectBenchmark.recordAndCollect:·gc.alloc.rate.norm              EXPLICIT_BUCKET_HISTOGRAM                CUMULATIVE    ss    5    31878712.000 ±   521844.907    B/op
HistogramCollectBenchmark.recordAndCollect:·gc.churn.G1_Eden_Space          EXPLICIT_BUCKET_HISTOGRAM                CUMULATIVE    ss    5           2.109 ±       18.160  MB/sec
HistogramCollectBenchmark.recordAndCollect:·gc.churn.G1_Eden_Space.norm     EXPLICIT_BUCKET_HISTOGRAM                CUMULATIVE    ss    5    10066329.600 ± 86674133.674    B/op
HistogramCollectBenchmark.recordAndCollect:·gc.count                        EXPLICIT_BUCKET_HISTOGRAM                CUMULATIVE    ss    5           1.000                 counts
HistogramCollectBenchmark.recordAndCollect:·gc.time                         EXPLICIT_BUCKET_HISTOGRAM                CUMULATIVE    ss    5           2.000                     ms
HistogramCollectBenchmark.recordAndCollect                               EXPONENTIAL_BUCKET_HISTOGRAM                     DELTA    ss    5  9369722983.000 ± 37251886.770   ns/op
HistogramCollectBenchmark.recordAndCollect:·gc.alloc.rate                EXPONENTIAL_BUCKET_HISTOGRAM                     DELTA    ss    5           3.656 ±        0.035  MB/sec
HistogramCollectBenchmark.recordAndCollect:·gc.alloc.rate.norm           EXPONENTIAL_BUCKET_HISTOGRAM                     DELTA    ss    5    37862288.000 ±   306826.773    B/op
HistogramCollectBenchmark.recordAndCollect:·gc.churn.G1_Eden_Space       EXPONENTIAL_BUCKET_HISTOGRAM                     DELTA    ss    5           0.972 ±        8.368  MB/sec
HistogramCollectBenchmark.recordAndCollect:·gc.churn.G1_Eden_Space.norm  EXPONENTIAL_BUCKET_HISTOGRAM                     DELTA    ss    5    10066329.600 ± 86674133.674    B/op
HistogramCollectBenchmark.recordAndCollect:·gc.count                     EXPONENTIAL_BUCKET_HISTOGRAM                     DELTA    ss    5           1.000                 counts
HistogramCollectBenchmark.recordAndCollect:·gc.time                      EXPONENTIAL_BUCKET_HISTOGRAM                     DELTA    ss    5           2.000                     ms
HistogramCollectBenchmark.recordAndCollect                               EXPONENTIAL_BUCKET_HISTOGRAM                CUMULATIVE    ss    5  9579125600.000 ± 91322629.591   ns/op
HistogramCollectBenchmark.recordAndCollect:·gc.alloc.rate                EXPONENTIAL_BUCKET_HISTOGRAM                CUMULATIVE    ss    5           4.393 ±        0.046  MB/sec
HistogramCollectBenchmark.recordAndCollect:·gc.alloc.rate.norm           EXPONENTIAL_BUCKET_HISTOGRAM                CUMULATIVE    ss    5    46466259.200 ±   580706.759    B/op
HistogramCollectBenchmark.recordAndCollect:·gc.churn.G1_Eden_Space       EXPONENTIAL_BUCKET_HISTOGRAM                CUMULATIVE    ss    5           0.954 ±        8.212  MB/sec
HistogramCollectBenchmark.recordAndCollect:·gc.churn.G1_Eden_Space.norm  EXPONENTIAL_BUCKET_HISTOGRAM                CUMULATIVE    ss    5    10066329.600 ± 86674133.674    B/op
HistogramCollectBenchmark.recordAndCollect:·gc.count                     EXPONENTIAL_BUCKET_HISTOGRAM                CUMULATIVE    ss    5           1.000                 counts
HistogramCollectBenchmark.recordAndCollect:·gc.time                      EXPONENTIAL_BUCKET_HISTOGRAM                CUMULATIVE    ss    5           3.000                     ms

@jack-berg jack-berg requested a review from a team November 2, 2022 18:08
@jack-berg
Member Author

@jsuereth while there are opportunities to improve, it does appear that exponential histograms currently allocate less memory than explicit bucket histograms.

@codecov

codecov Bot commented Nov 2, 2022

Codecov Report

Base: 91.26% // Head: 90.89% // Decreases project coverage by 0.36% ⚠️

Coverage data is based on head (a89a4fe) compared to base (568bdb4).
Patch has no changes to coverable lines.

❗ Current head a89a4fe differs from pull request most recent head 5d9f6d5. Consider uploading reports for the commit 5d9f6d5 to get more accurate results

Additional details and impacted files
@@             Coverage Diff              @@
##               main    #4912      +/-   ##
============================================
- Coverage     91.26%   90.89%   -0.37%     
+ Complexity     4886     4804      -82     
============================================
  Files           552      545       -7     
  Lines         14431    14340      -91     
  Branches       1373     1383      +10     
============================================
- Hits          13170    13035     -135     
- Misses          874      898      +24     
- Partials        387      407      +20     
Impacted Files Coverage Δ
...opentelemetry/opentracingshim/SpanBuilderShim.java 77.89% <0.00%> (-15.91%) ⬇️
.../opentelemetry/sdk/logs/SdkReadWriteLogRecord.java 80.64% <0.00%> (-12.91%) ⬇️
...sdk/autoconfigure/MetricExporterConfiguration.java 89.23% <0.00%> (-10.77%) ⬇️
...emetry/sdk/testing/assertj/LongExemplarAssert.java 14.70% <0.00%> (-9.62%) ⬇️
.../opentelemetry/sdk/internal/ComponentRegistry.java 90.90% <0.00%> (-9.10%) ⬇️
...entelemetry/sdk/logs/export/LogRecordExporter.java 91.66% <0.00%> (-8.34%) ⬇️
...y/sdk/autoconfigure/SpanExporterConfiguration.java 94.73% <0.00%> (-5.27%) ⬇️
.../autoconfigure/LogRecordExporterConfiguration.java 93.44% <0.00%> (-5.07%) ⬇️
...a/io/opentelemetry/sdk/trace/internal/JcTools.java 28.57% <0.00%> (-3.01%) ⬇️
...ava/io/opentelemetry/opentracingshim/SpanShim.java 87.61% <0.00%> (-2.66%) ⬇️
... and 49 more


☔ View full report at Codecov.

@jack-berg
Member Author

Please take a look @jsuereth 🙂

@github-actions
Contributor

This PR was marked stale due to lack of activity. It will be closed in 14 days.

@github-actions github-actions Bot added the Stale label Nov 29, 2022
@jack-berg jack-berg removed the Stale label Nov 29, 2022
@jack-berg jack-berg merged commit 14b64fc into open-telemetry:main Nov 30, 2022
dmarkwat pushed a commit to dmarkwat/opentelemetry-java that referenced this pull request Dec 30, 2022
@jack-berg jack-berg mentioned this pull request Feb 5, 2023
