Description
Apologies, I'm not overly familiar with the Metrics side of OpenTelemetry - I'm also slightly unsure whether the bug lies within this repository or in the otel API/SDK for Go itself.
We have a use case where we create and destroy gRPC clients throughout the long lifetime of our application. When we create new gRPC clients, we provide otelgrpc.UnaryClientInterceptor() as an interceptor. We've noticed slow memory leaks in our application, and profiling reveals a large amount of the heap allocated by go.opentelemetry.io/otel/internal/global.(*meter).Int64Histogram. Looking into the implementation, it appears that whilst repeatedly calling MeterProvider.Meter with the same name returns the same Meter, repeatedly calling Meter.Int64Histogram with the same name creates a new Int64Histogram each time. All of these instruments are tracked internally in the instruments slice of global.meter, which is where the leaked memory accumulates.
Our interim solution has been to use a sync.OnceValue to create the interceptors only once during the lifetime of the application and to share them across the clients. It would be helpful either to document that you should only create a single set of interceptors for the application's lifetime, or to have the underlying issue resolved.
Environment
- OS: MacOS/Linux
- Architecture: x86, arm64
- Go Version: 1.21
- otelgrpc version: v0.42.0
Steps To Reproduce
- Create and close a large number of gRPC clients with the UnaryClientInterceptor().
- Use pprof to confirm that heap memory is still in use for the Int64Histogram instruments created by UnaryClientInterceptor().
Expected behavior
Either the Int64Histogram instruments should be reused, or the documentation should warn against calling UnaryClientInterceptor repeatedly.
