Spanner Healthchecker memory consumption #770

@twoism

Description

We are still trying to track down the problem, but in two of our services we have observed spanner.healthChecker consuming much more memory than expected, with repeated calls to context.WithDeadline dominating the heap profile.

(pprof) top10
2308001 of 2340227 total (98.62%)
Dropped 758 nodes (cum <= 11701)
Showing top 10 nodes out of 84 (cum >= 70167)
      flat  flat%   sum%        cum   cum%
   1714972 73.28% 73.28%    1714972 73.28%  context.WithDeadline
    436926 18.67% 91.95%     436926 18.67%  runtime.deferproc.func1
     32768  1.40% 93.35%      65536  2.80%  encoding/asn1.parseField
     32768  1.40% 94.75%      32768  1.40%  github.com/lyft/spannerproxy/vendor/github.com/golang/protobuf/proto.(*Buffer).DecodeStringBytes
     32768  1.40% 96.15%      32768  1.40%  reflect.(*structType).Field
     32768  1.40% 97.55%      32768  1.40%  strconv.formatBits
     16384   0.7% 98.25%      16384   0.7%  syscall.anyToSockaddr
      8192  0.35% 98.60%      40960  1.75%  github.com/lyft/spannerproxy/vendor/github.com/golang/protobuf/proto.(*Buffer).dec_slice_struct
       455 0.019% 98.62%      65991  2.82%  crypto/x509.parseCertificate
         0     0% 98.62%      70167  3.00%  crypto/tls.(*Conn).Handshake

I am digging through the health checker code but any insight here would be helpful. Thanks!

Full heap profile here:

pprof001.svg.zip

Metadata

Labels

api: spanner — Issues related to the Spanner API.
priority: p0 — Highest priority. Critical issue. P0 implies highest priority.
type: bug — Error or flaw in code with unintended results or allowing sub-optimal usage patterns.
