RocksDB block cache usage not accounted for in cgo memory metrics on MacOS #14209
Description
If I start a single-node cluster on my MacBook and then run the ledger example against it (`go run main.go --concurrency 1 --generator few-few postgres://root@localhost:26257?sslmode=disable`), the memory usage graph on the runtime tab of the admin UI shows continued growth of RSS, but not of go/cgo allocated/total.
The runtime stats printed out into the logs every 10 seconds show the same thing:
I170316 16:36:06.700545 121 server/status/runtime.go:228 runtime stats: 778 MiB RSS, 77 goroutines, 21 MiB/42 MiB/80 MiB GO alloc/idle/total, 1.0 MiB/2.3 MiB CGO alloc/total, 37936.33cgo/sec, 1.08/0.17 %(u/s)time, 0.00 %gc (58x)
This is odd: I'd naively expect the go total plus the cgo total to approximately equal the total memory usage. It turns out the RocksDB block cache isn't accounted for, which can be confirmed by checking the rocksdb_block_cache_usage metric at http://localhost:8080/_status/vars on the running server; its value accounts for the missing memory.
This has reproduced reliably for me, including across a `make clean` and a full rebuild of the binary.
As far as we know, this only affects MacOS, not Linux; offline discussion has hypothesized that some sort of compiler/linker toolchain issue is behind it.
