Have you ever seen a case where, if the consumer can't keep up (say it's writing each message to a slow disk), memory usage explodes? I'm seeing that. I've run valgrind and see no leaks, so I'm thinking librdkafka is buffering messages internally, but I haven't changed any of the defaults such as fetch.message.max.bytes.
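For context, my understanding is that the internal prefetch queue is bounded by the `queued.min.messages` and `queued.max.messages.kbytes` properties rather than by `fetch.message.max.bytes` alone. A sketch of the settings I believe are relevant (values here are illustrative, not what I'm actually running):

```
# librdkafka consumer properties that bound the local prefetch queue
# (illustrative values; the shipped defaults are much larger)
queued.min.messages=1000          # stop fetching once this many messages are queued locally
queued.max.messages.kbytes=16384  # cap the prefetch queue size in KiB
fetch.message.max.bytes=1048576   # max bytes fetched per topic+partition request
```

If the consumer drains the queue slower than the broker fills it, memory would plateau around these limits rather than grow without bound, which is why I'm surprised by what I'm seeing.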