Using lz4 message compression in the producer causes memory strain on the Kafka broker #3252
Closed
Description
Versions
| Sarama | Kafka | Go |
|---|---|---|
| 1.45.2 | 2.8.1 | 1.24 |
Configuration
// Enable LZ4 compression for produced messages
config.Producer.Compression = sarama.CompressionLZ4
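For context, a minimal sketch of the producer configuration described above, assuming the `github.com/IBM/sarama` import path used by Sarama 1.45.x:

```go
package main

import "github.com/IBM/sarama"

func main() {
	config := sarama.NewConfig()
	// LZ4 frame compression for produced record batches.
	config.Producer.Compression = sarama.CompressionLZ4
	// Note: Sarama's Config does not expose the lz4 block size, so the
	// lz4 library default (4 MB blocks) always applies.
	_ = config
}
```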
Additional Context
Sarama uses github.com/pierrec/lz4/v4, and that library's default block size is 4 MB. This means that for every produce request the broker must allocate a 4 MB buffer to decompress the batch, which causes heavy GC pressure on the broker, and the request-handler threads frequently hang while allocating buffer memory. In KAFKA-10433 the broker started caching the lz4 decompression buffer in a thread local, but two problems remain:
- Older broker versions still hit this problem when Sarama produces with lz4.
- If the broker has many request-handler threads, even thread-local buffers add up to significant memory pressure.
// Sarama pools lz4 writers, each created with the library defaults:
var lz4WriterPool = sync.Pool{
	New: func() interface{} {
		return lz4.NewWriter(nil)
	},
}
// In github.com/pierrec/lz4/v4, NewWriter applies the default options,
// including the 4 MB block size:
func NewWriter(w io.Writer) *Writer {
	zw := &Writer{frame: lz4stream.NewFrame()}
	zw.state.init(writerStates)
	_ = zw.Apply(DefaultBlockSizeOption, DefaultChecksumOption, DefaultConcurrency, defaultOnBlockDone)
	zw.Reset(w)
	return zw
}

var (
	// Block4Mb is the largest supported block size; the frame header
	// advertises it, so every decoder allocates a matching buffer.
	DefaultBlockSizeOption = BlockSizeOption(Block4Mb)
	DefaultChecksumOption  = ChecksumOption(true)
	DefaultConcurrency     = ConcurrencyOption(1)
	defaultOnBlockDone     = OnBlockDoneOption(nil)
)