Try fix 02151_hash_table_sizes_stats.sh test#48178
tavplubix merged 2 commits into ClickHouse:master
Conversation
Our memory trackers work with sanitizers as well. So why does OOM happen?
I have no idea. Let's at least reduce the amount of noise.
Let's try to debug it. OOMs like this usually happen when we allocate a huge chunk of memory without checking that it doesn't exceed the limits. In other words, when we call:

ClickHouse/src/Interpreters/Aggregator.cpp, line 1175 in 73e98de
ClickHouse/src/Interpreters/Aggregator.cpp, line 2809 in 73e98de

At first, it logs, and after that it throws an exception in another thread. The maximum was about 1GiB, but looks like … Can it be the reason? Do we have other (less obvious) places in …
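For illustration, here is a minimal sketch of the pattern being discussed: account for an allocation against a limit *before* performing it, so a single huge reservation fails with a catchable exception instead of a hard OOM. This is not ClickHouse's actual `MemoryTracker` API; the class and method names here are assumptions for the sake of the example.

```cpp
#include <atomic>
#include <cstddef>
#include <stdexcept>

// Simplified, hypothetical memory tracker (not ClickHouse's real one).
// The key point: the limit check happens before the memory is actually
// allocated, and the accounting is rolled back if the check fails.
class MemoryTracker
{
public:
    explicit MemoryTracker(size_t limit) : limit_(limit) {}

    // Call this before every allocation. Throws if the allocation
    // would push total usage over the limit.
    void alloc(size_t size)
    {
        size_t will_be = used_.fetch_add(size) + size;
        if (will_be > limit_)
        {
            used_.fetch_sub(size);  // roll back the accounting
            throw std::runtime_error("Memory limit exceeded");
        }
    }

    // Call this after every deallocation.
    void free(size_t size) { used_.fetch_sub(size); }

    size_t used() const { return used_.load(); }

private:
    std::atomic<size_t> used_{0};
    size_t limit_;
};
```

A huge hash-table resize that bypasses such a tracker is exactly the kind of place where the process can be killed by the OS (or a sanitizer) before any limit check fires.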
tavplubix left a comment
We can merge it to suppress the issue temporarily, but the root cause must be investigated. We definitely have some bugs with memory accounting in Aggregator.