Limit number of analyze for one query #38185
alexey-milovidov merged 8 commits into ClickHouse:master
Conversation
Also, we planned to use this counter to inhibit the second analysis step.
In which cases second analysis step should be inhibited?
It was said that the second analysis step is only an optimization (although I'm not sure). If we have already made too many analyses, we can avoid further optimizations.
Also, you can remove the exponential blowup by memoizing the ASTs that have already been optimized.
Changed it to use max_pipeline_depth / 10 as the threshold to disable the second analyze, and max_pipeline_depth to cancel the query entirely.
UPD: I've decided that it's not really a good idea since it over-complicates the logic, so now we just throw an exception.
Force-pushed from 885e058 to b0d043b
@tavplubix thanks for noticing, I'll check
StorageRabbitMQ uses a global context for storage, and that's why the counter overflowed. It's probably better to find a solution other than a counter in the context, but for now I can't come up with one. See ClickHouse/src/Storages/RabbitMQ/StorageRabbitMQ.cpp, lines 1056 to 1058 at 1c0d267.
@vdimir please resubmit this PR.
@alexey-milovidov
For writing to a materialized view? Maybe just create a new query context there each time?
Ref #21557