[RFC] Significantly reduce memory usage in AggregatingInOrderTransform#15543

Merged
CurtizJ merged 2 commits into ClickHouse:master from azat:optimize_aggregation_in_order-fix-memory-usage
Oct 5, 2020

Conversation

@azat
Member

@azat azat commented Oct 2, 2020

Changelog category (leave one):

  • Bug Fix

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
Significantly reduce memory usage in AggregatingInOrderTransform/optimize_aggregation_in_order

Detailed description / Documentation draft:
Clean the aggregates pools (Arena objects) between flushes. This reduces
memory usage significantly, since an Arena is not intended for memory
reuse: once its aggregate states have been flushed into already-full
Chunks, they are no longer needed, but the pool keeps growing.

Before this patch you could not run SELECT FROM huge_table GROUP BY
primary_key SETTINGS optimize_aggregation_in_order=1 (so the whole
point of optimize_aggregation_in_order was lost); after it, this
should be possible.

P.S. Marked as a bug fix, since it seems this change should be backported (I don't personally need that, though, so feel free to adjust it as you see fit).

Refs: #9113

Details

HEAD:

  • 8195a309dfd4db2f9c906c322cb9a001008d6c6c w/ log message
  • 2a2f858 w/o log message

@robot-clickhouse robot-clickhouse added the pr-bugfix Pull request with bugfix, not backported by default label Oct 2, 2020
@azat azat force-pushed the optimize_aggregation_in_order-fix-memory-usage branch 2 times, most recently from f6cd61e to 8195a30 Compare October 2, 2020 20:00
azat added 2 commits October 3, 2020 00:56
@azat azat force-pushed the optimize_aggregation_in_order-fix-memory-usage branch from 8195a30 to 2a2f858 Compare October 2, 2020 21:57
Member

@CurtizJ CurtizJ left a comment


LGTM

@CurtizJ
Member

CurtizJ commented Oct 5, 2020

The test 'read_in_order_many_parts' ("too slow to run as a whole; investigate whether the create and fill queries can be sped up") looks suspicious, but it's already broken in master. Actually, this test wasn't slowed down by this change.

@CurtizJ CurtizJ self-assigned this Oct 5, 2020
@CurtizJ CurtizJ merged commit a49591b into ClickHouse:master Oct 5, 2020
robot-clickhouse pushed a commit that referenced this pull request Oct 5, 2020
@azat azat deleted the optimize_aggregation_in_order-fix-memory-usage branch October 5, 2020 20:19
CurtizJ added a commit that referenced this pull request Oct 6, 2020
Backport #15543 to 20.7: [RFC] Significantly reduce memory usage in AggregatingInOrderTransform
CurtizJ added a commit that referenced this pull request Oct 6, 2020
Backport #15543 to 20.8: [RFC] Significantly reduce memory usage in AggregatingInOrderTransform
CurtizJ added a commit that referenced this pull request Oct 6, 2020
Backport #15543 to 20.9: [RFC] Significantly reduce memory usage in AggregatingInOrderTransform
