
Reduce lock contention for MergeTree tables (by renaming parts without holding lock)#61973

Merged
alexey-milovidov merged 9 commits into ClickHouse:master from azat:mt/rename-without-lock
Jun 6, 2024

Conversation

@azat
Member

@azat azat commented Mar 27, 2024

Changelog category (leave one):

  • Not for changelog (changelog entry is not required)

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

This change was reverted

Reduce lock contention for MergeTree tables (by renaming parts without holding lock)

Under heavy load, or even moderate load with fsync_part_directory=1, the time that renameTo() holds DataPartsLock increases, and this affects almost every operation on the table.

On one of our production clusters I saw it held for ~60 seconds with fsync_part_directory=1.

Move the renameTo() out of the critical section.
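The change described above can be sketched in a few lines. Note that `TinyTable`, `commitPart()` and the string-based `renameTo()` below are hypothetical stand-ins for ClickHouse's real classes — a minimal model of moving slow I/O out of the critical section, not the actual implementation:

```cpp
#include <cassert>
#include <mutex>
#include <string>
#include <vector>

// Toy model (all names hypothetical, not ClickHouse's real API).
// Before the patch the slow rename ran while the parts mutex was held,
// blocking every other operation on the table; here the rename runs
// outside the critical section and only the in-memory publish is locked.
struct TinyTable
{
    std::mutex data_parts_mutex;       // stands in for DataPartsLock
    std::vector<std::string> parts;    // stands in for data_parts_vector

    // Simulated slow filesystem rename (with fsync_part_directory=1 the
    // real one can take tens of seconds): strips the "tmp_" prefix.
    static std::string renameTo(const std::string & tmp_name)
    {
        return tmp_name.substr(std::string("tmp_").size());
    }

    void commitPart(const std::string & tmp_name)
    {
        // 1. Slow I/O first, with no lock held.
        std::string final_name = renameTo(tmp_name);

        // 2. Only the cheap in-memory publish happens under the lock.
        std::lock_guard<std::mutex> lock(data_parts_mutex);
        parts.push_back(final_name);
    }
};
```

Only step 2 contends on the mutex, so a slow fsync-backed rename no longer blocks other readers and writers of the parts set.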

@azat azat changed the title from "[RFC] Rename parts without holding DataPartsLock" to "[RFC] Reduce lock contention for MergeTree tables (by renaming parts without holding DataPartsLock)" Mar 27, 2024
@robot-ch-test-poll4 robot-ch-test-poll4 added the pr-performance Pull request with some performance improvements label Mar 27, 2024
@robot-ch-test-poll4
Contributor

robot-ch-test-poll4 commented Mar 27, 2024

This is an automated comment for commit 6cfd5b2 with description of existing statuses. It's updated for the latest CI running

❌ Click here to open a full report in a separate page

Check name | Description | Status
A Sync | If it fails, ask a maintainer for help | ⏳ pending
CI running | A meta-check that indicates the running CI. Normally, it's in success or pending state. The failed status indicates some problems with the PR | ❌ failure
Integration tests | The integration tests report. In parenthesis the package type is given, and in square brackets are the optional part/total tests | ❌ failure
Mergeable Check | Checks if all other necessary checks are successful | ⏳ pending
Performance Comparison | Measure changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ❌ failure
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ❌ failure
Upgrade check | Runs stress tests on server version from last release and then tries to upgrade it to the version from the PR. It checks if the new server can successfully start up without any errors, crashes or sanitizer asserts | ❌ failure

Successful checks
Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parenthesis. If it fails, ask a maintainer for help | ✅ success
ClickBench | Runs [ClickBench](https://github.com/ClickHouse/ClickBench/) with instant-attach table | ✅ success
ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log, grepping for cmake. Use these options and follow the general build process | ✅ success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success
Docker keeper image | The check to build and optionally push the mentioned image to docker hub | ✅ success
Docker server image | The check to build and optionally push the mentioned image to docker hub | ✅ success
Docs check | Builds and tests the documentation | ✅ success
Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success
Flaky tests | Checks if newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer, and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails even once, or runs too long, this check will be red. We don't allow flaky tests, read the doc | ✅ success
Install packages | Checks that the built packages are installable in a clean environment | ✅ success
PR Check | Checks correctness of the PR's body | ✅ success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success
Style check | Runs a set of checks to keep the code style clean. If some of the tests fail, see the related log from the report | ✅ success
Unit tests | Runs the unit tests for different release types | ✅ success

@azat azat force-pushed the mt/rename-without-lock branch from 3c10ef3 to 21e9798 on March 27, 2024 11:39
@azat azat changed the title from "[RFC] Reduce lock contention for MergeTree tables (by renaming parts without holding DataPartsLock)" to "[RFC] Reduce lock contention for MergeTree tables (by renaming parts without holding lock)" Mar 27, 2024
@azat azat marked this pull request as draft March 27, 2024 12:51
@azat azat force-pushed the mt/rename-without-lock branch from 21e9798 to 757c78f on March 27, 2024 20:16
@azat azat marked this pull request as ready for review March 27, 2024 20:16
@azat
Member Author

azat commented Mar 28, 2024

Likely related test failures:

Stateless tests (asan) [2/4] — Tests are not finished, fail: 1, passed: 383, skipped: 6
Stateless tests (tsan) [5/5] — Tests are not finished, fail: 3, passed: 302, skipped: 2
Stateless tests (tsan, s3 storage) [5/5] — Tests are not finished, fail: 2, passed: 298, skipped: 4

2024.03.28 06:23:18.782173 [ 937 ] {} <Fatal> : Logical error: 'Trying to remove detached part detached/attaching_all_21_21_0 with path store/b9e/b9ea0283-b7ac-48cd-ab64-54e326a59f96/ in remove function. It shouldn't happen'.

Unrelated test failures:

Stress test (asan) — Found signal in gdb.log

Stateless tests (asan) [1/4] — fail: 1, passed: 1579, skipped: 17
Stateless tests (debug, s3 storage) [5/6] — fail: 1, passed: 1047, skipped: 27
Stateless tests (msan) [5/6] — fail: 1, passed: 1066, skipped: 8

Stateless tests (coverage) [2/6] — Some queries hung, fail: 1, passed: 1021, skipped: 3

Stateless tests (release) — fail: 1, passed: 6280, skipped: 9

@azat azat marked this pull request as draft March 28, 2024 19:34
@azat
Member Author

azat commented Mar 29, 2024

Related:

Stateless tests (coverage) [4/6] — fail: 1, passed: 1092, skipped: 2

  • 00993_system_parts_race_condition_drop_zookeeper
2024-03-29 19:57:24 [d922bb02cd61] 2024.03.29 00:56:54.586084 [ 57841 ] {bf240267-0620-4294-afc1-479c58e6be89} <Error> executeQuery: std::exception. Code: 1001, type: std::__1::__fs::filesystem::filesystem_error, e.what() = filesystem error: in file_size: No such file or directory ["/var/lib/clickhouse/store/ea7/ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95/-8_0_138_2_2/data.cmrk3"]

Unrelated:

Stateless tests (tsan) [3/5] — Tests are not finished, fail: 2, passed: 483, skipped: 6
Stateless tests (msan) [3/6] — Tests are not finished, fail: 1, passed: 367, skipped: 7
Stateless tests (debug) [3/5] — Tests are not finished, fail: 5, passed: 484, skipped: 7
Stateless tests (asan) [1/4] — Tests are not finished, fail: 2, passed: 569, skipped: 8

#62062

@azat azat force-pushed the mt/rename-without-lock branch from 8878b54 to adcf7d5 on April 2, 2024 15:33
@azat
Member Author

azat commented Apr 2, 2024

Still some issues:

Stateless tests (debug) [5/5] — Tests are not finished, fail: 9, passed: 361, skipped: 2

2024.04.03 02:03:25.724784 [ 1120 ] {} <Fatal> : Logical error: 'Trying to remove detached part detached/all_200_200_0 with path store/6f2/6f24250b-14de-45e1-8b44-f0f25710765d/ in remove function. It shouldn't happen'.

@azat azat force-pushed the mt/rename-without-lock branch 3 times, most recently from 5c1e6a3 to e08e975, on April 11, 2024 12:14
@azat azat marked this pull request as ready for review April 11, 2024 14:43
@azat
Member Author

azat commented Apr 11, 2024

Stateless tests (release, old analyzer, s3, DatabaseReplicated) [4/4] — fail: 1, passed: 1539, skipped: 27

  • 03093_bug37909_query_does_not_finish - fixed in 4f6b6e3

@azat
Member Author

azat commented Apr 15, 2024

This one is ready. Can someone take a look please?

I have some doubts about the fact that the part is now added to the data_parts_vector before the rename, which may lead to removing detached parts and triggering LOGICAL_ERROR. I've fixed that in the rollback code: if the part is a detached part, it is reverted to the temporary state instead of being removed. But in general, renaming under the lock is very bad, since the lock can be held far longer than it should be, and during that time you can't work with the table at all.
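The rollback rule mentioned in the comment can be sketched as follows; the `PartState` enum and `rollbackState()` are hypothetical, simplified stand-ins for the real part-state machinery:

```cpp
#include <cassert>

// Minimal sketch (hypothetical types, not ClickHouse's real ones).
// Because a part is now published to the in-memory set before its
// directory is renamed, rollback must not delete a part that is still
// under detached/ -- that would trip the "Trying to remove detached part"
// LOGICAL_ERROR. It is demoted back to the temporary state instead,
// and the regular temp-directory cleanup handles it later.
enum class PartState { Temporary, Detached, Active };

PartState rollbackState(PartState current)
{
    if (current == PartState::Detached)
        return PartState::Temporary; // revert, don't remove
    return current;
}
```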

@azat
Member Author

azat commented Apr 29, 2024

Can someone take a look at this one?

@alexey-milovidov alexey-milovidov changed the title [RFC] Reduce lock contention for MergeTree tables (by renaming parts without holding lock) Reduce lock contention for MergeTree tables (by renaming parts without holding lock) May 9, 2024
@CurtizJ CurtizJ self-assigned this May 9, 2024
@azat
Member Author

azat commented May 28, 2024

Kind ping

@alexey-milovidov
Member

@azat, need to fix all the tests.

@azat
Member Author

azat commented Jun 5, 2024

@alexey-milovidov the test failures don't look related, but the branch is quite old, so I will at least do a rebase.

@azat azat force-pushed the mt/rename-without-lock branch from e08e975 to ec2b3a3 on June 5, 2024 08:20
@azat
Member Author

azat commented Jun 5, 2024

ClickHouse special build check — 12/13 artifact groups are OK

@azat azat force-pushed the mt/rename-without-lock branch from ec2b3a3 to ffa2ab7 on June 5, 2024 12:47
@azat
Member Author

azat commented Jun 5, 2024

Stateless tests (release) — fail: 1, passed: 6629, skipped: 13

@azat azat mentioned this pull request Jun 5, 2024
azat added 9 commits June 5, 2024 19:38
…t holding lock)

Under heavy load, or not so heavy but with fsync_part_directory=1,
time that renameTo() holds DataPartsLock will be increased, and this
will affect almost every operation with this table.

On one of production clusters I saw ~60 seconds with
fsync_part_directory=1.

Move the renameTo() out from the critical section.

v2: instead of using DataPartsLock.lock.lock()/unlock() move the renameTo() into MergeTreeData::Transaction::commit()
Signed-off-by: Azat Khuzhin <[email protected]>
There is an assertion that does not allow removing detached parts during cleanup; that sounds good in general, but it breaks this new code.

Signed-off-by: Azat Khuzhin <[email protected]>
The problem was that with this patch set renameMergedTemporaryPart() is called without temporary_directory_lock held (in MergeTask), since the lock is reset just before renameMergedTemporaryPart() is called, and this can be seen in the logs:

    2024.03.29 19:56:42.126919 [ 1341 ] {ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95::-8_0_138_2_2} <Trace> test_btnct5cr.alter_table_0 (ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95) (MergerMutator): Merged 50 parts: [-8_0_0_0_2, -8_138_138_0] -> -8_0_138_2_2
    2024.03.29 19:56:42.127034 [ 1341 ] {ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95::-8_0_138_2_2} <Debug> test_btnct5cr.alter_table_0 (ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95): Committing part -8_0_138_2_2 to zookeeper
    2024.03.29 19:56:42.128462 [ 884 ] {} <Warning> test_btnct5cr.alter_table_0 (ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95): Removing temporary directory /var/lib/clickhouse/store/ea7/ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95/tmp_merge_-8_0_138_2_2/
    2024.03.29 19:56:42.128647 [ 1341 ] {ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95::-8_0_138_2_2} <Debug> test_btnct5cr.alter_table_0 (ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95): Part -8_0_138_2_2 committed to zookeeper
    ...
    2024.03.29 19:56:54.586084 [ 57841 ] {bf240267-0620-4294-afc1-479c58e6be89} <Error> executeQuery: std::exception. Code: 1001, type: std::__1::__fs::filesystem::filesystem_error, e.what() = filesystem error: in file_size: No such file or directory ["/var/lib/clickhouse/store/ea7/ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95/-8_0_138_2_2/data.cmrk3"]

This should fix failures of 00993_system_parts_race_condition_drop_zookeeper in [1].

  [1]: https://s3.amazonaws.com/clickhouse-test-reports/61973/f6f826c85dd5b7bb8db16286fd10dcf441a440f7/stateless_tests__coverage__[4_6].html

Though now it looks hackish...

Signed-off-by: Azat Khuzhin <[email protected]>
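The race in the commit message above can be reduced to an ordering problem between the background cleanup thread and the rename. The sketch below is a toy model with hypothetical names (`Dirs`, `buggyCommit`, `fixedCommit`), not ClickHouse code:

```cpp
#include <cassert>

// The scoped "temporary directory lock" keeps the background cleanup
// thread away from tmp_merge_*; releasing it before the rename lets the
// cleaner delete the very directory the rename is about to move.
struct Dirs
{
    bool tmp_exists = true;  // tmp_merge_<part>/ is on disk
    bool tmp_locked = true;  // temporary_directory_lock is held
};

// Background cleaner: removes unlocked temporary directories.
void cleanupThread(Dirs & d)
{
    if (!d.tmp_locked)
        d.tmp_exists = false;
}

// The rename succeeds only if the temporary directory still exists.
bool renameMergedTemporaryPart(Dirs & d)
{
    if (!d.tmp_exists)
        return false;
    d.tmp_exists = false; // moved to its final name
    return true;
}

// Buggy order: the lock is reset first, so the cleaner can win the race.
bool buggyCommit(Dirs d)
{
    d.tmp_locked = false;
    cleanupThread(d);
    return renameMergedTemporaryPart(d);
}

// Fixed order: the lock is held until the rename is done.
bool fixedCommit(Dirs d)
{
    cleanupThread(d);
    bool ok = renameMergedTemporaryPart(d);
    d.tmp_locked = false;
    return ok;
}
```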
…d_parts_need_rename

CI found [1]:

    Logical error: 'precommitted_parts.size() >= precommitted_parts_need_rename.size()'

  [1]: https://s3.amazonaws.com/clickhouse-test-reports/61973/5c1e6a3e956917bdbb7eaa467934e5b75f17a923/stateless_tests__tsan__s3_storage__[5_5].html

The problem is that after precommitted_parts is cleaned of detached parts it may be smaller than precommitted_parts_need_rename, so to avoid this, let's just copy it to a new container.

Signed-off-by: Azat Khuzhin <[email protected]>
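The fix from the commit message above can be sketched as follows; `partsToRename()` and the `Parts` alias are hypothetical, assuming simple string part names:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch: compute the list of parts that still need a rename
// as an independent copy, so that later cleaning of precommitted_parts
// (dropping detached parts) can no longer make it smaller than the
// need-rename list and trip the size assertion
// 'precommitted_parts.size() >= precommitted_parts_need_rename.size()'.
using Parts = std::vector<std::string>;

Parts partsToRename(const Parts & precommitted, const Parts & detached)
{
    Parts need_rename;
    for (const auto & part : precommitted)
        if (std::find(detached.begin(), detached.end(), part) == detached.end())
            need_rename.push_back(part);
    // An independent container: erasing detached parts from precommitted
    // afterwards cannot invalidate it, so the size invariant keeps holding.
    return need_rename;
}
```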
@azat azat force-pushed the mt/rename-without-lock branch from ffa2ab7 to 6cfd5b2 on June 5, 2024 17:38
@alexey-milovidov alexey-milovidov merged commit ce244e1 into ClickHouse:master Jun 6, 2024
@azat azat deleted the mt/rename-without-lock branch June 6, 2024 03:57
@azat
Member Author

azat commented Jun 6, 2024

Stateless tests (debug, s3 storage) [3/6] fail: 1, passed: 1071, skipped: 24

Stateless tests (tsan, s3 storage) [1/5] fail: 1, passed: 1329, skipped: 24

@alexey-milovidov
Member

@azat, please resubmit.


Labels

pr-not-for-changelog This PR should not be mentioned in the changelog pr-performance Pull request with some performance improvements pr-synced-to-cloud The PR is synced to the cloud repo


6 participants