
Lock-free parts rename for ReplicatedMergeTree to avoid INSERT affect SELECT (resubmit)#64955

Merged
alexey-milovidov merged 9 commits into ClickHouse:master from azat:mt/rename-without-lock-v2
Sep 22, 2024

Conversation

@azat
Member

@azat azat commented Jun 7, 2024

Changelog category (leave one):

  • Performance Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Lock-free parts rename to prevent INSERTs from slowing down SELECTs (due to the parts lock). Under normal circumstances with fsync_part_directory, the QPS of SELECTs running in parallel with INSERTs increased 2x; under heavy load the effect is even bigger. Note that this only covers ReplicatedMergeTree for now.

Under heavy load, or even moderate load with fsync_part_directory=1,
a rename can take longer than usual, and this increases the time that
DataPartsLock is held. DataPartsLock is the bottleneck for MergeTree:
every SELECT must acquire it to obtain the list of parts.

On one of our production clusters I saw renames take ~60 seconds with
fsync_part_directory=1.

So this patch moves the rename out of the critical section by renaming
the parts explicitly, which also makes it clearer that the rename is done
without holding the lock.

Note that this patch fixes the bottleneck only for ReplicatedMergeTree;
plain MergeTree is not that simple and should be handled separately.

Benchmark

Patched

┌─log_comment─────────────────────────────┬───QPS─┬─queries─┬────lock_wait_avg_us─┬─duration_q95─┐
│ lockfree-rename.90977.mt_without_fsync  │ 384.3 │    3843 │ 0.41217798594847777 │            2 │
│ lockfree-rename.90977.mt_with_fsync     │ 422.2 │    4222 │ 0.06395073424917101 │            2 │ # this part had been reverted!
│ lockfree-rename.90977.rmt_without_fsync │ 368.6 │    3686 │  1.9834508952794356 │            2 │
│ lockfree-rename.90977.rmt_with_fsync    │   417 │    4170 │ 0.22853717026378897 │            2 │
└─────────────────────────────────────────┴───────┴─────────┴─────────────────────┴──────────────┘

Upstream

┌─log_comment─────────────────────────────┬───QPS─┬─queries─┬───lock_wait_avg_us─┬─duration_q95─┐
│ lockfree-rename.91765.mt_with_fsync     │ 196.2 │    1962 │  2436.145259938838 │           19 │
│ lockfree-rename.91765.rmt_with_fsync    │ 203.9 │    2039 │ 2214.7631191760665 │           18 │
│ lockfree-rename.91765.rmt_without_fsync │ 394.6 │    3946 │  2.725798276735935 │            2 │
│ lockfree-rename.91765.mt_without_fsync  │ 411.2 │    4112 │  15.97932879377432 │            2 │
└─────────────────────────────────────────┴───────┴─────────┴────────────────────┴──────────────┘

Original - https://gist.github.com/azat/7df0643cebb4ca9835d97734b18647b3

Original PR: #61973 (cc @alexey-milovidov )
Revert: #64899 (cc @alesapin )

@robot-ch-test-poll robot-ch-test-poll added the pr-performance Pull request with some performance improvements label Jun 7, 2024
@robot-ch-test-poll
Contributor

robot-ch-test-poll commented Jun 7, 2024

This is an automated comment for commit 617033f with description of existing statuses. It's updated for the latest CI running

✅ Click here to open a full report in a separate page

Successful checks
Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parenthesis. If it fails, ask a maintainer for help | ✅ success
Builds | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success
ClickBench | Runs [ClickBench](https://github.com/ClickHouse/ClickBench/) with instant-attach table | ✅ success
Compatibility check | Checks that clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success
Docker keeper image | The check to build and optionally push the mentioned image to docker hub | ✅ success
Docker server image | The check to build and optionally push the mentioned image to docker hub | ✅ success
Docs check | Builds and tests the documentation | ✅ success
Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success
Flaky tests | Checks if newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer, and additional randomization of thread scheduling. Integration tests are run up to 10 times. If at least once a new test has failed, or was too long, this check will be red. We don't allow flaky tests, read the doc | ✅ success
Install packages | Checks that the built packages are installable in a clean environment | ✅ success
Integration tests | The integration tests report. In parenthesis the package type is given, and in square brackets are the optional part/total tests | ✅ success
Performance Comparison | Measure changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ✅ success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success
Style check | Runs a set of checks to keep the code style clean. If some of tests failed, see the related log from the report | ✅ success
Unit tests | Runs the unit tests for different release types | ✅ success
Upgrade check | Runs stress tests on server version from last release and then tries to upgrade it to the version from the PR. It checks if the new server can successfully startup without any errors, crashes or sanitizer asserts | ✅ success

@azat azat force-pushed the mt/rename-without-lock-v2 branch from 036dd31 to 8c07fe2 Compare June 7, 2024 13:10
@azat azat changed the title Reduce lock contention for MergeTree tables (by renaming parts without holding lock) (resubmit) Lockfree parts rename to avoid INSERT affect SELECT (due to parts lock) (resubmit) Jun 7, 2024
@azat azat changed the title Lockfree parts rename to avoid INSERT affect SELECT (due to parts lock) (resubmit) Lock-free parts rename to avoid INSERT affect SELECT (due to parts lock) (resubmit) Jun 7, 2024
@azat
Member Author

azat commented Jun 7, 2024

I've added a benchmark to the description. It shows that even without any other load (the ideal scenario), the QPS of SELECTs executed in parallel with INSERTs on a table with fsync_part_directory=1 decreases ~2x (q95 goes from 2ms to 19ms, and waiting for the data parts lock grows to ~2ms).

And this is without any other load; the numbers will get worse under load.

Also note that under load you can hit problems even without fsync_part_directory, since a rename may get stuck as well.

@azat azat force-pushed the mt/rename-without-lock-v2 branch from 8c07fe2 to f8f66b0 Compare June 10, 2024 20:08
@azat
Member Author

azat commented Jun 12, 2024

The test failures are related:

  • 00276_sample
  • 00054_merge_tree_partitions

@azat azat marked this pull request as draft June 12, 2024 09:54
@azat azat force-pushed the mt/rename-without-lock-v2 branch from f8f66b0 to 70691ed Compare June 12, 2024 18:03
@azat azat marked this pull request as ready for review June 12, 2024 18:04
Comment on lines 189 to 202
Member Author


This was the problem; for now, I've reverted the lock-free renames for plain MergeTree

@azat azat changed the title Lock-free parts rename to avoid INSERT affect SELECT (due to parts lock) (resubmit) Lock-free parts rename for ReplicatedMergeTree to avoid INSERT affect SELECT (resubmit) Jun 12, 2024
@azat
Member Author

azat commented Jun 27, 2024

Can someone take a look please?

@alexey-milovidov
Member

The error in 02479_mysql_connect_to_self is haunting me - either our MySQL timeouts in tests are too low and it is normal for the connection to sometimes take a long time, or the MySQL library does not handle EINTR from the profiler, or something else. Let's take a look.

@alexey-milovidov
Member

I don't understand why Unit tests failed... we have to check.

@azat
Member Author

azat commented Jun 30, 2024

I don't understand why Unit tests failed... we have to check.

a) it is a bug in gdb (images have gdb 12.1) - https://sourceware.org/bugzilla/show_bug.cgi?id=29185
b) it is likely some kind of infrastructure issue with the unit tests; maybe the spot instance got an ACPI shutdown signal? If so, the CI retry logic should be improved

@alexey-milovidov
Member

@azat, then make a pull request with update to a new gdb version.

@azat
Member Author

azat commented Jun 30, 2024

@azat, then make a pull request with update to a new gdb version.

I don't think that will help, since for some reason the process got SIGTERM, and my guess is an ACPI shutdown, as I wrote; the CI retry logic should be improved.

@azat
Member Author

azat commented Jun 30, 2024

We need to fix this test: https://s3.amazonaws.com/clickhouse-test-reports/64955/70691ed3e9ff39d0b4f02e55319e3054e55ad933/stateless_tests__debug__[1_5].html

I will take a look, but it has nothing to do with these changes; it is better to track it here - #50911

@alexey-milovidov
Member

Let's resolve conflicts and merge (if it is mergeable).

@azat azat force-pushed the mt/rename-without-lock-v2 branch from 70691ed to e61e13c Compare June 30, 2024 20:56
@azat
Member Author

azat commented Jul 1, 2024

The test failures do not look related:

Stateless tests (coverage) [6/6] — fail: 1, passed: 1094, skipped: 3

  • 02922_deduplication_with_zero_copy - Can't get data for node '/clickhouse/zero_copy/zero_copy_s3/ae122e6d-0db8-467e-8de8-bee4ac5c92ec/all_1_1_0/s3_xxs_jxivwnhdvwcjoryhuhblqxuygidgz/1': node doesn't exist

Stateless tests (debug) [4/5] — fail: 1, passed: 1291, skipped: 19

  • 02834_remote_session_log - mysqlxx::ConnectionLost: Lost connection to MySQL server during query

Stateless tests (debug, s3 storage) [6/6] — fail: 9, passed: 979, skipped: 15

2024.06.30 20:15:42.857905 [ 2417 ] {9c733732-76a1-4fc5-81e2-643cd76721f9::2_2_2_1_4} <Fatal> : Logical error: 'Expected ReadBufferFromFile, but got DB::EmptyReadBuffer'.
Details
2024.06.30 20:15:42.857905 [ 2417 ] {9c733732-76a1-4fc5-81e2-643cd76721f9::2_2_2_1_4} <Fatal> : Logical error: 'Expected ReadBufferFromFile, but got DB::EmptyReadBuffer'.
2024.06.30 20:50:10.124384 [ 50498 ] {} <Fatal> BaseDaemon: ########## Short fault info ############
2024.06.30 20:50:10.124668 [ 50498 ] {} <Fatal> BaseDaemon: (version 24.7.1.656, build id: 75E1A5225127BB742190A3760D8AB4BBFC75B1C2, git hash: 403e062d4ff62c3b155528ba35bff5963c31aee4) (from thread 2417) Received signal 6
2024.06.30 20:50:10.124905 [ 50498 ] {} <Fatal> BaseDaemon: Signal description: Aborted
2024.06.30 20:50:10.125037 [ 50498 ] {} <Fatal> BaseDaemon: 
2024.06.30 20:50:10.125227 [ 50498 ] {} <Fatal> BaseDaemon: Stack trace: 0x000000001388a6aa 0x000000001388a5a0 0x0000000013cbb151 0x00007f7002d04520 0x00007f7002d589fd 0x00007f7002d04476 0x00007f7002cea7f3 0x000000001382ae82 0x000000001382aff5 0x000000001382b5a0 0x000000000ab6220a 0x000000000ab5f9e9 0x000000000ab5f7e5 0x000000001cff18fd 0x000000001cff6398 0x000000001cff5e92 0x000000001d5ce900 0x000000001d589824 0x000000001d5896ef 0x000000001d58f8ef 0x000000001c059385 0x000000001bfe8dd7 0x000000001bfe32ca 0x000000001d9366c7 0x000000001d94ba3f 0x0000000022cbe299 0x0000000022cbeae8 0x0000000022c49c8e 0x0000000022c4641a 0x0000000022c4515e 0x00007f7002d56ac3 0x00007f7002de8850
2024.06.30 20:50:10.125434 [ 50498 ] {} <Fatal> BaseDaemon: ########################################
2024.06.30 20:50:10.125774 [ 50498 ] {} <Fatal> BaseDaemon: (version 24.7.1.656, build id: 75E1A5225127BB742190A3760D8AB4BBFC75B1C2, git hash: 403e062d4ff62c3b155528ba35bff5963c31aee4) (from thread 2417) (query_id: 9c733732-76a1-4fc5-81e2-643cd76721f9::2_2_2_1_4) (query: ) Received signal Aborted (6)
2024.06.30 20:50:10.126034 [ 50498 ] {} <Fatal> BaseDaemon: 
2024.06.30 20:50:10.126135 [ 50498 ] {} <Fatal> BaseDaemon: Stack trace: 0x000000001388a6aa 0x000000001388a5a0 0x0000000013cbb151 0x00007f7002d04520 0x00007f7002d589fd 0x00007f7002d04476 0x00007f7002cea7f3 0x000000001382ae82 0x000000001382aff5 0x000000001382b5a0 0x000000000ab6220a 0x000000000ab5f9e9 0x000000000ab5f7e5 0x000000001cff18fd 0x000000001cff6398 0x000000001cff5e92 0x000000001d5ce900 0x000000001d589824 0x000000001d5896ef 0x000000001d58f8ef 0x000000001c059385 0x000000001bfe8dd7 0x000000001bfe32ca 0x000000001d9366c7 0x000000001d94ba3f 0x0000000022cbe299 0x0000000022cbeae8 0x0000000022c49c8e 0x0000000022c4641a 0x0000000022c4515e 0x00007f7002d56ac3 0x00007f7002de8850
2024.06.30 20:50:10.483055 [ 50498 ] {} <Fatal> BaseDaemon: 0. /build/src/Common/StackTrace.cpp:349: StackTrace::tryCapture() @ 0x000000001388a6aa
2024.06.30 20:50:10.676125 [ 50498 ] {} <Fatal> BaseDaemon: 1. /build/src/Common/StackTrace.cpp:323: StackTrace::StackTrace(ucontext_t const&) @ 0x000000001388a5a0
2024.06.30 20:50:11.022430 [ 50498 ] {} <Fatal> BaseDaemon: 2. /build/src/Daemon/BaseDaemon.cpp:158: signalHandler(int, siginfo_t*, void*) @ 0x0000000013cbb151
2024.06.30 20:50:11.022684 [ 50498 ] {} <Fatal> BaseDaemon: 3. ? @ 0x00007f7002d04520
2024.06.30 20:50:11.022830 [ 50498 ] {} <Fatal> BaseDaemon: 4. ? @ 0x00007f7002d589fd
2024.06.30 20:50:11.022979 [ 50498 ] {} <Fatal> BaseDaemon: 5. ? @ 0x00007f7002d04476
2024.06.30 20:50:11.023118 [ 50498 ] {} <Fatal> BaseDaemon: 6. ? @ 0x00007f7002cea7f3
2024.06.30 20:50:11.257525 [ 50498 ] {} <Fatal> BaseDaemon: 7. /build/src/Common/Exception.cpp:0: DB::abortOnFailedAssertion(String const&) @ 0x000000001382ae82
2024.06.30 20:50:11.506572 [ 50498 ] {} <Fatal> BaseDaemon: 8. /build/src/Common/Exception.cpp:65: DB::handle_error_code(String const&, int, bool, std::vector<void*, std::allocator<void*>> const&) @ 0x000000001382aff5
2024.06.30 20:50:11.751451 [ 50498 ] {} <Fatal> BaseDaemon: 9. /build/src/Common/Exception.cpp:105: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001382b5a0
2024.06.30 20:50:11.899275 [ 50498 ] {} <Fatal> BaseDaemon: 10. /build/src/Common/Exception.h:95: DB::Exception::Exception(String&&, int, bool) @ 0x000000000ab6220a
2024.06.30 20:50:12.097911 [ 50498 ] {} <Fatal> BaseDaemon: 11. /build/src/Common/Exception.h:68: DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000ab5f9e9
2024.06.30 20:50:12.277556 [ 50498 ] {} <Fatal> BaseDaemon: 12. /build/src/Common/Exception.h:113: DB::Exception::Exception<String>(int, FormatStringHelperImpl<std::type_identity<String>::type>, String&&) @ 0x000000000ab5f7e5
2024.06.30 20:50:13.185144 [ 50498 ] {} <Fatal> BaseDaemon: 13. /build/src/Storages/MergeTree/MergeTask.cpp:562: DB::MergeTask::VerticalMergeStage::prepareVerticalMergeForAllColumns() const @ 0x000000001cff18fd
2024.06.30 20:50:13.950082 [ 50498 ] {} <Fatal> BaseDaemon: 14. /build/src/Storages/MergeTree/MergeTask.cpp:891: DB::MergeTask::VerticalMergeStage::execute() @ 0x000000001cff6398
2024.06.30 20:50:14.750855 [ 50498 ] {} <Fatal> BaseDaemon: 15. /build/src/Storages/MergeTree/MergeTask.cpp:951: DB::MergeTask::execute() @ 0x000000001cff5e92
2024.06.30 20:50:14.892867 [ 50498 ] {} <Fatal> BaseDaemon: 16. /build/src/Storages/MergeTree/MergePlainMergeTreeTask.cpp:57: DB::MergePlainMergeTreeTask::executeStep() @ 0x000000001d5ce900
2024.06.30 20:50:15.787768 [ 50498 ] {} <Fatal> BaseDaemon: 17. /build/src/Storages/MergeTree/MergePlainMergeTreeTask.h:104: DB::executeHere(std::shared_ptr<DB::MergePlainMergeTreeTask>) @ 0x000000001d589824
2024.06.30 20:50:16.500116 [ 50498 ] {} <Fatal> BaseDaemon: 18. /build/src/Storages/StorageMergeTree.cpp:1170: DB::StorageMergeTree::merge(bool, String const&, bool, bool, std::vector<String, std::allocator<String>> const&, bool, std::shared_ptr<DB::MergeTreeTransaction> const&, PreformattedMessage&, bool) @ 0x000000001d5896ef
2024.06.30 20:50:17.129328 [ 50498 ] {} <Fatal> BaseDaemon: 19. /build/src/Storages/StorageMergeTree.cpp:1582: DB::StorageMergeTree::optimize(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::IAST> const&, bool, bool, std::vector<String, std::allocator<String>> const&, bool, std::shared_ptr<DB::Context const>) @ 0x000000001d58f8ef
2024.06.30 20:50:17.232362 [ 50498 ] {} <Fatal> BaseDaemon: 20. /build/src/Interpreters/InterpreterOptimizeQuery.cpp:84: DB::InterpreterOptimizeQuery::execute() @ 0x000000001c059385
2024.06.30 20:50:17.655183 [ 50498 ] {} <Fatal> BaseDaemon: 21. /build/src/Interpreters/executeQuery.cpp:1225: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000001bfe8dd7
2024.06.30 20:50:18.105156 [ 50498 ] {} <Fatal> BaseDaemon: 22. /build/src/Interpreters/executeQuery.cpp:1391: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000001bfe32ca
2024.06.30 20:50:18.595800 [ 50498 ] {} <Fatal> BaseDaemon: 23. /build/src/Server/TCPHandler.cpp:521: DB::TCPHandler::runImpl() @ 0x000000001d9366c7
2024.06.30 20:50:19.135275 [ 50498 ] {} <Fatal> BaseDaemon: 24. /build/src/Server/TCPHandler.cpp:2353: DB::TCPHandler::run() @ 0x000000001d94ba3f
2024.06.30 20:50:19.169984 [ 50498 ] {} <Fatal> BaseDaemon: 25. /build/base/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x0000000022cbe299
2024.06.30 20:50:19.221942 [ 50498 ] {} <Fatal> BaseDaemon: 26. /build/base/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x0000000022cbeae8
2024.06.30 20:50:19.431616 [ 50498 ] {} <Fatal> BaseDaemon: 27. /build/base/poco/Foundation/src/ThreadPool.cpp:205: Poco::PooledThread::run() @ 0x0000000022c49c8e
2024.06.30 20:50:19.484743 [ 50498 ] {} <Fatal> BaseDaemon: 28. /build/base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x0000000022c4641a
2024.06.30 20:50:19.534619 [ 50498 ] {} <Fatal> BaseDaemon: 29. /build/base/poco/Foundation/src/Thread_POSIX.cpp:335: Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000022c4515e
2024.06.30 20:50:19.535514 [ 50498 ] {} <Fatal> BaseDaemon: 30. ? @ 0x00007f7002d56ac3
2024.06.30 20:50:19.535632 [ 50498 ] {} <Fatal> BaseDaemon: 31. ? @ 0x00007f7002de8850
2024.06.30 20:50:19.535745 [ 50498 ] {} <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
2024.06.30 20:50:26.610739 [ 50498 ] {} <Fatal> BaseDaemon: This ClickHouse version is not official and should be upgraded to the official build.
2024.06.30 20:50:26.611218 [ 50498 ] {} <Fatal> BaseDaemon: Changed settings: connect_timeout_with_failover_ms = 2000, connect_timeout_with_failover_secure_ms = 3000, idle_connection_timeout = 36000, s3_check_objects_after_upload = true, stream_like_engine_allow_direct_select = true, replication_wait_for_inactive_replica_timeout = 30, allow_nonconst_timezone_arguments = true, log_queries = true, insert_quorum_timeout = 60000, fsync_metadata = false, http_send_timeout = 60., http_receive_timeout = 60., opentelemetry_start_trace_probability = 0.10000000149011612, max_untracked_memory = 1048576, memory_profiler_step = 1048576, allow_introspection_functions = true, database_atomic_wait_for_drop_and_detach_synchronously = true, distributed_ddl_entry_format_version = 6, async_insert_busy_timeout_max_ms = 5000, enable_filesystem_cache = true, enable_filesystem_cache_on_write_operations = true, filesystem_cache_segments_batch_size = 10, load_marks_asynchronously = true, allow_prefetched_read_pool_for_remote_filesystem = false, allow_prefetched_read_pool_for_local_filesystem = false, filesystem_prefetch_max_memory_usage = 1073741824, workload = 'default', insert_keeper_max_retries = 100, insert_keeper_retry_initial_backoff_ms = 1, insert_keeper_retry_max_backoff_ms = 1, insert_keeper_fault_injection_probability = 0.009999999776482582
2024.06.30 20:50:30.523429 [ 924 ] {} <Fatal> Application: Child process was terminated by signal 6.

Stateless tests (tsan, s3 storage) [1/5] — Tests are not finished, fail: 8, passed: 963, skipped: 18

2024.06.30 16:05:24.454915 [ 13316 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_1_10} <Fatal> : Logical error: 'Expected ReadBufferFromFile, but got DB::EmptyReadBuffer'.
Details
2024.06.30 16:05:22.264070 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> Planner: Query to stage Complete
2024.06.30 16:05:22.264531 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> Planner: Query from stage FetchColumns to stage Complete
2024.06.30 16:05:22.268022 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> QueryPlanOptimizePrewhere: The min valid primary key position for moving to the tail of PREWHERE is -1
2024.06.30 16:05:22.271304 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Debug> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Key condition: unknown
2024.06.30 16:05:22.271495 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> PartitionPruner: Partition 0 gets pruned
2024.06.30 16:05:22.271587 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> PartitionPruner: Partition 2 gets pruned
2024.06.30 16:05:22.271640 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> PartitionPruner: Partition 3 gets pruned
2024.06.30 16:05:22.271691 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> PartitionPruner: Partition 4 gets pruned
2024.06.30 16:05:22.271748 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Debug> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): MinMax index condition: unknown
2024.06.30 16:05:22.271864 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Filtering marks by primary and secondary keys
2024.06.30 16:05:22.272197 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Debug> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Selected 1/5 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges
2024.06.30 16:05:22.272283 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Spreading mark ranges among streams (default reading)
2024.06.30 16:05:22.272473 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Debug> MergeTreeReadPool: min_marks_for_concurrent_read=1000
2024.06.30 16:05:22.272550 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Debug> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Reading approx. 2 rows with 1 streams
2024.06.30 16:05:22.288344 [ 8062 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> AggregatingTransform: Aggregating
2024.06.30 16:05:22.288467 [ 8062 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> Aggregator: Aggregation method: without_key
2024.06.30 16:05:22.288860 [ 8062 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> AggregatingTransform: Aggregated. 2 to 1 rows (from 0.00 B) in 0.015697071 sec. (127.412 rows/sec., 0.00 B/sec.)
2024.06.30 16:05:22.288957 [ 8062 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> Aggregator: Merging aggregated data
2024.06.30 16:05:22.289083 [ 8062 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> Aggregator: Statistics updated for key=17820923619348510831: new sum_of_sizes=1, median_size=1
2024.06.30 16:05:22.295545 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> MutateTask: Mutating part 1_2_2_0 to mutation version 6
2024.06.30 16:05:22.297468 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Debug> MutationsInterpreter(test_phhjmeb7.t_light): Will use old analyzer to prepare mutation
2024.06.30 16:05:22.301176 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 1_2_2_0, total 2 rows starting from the beginning of the part
2024.06.30 16:05:22.407078 [ 1541 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_6} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7): Renaming temporary part tmp_mut_1_2_2_0_6 to 1_2_2_0_6 with tid (1, 1, 00000000-0000-0000-0000-000000000000).
2024.06.30 16:05:22.681095 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> Planner: Query to stage Complete
2024.06.30 16:05:22.685304 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> Planner: Query from stage FetchColumns to stage Complete
2024.06.30 16:05:22.688828 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> QueryPlanOptimizePrewhere: The min valid primary key position for moving to the tail of PREWHERE is -1
2024.06.30 16:05:22.689426 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Debug> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Key condition: unknown
2024.06.30 16:05:22.689627 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> PartitionPruner: Partition 0 gets pruned
2024.06.30 16:05:22.689754 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> PartitionPruner: Partition 1 gets pruned
2024.06.30 16:05:22.689863 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> PartitionPruner: Partition 2 gets pruned
2024.06.30 16:05:22.689970 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> PartitionPruner: Partition 3 gets pruned
2024.06.30 16:05:22.690151 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Debug> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): MinMax index condition: (column 0 in [4, 4])
2024.06.30 16:05:22.690391 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Filtering marks by primary and secondary keys
2024.06.30 16:05:22.692005 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Debug> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Selected 1/5 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges
2024.06.30 16:05:22.692141 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Spreading mark ranges among streams (default reading)
2024.06.30 16:05:22.692381 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Debug> MergeTreeReadPool: min_marks_for_concurrent_read=1000
2024.06.30 16:05:22.692461 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Debug> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Reading approx. 2 rows with 1 streams
2024.06.30 16:05:22.701309 [ 18549 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> AggregatingTransform: Aggregating
2024.06.30 16:05:22.701456 [ 18549 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> Aggregator: Aggregation method: without_key
2024.06.30 16:05:22.701940 [ 18549 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> AggregatingTransform: Aggregated. 1 to 1 rows (from 0.00 B) in 0.008885535 sec. (112.542 rows/sec., 0.00 B/sec.)
2024.06.30 16:05:22.702028 [ 18549 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> Aggregator: Merging aggregated data
2024.06.30 16:05:22.702125 [ 18549 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> Aggregator: Statistics updated for key=7044743620978322012: new sum_of_sizes=1, median_size=1
2024.06.30 16:05:22.712748 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> MutateTask: Mutating part 1_2_2_0_6 to mutation version 7
2024.06.30 16:05:22.713362 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Debug> MutationsInterpreter(test_phhjmeb7.t_light): Will use old analyzer to prepare mutation
2024.06.30 16:05:22.716183 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 1_2_2_0_6, total 2 rows starting from the beginning of the part
2024.06.30 16:05:22.838558 [ 1555 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_7} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7): Renaming temporary part tmp_mut_1_2_2_0_7 to 1_2_2_0_7 with tid (1, 1, 00000000-0000-0000-0000-000000000000).
2024.06.30 16:05:23.022425 [ 1544 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_8} <Trace> MutateTask: Mutating part 1_2_2_0_7 to mutation version 8
2024.06.30 16:05:23.023143 [ 1544 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_8} <Debug> MutationsInterpreter(test_phhjmeb7.t_light): Will use old analyzer to prepare mutation
2024.06.30 16:05:23.024058 [ 1544 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_8} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 1_2_2_0_7, total 2 rows starting from the beginning of the part
2024.06.30 16:05:23.131818 [ 1544 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_8} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7): Renaming temporary part tmp_mut_1_2_2_0_8 to 1_2_2_0_8 with tid (1, 1, 00000000-0000-0000-0000-000000000000).
2024.06.30 16:05:23.195129 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> Planner: Query to stage Complete
2024.06.30 16:05:23.195872 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> Planner: Query from stage FetchColumns to stage Complete
2024.06.30 16:05:23.200031 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> QueryPlanOptimizePrewhere: The min valid primary key position for moving to the tail of PREWHERE is 0
2024.06.30 16:05:23.200583 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Debug> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Key condition: (column 0 in (-Inf, 2])
2024.06.30 16:05:23.200770 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Debug> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): MinMax index condition: unknown
2024.06.30 16:05:23.200922 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Filtering marks by primary and secondary keys
2024.06.30 16:05:23.201527 [ 2785 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Running binary search on index range for part 1_2_2_0_8 (2 marks)
2024.06.30 16:05:23.201553 [ 3512 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Running binary search on index range for part 0_1_1_0_8 (2 marks)
2024.06.30 16:05:23.201640 [ 2785 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found (LEFT) boundary mark: 0
2024.06.30 16:05:23.201682 [ 3512 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found (LEFT) boundary mark: 0
2024.06.30 16:05:23.201739 [ 2785 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found (RIGHT) boundary mark: 1
2024.06.30 16:05:23.201802 [ 3512 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found (RIGHT) boundary mark: 1
2024.06.30 16:05:23.201883 [ 2785 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found continuous range in 0 steps
2024.06.30 16:05:23.201939 [ 3512 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found continuous range in 0 steps
2024.06.30 16:05:23.202268 [ 2785 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Running binary search on index range for part 3_4_4_0_8 (2 marks)
2024.06.30 16:05:23.202336 [ 3512 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Running binary search on index range for part 4_5_5_0_8 (2 marks)
2024.06.30 16:05:23.202379 [ 2785 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found (LEFT) boundary mark: 0
2024.06.30 16:05:23.202431 [ 3512 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found (LEFT) boundary mark: 0
2024.06.30 16:05:23.202480 [ 2785 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found (RIGHT) boundary mark: 1
2024.06.30 16:05:23.202511 [ 3512 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found (RIGHT) boundary mark: 1
2024.06.30 16:05:23.202622 [ 3512 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found empty range in 0 steps
2024.06.30 16:05:23.202631 [ 2785 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found empty range in 0 steps
2024.06.30 16:05:23.202891 [ 3512 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Running binary search on index range for part 2_3_3_0_8 (2 marks)
2024.06.30 16:05:23.202972 [ 3512 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found (LEFT) boundary mark: 0
2024.06.30 16:05:23.203072 [ 3512 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found (RIGHT) boundary mark: 1
2024.06.30 16:05:23.203183 [ 3512 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Found continuous range in 0 steps
2024.06.30 16:05:23.203691 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Debug> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Selected 5/5 parts by partition key, 3 parts by primary key, 3/5 marks by primary key, 3 marks to read from 3 ranges
2024.06.30 16:05:23.203889 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Spreading mark ranges among streams (default reading)
2024.06.30 16:05:23.204345 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Debug> MergeTreeReadPool: min_marks_for_concurrent_read=1000
2024.06.30 16:05:23.204520 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Debug> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (SelectExecutor): Reading approx. 6 rows with 3 streams
2024.06.30 16:05:23.215583 [ 18573 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> AggregatingTransform: Aggregating
2024.06.30 16:05:23.215713 [ 18573 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> Aggregator: An entry for key=14002322430937705421 found in cache: sum_of_sizes=3, median_size=1
2024.06.30 16:05:23.215817 [ 18573 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> Aggregator: Aggregation method: without_key
2024.06.30 16:05:23.216033 [ 18573 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> AggregatingTransform: Aggregated. 1 to 1 rows (from 0.00 B) in 0.008062514 sec. (124.031 rows/sec., 0.00 B/sec.)
2024.06.30 16:05:23.216161 [ 18552 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> Aggregator: An entry for key=14002322430937705421 found in cache: sum_of_sizes=3, median_size=1
2024.06.30 16:05:23.216273 [ 18552 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> Aggregator: Aggregation method: without_key
2024.06.30 16:05:23.216462 [ 18552 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> AggregatingTransform: Aggregated. 0 to 1 rows (from 0.00 B) in 0.008414781 sec. (0.000 rows/sec., 0.00 B/sec.)
2024.06.30 16:05:23.218874 [ 8041 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> AggregatingTransform: Aggregating
2024.06.30 16:05:23.218990 [ 8041 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> Aggregator: An entry for key=14002322430937705421 found in cache: sum_of_sizes=3, median_size=1
2024.06.30 16:05:23.219121 [ 8041 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> Aggregator: Aggregation method: without_key
2024.06.30 16:05:23.219663 [ 8041 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> AggregatingTransform: Aggregated. 1 to 1 rows (from 0.00 B) in 0.011564772 sec. (86.469 rows/sec., 0.00 B/sec.)
2024.06.30 16:05:23.219750 [ 8041 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> Aggregator: Merging aggregated data
2024.06.30 16:05:23.228809 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> MutateTask: Mutating part 1_2_2_0_8 to mutation version 9
2024.06.30 16:05:23.229512 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Debug> MutationsInterpreter(test_phhjmeb7.t_light): Will use old analyzer to prepare mutation
2024.06.30 16:05:23.232762 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 1_2_2_0_8, total 2 rows starting from the beginning of the part
2024.06.30 16:05:23.238399 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 1_2_2_0_8, total 2 rows starting from the beginning of the part
2024.06.30 16:05:23.324525 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Debug> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7) (DataPartWriter): Spent 0 ms calculating index i_c for the part 1_2_2_0_9
2024.06.30 16:05:23.380705 [ 1548 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_9} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7): Renaming temporary part tmp_mut_1_2_2_0_9 to 1_2_2_0_9 with tid (1, 1, 00000000-0000-0000-0000-000000000000).
2024.06.30 16:05:23.440990 [ 1545 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_10} <Trace> MutateTask: Mutating part 1_2_2_0_9 to mutation version 10
2024.06.30 16:05:23.533906 [ 1545 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_0_10} <Trace> test_phhjmeb7.t_light (6b7c96e5-117f-4d0c-b04c-7408f09bebc7): Renaming temporary part tmp_mut_1_2_2_0_10 to 1_2_2_0_10 with tid (1, 1, 00000000-0000-0000-0000-000000000000).
2024.06.30 16:05:24.446041 [ 13316 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_1_10} <Debug> MergeTask::PrepareStage: Merging 1 parts: from 1_2_2_0_10 to 1_2_2_0_10 into Wide with storage Full
2024.06.30 16:05:24.446688 [ 13316 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_1_10} <Debug> MergeTask::PrepareStage: Selected MergeAlgorithm: Vertical
2024.06.30 16:05:24.447446 [ 13316 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_1_10} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 1_2_2_0_10, total 2 rows starting from the beginning of the part
2024.06.30 16:05:24.454915 [ 13316 ] {6b7c96e5-117f-4d0c-b04c-7408f09bebc7::1_2_2_1_10} <Fatal> : Logical error: 'Expected ReadBufferFromFile, but got DB::EmptyReadBuffer'.

@azat (Member Author) commented Jul 5, 2024

OK, let's try a rebase to resolve some test issues

@azat azat force-pushed the mt/rename-without-lock-v2 branch from e61e13c to e353978 Compare July 5, 2024 18:35
@azat commented Jul 8, 2024

Tidy was broken (fixed in #66070), let's do one more rebase

… SELECT

Under heavy load, or even under light load with fsync_part_directory=1,
the rename could take longer than usual, which increases the time that
DataPartsLock is held, while DataPartsLock is the bottleneck for
MergeTree (any SELECT requires it to obtain the list of parts).

On one of production clusters I saw ~60 seconds with
fsync_part_directory=1.

So this patch moves the rename out of the critical section by renaming
the parts explicitly, to make it clearer when it is done without holding a lock.

Note that this patch fixes the bottleneck only for ReplicatedMergeTree;
MergeTree is not that simple and should be handled separately.

v2: instead of using DataPartsLock.lock.lock()/unlock() move the renameTo() into MergeTreeData::Transaction::commit()
v3: Remove unused locked_parts from MergeTreeData::Transaction
v4: Fix replacing parts with empty
v5: Use renameParts() explicitly to avoid leaving parts in detached

    Since there is an assertion that does not allow removing detached
    parts during cleanup, which sounds good in general but breaks this new
    code.
v6: Avoid race between cleanup thread and renameMergedTemporaryPart()

    The problem was that with this patch set renameMergedTemporaryPart() is
    called without temporary_directory_lock held (in MergeTask), since it
    is reset just before calling renameMergedTemporaryPart(), as can be
    seen in the logs:

        2024.03.29 19:56:42.126919 [ 1341 ] {ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95::-8_0_138_2_2} <Trace> test_btnct5cr.alter_table_0 (ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95) (MergerMutator): Merged 50 parts: [-8_0_0_0_2, -8_138_138_0] -> -8_0_138_2_2
        2024.03.29 19:56:42.127034 [ 1341 ] {ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95::-8_0_138_2_2} <Debug> test_btnct5cr.alter_table_0 (ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95): Committing part -8_0_138_2_2 to zookeeper
        2024.03.29 19:56:42.128462 [ 884 ] {} <Warning> test_btnct5cr.alter_table_0 (ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95): Removing temporary directory /var/lib/clickhouse/store/ea7/ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95/tmp_merge_-8_0_138_2_2/
        2024.03.29 19:56:42.128647 [ 1341 ] {ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95::-8_0_138_2_2} <Debug> test_btnct5cr.alter_table_0 (ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95): Part -8_0_138_2_2 committed to zookeeper
        ...
        2024.03.29 19:56:54.586084 [ 57841 ] {bf240267-0620-4294-afc1-479c58e6be89} <Error> executeQuery: std::exception. Code: 1001, type: std::__1::__fs::filesystem::filesystem_error, e.what() = filesystem error: in file_size: No such file or directory ["/var/lib/clickhouse/store/ea7/ea7a3fd2-cf47-4ec7-91a5-51c69fba1b95/-8_0_138_2_2/data.cmrk3"]

    This should fix failures of 00993_system_parts_race_condition_drop_zookeeper in [1].

      [1]: https://s3.amazonaws.com/clickhouse-test-reports/61973/f6f826c85dd5b7bb8db16286fd10dcf441a440f7/stateless_tests__coverage__[4_6].html

    Though now it looks hackish...
v7: Avoid race between cleanup thread and renameMergedTemporaryPart()
v8: Require explicit rename of parts in transaction
v9: Do not remove detached parts in Transaction::rollback
v10: Fix possible assertion when size of precommitted_parts <= precommitted_parts_need_rename

    CI found [1]:

        Logical error: 'precommitted_parts.size() >= precommitted_parts_need_rename.size()'

      [1]: https://s3.amazonaws.com/clickhouse-test-reports/61973/5c1e6a3e956917bdbb7eaa467934e5b75f17a923/stateless_tests__tsan__s3_storage__[5_5].html

    The problem is that after precommitted_parts is cleaned of detached
    parts, it may be smaller than precommitted_parts_need_rename; to avoid
    this, let's just copy it to a new container.
v11: Use rename_in_transaction=false instead of explicit renameParts()

    Later the lock could be split, but let's not mix too many changes at
    once.
v12: Rename in transaction for INSERT into MergeTree
v13: Rename in transaction for INSERT into ReplicatedMergeTree
v14: revert rename_in_transaction for MergeTree, this is not correct for now (see comment in the code)

Signed-off-by: Azat Khuzhin <[email protected]>
@azat azat force-pushed the mt/rename-without-lock-v2 branch from e353978 to 6c49586 Compare July 8, 2024 09:59
@alexey-milovidov (Member)

Just in case, I don't recommend rebasing, because it involves git push --force and then my conflict resolutions in the private repository are rewritten.

@azat commented Jul 8, 2024

Just in case, I don't recommend rebasing, because it involves git push --force and then my conflict resolutions in the private repository are rewritten.

Got it (didn't know how conflict resolution works with the private fork, now it is clear, thanks)

@azat commented Jul 8, 2024

@alexey-milovidov BTW I can likely fix this, if you tell me for which commit you fixed the conflicts

@azat commented Jul 25, 2024

@azat commented Aug 5, 2024

Performance Comparison (aarch64) [2/4] — Failed to parse the report.

@azat commented Aug 5, 2024

Can someone please take a look?

azat added 2 commits August 10, 2024 19:52
Merge with master after:
- integration tests had been fixed in private fork
- conflicts had been resolved (by Alexey)
Merge with master after conflicts had been resolved one more time (by
Alexey)
@azat commented Aug 13, 2024

@alexey-milovidov (Member)

I don't know why no one reviewed this PR yet.
I started reading it... The code is fairly complex. Maybe I will finish reading it tomorrow.

@alexey-milovidov (Member)

I already reviewed it a while ago :)

@alexey-milovidov alexey-milovidov self-assigned this Sep 22, 2024
@alexey-milovidov alexey-milovidov added this pull request to the merge queue Sep 22, 2024
Merged via the queue into ClickHouse:master with commit cd99613 Sep 22, 2024
@robot-clickhouse robot-clickhouse added the pr-synced-to-cloud The PR is synced to the cloud repo label Sep 22, 2024
@azat azat deleted the mt/rename-without-lock-v2 branch September 22, 2024 06:14

Labels

pr-performance Pull request with some performance improvements pr-synced-to-cloud The PR is synced to the cloud repo
