
Fix a bug in ClickHouse Keeper that causes digest mismatch during closing session #65198

Merged
antonio2368 merged 6 commits into ClickHouse:master from aalexfvk:fix_keeper_close_session_bug
Jun 18, 2024

Conversation

aalexfvk (Contributor) commented Jun 13, 2024

When a session is closed (e.g. as a result of dead-session cleanup) but actual user commands for this session appear later in the log (e.g. because of network delay), those commands can still be executed if the close-session request was only pre-committed.
On another replica (or after a restart, while replaying the changelog), the close may already be committed, and the later commands for this session are not executed, because the authentication data for the session is deleted when the close-session command is committed.
We end up with different state on different replicas and a digest mismatch.
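To make the ordering issue concrete, here is a minimal, self-contained sketch (not the actual Keeper code; the Replica class, its members, and the toy digest function are hypothetical simplifications) of how two replicas that commit the close-session entry at different points end up with different digests for the same log. The real logs from both replicas follow below.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>

// Hypothetical, simplified model: each replica keeps per-session auth data
// and a set of nodes, and computes a digest over the nodes.
struct Replica
{
    std::map<int64_t, std::string> session_auth;  // session id -> auth data
    std::map<std::string, std::string> nodes;     // path -> value

    // Toy stand-in for Keeper's state digest.
    uint64_t digest() const
    {
        uint64_t h = 0;
        for (const auto & [path, value] : nodes)
            for (char c : path + value)
                h = h * 131 + static_cast<unsigned char>(c);
        return h;
    }

    // Pre-commit of a close only queues it; commit actually wipes the auth data.
    void preCommitClose(int64_t session) { (void)session; }
    void commitClose(int64_t session) { session_auth.erase(session); }

    // A user command is applied only if the session still has auth data.
    void applyCreate(int64_t session, const std::string & path)
    {
        if (session_auth.count(session))
            nodes[path] = "data";
        // otherwise the request is rejected and the state is unchanged
    }
};

int main()
{
    const int64_t session = 195390;

    Replica r1, r2;
    r1.session_auth[session] = "digest-auth";
    r2.session_auth[session] = "digest-auth";

    // The log order is identical on both replicas:
    //   first the close of session 195390, then a Multi/Create from the same
    //   session that arrived late because of network delay.

    // Replica 1: the close is only pre-committed when the Create is preprocessed,
    // so the auth data is still present and the Create is applied.
    r1.preCommitClose(session);
    r1.applyCreate(session, "/_system/write_sli_part/shard1/blocks/example");

    // Replica 2 (or replica 1 after a restart replaying the changelog): the close
    // is already committed, the auth data is gone, and the same Create is rejected.
    r2.commitClose(session);
    r2.applyCreate(session, "/_system/write_sli_part/shard1/blocks/example");

    std::cout << "replica 1 digest: " << r1.digest() << '\n'
              << "replica 2 digest: " << r2.digest() << '\n';
    // The digests differ even though both replicas processed the same log,
    // which is exactly the mismatch that makes Keeper terminate.
}
```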

Replica 1: <Debug> KeeperStateMachine: Session ID response 195390 with timeout 10000
Replica 2: <Debug> KeeperStateMachine: Session ID response 195390 with timeout 10000
Replica 2: <Information> KeeperTCPHandler: Received session ID 195390
[...]
Replica_1: <Information> KeeperDispatcher: Found dead session 195390, will try to close it
Replica_1: <Debug> RaftInstance: Receive a client_request message from 0 with LastLogIndex=0, LastLogTerm 0, EntriesLength=90, CommitIndex=0 and Term=0  (<== CLOSE session here)
Replica_1: <Debug> RaftInstance: append at log_idx 23611171, timestamp 1717766452466765 (CLOSE session idx)
[...]
Replica_1: <Debug> RaftInstance: Receive a client_request message from 0 with LastLogIndex=0, LastLogTerm 0, EntriesLength=13, CommitIndex=0 and Term=0  (<== Multi command here)
Replica_1: <Debug> RaftInstance: append at log_idx 23611232, timestamp 1717766453476196 (Multi command idx)
[...]
Replica 2: <Information> KeeperTCPHandler: Got exception processing session #195390: Code: 210. DB::NetException: I/O error: Broken pipe, while writing to socket ([2a02:6b8:c42:1d2d:0:1589:f400:5f19]:2181 -> [2a02:6b8:c1e:c9f:0:1589:bcab:5451]:59492). (NETWORK_ERROR), 
[...]
Replica 2: <Fatal> KeeperStateMachine: Digest for nodes is not matching after preprocessing request of type 'Multi' at log index 23611232.
Expected digest - 16666091226484601422, actual digest - 14449310820632219137 (digest V4). Keeper will terminate to avoid inconsistencies.
Extra information about the request:
XID = 252
OpNum = Multi
Additional info:
SubRequest
XID = 0
OpNum = Create
Additional info:
path = /_system/write_sli_part/shard1/blocks/20240607_13597184531106237950_5119518006640417998
is_ephemeral = false
is_sequential = false
SubRequest
XID = 0
OpNum = Remove
Additional info:
path = /_system/write_sli_part/shard1/blocks/20240607_13597184531106237950_5119518006640417998
version = -1
SubRequest
XID = 0
OpNum = Create
Additional info:
path = /_system/write_sli_part/shard1/block_numbers/20240607/block-
is_ephemeral = true
is_sequential = true

2024.06.07 16:20:59.499691 [ 832 ] {} <Fatal> BaseDaemon: (version 24.3.3.102 (official build), build id: EF9E1BD0781C858153E899F2D95A044F4DD82F9B, git hash: 7e7f3bdd9be3ced03925d1d602037db8687e6401) (from thread 967) Terminate called without an active exception
2024.06.07 16:20:59.501393 [ 1445159 ] {} <Fatal> BaseDaemon: ########## Short fault info ############
2024.06.07 16:20:59.501423 [ 1445159 ] {} <Fatal> BaseDaemon: (version 24.3.3.102 (official build), build id: EF9E1BD0781C858153E899F2D95A044F4DD82F9B, git hash: 7e7f3bdd9be3ced03925d1d602037db8687e6401) (from thread 967) Received signal 6
2024.06.07 16:20:59.501440 [ 1445159 ] {} <Fatal> BaseDaemon: Signal description: Aborted
2024.06.07 16:20:59.501449 [ 1445159 ] {} <Fatal> BaseDaemon: 
2024.06.07 16:20:59.501513 [ 1445159 ] {} <Fatal> BaseDaemon: Stack trace: 0x00007fe26f26ee87 0x00007fe26f2707f1 0x000000000ce6d24e 0x0000000017118703 0x00000000171186fb 0x00000000129f2df6 0x00000000129f290d 0x00000000129f110e 0x0000000014b6936b 0x0000000014b31763 0x0000000014b16cc8 0x0000000014b17a0a 0x0000000014b23973 0x0000000014b242f9 0x0000000014ad2a5e 0x0000000014acb15d 0x0000000014ad8dba 0x00007fe26f6286db 0x00007fe26f35161f
2024.06.07 16:20:59.501586 [ 1445159 ] {} <Fatal> BaseDaemon: ########################################
2024.06.07 16:20:59.501598 [ 1445159 ] {} <Fatal> BaseDaemon: (version 24.3.3.102 (official build), build id: EF9E1BD0781C858153E899F2D95A044F4DD82F9B, git hash: 7e7f3bdd9be3ced03925d1d602037db8687e6401) (from thread 967) (no query) Received signal Aborted (6)
2024.06.07 16:20:59.501617 [ 1445159 ] {} <Fatal> BaseDaemon: 
2024.06.07 16:20:59.501624 [ 1445159 ] {} <Fatal> BaseDaemon: Stack trace: 0x00007fe26f26ee87 0x00007fe26f2707f1 0x000000000ce6d24e 0x0000000017118703 0x00000000171186fb 0x00000000129f2df6 0x00000000129f290d 0x00000000129f110e 0x0000000014b6936b 0x0000000014b31763 0x0000000014b16cc8 0x0000000014b17a0a 0x0000000014b23973 0x0000000014b242f9 0x0000000014ad2a5e 0x0000000014acb15d 0x0000000014ad8dba 0x00007fe26f6286db 0x00007fe26f35161f
2024.06.07 16:20:59.501677 [ 1445159 ] {} <Fatal> BaseDaemon: 2. ? @ 0x000000000003ee87
2024.06.07 16:20:59.501688 [ 1445159 ] {} <Fatal> BaseDaemon: 3. ? @ 0x00000000000407f1
2024.06.07 16:20:59.502031 [ 1445159 ] {} <Fatal> BaseDaemon: 4. terminate_handler() @ 0x000000000ce6d24e
2024.06.07 16:20:59.502270 [ 1445159 ] {} <Fatal> BaseDaemon: 5. std::__terminate(void (*)()) @ 0x0000000017118703
2024.06.07 16:20:59.502279 [ 1445159 ] {} <Fatal> BaseDaemon: 6. ? @ 0x00000000171186fb
2024.06.07 16:20:59.503639 [ 1445159 ] {} <Fatal> BaseDaemon: 7. DB::(anonymous namespace)::assertDigest(DB::KeeperStorage::Digest const&, DB::KeeperStorage::Digest const&, Coordination::ZooKeeperRequest const&, unsigned long, bool) (.llvm.14466982511560073521) @ 0x00000000129f2df6
2024.06.07 16:20:59.503926 [ 1445159 ] {} <Fatal> BaseDaemon: 8. DB::KeeperStateMachine::preprocess(DB::KeeperStorage::RequestForSession const&) @ 0x00000000129f290d
2024.06.07 16:20:59.504238 [ 1445159 ] {} <Fatal> BaseDaemon: 9. DB::KeeperStateMachine::pre_commit(unsigned long, nuraft::buffer&) @ 0x00000000129f110e
2024.06.07 16:20:59.504500 [ 1445159 ] {} <Fatal> BaseDaemon: 10. nuraft::raft_server::handle_append_entries(nuraft::req_msg&) @ 0x0000000014b6936b
2024.06.07 16:20:59.506330 [ 1445159 ] {} <Fatal> BaseDaemon: 11. nuraft::raft_server::process_req(nuraft::req_msg&, nuraft::raft_server::req_ext_params const&) @ 0x0000000014b31763
2024.06.07 16:20:59.506572 [ 1445159 ] {} <Fatal> BaseDaemon: 12. nuraft::rpc_session::read_complete(std::shared_ptr<nuraft::buffer>, std::shared_ptr<nuraft::buffer>) @ 0x0000000014b16cc8
2024.06.07 16:20:59.506802 [ 1445159 ] {} <Fatal> BaseDaemon: 13. nuraft::rpc_session::read_log_data(std::shared_ptr<nuraft::buffer>, boost::system::error_code const&, unsigned long) @ 0x0000000014b17a0a
2024.06.07 16:20:59.507079 [ 1445159 ] {} <Fatal> BaseDaemon: 14. boost::asio::detail::read_op<boost::asio::basic_stream_socket<boost::asio::ip::tcp, boost::asio::any_io_executor>, boost::asio::mutable_buffers_1, boost::asio::mutable_buffer const*, boost::asio::detail::transfer_all_t, std::__bind<void (nuraft::rpc_session::*)(std::shared_ptr<nuraft::buffer>, boost::system::error_code const&, unsigned long), std::shared_ptr<nuraft::rpc_session> const&, std::shared_ptr<nuraft::buffer>&, std::placeholders::__ph<1> const&, std::placeholders::__ph<2> const&>>::operator()(boost::system::error_code, unsigned long, int) @ 0x0000000014b23973
2024.06.07 16:20:59.508106 [ 1445159 ] {} <Fatal> BaseDaemon: 15. boost::asio::detail::reactive_socket_recv_op<boost::asio::mutable_buffers_1, boost::asio::detail::read_op<boost::asio::basic_stream_socket<boost::asio::ip::tcp, boost::asio::any_io_executor>, boost::asio::mutable_buffers_1, boost::asio::mutable_buffer const*, boost::asio::detail::transfer_all_t, std::__bind<void (nuraft::rpc_session::*)(std::shared_ptr<nuraft::buffer>, boost::system::error_code const&, unsigned long), std::shared_ptr<nuraft::rpc_session> const&, std::shared_ptr<nuraft::buffer>&, std::placeholders::__ph<1> const&, std::placeholders::__ph<2> const&>>, boost::asio::any_io_executor>::do_complete(void*, boost::asio::detail::scheduler_operation*, boost::system::error_code const&, unsigned long) @ 0x0000000014b242f9
2024.06.07 16:20:59.508763 [ 1445159 ] {} <Fatal> BaseDaemon: 16. boost::asio::detail::scheduler::run(boost::system::error_code&) @ 0x0000000014ad2a5e
2024.06.07 16:20:59.510443 [ 1445159 ] {} <Fatal> BaseDaemon: 17. nuraft::asio_service_impl::worker_entry() @ 0x0000000014acb15d
2024.06.07 16:20:59.510708 [ 1445159 ] {} <Fatal> BaseDaemon: 18. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, std::__bind<void (nuraft::asio_service_impl::*)(), nuraft::asio_service_impl*>>>(void*) @ 0x0000000014ad8dba
2024.06.07 16:20:59.510744 [ 1445159 ] {} <Fatal> BaseDaemon: 19. start_thread @ 0x00000000000076db
2024.06.07 16:20:59.510759 [ 1445159 ] {} <Fatal> BaseDaemon: 20. ? @ 0x000000000012161f
2024.06.07 16:21:08.449831 [ 569570 ] {} <Information> KeeperTCPHandler: Requesting session ID for the new client
2024.06.07 16:21:11.722014 [ 1445159 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: F3691ADAC982D94BEEF75EF7C8E42F75)
2024.06.07 16:21:11.722058 [ 1445159 ] {} <Information> SentryWriter: Not sending crash report
2024.06.07 16:21:11.722087 [ 1445159 ] {} <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues

Changelog category (leave one):

  • Critical Bug Fix (crash, LOGICAL_ERROR, data loss, RBAC)

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Fix a bug in ClickHouse Keeper that causes digest mismatch during closing session.

Documentation entry for user-facing changes

  • Documentation is written (mandatory for new features)

Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/

CI Settings (Only check the boxes if you know what you are doing):

  • Allow: All Required Checks
  • Allow: Stateless tests
  • Allow: Stateful tests
  • Allow: Integration Tests
  • Allow: Performance tests
  • Allow: Normal Builds
  • Allow: Special Builds
  • Allow: All NOT Required Checks
  • Allow: batch 1, 2 for multi-batch jobs
  • Allow: batch 3, 4, 5, 6 for multi-batch jobs

  • Exclude: Style check
  • Exclude: Fast test
  • Exclude: All with ASAN
  • Exclude: All with TSAN, MSAN, UBSAN, Coverage
  • Exclude: All with aarch64, release, debug

  • Do not test
  • Upload binaries for special builds
  • Disable merge-commit
  • Disable CI cache

@robot-clickhouse-ci-1 robot-clickhouse-ci-1 added the pr-critical-bugfix, pr-must-backport, and pr-must-backport-cloud labels Jun 13, 2024
robot-clickhouse-ci-1 (Contributor) commented Jun 13, 2024

This is an automated comment for commit d3e9751 with a description of existing statuses. It is updated for the latest CI run.

❌ Click here to open a full report in a separate page

Failing checks
Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | ❌ failure
CI running | A meta-check that indicates the running CI. Normally it's in a success or pending state. A failed status indicates some problems with the PR | ❌ failure
Integration tests | The integration tests report. The package type is given in parentheses, and the optional part/total tests are in square brackets | ❌ failure
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | ❌ failure
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ❌ failure
Successful checks
Check name | Description | Status
A Sync | If it fails, ask a maintainer for help | ✅ success
ClickBench | Runs ClickBench (https://github.com/ClickHouse/ClickBench/) with an instant-attach table | ✅ success
ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process | ✅ success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success
Docker keeper image | The check to build and optionally push the mentioned image to Docker Hub | ✅ success
Docker server image | The check to build and optionally push the mentioned image to Docker Hub | ✅ success
Docs check | Builds and tests the documentation | ✅ success
Fast test | Normally this is the first check run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success
Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test failed at least once, or ran for too long, this check will be red. We don't allow flaky tests; read the doc | ✅ success
Install packages | Checks that the built packages are installable in a clear environment | ✅ success
Mergeable Check | Checks if all other necessary checks are successful | ✅ success
PR Check | Checks correctness of the PR's body | ✅ success
Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. The optional part/total tests are in square brackets | ✅ success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | ✅ success
Style check | Runs a set of checks to keep the code style clean. If some of the tests failed, see the related log from the report | ✅ success
Unit tests | Runs the unit tests for different release types | ✅ success
Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks whether the new server can start up successfully without any errors, crashes, or sanitizer asserts | ✅ success

@aalexfvk aalexfvk marked this pull request as ready for review June 13, 2024 10:53
@aalexfvk aalexfvk force-pushed the fix_keeper_close_session_bug branch from c33967d to 2d37f7b on June 13, 2024 10:55
@antonio2368 antonio2368 self-assigned this Jun 13, 2024
@aalexfvk aalexfvk force-pushed the fix_keeper_close_session_bug branch from 46a05af to 68ab0ed on June 13, 2024 14:49
antonio2368 (Member) left a comment

wow nice find and fix!

aalexfvk (Contributor, Author) commented

Got the following error during FastTest. Will continue researching it.

2024.06.14 09:27:24.523602 [ 180943 ] {} <Debug> KeeperTCPHandler: Received close event with xid 2147483647 for session id #2
2024.06.14 09:27:24.523604 [ 178806 ] {} <Trace> KeeperDispatcher: Processing requests batch, size: 1, bytes: 0
2024.06.14 09:27:24.523669 [ 178815 ] {} <Error> KeeperStorage: Inconsistency found between uncommitted and committed data. Keeper will terminate to avoid undefined behaviour.
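For context, this message appears to come from a sanity check that compares Keeper's uncommitted (in-flight) view of the storage with its committed state. The sketch below only illustrates the general idea behind such a check; the names and structure are entirely hypothetical and are not the actual KeeperStorage code.

```cpp
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical sketch: an uncommitted delta records which committed version it
// was computed against, and applying it is refused if the committed state has
// meanwhile changed in an unexpected way.
struct NodeDelta
{
    std::string path;
    int32_t base_committed_version;  // version the delta was computed against
};

struct CommittedNode
{
    int32_t version = 0;
};

void assertConsistent(const NodeDelta & delta, const CommittedNode & committed)
{
    if (delta.base_committed_version != committed.version)
        throw std::runtime_error(
            "Inconsistency found between uncommitted and committed data for " + delta.path);
}

int main()
{
    CommittedNode committed{.version = 3};
    NodeDelta stale_delta{.path = "/example", .base_committed_version = 2};

    try
    {
        assertConsistent(stale_delta, committed);
    }
    catch (const std::exception & e)
    {
        // In real Keeper this situation is fatal; here we just print the message.
        std::cout << e.what() << '\n';
    }
}
```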

aalexfvk (Contributor, Author) commented

It looks like the failed tests are no longer related to the changes.

antonio2368 (Member) commented

@aalexfvk let's try merging with master one more time

robot-clickhouse added a commit that referenced this pull request Jun 18, 2024
robot-clickhouse added a commit that referenced this pull request Jun 18, 2024
@robot-ch-test-poll3 robot-ch-test-poll3 added the pr-backports-created-cloud label Jun 18, 2024
@robot-clickhouse-ci-1 robot-clickhouse-ci-1 added the pr-backports-created label Jun 18, 2024
robot-ch-test-poll3 added a commit that referenced this pull request Jun 18, 2024
Backport #65198 to 24.5: Fix a bug in ClickHouse Keeper that causes digest mismatch during closing session
robot-clickhouse-ci-2 added a commit that referenced this pull request Jun 18, 2024
Backport #65198 to 24.3: Fix a bug in ClickHouse Keeper that causes digest mismatch during closing session
robot-ch-test-poll3 added a commit that referenced this pull request Jun 18, 2024
Backport #65198 to 24.4: Fix a bug in ClickHouse Keeper that causes digest mismatch during closing session
antonio2368 added a commit that referenced this pull request Jun 24, 2024
Backport #65198 to 23.8: Fix a bug in ClickHouse Keeper that causes digest mismatch during closing session

Labels

pr-backports-created, pr-backports-created-cloud, pr-critical-bugfix, pr-must-backport, pr-must-backport-cloud, pr-synced-to-cloud
