
Add SYSTEM UNLOAD PRIMARY KEY #62738

Merged
tavplubix merged 16 commits into ClickHouse:master from pamarcos:pamarcos/system-unload-primary-key
Apr 29, 2024

Conversation

@pamarcos
Member

@pamarcos pamarcos commented Apr 17, 2024

Add SYSTEM UNLOAD PRIMARY KEY for a given table or for all tables

Closes #60643

Changelog category (leave one):

  • New Feature

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Add SYSTEM UNLOAD PRIMARY KEY

Documentation entry for user-facing changes

  • Documentation is written (mandatory for new features)

Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/

@tavplubix tavplubix self-assigned this Apr 17, 2024
@tavplubix tavplubix added the can be tested Allows running workflows for external contributors label Apr 17, 2024
@robot-ch-test-poll robot-ch-test-poll added the pr-feature Pull request with new product feature label Apr 17, 2024
@robot-ch-test-poll
Contributor

robot-ch-test-poll commented Apr 17, 2024

This is an automated comment for commit f6b19e1 with description of existing statuses. It's updated for the latest CI running

❌ A full report is available on a separate page

Check name | Description | Status
A Sync | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ❌ failure
CI running | A meta-check that indicates the running CI. Normally, it's in success or pending state. The failed status indicates some problems with the PR | ⏳ pending
Mergeable Check | Checks if all other necessary checks are successful | ❌ failure
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ❌ failure
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ❌ failure

Successful checks

Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | ✅ success
ClickBench | Runs ClickBench (https://github.com/ClickHouse/ClickBench/) with instant-attach table | ✅ success
ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log, grepping for cmake. Use these options and follow the general build process | ✅ success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success
Docker keeper image | The check to build and optionally push the mentioned image to Docker Hub | ✅ success
Docker server image | The check to build and optionally push the mentioned image to Docker Hub | ✅ success
Docs check | Builds and tests the documentation | ✅ success
Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success
Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails at least once, or runs too long, this check will be red. We don't allow flaky tests; read the doc | ✅ success
Install packages | Checks that the built packages are installable in a clean environment | ✅ success
Integration tests | The integration tests report. In parentheses the package type is given, and in square brackets are the optional part/total tests | ✅ success
PR Check | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success
Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ✅ success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success
Style check | Runs a set of checks to keep the code style clean. If some tests fail, see the related log from the report | ✅ success
Unit tests | Runs the unit tests for different release types | ✅ success
Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks whether the new server can start up successfully without any errors, crashes, or sanitizer asserts | ✅ success

@nikitamikhaylov
Member

Please update the description, PARTITION KEY != PRIMARY KEY.

@robot-clickhouse-ci-2 robot-clickhouse-ci-2 added the submodule changed At least one submodule changed in this PR. label Apr 18, 2024
@robot-ch-test-poll3 robot-ch-test-poll3 removed the submodule changed At least one submodule changed in this PR. label Apr 18, 2024
@pamarcos
Member Author

I based 03127_system_unload_primary_key on 02993_lazy_index_loading, but it's reported as flaky by the CI when using certain parameters. I don't see any error in the CH server log that gives me any clue. I've also seen that this same check was also failing in the PR for lazy loading of primary keys in memory.

In any case, I've made an attempt to make it pass in more scenarios, but I've failed. Since I'm not very savvy about the internals of your CI, I'd appreciate a hand figuring out whether this is really an issue caused by my changes, or whether there's a way to write a test for this feature that works no matter what parameters the flakiness-check CI job uses.

@pamarcos
Member Author

Alright, I think I found the issue with the flaky tests.
Entirely my fault. I didn't know that these tests were run in parallel, and my test unloads all primary keys at some point, so of course that's affecting other tests.

I'll check whether I can split the test in two so that the new test runs sequentially and no other tests interfere with it.

Comment on lines +363 to +365
std::scoped_lock lock(index_mutex);
index.clear();
index_loaded = false;
Member

What will happen if some queries are still using the index?

Member Author

That's an excellent question 🤔

I took a look, and since read queries that use getIndex get a reference to the underlying index as-is, in-flight queries would be affected by a SYSTEM UNLOAD PRIMARY KEY that clears it.

There are several solutions to this, but the one I found simplest to implement and reason about, and which doesn't need expensive copies, is to promote index to a shared_ptr. Here's the commit with it.

I saw that the IColumn interface uses a copy-on-write shared pointer, so my second option was to create a new std::vector and assign every column from the original index to it. However, that implies increasing the refcount for every single column, so IMO it's cheaper and easier to understand to do it once at the parent level.

Member

Yes, using shared_ptr is a good solution, I would do exactly the same

@tavplubix
Member

ClickHouse special build check failure is related, other failures are not

Removing the reference for Index makes the const unnecessary.
Constness for it is still preserved correctly because Columns
are immutable pointers to IColumn.
@pamarcos
Member Author

ClickHouse special build check failure is related, other failures are not

Fixed in f6b19e1

Leaving here for reference the command I used to ensure to run clang-tidy in parallel on all modified files of my branch:

git diff --stat master..HEAD | grep -P "src.*\.cpp" | awk '{print $1}' | xargs -i -P0 bash -c "echo analyzing {}; clang-tidy-17 -p build_master {}"

@pamarcos
Member Author

pamarcos commented Apr 27, 2024

I think the tests that failed now are unrelated to my changes.

Apart from that, I haven't figured out an easy way to test that unloading primary keys while queries are using them doesn't cause interference. I've tested it manually, but I don't think I can easily create a functional test to verify this behavior.

The only alternative I can think of is to do a unit test that does something along these lines:

const auto & index = part->getIndex();
EXPECT_EQ(index->size(), 10);
part->unloadIndex();
EXPECT_EQ(index->size(), 10);
const auto & new_index = part->getIndex();
EXPECT_EQ(new_index->size(), 0);

However, the issue is getting an IMergeTreeDataPart that I can work with in a unit test. My initial thought was to use MergeTreeDataPartBuilder::build for that. Then I found out how difficult it is to build a dummy MergeTreeData. I haven't seen any unit test creating a mock/fake of it; in fact, I haven't seen any mock using gmock at all.

I could create a mock of MergeTreeData, but I'm not sure it's worth the effort when there's no prior work of this kind to build on. Generating mocks with gmock is time-consuming, and I don't want to add unnecessary noise so that everyone has to update the mock whenever something in MergeTreeData changes, just for this. I remembered there was a Python generator script to automatically generate mocks, but it was removed in 1.12.0 on Sep 14, 2021, I guess because it was not very good at generating the mocks and was difficult to maintain 😞

Let me know what you think.

@tavplubix
Member

Yes, failures are unrelated

I haven't figured out an easy way to test that unloading primary keys while some queries are using them interfere with each other.

This will be tested by the Stress tests. They run 30 instances of the Stateless tests in parallel and don't honor the no-parallel tag, so SYSTEM UNLOAD PRIMARY KEY from 03128_system_unload_primary_key will interfere with random tests

@tavplubix tavplubix added this pull request to the merge queue Apr 29, 2024
Merged via the queue into ClickHouse:master with commit 37950a5 Apr 29, 2024
@robot-ch-test-poll1 robot-ch-test-poll1 added the pr-synced-to-cloud The PR is synced to the cloud repo label Apr 29, 2024
@pamarcos pamarcos deleted the pamarcos/system-unload-primary-key branch April 30, 2024 05:38
/// and we need to use the same set of index columns across all parts.
for (const auto & part : parts)
loaded_columns = std::min(loaded_columns, part.data_part->getIndex().size());
loaded_columns = std::min(loaded_columns, part.data_part->getIndex()->size());
Member

Isn't this cached value now broken? If you change primary_key_ratio_of_unique_prefix_values_to_skip_suffix_columns and unload the index, loaded_columns will be incorrect, and any query using this IndexAccess might read bad values or crash (if some columns were removed).

loaded_columns is only ok if we also cache and save the index of all the parts.

cc @tavplubix @nickitat

Member

Confirmed by introducing a sleep in getValue and doing concurrent setting changes, queries, and unloading of the PK. It hits the chassert:

{fa241230-2c8c-4b65-bff9-f84ee8f5feec} <Fatal> : Logical error: 'index->size() >= loaded_columns'
2024.05.14 14:34:41.614779 [ 229319 ] {} <Fatal> BaseDaemon: ########## Short fault info ############
2024.05.14 14:34:41.614935 [ 229319 ] {} <Fatal> BaseDaemon: (version 24.5.1.1, build id: 2DF79347B1B8FBD97ABA348E494E7B9B1CB78632, git hash: 11753dfbaebe7e929a43f7ba20d2e369430d9463) (from thread 228422) Received signal 6
2024.05.14 14:34:41.615089 [ 229319 ] {} <Fatal> BaseDaemon: Signal description: Aborted
2024.05.14 14:34:41.615182 [ 229319 ] {} <Fatal> BaseDaemon: 
2024.05.14 14:34:41.615286 [ 229319 ] {} <Fatal> BaseDaemon: Stack trace: 0x00007ffff7e2de44 0x00007ffff7dd5a30 0x00007ffff7dbd4c3 0x00000000143c32da 0x000000002015f39d 0x000000002015c981 0x00000000201579c6 0x00000000200f3ef2 0x00000000200ffe2e 0x000000002010155c 0x00000000200a3f80 0x00000000200cdf31 0x000000001d39ce86 0x000000001d39cb71 0x000000001dad60a3 0x000000001dad03de 0x000000001f6ebd99 0x000000001f702cf2 0x00000000251c4b19 0x00000000251c537d 0x00000000253cd921 0x00000000253ca11a 0x00000000253c8bd5 0x00007ffff7e2bded 0x00007ffff7eaf0dc
2024.05.14 14:34:41.615412 [ 229319 ] {} <Fatal> BaseDaemon: ########################################
2024.05.14 14:34:41.615635 [ 229319 ] {} <Fatal> BaseDaemon: (version 24.5.1.1, build id: 2DF79347B1B8FBD97ABA348E494E7B9B1CB78632, git hash: 11753dfbaebe7e929a43f7ba20d2e369430d9463) (from thread 228422) (query_id: 2494f339-9bc1-41cc-b2d8-19c785d15966) (query: SELECT count() FROM t where not ignore(*);) Received signal Aborted (6)
2024.05.14 14:34:41.615898 [ 229319 ] {} <Fatal> BaseDaemon: 
2024.05.14 14:34:41.616049 [ 229319 ] {} <Fatal> BaseDaemon: Stack trace: 0x00007ffff7e2de44 0x00007ffff7dd5a30 0x00007ffff7dbd4c3 0x00000000143c32da 0x000000002015f39d 0x000000002015c981 0x00000000201579c6 0x00000000200f3ef2 0x00000000200ffe2e 0x000000002010155c 0x00000000200a3f80 0x00000000200cdf31 0x000000001d39ce86 0x000000001d39cb71 0x000000001dad60a3 0x000000001dad03de 0x000000001f6ebd99 0x000000001f702cf2 0x00000000251c4b19 0x00000000251c537d 0x00000000253cd921 0x00000000253ca11a 0x00000000253c8bd5 0x00007ffff7e2bded 0x00007ffff7eaf0dc
2024.05.14 14:34:41.616288 [ 229319 ] {} <Fatal> BaseDaemon: 4. ? @ 0x00007ffff7e2de44
2024.05.14 14:34:41.616429 [ 229319 ] {} <Fatal> BaseDaemon: 5. ? @ 0x00007ffff7dd5a30
2024.05.14 14:34:41.616598 [ 229319 ] {} <Fatal> BaseDaemon: 6. ? @ 0x00007ffff7dbd4c3
2024.05.14 14:34:41.693317 [ 229319 ] {} <Fatal> BaseDaemon: 7. /mnt/ch/ClickHouse/src/Common/Exception.cpp:0: DB::abortOnFailedAssertion(String const&) @ 0x00000000143c32da
2024.05.14 14:34:41.937273 [ 229319 ] {} <Fatal> BaseDaemon: 8. /mnt/ch/ClickHouse/src/Processors/QueryPlan/PartsSplitter.cpp:150: (anonymous namespace)::IndexAccess::getValue(unsigned long, unsigned long) const @ 0x000000002015f39d
2024.05.14 14:34:42.285853 [ 229319 ] {} <Fatal> BaseDaemon: 9. /mnt/ch/ClickHouse/src/Processors/QueryPlan/PartsSplitter.cpp:751: (anonymous namespace)::splitIntersectingPartsRangesIntoLayers(DB::RangesInDataParts, unsigned long, std::shared_ptr<Poco::Logger> const&) @ 0x000000002015c981
2024.05.14 14:34:42.603459 [ 229319 ] {} <Fatal> BaseDaemon: 10. /mnt/ch/ClickHouse/src/Processors/QueryPlan/PartsSplitter.cpp:934: DB::splitPartsWithRangesByPrimaryKey(DB::KeyDescription const&, std::shared_ptr<DB::ExpressionActions>, DB::RangesInDataParts, unsigned long, std::shared_ptr<DB::Context const>, std::function<DB::Pipe (DB::RangesInDataParts)>&&, bool, bool) @ 0x00000000201579c6
2024.05.14 14:34:42.995839 [ 229319 ] {} <Fatal> BaseDaemon: 11. /mnt/ch/ClickHouse/src/Processors/QueryPlan/ReadFromMergeTree.cpp:807: DB::ReadFromMergeTree::spreadMarkRangesAmongStreams(DB::RangesInDataParts&&, unsigned long, std::vector<String, std::allocator<String>> const&) @ 0x00000000200f3ef2
2024.05.14 14:34:43.412668 [ 229319 ] {} <Fatal> BaseDaemon: 12. /mnt/ch/ClickHouse/src/Processors/QueryPlan/ReadFromMergeTree.cpp:1935: DB::ReadFromMergeTree::spreadMarkRanges(DB::RangesInDataParts&&, unsigned long, DB::ReadFromMergeTree::AnalysisResult&, std::shared_ptr<DB::ActionsDAG>&) @ 0x00000000200ffe2e
2024.05.14 14:34:43.802381 [ 229319 ] {} <Fatal> BaseDaemon: 13. /mnt/ch/ClickHouse/src/Processors/QueryPlan/ReadFromMergeTree.cpp:2032: DB::ReadFromMergeTree::initializePipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&) @ 0x000000002010155c
2024.05.14 14:34:43.851744 [ 229319 ] {} <Fatal> BaseDaemon: 14. /mnt/ch/ClickHouse/src/Processors/QueryPlan/ISourceStep.cpp:20: DB::ISourceStep::updatePipeline(std::vector<std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>, std::allocator<std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>>>, DB::BuildQueryPipelineSettings const&) @ 0x00000000200a3f80
2024.05.14 14:34:43.997606 [ 229319 ] {} <Fatal> BaseDaemon: 15. /mnt/ch/ClickHouse/src/Processors/QueryPlan/QueryPlan.cpp:188: DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) @ 0x00000000200cdf31
2024.05.14 14:34:44.089048 [ 229319 ] {} <Fatal> BaseDaemon: 16. /mnt/ch/ClickHouse/src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:223: DB::InterpreterSelectQueryAnalyzer::buildQueryPipeline() @ 0x000000001d39ce86
2024.05.14 14:34:44.189719 [ 229319 ] {} <Fatal> BaseDaemon: 17. /mnt/ch/ClickHouse/src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:192: DB::InterpreterSelectQueryAnalyzer::execute() @ 0x000000001d39cb71
2024.05.14 14:34:44.426371 [ 229319 ] {} <Fatal> BaseDaemon: 18. /mnt/ch/ClickHouse/src/Interpreters/executeQuery.cpp:1198: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000001dad60a3
2024.05.14 14:34:44.665615 [ 229319 ] {} <Fatal> BaseDaemon: 19. /mnt/ch/ClickHouse/src/Interpreters/executeQuery.cpp:1393: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000001dad03de
2024.05.14 14:34:44.886185 [ 229319 ] {} <Fatal> BaseDaemon: 20. /mnt/ch/ClickHouse/src/Server/TCPHandler.cpp:522: DB::TCPHandler::runImpl() @ 0x000000001f6ebd99
2024.05.14 14:34:45.128180 [ 229319 ] {} <Fatal> BaseDaemon: 21. /mnt/ch/ClickHouse/src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x000000001f702cf2
2024.05.14 14:34:45.138595 [ 229319 ] {} <Fatal> BaseDaemon: 22. /mnt/ch/ClickHouse/base/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x00000000251c4b19
2024.05.14 14:34:45.154126 [ 229319 ] {} <Fatal> BaseDaemon: 23. /mnt/ch/ClickHouse/base/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x00000000251c537d
2024.05.14 14:34:45.170743 [ 229319 ] {} <Fatal> BaseDaemon: 24. /mnt/ch/ClickHouse/base/poco/Foundation/src/ThreadPool.cpp:188: Poco::PooledThread::run() @ 0x00000000253cd921
2024.05.14 14:34:45.186580 [ 229319 ] {} <Fatal> BaseDaemon: 25. /mnt/ch/ClickHouse/base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x00000000253ca11a
2024.05.14 14:34:45.201583 [ 229319 ] {} <Fatal> BaseDaemon: 26. /mnt/ch/ClickHouse/base/poco/Foundation/src/Thread_POSIX.cpp:335: Poco::ThreadImpl::runnableEntry(void*) @ 0x00000000253c8bd5
2024.05.14 14:34:45.201842 [ 229319 ] {} <Fatal> BaseDaemon: 27. ? @ 0x00007ffff7e2bded
2024.05.14 14:34:45.202040 [ 229319 ] {} <Fatal> BaseDaemon: 28. ? @ 0x00007ffff7eaf0dc
2024.05.14 14:34:45.202244 [ 229319 ] {} <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
2024.05.14 14:34:45.202518 [ 229319 ] {} <Fatal> BaseDaemon: This ClickHouse version is not official and should be upgraded to the official build.
2024.05.14 14:34:45.203110 [ 229319 ] {} <Fatal> BaseDaemon: Changed settings: min_compress_block_size = 1770298, max_compress_block_size = 241108, max_block_size = 45947, max_insert_threads = 13, max_threads = 7, max_read_buffer_size = 959253, use_uncompressed_cache = false, compile_aggregate_expressions = true, compile_sort_description = false, min_count_to_compile_sort_description = 0, group_by_two_level_threshold = 108124, group_by_two_level_threshold_bytes = 46225353, enable_memory_bound_merging_of_aggregation_results = false, min_chunk_bytes_for_parallel_parsing = 3608449, merge_tree_coarse_index_granularity = 23, min_bytes_to_use_direct_io = 7413113399, min_bytes_to_use_mmap_io = 10737418240, insert_deduplicate = true, merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability = 0.9700000286102295, http_response_buffer_size = 8163244, distributed_ddl_task_timeout = 120, joined_subquery_requires_alias = false, max_bytes_before_external_group_by = 1, max_bytes_before_external_sort = 6665888579, max_bytes_before_remerge_sort = 1315671325, max_execution_time = 0., max_expanded_ast_elements = 50000, join_algorithm = 'hash', max_memory_usage = 50000000000, memory_usage_overcommit_max_wait_microseconds = 50000, log_query_threads = false, log_comment = '03151_unload_index_race.sh', send_logs_level = 'warning', optimize_read_in_order = false, optimize_aggregation_in_order = true, aggregation_in_order_max_block_bytes = 29881666, read_in_order_two_level_merge_threshold = 70, max_partitions_per_insert_block = 100, optimize_if_transform_strings_to_enum = true, optimize_substitute_columns = true, optimize_append_index = true, enable_global_with_statement = true, database_replicated_initial_query_timeout_sec = 120, database_replicated_enforce_synchronous_settings = true, database_replicated_always_detach_permanently = true, distributed_ddl_output_mode = 'none', local_filesystem_read_method = 'read', 
merge_tree_compact_parts_min_granules_to_multibuffer_read = 69, filesystem_cache_segments_batch_size = 50, use_page_cache_for_disks_without_file_cache = true, allow_prefetched_read_pool_for_remote_filesystem = false, filesystem_prefetch_step_bytes = 104857600, filesystem_prefetch_step_marks = 50, filesystem_prefetch_min_bytes_for_single_read_task = 8388608, filesystem_prefetch_max_memory_usage = 134217728, filesystem_prefetches_limit = 0, insert_keeper_max_retries = 1000, insert_keeper_retry_initial_backoff_ms = 1, insert_keeper_retry_max_backoff_ms = 1, insert_keeper_fault_injection_probability = 0., optimize_distinct_in_order = false, session_timezone = 'Mexico/BajaSur', allow_experimental_database_replicated = true, input_format_null_as_default = false

I'm trying to find a consistent reproducer if possible to verify the fix and will send a PR

Member

#63778

In the end I didn't need the thread fuzzer sleep. Increasing the number of readers without the sleep also reproduced the issue.


Labels

can be tested Allows running workflows for external contributors pr-feature Pull request with new product feature pr-synced-to-cloud The PR is synced to the cloud repo


Development

Successfully merging this pull request may close these issues.

Implement SYSTEM UNLOAD PRIMARY KEY

8 participants