
Updating a nullable column can lead to segmentation fault #71283

@xavierleune

Description


Hi,

I'm using version 24.9.2.42 in Docker, and I have a column with the data type Nullable(String).

When running a query to replace all NULL values with empty strings, the server segfaults and then keeps crashing a few seconds after every restart. Here is the query:
ALTER TABLE mytable UPDATE column='' WHERE column IS NULL

The only way to get the server back up is to kill the mutation before the crash. The interesting part is that the update itself appears to have been applied even though the mutation never completed, so after killing the mutation I can drop the Nullable wrapper from the column type.
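For reference, a minimal sketch of the scenario described above, including the kill-the-mutation workaround. The table definition is hypothetical (the issue only names `mytable` and `column`); `KILL MUTATION` and `ALTER TABLE ... MODIFY COLUMN` are standard ClickHouse statements, but this sketch has not been verified to reproduce the crash:

```sql
-- Hypothetical setup; the report only mentions a Nullable(String) column.
CREATE TABLE mytable (id UInt64, column Nullable(String))
ENGINE = MergeTree ORDER BY id;

INSERT INTO mytable VALUES (1, NULL), (2, 'x');

-- The mutation that triggers the segfault per the report:
ALTER TABLE mytable UPDATE column = '' WHERE column IS NULL;

-- Workaround from the report: kill the mutation before the crash,
-- then drop the Nullable wrapper from the column type.
KILL MUTATION WHERE database = currentDatabase() AND table = 'mytable';
ALTER TABLE mytable MODIFY COLUMN column String;
```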

The logs of the segfault:

2024.10.30 19:10:18.818601 [ 771 ] {} <Fatal> BaseDaemon: (version 24.9.2.42 (official build), build id: 5B9D439755D898BE820CFDC35F4F8B5383F2DEF6, git hash: de7c791a2eadce4093409574d6560d2332b0dd18) (from thread 742) (query_id: 01e46aea-9a72-4555-b6ea-d905db4a9724::all_4180718_4198831_1769_4199137) (query: ) Received signal Segmentation fault (11)
2024.10.30 19:10:18.818619 [ 771 ] {} <Fatal> BaseDaemon: Address: 0x1. Access: read. Address not mapped to object.
2024.10.30 19:10:18.818633 [ 771 ] {} <Fatal> BaseDaemon: Stack trace: 0x000000000d963af5 0x00007a7bdd894420 0x0000000010f45b7e 0x0000000010e8608e 0x0000000010e86c88 0x0000000012c2d828 0x0000000012c3a768 0x0000000012c5c9c8 0x00000000135bb5af 0x0000000012c590fd 0x00000000135b11c8 0x000000001312dbe7 0x0000000013147ca7 0x000000001313b770 0x000000001313d90e 0x000000000d740b38 0x000000000d745c71 0x000000000d74409f 0x00007a7bdd888609 0x00007a7bdd7ad353
2024.10.30 19:10:18.818752 [ 771 ] {} <Fatal> BaseDaemon: 0. signalHandler(int, siginfo_t*, void*) @ 0x000000000d963af5
2024.10.30 19:10:18.818797 [ 771 ] {} <Fatal> BaseDaemon: 1. ? @ 0x00007a7bdd894420
2024.10.30 19:10:18.818863 [ 771 ] {} <Fatal> BaseDaemon: 2. DB::SerializationVariant::enumerateStreams(DB::ISerialization::EnumerateStreamsSettings&, std::function<void (DB::ISerialization::SubstreamPath const&)> const&, DB::ISerialization::SubstreamData const&) const @ 0x0000000010f45b7e
2024.10.30 19:10:18.818925 [ 771 ] {} <Fatal> BaseDaemon: 3. DB::IDataType::getSubcolumnData(std::basic_string_view<char, std::char_traits<char>>, DB::ISerialization::SubstreamData const&, bool) @ 0x0000000010e8608e
2024.10.30 19:10:18.818958 [ 771 ] {} <Fatal> BaseDaemon: 4. DB::IDataType::getSubcolumn(std::basic_string_view<char, std::char_traits<char>>, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&) const @ 0x0000000010e86c88
2024.10.30 19:10:18.818995 [ 771 ] {} <Fatal> BaseDaemon: 5. DB::IMergeTreeReader::evaluateMissingDefaults(DB::Block, std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>&) const @ 0x0000000012c2d828
2024.10.30 19:10:18.819027 [ 771 ] {} <Fatal> BaseDaemon: 6. DB::MergeTreeRangeReader::read(unsigned long, DB::MarkRanges&) @ 0x0000000012c3a768
2024.10.30 19:10:18.819057 [ 771 ] {} <Fatal> BaseDaemon: 7. DB::MergeTreeReadTask::read(DB::MergeTreeReadTask::BlockSizeParams const&) @ 0x0000000012c5c9c8
2024.10.30 19:10:18.819086 [ 771 ] {} <Fatal> BaseDaemon: 8. DB::MergeTreeInOrderSelectAlgorithm::readFromTask(DB::MergeTreeReadTask&, DB::MergeTreeReadTask::BlockSizeParams const&) @ 0x00000000135bb5af
2024.10.30 19:10:18.819112 [ 771 ] {} <Fatal> BaseDaemon: 9. DB::MergeTreeSelectProcessor::read() @ 0x0000000012c590fd
2024.10.30 19:10:18.819141 [ 771 ] {} <Fatal> BaseDaemon: 10. DB::MergeTreeSource::tryGenerate() @ 0x00000000135b11c8
2024.10.30 19:10:18.819178 [ 771 ] {} <Fatal> BaseDaemon: 11. DB::ISource::work() @ 0x000000001312dbe7
2024.10.30 19:10:18.819226 [ 771 ] {} <Fatal> BaseDaemon: 12. DB::ExecutionThreadContext::executeTask() @ 0x0000000013147ca7
2024.10.30 19:10:18.819275 [ 771 ] {} <Fatal> BaseDaemon: 13. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x000000001313b770
2024.10.30 19:10:18.819315 [ 771 ] {} <Fatal> BaseDaemon: 14. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001313d90e
2024.10.30 19:10:18.819351 [ 771 ] {} <Fatal> BaseDaemon: 15. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false, true>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false, true>, void*>) @ 0x000000000d740b38
2024.10.30 19:10:18.819399 [ 771 ] {} <Fatal> BaseDaemon: 16. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false, true>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false, true>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000d745c71
2024.10.30 19:10:18.819470 [ 771 ] {} <Fatal> BaseDaemon: 17. void* std::__thread_proxy[abi:v15007]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000d74409f
2024.10.30 19:10:18.819499 [ 771 ] {} <Fatal> BaseDaemon: 18. ? @ 0x00007a7bdd888609
2024.10.30 19:10:18.819515 [ 771 ] {} <Fatal> BaseDaemon: 19. ? @ 0x00007a7bdd7ad353
2024.10.30 19:10:19.147310 [ 770 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: 3F7974745B13E7E0E2C85EE5BBB6FA02)
2024.10.30 19:10:19.147614 [ 770 ] {} <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
2024.10.30 19:10:19.147745 [ 770 ] {} <Fatal> BaseDaemon: Changed settings: use_uncompressed_cache = false, load_balancing = 'in_order', log_queries = true, join_use_nulls = true, max_memory_usage = 10000000000, workload = 'default', allow_experimental_variant_type = true
2024.10.30 19:10:19.150987 [ 771 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: 3F7974745B13E7E0E2C85EE5BBB6FA02)
2024.10.30 19:10:19.151239 [ 771 ] {} <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
2024.10.30 19:10:19.151329 [ 771 ] {} <Fatal> BaseDaemon: Changed settings: use_uncompressed_cache = false, load_balancing = 'in_order', log_queries = true, join_use_nulls = true, max_memory_usage = 10000000000, workload = 'default', allow_experimental_variant_type = true

Thanks

Metadata

Labels

potential bug: to be reviewed by developers and confirmed/rejected.
st-need-info: we need extra data to continue (waiting for response), either some details or a repro of the issue.
