
S3 request per second rate throttling #43014

Merged: serxa merged 9 commits into master from disk-s3-throttler on Nov 16, 2022

Conversation

@serxa
Member

@serxa serxa commented Nov 7, 2022

Changelog category (leave one):

  • New Feature

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Added S3 PUT and GET request-per-second rate throttling. The settings s3_max_get_rps, s3_max_get_burst, s3_max_put_rps, and s3_max_put_burst are used to configure a token-bucket throttler. It can be used with both S3 ObjectStorage and the s3 table function. Different limits can be configured for different S3 disks or endpoints.

Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/
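For example, the per-disk limits described in the changelog entry might be configured like this (a sketch only: the disk name and endpoint are placeholders, and the settings are assumed to be placed inside the disk definition, since the entry says limits are per disk/endpoint):

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <s3_throttled>
                <type>s3</type>
                <endpoint>https://my-bucket.s3.amazonaws.com/data/</endpoint>
                <s3_max_get_rps>100</s3_max_get_rps>
                <s3_max_get_burst>200</s3_max_get_burst>
                <s3_max_put_rps>50</s3_max_put_rps>
                <s3_max_put_burst>100</s3_max_put_burst>
            </s3_throttled>
        </disks>
    </storage_configuration>
</clickhouse>
```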

@robot-ch-test-poll robot-ch-test-poll added the pr-feature Pull request with new product feature label Nov 7, 2022
@kssenii kssenii self-assigned this Nov 7, 2022
@nickitat
Member

nickitat commented Nov 8, 2022

Could you please clarify: is this meant to avoid wasteful requests that would be rate-limited by AWS anyway, or also to limit the client's spending on S3 requests? In the latter case, does this quota accumulate when unused, i.e. is it true that I can make at least limit_per_second * 86400 requests each day, no matter how they are distributed over time?

@serxa
Member Author

serxa commented Nov 8, 2022

@nickitat This is client-side throttling using the token bucket algorithm. It accumulates a limited amount of quota (tokens = requests) that can be spent before throttling kicks in. No more than s3_max_get_burst tokens can be accumulated, and tokens are generated at a rate of s3_max_get_rps tokens/s. So, mathematically speaking, it guarantees that the client sends no more than s3_max_get_burst + s3_max_get_rps * (t2 - t1) GET requests in any (t1, t2] time interval.
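The guarantee above can be sketched as a standalone token bucket (illustrative only, not ClickHouse's actual Throttler class; the names TokenBucket and request are hypothetical):

```cpp
#include <algorithm>

// Token-bucket sketch: tokens accumulate at `rate` tokens/s
// (cf. s3_max_get_rps) and are capped at `max_burst`
// (cf. s3_max_get_burst), so in any interval (t1, t2] at most
// max_burst + rate * (t2 - t1) requests are admitted.
class TokenBucket
{
public:
    TokenBucket(double rate_, double max_burst_)
        : rate(rate_), max_burst(max_burst_), tokens(max_burst_) {}

    // `now` is the current time in seconds, passed explicitly to keep the
    // sketch deterministic. Returns how long (in seconds) the caller should
    // sleep before issuing the request; 0 means proceed immediately.
    double request(double now, double cost = 1.0)
    {
        // Refill tokens accrued since the previous call, capped at the burst.
        tokens = std::min(max_burst, tokens + (now - last) * rate);
        last = now;
        tokens -= cost; // may go negative: the deficit translates into a wait
        return tokens < 0.0 ? -tokens / rate : 0.0;
    }

private:
    double rate;      // tokens generated per second
    double max_burst; // maximum accumulated tokens
    double tokens;    // current balance (starts full)
    double last = 0.0;
};
```

For example, with rate = 2 and max_burst = 4, four requests at t = 0 pass immediately and the fifth has to wait 0.5 s for the next token.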

const S3::URI & s3_uri_, const String & access_key_id_, const String & secret_access_key_, const ContextPtr & context_)
: s3_uri(s3_uri_)
, client(makeS3Client(s3_uri_, access_key_id_, secret_access_key_, context_))
, max_single_read_retries(context_->getSettingsRef().s3_max_single_read_retries)
Member Author


The backup test failed, and the only relevant change is here. Previously, a separate variable max_single_read_retries was used instead of request_settings.max_single_read_retries. I'll try to reproduce it and check whether this change matters.

Member Author


The failure is 100% reproducible...

, read_settings(context_->getReadSettings())
, request_settings(context_->getStorageS3Settings().getSettings(s3_uri.uri.toString()).request_settings)
{
request_settings.max_single_read_retries = context_->getSettingsRef().s3_max_single_read_retries; // FIXME: Avoid taking value for endpoint
Member Author


Restoring the previous behaviour fixes the tests. The problem was with BackupReaderS3 only, but I restored the writer behaviour as well. Taking settings from per-endpoint storage hides the values from query settings. I think fixing the test would be better, but I'm not sure how; maybe @vitlibar can do this later.

@serxa
Member Author

serxa commented Nov 11, 2022

It looks like there is something wrong with the destruction of WriteBufferFromS3. The AST fuzzer finds it: https://pastila.nl/?00079986/c954f8fb8e96037ce01e690dd62dc26a

@serxa
Member Author

serxa commented Nov 11, 2022

Somehow the WriteBuffer is not finalized in the destructor (due to throttling stalling the client? strange).

server.log
2022.11.11 18:21:27.425446 [ 150 ] {18ef878a-fbaa-4a8e-97e3-d7d83a7c7830}  WriteBufferFromS3: WriteBufferFromS3 is not finalized in destructor. It's a bug
2022.11.11 18:21:27.425865 [ 148 ] {}  BaseDaemon: Received signal -1
2022.11.11 18:21:27.426027 [ 148 ] {}  BaseDaemon: (version 22.11.1.1, build id: A12D6F82C997B6D1D65AF039FB4F12D1CC77E95D) (from thread 150) Terminate called for uncaught exception:
2022.11.11 18:21:27.426102 [ 148 ] {}  BaseDaemon: Code: 159. DB::Exception: Timeout exceeded: elapsed 28.076487826 seconds, maximum: 10. (TIMEOUT_EXCEEDED), Stack trace (when copying this message, always include the lines below):
2022.11.11 18:21:27.426165 [ 148 ] {}  BaseDaemon: 
2022.11.11 18:21:27.426234 [ 148 ] {}  BaseDaemon: 0. /build/build_docker/../contrib/libcxx/include/exception:134: std::exception::capture() @ 0x17d66c02 in /workspace/clickhouse
2022.11.11 18:21:27.426291 [ 148 ] {}  BaseDaemon: 1. /build/build_docker/../contrib/libcxx/include/exception:112: std::exception::exception[abi:v15003]() @ 0x17d66bcd in /workspace/clickhouse
2022.11.11 18:21:27.426363 [ 148 ] {}  BaseDaemon: 2. /build/build_docker/../contrib/poco/Foundation/src/Exception.cpp:27: Poco::Exception::Exception(std::__1::basic_string, std::__1::allocator> const&, int) @ 0x2e50f420 in /workspace/clickhouse
2022.11.11 18:21:27.426450 [ 148 ] {}  BaseDaemon: 3. /build/build_docker/../src/Common/Exception.cpp:67: DB::Exception::Exception(std::__1::basic_string, std::__1::allocator> const&, int, bool) @ 0x2046014e in /workspace/clickhouse
2022.11.11 18:21:27.426509 [ 148 ] {}  BaseDaemon: 4. /build/build_docker/../src/Common/Exception.h:37: DB::Exception::Exception(int, fmt::v8::basic_format_string::type, fmt::v8::type_identity::type>, double&&, double&&) @ 0x26ddc092 in /workspace/clickhouse
2022.11.11 18:21:27.426568 [ 148 ] {}  BaseDaemon: 5. /build/build_docker/../src/QueryPipeline/ExecutionSpeedLimits.cpp:110: bool DB::handleOverflowMode(DB::OverflowMode, int, fmt::v8::basic_format_string::type, fmt::v8::type_identity::type>, double&&, double&&) @ 0x26ddb516 in /workspace/clickhouse
2022.11.11 18:21:27.426626 [ 148 ] {}  BaseDaemon: 6. /build/build_docker/../src/QueryPipeline/ExecutionSpeedLimits.cpp:126: DB::ExecutionSpeedLimits::checkTimeLimit(Stopwatch const&, DB::OverflowMode) const @ 0x26ddb449 in /workspace/clickhouse
2022.11.11 18:21:27.426680 [ 148 ] {}  BaseDaemon: 7. /build/build_docker/../src/Interpreters/ProcessList.cpp:413: DB::QueryStatus::checkTimeLimit() @ 0x282a4c3c in /workspace/clickhouse
2022.11.11 18:21:27.426734 [ 148 ] {}  BaseDaemon: 8. /build/build_docker/../src/Processors/Executors/PipelineExecutor.cpp:163: DB::PipelineExecutor::checkTimeLimit() @ 0x2986ec60 in /workspace/clickhouse
2022.11.11 18:21:27.426788 [ 148 ] {}  BaseDaemon: 9. /build/build_docker/../src/Processors/Executors/PipelineExecutor.cpp:173: DB::PipelineExecutor::finalizeExecution() @ 0x2986f35c in /workspace/clickhouse
2022.11.11 18:21:27.426842 [ 148 ] {}  BaseDaemon: 10. /build/build_docker/../src/Processors/Executors/PipelineExecutor.cpp:109: DB::PipelineExecutor::execute(unsigned long) @ 0x2986ec00 in /workspace/clickhouse
2022.11.11 18:21:27.426897 [ 148 ] {}  BaseDaemon: 11. /build/build_docker/../src/Processors/Executors/CompletedPipelineExecutor.cpp:43: DB::threadFunction(DB::CompletedPipelineExecutor::Data&, std::__1::shared_ptr, unsigned long) @ 0x2986d212 in /workspace/clickhouse
2022.11.11 18:21:27.426955 [ 148 ] {}  BaseDaemon: 12. /build/build_docker/../src/Processors/Executors/CompletedPipelineExecutor.cpp:80: DB::CompletedPipelineExecutor::execute()::$_0::operator()() const @ 0x2986d0e1 in /workspace/clickhouse
2022.11.11 18:21:27.427009 [ 148 ] {}  BaseDaemon: 13. /build/build_docker/../contrib/libcxx/include/__functional/invoke.h:394: decltype(std::declval()()) std::__1::__invoke[abi:v15003](DB::CompletedPipelineExecutor::execute()::$_0&) @ 0x2986d095 in /workspace/clickhouse
2022.11.11 18:21:27.427068 [ 148 ] {}  BaseDaemon: 14. /build/build_docker/../contrib/libcxx/include/tuple:1789: decltype(auto) std::__1::__apply_tuple_impl[abi:v15003]&>(DB::CompletedPipelineExecutor::execute()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) @ 0x2986d079 in /workspace/clickhouse
2022.11.11 18:21:27.427128 [ 148 ] {}  BaseDaemon: 15. /build/build_docker/../contrib/libcxx/include/tuple:1798: decltype(auto) std::__1::apply[abi:v15003]&>(DB::CompletedPipelineExecutor::execute()::$_0&, std::__1::tuple<>&) @ 0x2986cfdd in /workspace/clickhouse
2022.11.11 18:21:27.427186 [ 148 ] {}  BaseDaemon: 16. /build/build_docker/../src/Common/ThreadPool.h:196: ThreadFromGlobalPoolImpl::ThreadFromGlobalPoolImpl(DB::CompletedPipelineExecutor::execute()::$_0&&)::'lambda'()::operator()() @ 0x2986cee2 in /workspace/clickhouse
2022.11.11 18:21:27.427244 [ 148 ] {}  BaseDaemon: 17. /build/build_docker/../contrib/libcxx/include/__functional/invoke.h:394: decltype(std::declval BaseDaemon: Received signal 6
2022.11.11 18:21:27.427643 [ 401 ] {}  BaseDaemon: ########################################
2022.11.11 18:21:27.428044 [ 401 ] {}  BaseDaemon: (version 22.11.1.1, build id: A12D6F82C997B6D1D65AF039FB4F12D1CC77E95D) (from thread 150) (query_id: 18ef878a-fbaa-4a8e-97e3-d7d83a7c7830) (query: INSERT INTO FUNCTION s3('http://localhost:11111/test/request-throttler.csv', 'test', 'testtest', 'CSV', 'number UInt64') SETTINGS s3_max_single_part_upload_size = 10000, s3_truncate_on_insert = 1 SELECT number FROM numbers(1000000) SETTINGS s3_max_single_part_upload_size = 10000, s3_truncate_on_insert = 1) Received signal Aborted (6)
2022.11.11 18:21:27.428248 [ 401 ] {}  BaseDaemon: 
2022.11.11 18:21:27.428438 [ 401 ] {}  BaseDaemon: Stack trace: 0x7f5369bac00b 0x7f5369b8b859 0x20787d8a 0x31fe88b2 0x31fe87e6 0x26a92298 0x26a92439 0x23889dd4 0x23889d3c 0x23887a99 0x28ea0b9e 0x28ea104e 0x28ea0ff5 0x28ea0fd9 0x28ea020e 0x17d59274 0x17d59219 0x26d9490c 0x26d98e2d 0x26d98dd5 0x26d98d39 0x26da53f8 0x26da5398 0x26d9683e 0x26da5e4d 0x26da5df5 0x26da5dd9 0x26da5b0e 0x17d59274 0x17d59219 0x2069030c 0x26db56d0 0x26db99a2 0x2868c06a 0x297e9024 0x297f88c5 0x2e34bc99 0x2e34c4dc 0x2e59ba14 0x2e5987ba 0x2e59759e
2022.11.11 18:21:27.428615 [ 401 ] {}  BaseDaemon: 4. gsignal @ 0x7f5369bac00b in ?
2022.11.11 18:21:27.428789 [ 401 ] {}  BaseDaemon: 5. abort @ 0x7f5369b8b859 in ?
2022.11.11 18:21:27.528112 [ 401 ] {}  BaseDaemon: 6. /build/build_docker/../src/Daemon/BaseDaemon.cpp:0: terminate_handler() @ 0x20787d8a in /workspace/clickhouse
2022.11.11 18:21:27.590403 [ 401 ] {}  BaseDaemon: 7. /build/build_docker/../contrib/libcxxabi/src/cxa_handlers.cpp:59: std::__terminate(void (*)()) @ 0x31fe88b2 in /workspace/clickhouse
2022.11.11 18:21:27.652753 [ 401 ] {}  BaseDaemon: 8. /build/build_docker/../contrib/libcxxabi/src/cxa_handlers.cpp:89: std::terminate() @ 0x31fe87e6 in /workspace/clickhouse
2022.11.11 18:21:27.762301 [ 401 ] {}  BaseDaemon: 9. /build/build_docker/../src/IO/WriteBufferFromS3.cpp:0: DB::WriteBufferFromS3::~WriteBufferFromS3() @ 0x26a92298 in /workspace/clickhouse
2022.11.11 18:21:27.871619 [ 401 ] {}  BaseDaemon: 10. /build/build_docker/../src/IO/WriteBufferFromS3.cpp:137: DB::WriteBufferFromS3::~WriteBufferFromS3() @ 0x26a92439 in /workspace/clickhouse
2022.11.11 18:21:27.920920 [ 401 ] {}  BaseDaemon: 11. /build/build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:49: std::__1::default_delete::operator()[abi:v15003](DB::WriteBuffer*) const @ 0x23889dd4 in /workspace/clickhouse
2022.11.11 18:21:27.970438 [ 401 ] {}  BaseDaemon: 12. /build/build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:306: std::__1::unique_ptr>::reset[abi:v15003](DB::WriteBuffer*) @ 0x23889d3c in /workspace/clickhouse
2022.11.11 18:21:28.016487 [ 401 ] {}  BaseDaemon: 13. /build/build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:259: std::__1::unique_ptr>::~unique_ptr[abi:v15003]() @ 0x23887a99 in /workspace/clickhouse
2022.11.11 18:21:28.593828 [ 401 ] {}  BaseDaemon: 14. /build/build_docker/../src/Storages/StorageS3.cpp:571: DB::StorageS3Sink::~StorageS3Sink() @ 0x28ea0b9e in /workspace/clickhouse
2022.11.11 18:21:29.178120 [ 401 ] {}  BaseDaemon: 15. /build/build_docker/../contrib/libcxx/include/__memory/construct_at.h:64: void std::__1::__destroy_at[abi:v15003](DB::StorageS3Sink*) @ 0x28ea104e in /workspace/clickhouse
2022.11.11 18:21:29.762612 [ 401 ] {}  BaseDaemon: 16. /build/build_docker/../contrib/libcxx/include/__memory/construct_at.h:89: void std::__1::destroy_at[abi:v15003](DB::StorageS3Sink*) @ 0x28ea0ff5 in /workspace/clickhouse
2022.11.11 18:21:30.340316 [ 401 ] {}  BaseDaemon: 17. /build/build_docker/../contrib/libcxx/include/__memory/allocator_traits.h:321: void std::__1::allocator_traits>::destroy[abi:v15003](std::__1::allocator&, DB::StorageS3Sink*) @ 0x28ea0fd9 in /workspace/clickhouse
2022.11.11 18:21:30.916246 [ 401 ] {}  BaseDaemon: 18. /build/build_docker/../contrib/libcxx/include/__memory/shared_ptr.h:309: std::__1::__shared_ptr_emplace>::__on_zero_shared() @ 0x28ea020e in /workspace/clickhouse
2022.11.11 18:21:31.015805 [ 401 ] {}  BaseDaemon: 19. /build/build_docker/../contrib/libcxx/include/__memory/shared_ptr.h:175: std::__1::__shared_count::__release_shared[abi:v15003]() @ 0x17d59274 in /workspace/clickhouse
2022.11.11 18:21:31.115253 [ 401 ] {}  BaseDaemon: 20. /build/build_docker/../contrib/libcxx/include/__memory/shared_ptr.h:215: std::__1::__shared_weak_count::__release_shared[abi:v15003]() @ 0x17d59219 in /workspace/clickhouse
2022.11.11 18:21:31.279521 [ 401 ] {}  BaseDaemon: 21. /build/build_docker/../contrib/libcxx/include/__memory/shared_ptr.h:703: std::__1::shared_ptr::~shared_ptr[abi:v15003]() @ 0x26d9490c in /workspace/clickhouse
2022.11.11 18:21:31.464659 [ 401 ] {}  BaseDaemon: 22. /build/build_docker/../contrib/libcxx/include/__memory/construct_at.h:64: void std::__1::__destroy_at[abi:v15003], 0>(std::__1::shared_ptr*) @ 0x26d98e2d in /workspace/clickhouse
2022.11.11 18:21:31.650185 [ 401 ] {}  BaseDaemon: 23. /build/build_docker/../contrib/libcxx/include/__memory/construct_at.h:89: void std::__1::destroy_at[abi:v15003], 0>(std::__1::shared_ptr*) @ 0x26d98dd5 in /workspace/clickhouse
2022.11.11 18:21:31.823536 [ 401 ] {}  BaseDaemon: 24. /build/build_docker/../contrib/libcxx/include/__memory/allocator_traits.h:321: void std::__1::allocator_traits>>::destroy[abi:v15003], void, void>(std::__1::allocator>&, std::__1::shared_ptr*) @ 0x26d98d39 in /workspace/clickhouse
2022.11.11 18:21:32.011961 [ 401 ] {}  BaseDaemon: 25. /build/build_docker/../contrib/libcxx/include/vector:833: std::__1::vector, std::__1::allocator>>::__base_destruct_at_end[abi:v15003](std::__1::shared_ptr*) @ 0x26da53f8 in /workspace/clickhouse
2022.11.11 18:21:32.199382 [ 401 ] {}  BaseDaemon: 26. /build/build_docker/../contrib/libcxx/include/vector:827: std::__1::vector, std::__1::allocator>>::__clear[abi:v15003]() @ 0x26da5398 in /workspace/clickhouse
2022.11.11 18:21:32.365993 [ 401 ] {}  BaseDaemon: 27. /build/build_docker/../contrib/libcxx/include/vector:436: std::__1::vector, std::__1::allocator>>::~vector[abi:v15003]() @ 0x26d9683e in /workspace/clickhouse
2022.11.11 18:21:32.557645 [ 401 ] {}  BaseDaemon: 28. /build/build_docker/../contrib/libcxx/include/__memory/construct_at.h:64: void std::__1::__destroy_at[abi:v15003], std::__1::allocator>>, 0>(std::__1::vector, std::__1::allocator>>*) @ 0x26da5e4d in /workspace/clickhouse
2022.11.11 18:21:32.750053 [ 401 ] {}  BaseDaemon: 29. /build/build_docker/../contrib/libcxx/include/__memory/construct_at.h:89: void std::__1::destroy_at[abi:v15003], std::__1::allocator>>, 0>(std::__1::vector, std::__1::allocator>>*) @ 0x26da5df5 in /workspace/clickhouse
2022.11.11 18:21:32.938097 [ 401 ] {}  BaseDaemon: 30. /build/build_docker/../contrib/libcxx/include/__memory/allocator_traits.h:321: void std::__1::allocator_traits, std::__1::allocator>>>>::destroy[abi:v15003], std::__1::allocator>>, void, void>(std::__1::allocator, std::__1::allocator>>>&, std::__1::vector, std::__1::allocator>>*) @ 0x26da5dd9 in /workspace/clickhouse
2022.11.11 18:21:33.126305 [ 401 ] {}  BaseDaemon: 31. /build/build_docker/../contrib/libcxx/include/__memory/shared_ptr.h:309: std::__1::__shared_ptr_emplace, std::__1::allocator>>, std::__1::allocator, std::__1::allocator>>>>::__on_zero_shared() @ 0x26da5b0e in /workspace/clickhouse
2022.11.11 18:21:33.225387 [ 401 ] {}  BaseDaemon: 32. /build/build_docker/../contrib/libcxx/include/__memory/shared_ptr.h:175: std::__1::__shared_count::__release_shared[abi:v15003]() @ 0x17d59274 in /workspace/clickhouse
2022.11.11 18:21:33.324508 [ 401 ] {}  BaseDaemon: 33. /build/build_docker/../contrib/libcxx/include/__memory/shared_ptr.h:215: std::__1::__shared_weak_count::__release_shared[abi:v15003]() @ 0x17d59219 in /workspace/clickhouse
2022.11.11 18:21:33.713720 [ 401 ] {}  BaseDaemon: 34. /build/build_docker/../contrib/libcxx/include/__memory/shared_ptr.h:703: std::__1::shared_ptr, std::__1::allocator>>>::~shared_ptr[abi:v15003]() @ 0x2069030c in /workspace/clickhouse
2022.11.11 18:21:33.868667 [ 401 ] {}  BaseDaemon: 35. /build/build_docker/../src/QueryPipeline/QueryPipeline.cpp:40: DB::QueryPipeline::~QueryPipeline() @ 0x26db56d0 in /workspace/clickhouse
2022.11.11 18:21:34.025119 [ 401 ] {}  BaseDaemon: 36. /build/build_docker/../src/QueryPipeline/QueryPipeline.cpp:546: DB::QueryPipeline::reset() @ 0x26db99a2 in /workspace/clickhouse
2022.11.11 18:21:34.278141 [ 401 ] {}  BaseDaemon: 37. /build/build_docker/../src/QueryPipeline/BlockIO.h:48: DB::BlockIO::onException() @ 0x2868c06a in /workspace/clickhouse
2022.11.11 18:21:34.573290 [ 401 ] {}  BaseDaemon: 38. /build/build_docker/../src/Server/TCPHandler.cpp:453: DB::TCPHandler::runImpl() @ 0x297e9024 in /workspace/clickhouse
2022.11.11 18:21:34.895077 [ 401 ] {}  BaseDaemon: 39. /build/build_docker/../src/Server/TCPHandler.cpp:1902: DB::TCPHandler::run() @ 0x297f88c5 in /workspace/clickhouse
2022.11.11 18:21:34.959147 [ 401 ] {}  BaseDaemon: 40. /build/build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x2e34bc99 in /workspace/clickhouse
2022.11.11 18:21:35.030379 [ 401 ] {}  BaseDaemon: 41. /build/build_docker/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x2e34c4dc in /workspace/clickhouse
2022.11.11 18:21:35.107098 [ 401 ] {}  BaseDaemon: 42. /build/build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:199: Poco::PooledThread::run() @ 0x2e59ba14 in /workspace/clickhouse
2022.11.11 18:21:35.180714 [ 401 ] {}  BaseDaemon: 43. /build/build_docker/../contrib/poco/Foundation/src/Thread.cpp:56: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x2e5987ba in /workspace/clickhouse
2022.11.11 18:21:35.253635 [ 401 ] {}  BaseDaemon: 44. /build/build_docker/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345: Poco::ThreadImpl::runnableEntry(void*) @ 0x2e59759e in /workspace/clickhouse
2022.11.11 18:21:36.480306 [ 401 ] {}  BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read. (calculated checksum: EC7EF06506E70D829CED084DB53BBF7F)

@serxa
Member Author

serxa commented Nov 11, 2022

During the AST fuzzer test we cannot access minio: "Connection refused" 10 times, with exponential backoff retry intervals growing from 50 ms up to 12800 ms. This sums up to about 28 seconds, which exceeds the query execution timeout of 10 seconds, so an exception is thrown and the QueryPipeline is destructed, after which we get a non-finalized WriteBufferFromS3 buffer during stack unwinding. So there are actually two problems:

  1. Minio is not accessible. I'm not sure if the AST fuzzer is meant to work with minio tests? Maybe I should just turn it off?
  2. In any case, stack unwinding should not lead to an error (even in debug builds and under the AST fuzzer).
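The ~28-second figure follows from summing the doubling retry intervals: 10 connection attempts mean 9 backoff waits, from 50 ms up to 12800 ms (50 * 2^8). A small sketch of the arithmetic (totalBackoffMs is a hypothetical helper, not ClickHouse code):

```cpp
// Sum of `intervals` exponential-backoff waits starting at `initial_ms`
// and doubling each time: a geometric series.
int totalBackoffMs(int initial_ms, int intervals)
{
    int total = 0;
    int interval = initial_ms;
    for (int i = 0; i < intervals; ++i)
    {
        total += interval;
        interval *= 2; // exponential backoff: 50, 100, 200, ... ms
    }
    return total;
}
```

totalBackoffMs(50, 9) gives 25550 ms, i.e. about 25.5 s of pure waiting; with per-attempt connection time on top, this is consistent with the observed ~28 s, far past the 10 s execution timeout.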

@kssenii
Member

kssenii commented Nov 14, 2022

Minio is not accessible. I'm not sure if the AST fuzzer is meant to work with minio tests? Maybe I should just turn it off?

There is no minio.sh script execution in https://github.com/ClickHouse/ClickHouse/blob/master/docker/test/fuzzer/run-fuzzer.sh, so there is no s3 MinIO connection.
But I do not understand why it struck only in this PR?

@serxa
Member Author

serxa commented Nov 14, 2022

Yes, it looks like that. I'm going to add filtering of tests tagged "needs s3" to that script in a separate PR. The script takes some random tests every time, so maybe we just missed it before, but it's strange anyway. I'll also create a separate issue for fixing the terminate() call. So let's merge this PR.

@tavplubix
Member

But I do not understand why it struck only in this PR?

Because we have only two tests that insert into the s3 table function: 02207_s3_content_type.sh and the new one. But the AST Fuzzer (unfortunately) is not smart enough to extract queries from .sh scripts, so it parses .sql tests only. So only the new test triggers the issue.

The script takes some random tests every time, so maybe we just missed it before, but it's strange anyway.

FYI AST Fuzzer runs queries from new tests more aggressively.

I'll also create a separate issue for fixing the terminate() call. So let's merge this PR.

Yes, the bug found by the AST Fuzzer has existed for quite a long time, and this PR does not introduce new bugs (at least at first glance). But it still breaks tests and increases "noise" in the CI, so I reverted it.

@serxa
Member Author

serxa commented Nov 18, 2022

Finally merged here: #43335
