
Limiting the number of parsing threads for the S3 source #53668

Merged
pufit merged 3 commits into master from pufit/fix_s3_threads on Aug 22, 2023
Conversation

@pufit (Member) commented Aug 22, 2023

Changelog category (leave one):

  • Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

More careful thread management improves the speed of the S3 table function over a large number of files by roughly 25% or more.

Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/

pufit added 2 commits August 21, 2023 21:21
# Conflicts:
#	src/Storages/StorageS3.cpp
#	src/Storages/StorageS3.h
#	src/Storages/StorageURL.cpp
#	src/Storages/StorageURL.h
@robot-ch-test-poll robot-ch-test-poll added the pr-improvement Pull request with some product improvements label Aug 22, 2023
@robot-ch-test-poll (Contributor) commented Aug 22, 2023

This is an automated comment for commit e42da94 describing the existing statuses. It is updated for the latest CI run.
The full report is available here
The overall status of the commit is 🔴 failure

| Check name | Description | Status |
|---|---|---|
| AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help. | 🟢 success |
| CI running | A meta-check that indicates the running CI. Normally it is in a success or pending state. A failed status indicates some problem with the PR. | 🟡 pending |
| ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process. | 🟢 success |
| Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help. | 🟢 success |
| Docker image for servers | The check to build and optionally push the mentioned image to Docker Hub. | 🟢 success |
| Fast test | Normally this is the first check run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here. | 🟢 success |
| Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails at least once, or runs too long, this check will be red. We don't allow flaky tests; read the doc. | 🟢 success |
| Install packages | Checks that the built packages are installable in a clean environment. | 🟢 success |
| Integration tests | The integration tests report. The package type is given in parentheses, and the optional part/total tests are in square brackets. | 🟢 success |
| Mergeable Check | Checks whether all other necessary checks are successful. | 🟢 success |
| Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. The optional part/total tests are in square brackets. | 🟢 success |
| Push to Dockerhub | The check for building and pushing the CI-related docker images to Docker Hub. | 🟢 success |
| SQLTest | There's no description for the check yet; please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS. | 🟢 success |
| SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool. | 🟢 success |
| Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass. | 🟢 success |
| Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | 🟢 success |
| Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | 🟢 success |
| Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors. | 🔴 failure |
| Style Check | Runs a set of checks to keep the code style clean. If some tests fail, see the related log from the report. | 🟢 success |
| Unit tests | Runs the unit tests for different release types. | 🟢 success |
| Upgrade check | Runs stress tests on the server version from the last release, then tries to upgrade it to the version from the PR. It checks whether the new server can start up without errors, crashes, or sanitizer asserts. | 🟢 success |

@CheSema CheSema self-assigned this Aug 22, 2023
@pufit pufit merged commit 9265333 into master Aug 22, 2023
@pufit pufit deleted the pufit/fix_s3_threads branch August 22, 2023 23:09
@robot-ch-test-poll robot-ch-test-poll added the pr-backports-created-cloud deprecated label, NOOP label Sep 1, 2023

const size_t max_download_threads = local_context->getSettingsRef().max_download_threads;
const size_t max_threads = local_context->getSettingsRef().max_threads;
const size_t max_parsing_threads = num_streams >= max_threads ? 1 : (max_threads / num_streams);
Member
Wait, isn't num_streams usually equal to max_threads? This seems to disable all parallelism when reading from one file (or a small number of them). I tried it, and indeed it's super slow on hits.parquet now.

This works in StorageURL because it clamps num_streams to min(num_streams, num_files) first, but here we don't know the number of files yet.

I guess we should do the first ListObjectsV2 request right here, and defer the continuation (if needed) to DisclosedGlobIterator. (I guess it would be a method in iterator_wrapper that would return the complete list if it's cheap enough to obtain, and nothing otherwise).

Member
Hm, maybe we should also use max_download_threads instead of max_threads for parquet reading+parsing for remote files? Not sure why I didn't do it this way in the first place.

(I guess the way to do it is to calculate both max_threads / num_streams and max_download_threads / num_streams here, then make registerInputFormatParquet use is_remote_fs ? max_download_threads : max_parsing_threads as the number of threads. Which is kind of silly because is_remote_fs is always true in this case, so the max_threads / num_streams value will go unused. The reason I made RandomAccessInputCreator take max_parsing_threads and max_download_threads separately is because I was going to make ParquetBlockInputFormat do reading in a separate thread pool from parsing. But then everything worked fine without it, so it doesn't seem worth the effort anymore. But maybe the separation of max_parsing_threads and max_download_threads should stay, just in case, idk.)

Member Author

Oh, yes, my bad. I was confused by StorageURL and completely forgot that we don't have this num_streams limitation in StorageS3. Will try to fix it ASAP


Labels

pr-backports-created-cloud deprecated label, NOOP pr-improvement Pull request with some product improvements


4 participants