Add system.error_log #65381

Merged
pamarcos merged 18 commits into ClickHouse:master from pamarcos:system-error-log on Jun 21, 2024

Conversation

Member

@pamarcos pamarcos commented Jun 18, 2024

Add system.error_log based on system.metric_log.
Contains history of error values from table system.errors, periodically flushed to disk.

Changelog category (leave one):

  • New Feature

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

  • Add system.error_log which contains history of error values from table system.errors, periodically flushed to disk.

Documentation entry for user-facing changes

  • Documentation is written (mandatory for new features)

#64501
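The PR body above says the new table follows the `system.metric_log` pattern and is periodically flushed to disk. As a sketch of what that typically looks like server-side, here is a hypothetical config fragment in the style of ClickHouse's other system log tables (the `error_log` element name and the interval value are assumptions based on the `metric_log` convention, not details taken from this PR):

```xml
<clickhouse>
    <!-- Hypothetical: mirrors the <metric_log> configuration pattern. -->
    <error_log>
        <database>system</database>
        <table>error_log</table>
        <!-- How often accumulated error counters are flushed to disk. -->
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </error_log>
</clickhouse>
```

Once such a log is enabled and flushed, its history could be inspected with an ordinary query such as `SELECT * FROM system.error_log`, much like `system.metric_log`.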

@pamarcos pamarcos marked this pull request as draft June 18, 2024 08:44
@robot-ch-test-poll robot-ch-test-poll added the pr-feature Pull request with new product feature label Jun 18, 2024
Contributor

robot-ch-test-poll commented Jun 18, 2024

This is an automated comment for commit 71f8937 with a description of existing statuses. It's updated for the latest CI run.

Failed checks

Check name | Description | Status
CI running | A meta-check that indicates the running CI. Normally it's in a success or pending state. A failed status indicates problems with the PR | ❌ failure
Integration tests | The integration tests report. In parentheses the package type is given, and in square brackets the optional part/total tests | ❌ failure
Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ❌ failure
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | ❌ failure
Successful checks

Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | ✅ success
ClickBench | Runs [ClickBench](https://github.com/ClickHouse/ClickBench/) with an instant-attach table | ✅ success
ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process | ✅ success
Cloud fork sync (only for ClickHouse Inc. employees) | If it fails, ask a maintainer for help | ✅ success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success
Docker keeper image | Builds and optionally pushes the mentioned image to Docker Hub | ✅ success
Docker server image | Builds and optionally pushes the mentioned image to Docker Hub | ✅ success
Docs check | Builds and tests the documentation | ✅ success
Fast test | Normally this is the first check run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success
Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails even once, or runs too long, this check is red. We don't allow flaky tests; read the doc | ✅ success
Install packages | Checks that the built packages are installable in a clean environment | ✅ success
Mergeable Check | Checks whether all other necessary checks are successful | ✅ success
PR Check | Checks the correctness of the PR's body | ✅ success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | ✅ success
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success
Style check | Runs a set of checks to keep the code style clean. If some tests fail, see the related log in the report | ✅ success
Unit tests | Runs the unit tests for different release types | ✅ success
Upgrade check | Runs stress tests on a server built from the last release, then tries to upgrade it to the version from the PR. It checks whether the new server can start up without errors, crashes, or sanitizer asserts | ✅ success

@pamarcos pamarcos added the can be tested Allows running workflows for external contributors label Jun 18, 2024
@clickhouse-ci clickhouse-ci bot added the manual approve Manual approve required to run CI label Jun 18, 2024
@pamarcos pamarcos marked this pull request as ready for review June 18, 2024 15:43
@pamarcos pamarcos self-assigned this Jun 18, 2024
@pamarcos pamarcos requested a review from nikitamikhaylov June 20, 2024 08:36
@nikitamikhaylov nikitamikhaylov self-assigned this Jun 20, 2024
Member Author

pamarcos commented Jun 21, 2024

I'm checking the tests that failed and will tick each box once the analysis of that error is finished.

2024-06-21 03:04:04 01052_window_view_proc_tumble_to_now:                                   [ FAIL ] 129.79 sec. - result differs with reference: 
2024-06-21 03:04:04 --- /usr/share/clickhouse-test/queries/0_stateless/01052_window_view_proc_tumble_to_now.reference	2024-06-21 02:42:49.969755914 +0500
2024-06-21 03:04:04 +++ /tmp/clickhouse-test/0_stateless/01052_window_view_proc_tumble_to_now.stdout	2024-06-21 03:04:04.237564301 +0500
2024-06-21 03:04:04 @@ -1,2 +0,0 @@
2024-06-21 03:04:04 -OK
2024-06-21 03:04:04 -1
2024-06-21 03:04:04 
2024-06-21 03:04:04 
2024-06-21 03:04:04 Settings used in the test: --max_insert_threads 5 --group_by_two_level_threshold 964634 --group_by_two_level_threshold_bytes 37819916 --distributed_aggregation_memory_efficient 1 --fsync_metadata 0 --output_format_parallel_formatting 0 --input_format_parallel_parsing 1 --min_chunk_bytes_for_parallel_parsing 15395160 --max_read_buffer_size 889142 --prefer_localhost_replica 1 --max_block_size 86750 --max_joined_block_size_rows 51996 --max_threads 29 --optimize_append_index 0 --optimize_if_chain_to_multiif 0 --optimize_if_transform_strings_to_enum 1 --optimize_read_in_order 1 --optimize_or_like_chain 0 --optimize_substitute_columns 1 --enable_multiple_prewhere_read_steps 0 --read_in_order_two_level_merge_threshold 33 --optimize_aggregation_in_order 0 --aggregation_in_order_max_block_bytes 4061019 --use_uncompressed_cache 1 --min_bytes_to_use_direct_io 10737418240 --min_bytes_to_use_mmap_io 1783071701 --local_filesystem_read_method pread_threadpool --remote_filesystem_read_method read --local_filesystem_read_prefetch 1 --filesystem_cache_segments_batch_size 10 --read_from_filesystem_cache_if_exists_otherwise_bypass_cache 0 --throw_on_error_from_cache_on_write_operations 0 --remote_filesystem_read_prefetch 0 --allow_prefetched_read_pool_for_remote_filesystem 0 --filesystem_prefetch_max_memory_usage 32Mi --filesystem_prefetches_limit 10 --filesystem_prefetch_min_bytes_for_single_read_task 16Mi --filesystem_prefetch_step_marks 50 --filesystem_prefetch_step_bytes 100Mi --compile_aggregate_expressions 1 --compile_sort_description 1 --merge_tree_coarse_index_granularity 18 --optimize_distinct_in_order 0 --max_bytes_before_external_sort 5289239065 --max_bytes_before_external_group_by 10737418240 --max_bytes_before_remerge_sort 2629019942 --min_compress_block_size 1680648 --max_compress_block_size 362760 --merge_tree_compact_parts_min_granules_to_multibuffer_read 42 --optimize_sorting_by_input_stream_properties 0 --http_response_buffer_size 7633808 
--http_wait_end_of_query True --enable_memory_bound_merging_of_aggregation_results 0 --min_count_to_compile_expression 3 --min_count_to_compile_aggregate_expression 3 --min_count_to_compile_sort_description 3 --session_timezone America/Mazatlan --prefer_warmed_unmerged_parts_seconds 9 --use_page_cache_for_disks_without_file_cache True --page_cache_inject_eviction False --merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability 0.59 --prefer_external_sort_block_bytes 100000000 --cross_join_min_rows_to_compress 0 --cross_join_min_bytes_to_compress 100000000 --min_external_table_block_size_bytes 0 --max_parsing_threads 1
2024-06-21 03:04:04 
2024-06-21 03:04:04 MergeTree settings used in test: --ratio_of_defaults_for_sparse_serialization 0.8483545377877905 --prefer_fetch_merged_part_size_threshold 1 --vertical_merge_algorithm_min_rows_to_activate 1 --vertical_merge_algorithm_min_columns_to_activate 69 --allow_vertical_merges_from_compact_to_wide_parts 0 --min_merge_bytes_to_use_direct_io 10737418240 --index_granularity_bytes 15547748 --merge_max_block_size 13634 --index_granularity 46160 --min_bytes_for_wide_part 140185875 --marks_compress_block_size 39946 --primary_key_compress_block_size 20714 --replace_long_file_name_to_hash 0 --max_file_name_length 0 --min_bytes_for_full_part_storage 323233178 --compact_parts_max_bytes_to_buffer 71407425 --compact_parts_max_granules_to_buffer 1 --compact_parts_merge_max_bytes_to_prefetch_part 26842182 --cache_populated_by_fetch 1 --concurrent_part_removal_threshold 4 --old_parts_lifetime 49
2024-06-21 03:04:04 
2024-06-21 03:04:04 Database: test_kgu4gf9n

The same error happened in #63677. It was originally reported in #56683 and fixed in #56870 by increasing the interval from 5 to 10 seconds.

I re-opened #56683 because the test is still flaky:

________________ test_when_s3_broken_pipe_at_upload_is_retried _________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

cluster = <helpers.cluster.ClickHouseCluster object at 0xffa1e9b935e0>
broken_s3 = <helpers.s3_mocks.broken_s3.MockControl object at 0xffa1e939c8e0>

    def test_when_s3_broken_pipe_at_upload_is_retried(cluster, broken_s3):
        node = cluster.instances["node"]
    
        broken_s3.setup_fake_multpartuploads()
        broken_s3.setup_at_part_upload(
            count=3,
            after=2,
            action="broken_pipe",
        )
    
        insert_query_id = f"TEST_WHEN_S3_BROKEN_PIPE_AT_UPLOAD"
        node.query(
            f"""
            INSERT INTO
                TABLE FUNCTION s3(
                    'http://resolver:8083/root/data/test_when_s3_broken_pipe_at_upload_is_retried',
                    'minio', 'minio123',
                    'CSV', auto, 'none'
                )
            SELECT
                *
            FROM system.numbers
            LIMIT 1000000
            SETTINGS
                s3_max_single_part_upload_size=100,
                s3_min_upload_part_size=100000,
                s3_check_objects_after_upload=0
            """,
            query_id=insert_query_id,
        )
    
        create_multipart, upload_parts, s3_errors = get_multipart_counters(
            node, insert_query_id, log_type="QueryFinish"
        )
    
        assert create_multipart == 1
        assert upload_parts == 69
>       assert s3_errors == 3
E       assert 4 == 3

test_checking_s3_blobs_paranoid/test.py:315: AssertionError

I've been able to reproduce the problem locally and have created a new issue for it.

E               ERROR: for hdfs1  Head "https://registry-1.docker.io/v2/sequenceiq/hadoop-docker/manifests/2.7.0": received unexpected HTTP status: 503 Service Unavailable
2024-06-21 02:01:27,421: WARNING: connection: Failed to connect to localhost:9001

@pamarcos pamarcos added this pull request to the merge queue Jun 21, 2024
Merged via the queue into ClickHouse:master with commit 932e4bf Jun 21, 2024
@pamarcos pamarcos deleted the system-error-log branch June 21, 2024 13:34
@robot-ch-test-poll4 robot-ch-test-poll4 added the pr-synced-to-cloud The PR is synced to the cloud repo label Jun 21, 2024

Labels

can be tested: Allows running workflows for external contributors
manual approve: Manual approve required to run CI
pr-feature: Pull request with new product feature
pr-synced-to-cloud: The PR is synced to the cloud repo


4 participants