
Partially clean warnings.cmake#84263

Merged
alexey-milovidov merged 5 commits into ClickHouse:master from Algunenano:clean_warnings_cmake
Jul 26, 2025

Conversation

@Algunenano
Member

Changelog category (leave one):

  • Not for changelog (changelog entry is not required)

Changelog entry (a user-readable short description of the changes that goes into CHANGELOG.md):

Partially clean warnings.cmake

  • Added comments to the disabled warnings I've checked. More are pending
  • vla-cxx-extension: It's already on by default.
  • c++98-compat: Unused.
  • ctad-maybe-unsupported: Most of the changes were required in unit test code or in Decimal, where it's better to be explicit about which underlying type you want to use than hope you guessed correctly.

References #76944

Documentation entry for user-facing changes

  • Documentation is written (mandatory for new features)

@clickhouse-gh
Contributor

clickhouse-gh bot commented Jul 23, 2025

Workflow [PR], commit [4d1b554]

Summary:

job_name                                    test_name                                                        status
Stateless tests (amd_tsan, parallel, 2/2)   02730_dictionary_hashed_load_factor_element_count                FAIL
Integration tests (release, 3/4)            Job Timeout Expired                                              FAIL
Integration tests (tsan, 1/6)               test_keeper_invalid_digest/test.py::test_keeper_invalid_digest   FAIL

@clickhouse-gh clickhouse-gh bot added the pr-not-for-changelog This PR should not be mentioned in the changelog label Jul 23, 2025
Member

@alexey-milovidov alexey-milovidov left a comment


Ok, but I don't understand why we have to give up template arguments inference from constructors.

@alexey-milovidov alexey-milovidov self-assigned this Jul 23, 2025
@Algunenano
Member Author

Ok, but I don't understand why we have to give up template arguments inference from constructors.

This one is kind of controversial because there are use cases where it's obvious and others where it's not.

I've enabled a few cases by allowing it explicitly or by declaring the deduction manually, but there were multiple cases where it wasn't obvious at all; in those I preferred to look into it and declare the types explicitly.

There are some sources about pitfalls when using CTAD:

@Algunenano
Copy link
Copy Markdown
Member Author

Looking at the failure of 02730_dictionary_hashed_load_factor_element_count:

2025-07-24 01:12:52 02730_dictionary_hashed_load_factor_element_count:                      [ FAIL ] 6.90 sec.
2025-07-24 01:12:52 Reason: result differs with reference:  
2025-07-24 01:12:52 --- /home/ubuntu/actions-runner/_work/ClickHouse/ClickHouse/tests/queries/0_stateless/02730_dictionary_hashed_load_factor_element_count.reference	2025-07-24 01:02:42.855837056 +1000
2025-07-24 01:12:52 +++ /home/ubuntu/actions-runner/_work/ClickHouse/ClickHouse/tests/queries/0_stateless/02730_dictionary_hashed_load_factor_element_count.stdout	2025-07-24 01:12:51.959695161 +1000
2025-07-24 01:12:52 @@ -1,2 +1,2 @@
2025-07-24 01:12:52  dict_sharded	1	1000000	0.4768
2025-07-24 01:12:52 -dict_sharded_multi	5	1000000	0.4768
2025-07-24 01:12:52 +dict_sharded_multi	5	997709	0.4757
2025-07-24 01:12:52 
2025-07-24 01:12:52 

The dictionary has only loaded 997k rows instead of the expected 1M:

/tmp $ grep 02730_dictionary_hashed_load_factor_element_count clickhouse-server.log | grep RELOAD
2025.07.23 17:12:46.615332 [ 33796 ] {bb033bcb-82b4-4d37-8cb7-c860586a1ca1} <Debug> executeQuery: (from [::1]:47894) (comment: 02730_dictionary_hashed_load_factor_element_count.sql) (query 6, line 8) SYSTEM RELOAD DICTIONARY dict_sharded; (stage: Complete)
2025.07.23 17:12:49.034346 [ 33796 ] {0fa5b23e-b65f-4565-af0c-177d78064c51} <Debug> executeQuery: (from [::1]:47894) (comment: 02730_dictionary_hashed_load_factor_element_count.sql) (query 10, line 13) SYSTEM RELOAD DICTIONARY dict_sharded_multi; (stage: Complete)
/tmp $ grep 0fa5b23e-b65f-4565-af0c-177d78064c51 clickhouse-server.log | grep rows | grep Finished | awk '{ sum += $20 } END { print sum }'
997709

Where did the missing 2291 go?

The previous dictionary, dict_sharded, did count 1M rows, so the source table must be correct:

$ grep bb033bcb-82b4-4d37-8cb7-c860586a1ca1 clickhouse-server.log | grep rows | grep Finished | awk '{ sum += $20 } END { print sum }'
1000000

Which means the issue should be somewhere around the sharding. It does not reproduce locally, but I'm having a look to see whether it has happened before or whether I can reproduce it somehow.

@alexey-milovidov alexey-milovidov merged commit 120513e into ClickHouse:master Jul 26, 2025
118 of 123 checks passed
@robot-ch-test-poll3 robot-ch-test-poll3 added the pr-synced-to-cloud The PR is synced to the cloud repo label Jul 26, 2025