CHANGELOG.md: 3 additions & 4 deletions
@@ -33,11 +33,10 @@
 * Add new parameter to `S3` table engine and `s3` table function named `storage_class_name` which allows to specify the intelligent tiering supported by AWS. Supported both in key-value format and in positional (deprecated) format. [#87122](https://github.com/ClickHouse/ClickHouse/pull/87122) ([alesapin](https://github.com/alesapin)).
 * `ALTER UPDATE` for Iceberg table engine. [#86059](https://github.com/ClickHouse/ClickHouse/pull/86059) ([scanhex12](https://github.com/scanhex12)).
 * Add system table `iceberg_metadata_log` to retrieve Iceberg metadata files during SELECT statements. [#86152](https://github.com/ClickHouse/ClickHouse/pull/86152) ([scanhex12](https://github.com/scanhex12)).
-* ⚠️WTF, fix this! `Iceberg` and `DeltaLake` tables support custom disk configuration. [#86778](https://github.com/ClickHouse/ClickHouse/pull/86778) ([scanhex12](https://github.com/scanhex12)).
+* `Iceberg` and `DeltaLake` tables support custom disk configuration via the storage-level setting `disk`. [#86778](https://github.com/ClickHouse/ClickHouse/pull/86778) ([scanhex12](https://github.com/scanhex12)).
 * Support Azure for data lake disks. [#87173](https://github.com/ClickHouse/ClickHouse/pull/87173) ([scanhex12](https://github.com/scanhex12)).
 * Support `Unity` catalog on top of Azure blob storage. [#80013](https://github.com/ClickHouse/ClickHouse/pull/80013) ([Smita Kulkarni](https://github.com/SmitaRKulkarni)).
 * Support more formats (`ORC`, `Avro`) in `Iceberg` writes. This closes [#86179](https://github.com/ClickHouse/ClickHouse/issues/86179). [#87277](https://github.com/ClickHouse/ClickHouse/pull/87277) ([scanhex12](https://github.com/scanhex12)).
-* ⚠️WTF, revert this feature before the release! Support table engine `Alias`. [#76569](https://github.com/ClickHouse/ClickHouse/pull/76569) ([RinChanNOW](https://github.com/RinChanNOWWW)).
 * Add a new system table `database_replicas` with information about database replicas. [#83408](https://github.com/ClickHouse/ClickHouse/pull/83408) ([Konstantin Morozov](https://github.com/k-morozov)).
 * Added function `arrayExcept` that subtracts one array as a set from another. [#82368](https://github.com/ClickHouse/ClickHouse/pull/82368) ([Joanna Hulboj](https://github.com/jh0x)).
 * Adds a new `system.aggregated_zookeeper_log` table. The table contains statistics (e.g. number of operations, average latency, errors) of ZooKeeper operations grouped by session id, parent path, and operation type, periodically flushed to disk. [#85102](https://github.com/ClickHouse/ClickHouse/pull/85102) [#87208](https://github.com/ClickHouse/ClickHouse/pull/87208) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
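For illustration, a hedged sketch of how two of the entries above might be exercised. The argument and function names follow the changelog wording only; the exact syntax of `storage_class_name` in key-value form, and the set semantics of `arrayExcept`, are assumptions not verified against the released version:

```sql
-- Assumption: `storage_class_name` is accepted as a key-value argument of the
-- `s3` table function; the bucket URL is hypothetical.
INSERT INTO FUNCTION s3('https://my-bucket.s3.amazonaws.com/data.parquet',
                        storage_class_name = 'INTELLIGENT_TIERING')
SELECT number AS id FROM numbers(10);

-- `arrayExcept(a, b)`: per the entry above, subtracts array `b` as a set from `a`,
-- presumably keeping the elements of `a` that do not occur in `b`.
SELECT arrayExcept([1, 2, 3, 2], [2, 4]);
```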
@@ -71,8 +70,8 @@
 * Reduce memory usage in Iceberg writes. [#86544](https://github.com/ClickHouse/ClickHouse/pull/86544) ([scanhex12](https://github.com/scanhex12)).

 #### Improvement

-* ⚠️WTF, fix this! Support writing multiple data files in Iceberg in a single insertion. Add new settings, `max_iceberg_data_file_rows` and `max_iceberg_data_file_bytes` to control the limits. [#86275](https://github.com/ClickHouse/ClickHouse/pull/86275) ([scanhex12](https://github.com/scanhex12)).
-* ⚠️WTF, fix this! Add rows/bytes limit for inserted data files in delta lake. Controlled by settings `delta_lake_insert_max_rows_in_data_file` and `delta_lake_insert_max_bytes_in_data_file`. [#86357](https://github.com/ClickHouse/ClickHouse/pull/86357) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Support writing multiple data files in Iceberg in a single insertion. Add new settings, `iceberg_insert_max_rows_in_data_file` and `iceberg_insert_max_bytes_in_data_file`, to control the limits. [#86275](https://github.com/ClickHouse/ClickHouse/pull/86275) ([scanhex12](https://github.com/scanhex12)).
+* Add a rows/bytes limit for inserted data files in Delta Lake, controlled by the settings `delta_lake_insert_max_rows_in_data_file` and `delta_lake_insert_max_bytes_in_data_file`. [#86357](https://github.com/ClickHouse/ClickHouse/pull/86357) ([Kseniia Sumarokova](https://github.com/kssenii)).
 * Support more types for partitions in Iceberg writes. This closes [#86206](https://github.com/ClickHouse/ClickHouse/issues/86206). [#86298](https://github.com/ClickHouse/ClickHouse/pull/86298) ([scanhex12](https://github.com/scanhex12)).
 * Make the S3 retry strategy configurable, and allow S3 disk settings to be hot-reloaded when the config XML file changes. [#82642](https://github.com/ClickHouse/ClickHouse/pull/82642) ([RinChanNOW](https://github.com/RinChanNOWWW)).
 * Improved the S3(Azure)Queue table engine so that it survives ZooKeeper connection loss without potential duplicates. Requires enabling the S3Queue setting `use_persistent_processing_nodes` (changeable by `ALTER TABLE MODIFY SETTING`). [#85995](https://github.com/ClickHouse/ClickHouse/pull/85995) ([Kseniia Sumarokova](https://github.com/kssenii)).
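A hedged sketch of how the per-file insert limits above might be applied. Only the `SETTINGS` names come from the changelog entries; the table names, source data, and threshold values are hypothetical, and whether these are query-level settings (as shown) is an assumption:

```sql
-- Hypothetical Iceberg target table; caps each produced data file
-- at 1M rows / 128 MiB per the setting names in the entry above.
INSERT INTO iceberg_events
SELECT * FROM staging_events
SETTINGS iceberg_insert_max_rows_in_data_file = 1000000,
         iceberg_insert_max_bytes_in_data_file = 134217728;

-- The analogous Delta Lake limits:
INSERT INTO delta_events
SELECT * FROM staging_events
SETTINGS delta_lake_insert_max_rows_in_data_file = 1000000,
         delta_lake_insert_max_bytes_in_data_file = 134217728;
```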
SECURITY.md: 1 addition & 0 deletions
@@ -74,3 +74,4 @@ Removal criteria:
 Notification process:
+ClickHouse will post notifications within our OSS Trust Center and notify subscribers. Subscribers must log in to the Trust Center to download the notification. The notification will include the timeframe for public disclosure.
 f"toLowCardinality('{info.repo_name}') AS repo, CAST({info.pr_number} AS UInt32) AS pull_request_number, '{info.sha}' AS commit_sha, toDateTime('{Utils.timestamp_to_str(check_start_time)}', 'UTC') AS check_start_time, toLowCardinality('{info.job_name}') AS check_name, toLowCardinality('{info.instance_type}') AS instance_type, '{info.instance_id}' AS instance_id"

 [ -n "$EXTRA_COLUMNS_EXPRESSION" ] || { echo "ERROR: EXTRA_COLUMNS_EXPRESSION env must be defined"; exit 1; }
 EXTRA_COLUMNS=${EXTRA_COLUMNS:-"repo LowCardinality(String), pull_request_number UInt32, commit_sha String, check_start_time DateTime('UTC'), check_name LowCardinality(String), instance_type LowCardinality(String), instance_id String, INDEX ix_repo (repo) TYPE set(100), INDEX ix_pr (pull_request_number) TYPE set(100), INDEX ix_commit (commit_sha) TYPE set(100), INDEX ix_check_time (check_start_time) TYPE minmax, "}
@@ -166,9 +163,9 @@ function setup_logs_replication
         SELECT ${EXTRA_COLUMNS_EXPRESSION_FOR_TABLE}, * FROM system.${table}
     " || continue
     done
-)
+}

-function stop_logs_replication
+function stop_logs_replication()
 {
     echo "Detach all logs replication"
     clickhouse-client --query "select database||'.'||table from system.tables where database = 'system' and (table like '%_sender' or table like '%_watcher')" | {