Performance degradation when inserting long rows through HTTP #8441

@SaltTan

Description

I noticed a performance degradation for inserts of long rows from log files after upgrading to 19.13.5; the slowdown appears only on the HTTP interface, not through clickhouse-client.
Here is my test:

clickhouse-client -q "select arrayStringConcat(arrayMap(x->toString(cityHash64(x)), range(1000)), ' ') from numbers(100000);" >test.json
clickhouse-client -q "CREATE TABLE log (row String) ENGINE = Memory"
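For reproducing the test file shape on a machine without clickhouse-client, a rough stand-in generator could look like the sketch below. It uses Python's hashlib instead of cityHash64, so the hash values differ from the original file, but each row is the same shape: 1000 space-separated 64-bit integer strings (and, as in the query above, every row is identical).

```python
import hashlib

def make_row(n_hashes=1000):
    # One row: n_hashes space-separated 64-bit unsigned integers.
    # hashlib.md5 truncated to 8 bytes stands in for cityHash64;
    # values differ from ClickHouse's, but row length is comparable.
    return " ".join(
        str(int.from_bytes(hashlib.md5(str(x).encode()).digest()[:8], "little"))
        for x in range(n_hashes)
    )

def write_test_file(path="test.json", n_rows=100000):
    # The original query emits the same row n_rows times, so we do too.
    row = make_row()
    with open(path, "w") as f:
        for _ in range(n_rows):
            f.write(row + "\n")
```

This is only a data-shape approximation for the benchmark input, not a byte-for-byte match of the clickhouse-client output.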

time curl "http://localhost:8123/?query=INSERT%20INTO%20log%20format%20CSV" --data-binary @test.json

19.13.4.32
real	0m2.717s user	0m0.316s sys	0m1.156s
real	0m2.727s user	0m0.212s sys	0m1.264s

19.13.5.44
real	0m3.726s user	0m0.304s sys	0m1.188s
real	0m3.714s user	0m0.324s sys	0m1.152s

time clickhouse-client -q 'insert into log format CSV' <test.json

19.13.4.32
real	0m2.394s user	0m0.768s sys	0m0.668s
real	0m2.304s user	0m0.704s sys	0m0.696s

19.13.5.44
real	0m2.328s user	0m0.704s sys	0m0.684s
real	0m2.289s user	0m0.676s sys	0m0.704s
