
Performance improvement for BLOB copying #7382

@asfernandes

Description

  1. In copy_blob, a buffer of size input->blb_max_segment is allocated, but this size does not make sense in the case of stream blobs.

BLOB_APPEND creates stream blobs, so here is a test:

execute block
as
    declare b1 blob sub_type text character set unicode_fss;
    declare b2 blob sub_type text character set utf8;
    declare i integer = 0;
begin
    while (i < 65000) do
    begin
        b1 = blob_append(b1, 'a');
        i = i + 1;
    end

    i = 0;
    while (i < 800) do
    begin
        -- copy_blob
        b2 = b1;
        i = i + 1;
    end
end!

In this test the data is appended to the blob character by character, so input->blb_max_segment is 1 and copy_blob ends up copying a single byte per iteration.

The slowdown happens because of this and also because of a second problem.

  2. Calls to blob filters are wrapped with START_CHECK_FOR_EXCEPTIONS / END_CHECK_FOR_EXCEPTIONS. At least on Linux, this is very slow.

And it does not make sense to wrap builtin filters that live inside the same engine library.

I propose to disable the wrapping in the case of builtin filters.

Here are some timings (in seconds) with a release build on Linux/clang++:

- master (100 copies): 9.706
- change 1 (100 copies): 0.959

- master (200 copies): 19.173
- change 1 (200 copies): 0.653

- master (400 copies): 36.714
- change 1 (400 copies): 1.248

- master (800 copies): 73.184
- change 1 (800 copies): 2.994
- change 2 (800 copies): 8.342
- change 1 and 2 (800 copies): 2.144
