What happened?
When a BCP import is performed, fields of type varchar (and some other types) from incoming records are allocated in MessageContext. These buffered fields are kept in memory for all incoming records until the end of the BCP import call.
For example, for a table like the one below, backend memory usage during BCP import grows linearly with the number of records:
create table tab1(col1 varchar(10))
insert into tab1 values('foobar')
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
insert into tab1 select col1 from tab1
select count(*) from tab1
> 1048576
Unlike #2455 and #2462, it is not trivial to free these allocations promptly because of two levels of batching: an implicit one on the protocol side and MAX_BUFFERED_TUPLES (1000) on the executor side.
I'll file a PR with an experimental patch to track these allocations and free them as soon as possible.
Version
BABEL_3_X_DEV (Default)
Extension
babelfishpg_tsql (Default)
Which flavor of Linux are you using when you see the bug?
Fedora
Relevant log output
No response
Code of Conduct