
Conversation

jt2594838 (Contributor) commented Aug 5, 2025

When reading the WAL to construct a log batch in IoTConsensus, the memory control of the batch is based on the size of the serialized entries, which can be significantly smaller than the in-memory size of the corresponding PlanNodes.

Therefore, when the receiver deserializes the log batch, its memory footprint may be several times larger than the size accounted for on the sender, which can lead to an OOM on the receiver.

To avoid this, the receiver now reports its actual memory cost after deserializing a batch back to the sender, so that the sender can adjust the size of subsequent batches accordingly.
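A minimal sketch of this feedback loop, assuming hypothetical names (`BatchMemoryController`, `canAppend`, `onReceiverFeedback`) rather than the actual IoTDB classes: the sender keeps a multiplier that tracks the ratio of the receiver's reported deserialization cost to the serialized batch size, and uses it to scale the serialized size against the batch budget.

```java
// Hypothetical sketch of the sender-side feedback loop; class and method
// names are illustrative, not the actual IoTDB implementation.
public class BatchMemoryController {

  // Memory budget a single batch may occupy on the receiver.
  private final long maxBatchSizeInBytes;

  // Ratio of (receiver deserialized memory) / (serialized size), learned
  // from receiver feedback; starts at 1.0, i.e. the serialized size is
  // initially assumed to be accurate.
  private volatile double memoryMultiplier = 1.0;

  public BatchMemoryController(long maxBatchSizeInBytes) {
    this.maxBatchSizeInBytes = maxBatchSizeInBytes;
  }

  /** Whether another entry of the given serialized size still fits the batch. */
  public boolean canAppend(long currentBatchSerializedSize, long entrySerializedSize) {
    // Scale the serialized size by the learned multiplier so the check
    // approximates the receiver's deserialized memory footprint.
    double estimatedReceiverMemory =
        (currentBatchSerializedSize + entrySerializedSize) * memoryMultiplier;
    return estimatedReceiverMemory <= maxBatchSizeInBytes;
  }

  /**
   * Called when the receiver's ack reports the actual memory cost of
   * deserializing a batch; assumed to run on a single ack-handling thread.
   */
  public void onReceiverFeedback(long serializedSize, long receiverMemoryCost) {
    if (serializedSize <= 0) {
      return;
    }
    double observedRatio = (double) receiverMemoryCost / serializedSize;
    // Exponential moving average smooths out per-batch variance; the ratio
    // is floored at 1.0 because deserialized PlanNodes are never smaller
    // than their serialized form in this model.
    memoryMultiplier = 0.8 * memoryMultiplier + 0.2 * Math.max(1.0, observedRatio);
  }
}
```

Under this scheme, a workload whose PlanNodes inflate to several times their serialized size drives the multiplier up after the first few acks, so subsequent batches shrink before the receiver can run out of memory.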

sonarqubecloud bot commented Aug 5, 2025

@jt2594838 jt2594838 merged commit e5f8a19 into master Aug 6, 2025
32 of 35 checks passed
jt2594838 added a commit that referenced this pull request Nov 27, 2025
…6812)

* more accurate memory size (#15713)

* Fix stuck when stopping a DataNode with large unremovable WAL files (#15727)

* Fix stuck when stopping a DataNode with large unremovable WAL files

* spotless

* add shutdown hook watcher

* Fix logDispatcher stuck

* add re-interrupt

* Add a multiplier to avoid receiver OOM in IoTConsensus (#16102)

* Fix negative IoT queue size & missing search index for deletion & missed request when deleting an empty table (#16022)

* Fix double memory free of iotconsensus queue request during region deletion

* Fix missing searchIndex and lost deletion when no TsFile is involved.

* Fix ref count of IoTConsensus request not decreased in allocation failure (#16169)

* fix IoTConsensus memory management

* Fix ref count of IoTConsensus request not decreased in allocation failure

* fix log level

* remove irrelevant code from 2.0

* Remove a table test

* Interrupt wal-delete thread when WALManager is closed (#15442)

---------

Co-authored-by: Xiangpeng Hu <[email protected]>