Description
Bug Type (问题类型)
None
Before submit
- 我已经确认现有的 Issues 与 FAQ 中没有相同 / 重复问题 (I have confirmed and searched that there are no similar problems in the historical issue and documents)
Environment (环境信息)
- Server Version: 1.5.0 (Apache Release Version)
- Backend: RocksDB 5 nodes, SSD
Expected & Actual behavior (期望与实际表现)
While writing data, the graphserver leaks memory; the live-object histogram of the JVM heap looks like this:

```
jmap -histo:live 51680 | head -n 10

 num     #instances         #bytes  class name (module)
-------------------------------------------------------
   1:     284880553    13509899520  [B (java.base)
   2:     284703909     9110525088  java.lang.String (java.base)
   3:     283905229     6813725496  org.apache.hugegraph.backend.id.IdGenerator$StringId
   4:        567813     2284841352  [Lorg.apache.hugegraph.backend.id.Id;
   5:       1384040      182210368  [Ljava.lang.Object; (java.base)
   6:       2270975       90839000  java.util.concurrent.ConcurrentLinkedDeque$Node (java.base)
   7:       1191421       76250944  java.util.LinkedHashMap$Entry (java.base)
```
The issue was eventually traced to `CachedGraphTransaction`, which clears edge caches whenever vertices are written. When a large number of vertices are written, `commitMutation2Backend()` calls `this.notifyChanges(Cache.ACTION_INVALIDED, HugeType.VERTEX, vertexIds)` for each batch. These events pile up in the single-threaded thread pool inside `EventHub`: each queued task holds a reference to its vertex-Id array, so the Ids cannot be garbage-collected and memory keeps growing.
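The retention mechanism can be reproduced in isolation. The sketch below is a minimal, self-contained simulation, not HugeGraph's actual code: the class and method names (`simulateBacklog`, `consume`) and the `long[]` payload standing in for the `Id[]` array are all illustrative. It shows that when the single consumer thread drains slower than producers submit, every queued task keeps its payload reachable.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EventHubBacklogDemo {

    // Simulates the backlog: one worker thread is blocked while producers
    // keep submitting invalidation tasks, each capturing its vertex-id
    // payload. Returns the number of queued (and therefore retained) tasks.
    static int simulateBacklog(int events) throws InterruptedException {
        // Single-threaded pool with an unbounded queue, like the event hub.
        ThreadPoolExecutor hub = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        CountDownLatch slowConsumer = new CountDownLatch(1);

        // The first task occupies the only worker, mimicking a consumer
        // that drains events slower than they arrive.
        hub.submit(() -> {
            try { slowConsumer.await(); } catch (InterruptedException ignored) {}
        });

        for (int i = 0; i < events; i++) {
            long[] vertexIds = new long[100];     // stands in for the Id[] payload
            hub.submit(() -> consume(vertexIds)); // task retains vertexIds until run
        }

        // Every payload is still reachable from the queue at this point.
        int backlog = hub.getQueue().size();

        slowConsumer.countDown();
        hub.shutdown();
        hub.awaitTermination(10, TimeUnit.SECONDS);
        return backlog;
    }

    static void consume(long[] ids) { /* pretend to invalidate caches */ }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("queued tasks holding vertex ids: " + simulateBacklog(10_000));
    }
}
```

With the worker blocked, the backlog equals the number of submitted events, so heap usage grows linearly with write volume until the consumer catches up.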
Vertex/Edge example (问题点 / 边数据举例)
No response
Schema [VertexLabel, EdgeLabel, IndexLabel] (元数据结构)
No response