Describe your feedback
Improving JDBC Insert performance by using RowBinary protocol instead of INSERT INTO SQL
Code example
// Assumed setup: `conn` is an open ClickHouse JDBC Connection; the table, columns and
// row count below are illustrative.
final int ROWS = 1_000_000;
String insertQuery = "INSERT INTO test_table (id, name) VALUES (?, ?)";

try (PreparedStatement stmt = conn.prepareStatement(insertQuery)) {
    for (int i = 0; i < ROWS; i++) {
        stmt.setInt(1, i);
        stmt.setString(2, "name" + i);
        stmt.addBatch();
    }
    long startBatchTime = System.currentTimeMillis();
    stmt.executeBatch();
    long endBatchTime = System.currentTimeMillis();
    System.out.println("Insertion Time for Final Batch: " + (endBatchTime - startBatchTime) + " ms");
}

@chernser:
RowBinary format works well with many common use cases. JDBC v2 should support it.
Integration should be done as follows:
- Complete the RowBinaryWriter implementation and test it as part of client-v2.
- Integrate the writer into the JDBC v2 code.
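For context, here is a minimal sketch of the callback-style write path that the first two problems below refer to. The class and method names are hypothetical, chosen only to make the shape concrete; they are not the actual client-v2 API:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical shapes for illustration only. The key point: the output stream exists only
// inside the callback, so once write() returns the client closes the stream and completes
// the request.
interface WriteCallback {
    void write(OutputStream out) throws IOException;
}

class CallbackInsertSketch {
    // Stand-in for a callback-based insert; the real transport is omitted.
    static void insert(String table, WriteCallback callback) throws IOException {
        try (OutputStream requestBody = new ByteArrayOutputStream()) { // would be the HTTP body
            callback.write(requestBody);
        } // the stream is closed here, so it cannot outlive the call as JDBC would need
    }
}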
Problems to solve:
- RowBinaryWriter is designed to be used through the writer API inside a write callback (the sketch above shows the shape), and there is no good API for obtaining the output buffer outside of that callback. A new API may need to be drafted specifically for cases where the output stream has to be passed out of the caller's scope.
- When the writer operation returns control, the output stream is closed and the request is finished. That does not work for JDBC, because the output stream has to stay open until execute() or executeBatch() is called. The simplest way is to create a stream plus some synchronization primitives, but there may be a better way to solve the problem (a piped-stream sketch follows this list).
- Currently most of the logic sits in a single class, PreparedStatementImpl, but a prepared statement is either an INSERT or a SELECT at any given time. Would it make sense to split these two branches to make the code clearer? Another concern is that mixing the logic creates an "unused variable" problem at runtime, because not everything is useful for both cases (see the note after the sketch below).
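For the second item, a minimal sketch of the "stream plus sync primitives" idea, assuming the insert request can be started from an InputStream on another thread; how that request is started (the startInsert function) is deliberately left abstract, since the exact client-v2 entry point is an assumption here:

import java.io.IOException;
import java.io.InputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// Keeps one OutputStream open across addBatch() calls and only closes it in executeBatch().
class RowBinaryBatchBuffer {
    private final PipedOutputStream out = new PipedOutputStream();
    private final PipedInputStream in = new PipedInputStream(1 << 20); // 1 MiB pipe buffer
    private CompletableFuture<Void> pendingInsert;

    // Called when the statement sees its first bound row: connect the pipe and start the
    // insert request on another thread, which consumes rows as they are written.
    void open(Function<InputStream, CompletableFuture<Void>> startInsert) throws IOException {
        out.connect(in);
        pendingInsert = startInsert.apply(in);
    }

    // setInt()/setString()/addBatch() encode RowBinary bytes into this stream.
    PipedOutputStream stream() {
        return out;
    }

    // executeBatch(): closing the stream signals end of data, then wait for the server reply.
    void finish() throws IOException {
        out.close();
        pendingInsert.join();
    }
}

For the last item, one possible shape is to keep PreparedStatementImpl as the thin java.sql-facing class and have it pick an insert-specific or select-specific delegate when the SQL is parsed, so each branch only carries the state it actually uses; whether that split is worth the extra indirection is the open question.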
This issue is blocked by #2177.