
feat(ilp): add gzip support to ILP/HTTP server#6165

Merged
bluestreak01 merged 44 commits into master from nw_ilp_gzip
Oct 1, 2025

Conversation

@nwoolmer
Contributor

@nwoolmer nwoolmer commented Sep 19, 2025

ILP can be fairly inefficient with bandwidth: a 2x-3x increase in bandwidth usage is not unusual for ILP streaming, because it is a text-based protocol.

Upgrading the clients to a binary protocol is a WIP. In the meantime, this change allows the server to accept gzip encoded ILP.

In a local test, compression reduced overall throughput, due to the overhead of encoding and decoding the compressed buffer:

TSBS, cpu-only

  • Uncompressed, 1 thread: 568582 rows/sec
  • Compressed, 1 thread: 295322 rows/sec
  • Uncompressed, 8 threads: 1333824 rows/sec
  • Compressed, 8 threads: 785233 rows/sec

However, in aggregate this can be an advantage, for example when receiving text-heavy data (logs, documents) from multiple sources. The encoding CPU overhead is absorbed on the client side, and the reduction in bandwidth usage raises the ceiling for maximum throughput.

When we next roll out client changes, we can review this and see if we'd like to introduce alternative encodings that may perform better, for example, zstd.
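On the client side, using this is just a matter of compressing the ILP text and sending the bytes with a `Content-Encoding: gzip` header. A minimal sketch using the JDK's gzip classes (helper names are hypothetical; the round-trip check here stands in for the server-side inflate step):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class IlpGzipSketch {
    // Compress an ILP text payload; the resulting bytes would be POSTed
    // to the ILP/HTTP endpoint with a "Content-Encoding: gzip" header.
    static byte[] gzip(String ilp) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(ilp.getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    // Inverse operation, standing in for what the server does natively.
    static String gunzip(byte[] compressed) throws Exception {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        String payload = "cpu,host=a usage=0.5 1700000000000000000\n";
        byte[] body = gzip(payload);
        if (!gunzip(body).equals(payload)) {
            throw new AssertionError("round-trip mismatch");
        }
        System.out.println("round-trip ok, compressed size=" + body.length);
    }
}
```

This is the same framing that Telegraf's gzip output option produces, which is the compatibility case motivating the change.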

@nwoolmer nwoolmer added Enhancement Enhance existing functionality Compatibility Compatibility with third-party tools and services labels Sep 19, 2025
@coderabbitai

coderabbitai bot commented Sep 19, 2025

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

Adds gzip decompression support for Line HTTP ingestion: new native inflater init, Zip JNI method, gzip-aware processing and lifecycle in LineHttpProcessor/State, adaptive buffer accessor, logging tweak, and expanded gzip-focused tests and compatibility test renames.

Changes

  • HTTP Line gzip handling — core/src/main/java/io/questdb/cutlass/http/processors/LineHttpProcessorImpl.java, core/src/main/java/io/questdb/cutlass/http/processors/LineHttpProcessorState.java
    Detects Content-Encoding: gzip, calls Zip.inflateInitGzip(), stores the inflater via setInflateStream, routes gzip chunks through inflateAndParse(...), manages the gzip lifecycle with cleanupGzip() and isGzipEncoded(), and updates clear/close to free gzip resources.
  • Zip native & Java bridge — core/src/main/c/share/zip.c, core/src/main/java/io/questdb/std/Zip.java
    Adds native Java_io_questdb_std_Zip_inflateInitGzip returning an inflater pointer and exposes public static native long inflateInitGzip() in Zip.java; adjusts the inflate native to return the inflate result directly and annotates totalOut as unused.
  • Line WAL appender logging — core/src/main/java/io/questdb/cutlass/line/tcp/LineWalAppender.java
    Minor logging change: pass the throwable directly into the trace output instead of casting.
  • Adaptive recv buffer — core/src/main/java/io/questdb/cutlass/line/tcp/AdaptiveRecvBuffer.java
    Adds public long getCurrentBufSize() and a suppression annotation on tryCompactOrGrowBuffer.
  • HTTP gzip tests — core/src/test/java/io/questdb/test/cutlass/http/line/LineHttpFailureTest.java
    Adds gzip-focused tests: corrupted gzip, disconnect during decompression, empty gzip stream, and a valid gzip encoding test that compresses payloads and verifies ingestion; introduces gzip utilities and related imports.
  • InfluxDB client compat tests — compat/src/test/java/io/questdb/compat/InfluxDBClientFailureTest.java, compat/src/test/java/io/questdb/compat/InfluxDBClientTest.java
    Renames testGzipNotSupported() to testGzipSupported() and adds testGzipSupportedLotsOfData(); expands ILP HTTP tests with gzip variants, per-thread gzip control (new EnableGzipFunction), helper methods for parallel tests, and multiple new test cases exercising gzip in single and parallel scenarios.
  • Compatibility test class rename — compat/src/test/java/io/questdb/compat/InfluxDBv2ClientCompatTest.java
    Fixes the public class name from InfluxDBv2ClientCombatTest to InfluxDBv2ClientCompatTest.
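The gzip-aware processing summarized above can be approximated, in plain Java with whole-body buffering, as follows (names are hypothetical; the real implementation inflates each network chunk incrementally through the native zlib stream into the receive buffer rather than buffering the full body):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipLineDecodeSketch {

    // Stands in for the Content-Encoding check; the exact header matching
    // rules in LineHttpProcessorImpl are an assumption here.
    static boolean isGzip(String contentEncoding) {
        return contentEncoding != null && contentEncoding.trim().equalsIgnoreCase("gzip");
    }

    // Whole-body decode; the server instead feeds decompressed bytes to
    // the ILP parser as they are produced, chunk by chunk.
    static String[] decodeLines(byte[] body, String contentEncoding) throws Exception {
        byte[] plain = body;
        if (isGzip(contentEncoding)) {
            try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(body))) {
                plain = gz.readAllBytes();
            }
        }
        return new String(plain, StandardCharsets.UTF_8).split("\n");
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write("weather,city=ldn temp=11.5\n".getBytes(StandardCharsets.UTF_8));
        }
        // prints "weather,city=ldn temp=11.5"
        System.out.println(decodeLines(bos.toByteArray(), "gzip")[0]);
    }
}
```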

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant C as Client
  participant HP as LineHttpProcessorImpl
  participant ST as LineHttpProcessorState
  participant Z as Zip (JNI)

  rect #F2F8FF
    C->>HP: Send HTTP headers
    HP->>HP: Detect Content-Encoding: gzip?
    alt gzip
      HP->>Z: inflateInitGzip()
      alt init OK
        HP->>ST: setInflateStream(stream)
      else init error
        HP-->>C: 415 Encoding Not Supported
      end
    end
  end

  loop for each chunk
    C->>HP: onChunk(lo,hi)
    alt gzip
      HP->>ST: inflateAndParse(lo,hi)
      ST->>Z: inflate(stream, input)
      ST->>ST: process decompressed bytes
      alt inflate error
        ST-->>C: 415 Encoding Not Supported
      end
    else plain
      HP->>ST: parse(lo,hi)
    end
  end

  rect #F5FFF5
    C->>HP: onRequestComplete
    HP->>ST: cleanupGzip()
    HP-->>C: Complete (e.g., 204)
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested labels

Core, Performance

Suggested reviewers

  • bluestreak01

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 2.94%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title Check — ✅ Passed: The title concisely and accurately describes the primary change of enabling gzip support for ILP over HTTP, matching the core implementation scope of the pull request. It uses the conventional “feat” prefix and specifies the relevant subsystem (ilp) and the feature added, making it clear for reviewers and history.
  • Description Check — ✅ Passed: The description focuses on the context and rationale for adding gzip support to the ILP HTTP server, including performance trade-offs and future considerations, which directly aligns with the changes in the code. It provides concrete benchmark data and outlines the use case without diverging into unrelated topics.


@nwoolmer nwoolmer changed the title feat(ilp): add gzip support to ILP/HTTP feat(ilp): add gzip support to ILP/HTTP server Sep 19, 2025
@puzpuzpuz
Contributor

@nwoolmer have you considered alternatives like Snappy or zstd? Both should be a few times faster in compression/decompression while providing decent compression ratio.

@nwoolmer
Contributor Author

> @nwoolmer have you considered alternatives like Snappy or zstd? Both should be a few times faster in compression/decompression while providing decent compression ratio.

Indeed there are better options from a compression ratio and speed perspective. However, there is a non-functional requirement that it integrates with other tools, including Telegraf. Telegraf output can be configured for gzip, but I am not sure whether it supports other encodings.

Also, we have zlib already integrated on the Java-side, so this was an easy lift versus adding new compression code, potentially on the Rust side.

@puzpuzpuz
Contributor

> Also, we have zlib already integrated on the Java-side, so this was an easy lift versus adding new compression code, potentially on the Rust side.

We should have zstd and lz4 as a dependency in Rust code already. As for Telegraf, that's a fair argument, but we could support zlib along with faster alternatives - our clients would then use something more efficient while zlib will be there for 3rd-party compat.

@nwoolmer
Contributor Author

> Also, we have zlib already integrated on the Java-side, so this was an easy lift versus adding new compression code, potentially on the Rust side.

> We should have zstd and lz4 as a dependency in Rust code already. As for Telegraf, that's a fair argument, but we could support zlib along with faster alternatives - our clients would then use something more efficient while zlib will be there for 3rd-party compat.

For now, this is a server-only change to fulfill an internal requirement. Let's follow up with more improvements when we roll out the next client-wide changes.

@CLAassistant

CLAassistant commented Sep 26, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
3 out of 4 committers have signed the CLA.

✅ nwoolmer
✅ bluestreak01
✅ puzpuzpuz
❌ GitHub Actions - Rebuild Native Libraries


GitHub Actions - Rebuild Native Libraries seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@bluestreak01
Member

@CodeRabbit review

@coderabbitai

coderabbitai bot commented Sep 30, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@puzpuzpuz
Contributor

@coderabbitai review

@coderabbitai

coderabbitai bot commented Oct 1, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
core/src/main/c/share/zip.c (1)

47-60: Add null-pointer guards in native Zip methods.

In core/src/main/c/share/zip.c, for each JNI entry (Java_io_questdb_std_Zip_setInput, Java_io_questdb_std_Zip_deflate, and similarly in inflate, deflateEnd, availOut, getAvailOut), add at the top:

if (ptr == 0) {
    jclass npe = (*e)->FindClass(e, "java/lang/NullPointerException");
    (*e)->ThrowNew(e, npe, "Null z_stream pointer");
    return /* or appropriate jint 0 */;
}

to fail fast instead of dereferencing a null pointer.

♻️ Duplicate comments (1)
core/src/main/java/io/questdb/cutlass/http/processors/LineHttpProcessorState.java (1)

185-217: Fix gzip inflate handling when the buffer fills up.

Zip.inflate can return Z_BUF_ERROR after writing bytes once the receive buffer runs out of space. Because bufPos is only advanced after the negative-return check, this branch always falls through to MESSAGE_TOO_LARGE, dropping valid gzip payloads while the parser never sees the bytes that were already produced. Move the bufPos update ahead of the status check (compute produced = len - Zip.getAvailOut(...)), process that chunk, and only reject after trying to compact/grow the buffer—mirroring the uncompressed path. This is the same issue I raised earlier; it still needs to be fixed.

-            int ret = Zip.inflate(inflateStream, p, len, false);
-
-            if (ret < 0) {
-                if (ret != Zip.Z_BUF_ERROR) {
+            int status = Zip.inflate(inflateStream, p, len, false);
+            int produced = len - Zip.getAvailOut(inflateStream);
+            if (produced > 0) {
+                recvBuffer.setBufPos(p + produced);
+            }
+
+            if (status < 0) {
+                if (status != Zip.Z_BUF_ERROR) {
                     reject(ENCODING_NOT_SUPPORTED, "gzip decompression error", fd);
                     cleanupGzip();
                     return;
                 }
 
-                if (recvBuffer.getBufPos() > pp) {
-                    // when inflate fails with Z_BUF_ERROR - this means either
-                    // have not processed buffer yet, or it is too small. Trigger buffer processing
-                    // to make space in the buffer
-                    currentStatus = processLocalBuffer();
-                    continue;
-                }
-
-                reject(MESSAGE_TOO_LARGE, "server buffer is too small", fd);
-                cleanupGzip();
-                return;
+                if (produced > 0) {
+                    currentStatus = processLocalBuffer();
+                    pp = recvBuffer.getBufPos();
+                    continue;
+                }
+                if (!recvBuffer.tryCompactOrGrowBuffer()) {
+                    reject(MESSAGE_TOO_LARGE, "server buffer is too small", fd);
+                    cleanupGzip();
+                    return;
+                }
+                continue;
             }
 
-            recvBuffer.setBufPos(p + (len - Zip.getAvailOut(inflateStream)));
-
-            if (ret == Zip.Z_STREAM_END) {
-                currentStatus = processLocalBuffer();
-                cleanupGzip();
-                break;
-            }
+            if (produced > 0) {
+                currentStatus = processLocalBuffer();
+                if (stopParse()) {
+                    return;
+                }
+                pp = recvBuffer.getBufPos();
+            }
+
+            if (status == Zip.Z_STREAM_END) {
+                cleanupGzip();
+                break;
+            }
🧹 Nitpick comments (4)
core/src/main/c/share/zip.c (2)

136-140: Duplicate API: getAvailOut duplicates availOut.

Zip_availOut(ptr) already exposes avail_out. Consider reusing it (or aliasing on Java) to keep a single native symbol and avoid drift.

Option A: remove getAvailOut and use availOut on Java.
Option B: keep both but document one as preferred to avoid confusion.


90-107: Optional: minor JNI hygiene.

  • Return -1 on OOM is fine but collides with Z_ERRNO; consider a distinct sentinel like INT_MIN for clearer differentiation on Java side.
  • Consider factoring stream allocation into a tiny static helper to reduce repetition in inflateInit/inflateInitGzip.

Also applies to: 128-134

compat/src/test/java/io/questdb/compat/InfluxDBClientTest.java (2)

225-235: Add negative-path gzip tests.

Consider adding:

  • Corrupted gzip body → 400 with parse error.
  • Mismatched header (Content-Encoding: gzip, uncompressed body) → 400/415.
  • “Gzip bomb” expansion exceeding line.http.max.recv.buffer.size → request too large.

I can draft these tests with minimal scaffolding if helpful.


776-785: Avoid potential barrier hangs: add timeout.

Use a bounded await to fail fast on thread errors and avoid indefinite waits.

-                        startBarrier.await();
+                        startBarrier.await(60, java.util.concurrent.TimeUnit.SECONDS);
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1337902 and fd2cd48.

⛔ Files ignored due to path filters (5)
  • core/src/main/resources/io/questdb/bin/darwin-aarch64/libquestdb.dylib is excluded by !**/*.dylib
  • core/src/main/resources/io/questdb/bin/darwin-x86-64/libquestdb.dylib is excluded by !**/*.dylib
  • core/src/main/resources/io/questdb/bin/linux-aarch64/libquestdb.so is excluded by !**/*.so
  • core/src/main/resources/io/questdb/bin/linux-x86-64/libquestdb.so is excluded by !**/*.so
  • core/src/main/resources/io/questdb/bin/windows-x86-64/libquestdb.dll is excluded by !**/*.dll
📒 Files selected for processing (7)
  • compat/src/test/java/io/questdb/compat/InfluxDBClientTest.java (4 hunks)
  • core/src/main/c/share/zip.c (1 hunks)
  • core/src/main/java/io/questdb/cutlass/http/processors/LineHttpProcessorImpl.java (4 hunks)
  • core/src/main/java/io/questdb/cutlass/http/processors/LineHttpProcessorState.java (8 hunks)
  • core/src/main/java/io/questdb/cutlass/line/tcp/LineWalAppender.java (1 hunks)
  • core/src/main/java/io/questdb/std/Zip.java (2 hunks)
  • core/src/test/java/io/questdb/test/cutlass/http/line/LineHttpFailureTest.java (13 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • core/src/main/java/io/questdb/cutlass/http/processors/LineHttpProcessorImpl.java
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: New pull request (Check Changes Check changes)
  • GitHub Check: build
  • GitHub Check: build
  • GitHub Check: aarch64
🔇 Additional comments (6)
compat/src/test/java/io/questdb/compat/InfluxDBClientTest.java (4)

215-218: Good coverage: gzip happy-path.

New gzip test reuses the helper and mirrors the uncompressed case. LGTM.


221-224: Helper refactor for parallel case looks good.

Switching to the helper with per-thread gzip control simplifies reuse.


702-710: Per-call gzip enable is fine.

Enabling gzip per InfluxDB client instance is clear and isolated. LGTM.


749-764: Helper encapsulation reads well.

Single-table gzip vs plain path validated identically. LGTM.

core/src/main/java/io/questdb/cutlass/line/tcp/LineWalAppender.java (1)

475-476: LGTM: cleaner throwable logging.

Passing the Throwable directly is correct and idiomatic.

core/src/main/c/share/zip.c (1)

133-134: Confirm inflate flush semantics: Only one call-site invokes Zip.inflate with flush=false (LineHttpProcessorState.java:187). No flush=true usage detected; ensure any future flush=true calls are confined to the final chunk to prevent premature Z_BUF_ERROR/Z_STREAM_END.

@puzpuzpuz
Contributor

@coderabbitai review

@coderabbitai

coderabbitai bot commented Oct 1, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
compat/src/test/java/io/questdb/compat/InfluxDBClientFailureTest.java (1)

298-306: This will likely OOM or stall CI: avoid building a 10M-String list.

10,000,000 String objects + backing char arrays will consume multiple GB and can time out. Stream or batch instead.

Apply a bounded‑memory batching approach:

-                for (int i = 0; i < 10_000_000; i++) {
-                    points.add("m1,tag1=value1 f1=1i,x=12i");
-                }
-                InfluxDBUtils.assertRequestOk(influxDB, points, "m1,tag1=value1 f1=1i,x=12i");
+                final String line = "m1,tag1=value1 f1=1i,x=12i\n";
+                // ~100k lines total in 100 chunks to exercise gzip with large payloads without exhausting heap
+                final int batches = 100, perBatch = 1_000;
+                for (int b = 0; b < batches; b++) {
+                    StringBuilder sb = new StringBuilder(line.length() * perBatch);
+                    for (int i = 0; i < perBatch; i++) sb.append(line);
+                    points.add(sb.toString());
+                }
+                InfluxDBUtils.assertRequestOk(influxDB, points, line);

Optionally gate a truly huge test behind a system property and mark it nightly.

♻️ Duplicate comments (2)
core/src/main/java/io/questdb/cutlass/http/processors/LineHttpProcessorState.java (2)

178-228: Decompress more before parsing to reduce parser fragmentation.

Inflate until either stream end or the local buffer can’t accept more bytes, then call processLocalBuffer() once. This reduces parse calls and improves throughput.


178-228: Handle Z_BUF_ERROR by compacting/growing buffer before rejecting.

Currently, if the output buffer is full and inflate returns Z_BUF_ERROR with newBytes == 0, the code immediately returns 413. This breaks valid requests whose decompressed size fits within line.http.max.recv.buffer.size. Mirror the uncompressed path: try recvBuffer.tryCompactOrGrowBuffer() and retry before failing.

-            if (ret < 0) {
-                if (ret != Zip.Z_BUF_ERROR) {
-                    reject(ENCODING_NOT_SUPPORTED, "gzip decompression error", fd);
-                    cleanupGzip();
-                    return;
-                }
-
-                // inflate can return Z_BUF_ERROR after writing bytes once the recv buffer runs out of space
-                if (newBytes > 0) {
-                    currentStatus = processLocalBuffer();
-                    pp = recvBuffer.getBufPos();
-                    continue;
-                }
-
-                reject(MESSAGE_TOO_LARGE, "server buffer is too small", fd);
-                cleanupGzip();
-                return;
-            }
+            if (ret == Zip.Z_BUF_ERROR) {
+                // Output buffer exhausted.
+                if (newBytes > 0) {
+                    currentStatus = processLocalBuffer();
+                    pp = recvBuffer.getBufPos();
+                    if (stopParse()) {
+                        cleanupGzip();
+                        return;
+                    }
+                    continue;
+                }
+                if (recvBuffer.tryCompactOrGrowBuffer()) {
+                    // Retry inflate with more space.
+                    continue;
+                }
+                reject(MESSAGE_TOO_LARGE, "transaction is too large, either flush more frequently or increase buffer size \"line.http.max.recv.buffer.size\"", fd);
+                cleanupGzip();
+                return;
+            } else if (ret < 0) {
+                // Other zlib errors
+                reject(ENCODING_NOT_SUPPORTED, "gzip decompression error", fd);
+                cleanupGzip();
+                return;
+            }
🧹 Nitpick comments (5)
compat/src/test/java/io/questdb/compat/InfluxDBClientTest.java (2)

733-734: Use locale‑independent uppercasing to avoid surprises.

Case transforms can differ by default locale (e.g., Turkish). Prefer ROOT.

Apply:

-        String tableNameUpper = tableName.toUpperCase();
+        String tableNameUpper = tableName.toUpperCase(java.util.Locale.ROOT);

If you prefer an import:

import java.util.Locale;
// ...
String tableNameUpper = tableName.toUpperCase(Locale.ROOT);

787-836: Parallel helper is solid; barrier start avoids head‑of‑line bias.

Minimal suggestion: name threads for easier log triage.

Example:

-                threadList.add(new Thread(() -> {
+                threadList.add(new Thread(() -> {
                     // ...
-                }));
+                }, "ilp-http-" + threadNo));
core/src/test/java/io/questdb/test/cutlass/http/line/LineHttpFailureTest.java (2)

492-496: Close GZIPOutputStream with try‑with‑resources to guarantee trailer flush.

Use TWR instead of manual finish(); it’s safer and clearer.

-ByteArrayOutputStream out = new ByteArrayOutputStream();
-GZIPOutputStream strm = new GZIPOutputStream(out);
-strm.write(b);
-strm.finish();
-byte[] outBytes = out.toByteArray();
+ByteArrayOutputStream out = new ByteArrayOutputStream();
+byte[] outBytes;
+try (GZIPOutputStream strm = new GZIPOutputStream(out)) {
+    strm.write(b);
+}
+outBytes = out.toByteArray();

473-522: Add coverage for Content‑Encoding edge cases.

Please add tests for:

  • Case‑insensitive header (e.g., “GZip”).
  • Multiple codings (e.g., “gzip, deflate” and “deflate, gzip”) → expect 415 until multi‑step decoding is supported.
core/src/main/java/io/questdb/cutlass/http/processors/LineHttpProcessorState.java (1)

117-122: Optionally reset the gzip flag on cleanup to avoid accidental reuse.

Not required, but harmless and clarifies lifecycle.

     public void cleanupGzip() {
         if (inflateStream != 0) {
             Zip.inflateEnd(inflateStream);
             inflateStream = 0;
+            isGzipEncoded = false;
         }
     }
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fd2cd48 and 81ed797.

⛔ Files ignored due to path filters (5)
  • core/src/main/resources/io/questdb/bin/darwin-aarch64/libquestdb.dylib is excluded by !**/*.dylib
  • core/src/main/resources/io/questdb/bin/darwin-x86-64/libquestdb.dylib is excluded by !**/*.dylib
  • core/src/main/resources/io/questdb/bin/linux-aarch64/libquestdb.so is excluded by !**/*.so
  • core/src/main/resources/io/questdb/bin/linux-x86-64/libquestdb.so is excluded by !**/*.so
  • core/src/main/resources/io/questdb/bin/windows-x86-64/libquestdb.dll is excluded by !**/*.dll
📒 Files selected for processing (7)
  • compat/src/test/java/io/questdb/compat/InfluxDBClientFailureTest.java (3 hunks)
  • compat/src/test/java/io/questdb/compat/InfluxDBClientTest.java (5 hunks)
  • core/src/main/c/share/zip.c (1 hunks)
  • core/src/main/java/io/questdb/cutlass/http/processors/LineHttpProcessorImpl.java (4 hunks)
  • core/src/main/java/io/questdb/cutlass/http/processors/LineHttpProcessorState.java (9 hunks)
  • core/src/main/java/io/questdb/std/Zip.java (2 hunks)
  • core/src/test/java/io/questdb/test/cutlass/http/line/LineHttpFailureTest.java (13 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • core/src/main/java/io/questdb/std/Zip.java
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: New pull request (Check Changes Check changes)
  • GitHub Check: build
  • GitHub Check: aarch64
  • GitHub Check: build
🔇 Additional comments (15)
compat/src/test/java/io/questdb/compat/InfluxDBClientFailureTest.java (2)

35-41: Good swap to explicit imports.

Reduces accidental package bleed and speeds incremental builds.


271-286: Gzip enable/disable happy path looks good.

Covers toggling on the same client; assertions are appropriate.

compat/src/test/java/io/questdb/compat/InfluxDBClientTest.java (11)

45-46: Import for CyclicBarrier is appropriate.

Matches the new parallel helper usage.


210-213: Helper extraction improves reuse.

Single flag parameter keeps gzip/no‑gzip flows unified.


215-218: Added gzip variant increases coverage.

Mirrors the uncompressed path; good.


221-242: Nice coverage: gzip then raw on same connection.

Validates mid‑connection toggle; txn=4 rationale is sound (2 writes × 2 calls).

If flakes appear under load, consider asserting the HTTP keep‑alive state via server metrics/logs to ensure reuse.


245-247: Parallel many tables (no gzip) reads well.

Thread fan‑out + per‑table asserts are clear.


250-253: All‑gzip parallel variant OK.

Complements raw test; good matrix coverage.


256-259: Mixed gzip/non‑gzip parallel variant OK.

Exercises interleaving; good addition.


277-279: Per‑thread connection with shared helper is fine.

No shared state in sendIlp; safe for concurrency.


314-316: Keep‑alive off + helper usage looks correct.

Validates ingest without persistent connections.


765-785: Helper abstraction for gzip toggle is clean.

Awaits 2 txns (two writes in sendIlp); assertions match; LGTM.


838-841: Functional interface for per‑thread gzip toggle is a good fit.

Keeps call sites concise.

core/src/main/java/io/questdb/cutlass/http/processors/LineHttpProcessorImpl.java (2)

97-101: LGTM: correct routing of chunks to gzip vs plain parser.

Straightforward and clear.


199-200: LGTM: gzip resources are cleaned up at request end.

Ensures native inflater is not leaked.

@glasstiger
Contributor

[PR Coverage check]

😍 pass : 49 / 61 (80.33%)

file detail

path | covered lines | new lines | coverage
🔵 io/questdb/cutlass/line/tcp/LineWalAppender.java | 0 | 1 | 0.00%
🔵 io/questdb/cutlass/line/tcp/AdaptiveRecvBuffer.java | 0 | 1 | 0.00%
🔵 io/questdb/cutlass/http/processors/LineHttpProcessorImpl.java | 9 | 11 | 81.82%
🔵 io/questdb/cutlass/http/processors/LineHttpProcessorState.java | 40 | 48 | 83.33%

@bluestreak01 bluestreak01 merged commit 7eb19b9 into master Oct 1, 2025
30 of 31 checks passed
@bluestreak01 bluestreak01 deleted the nw_ilp_gzip branch October 1, 2025 18:32
