
feat(sql): add rowCount, txn and timestamp columns to tables()#6581

Merged
bluestreak01 merged 34 commits into master from vi_writer_stats on Jan 4, 2026

Conversation

Member

@bluestreak01 bluestreak01 commented Dec 28, 2025

tandem: https://github.com/questdb/questdb-enterprise/pull/841
documentation: questdb/documentation#310

Summary

Adds real-time table write statistics tracking, exposed via new columns in the tables() SQL function.

New columns in tables():

| Column | Type | Description |
| --- | --- | --- |
| suspended | BOOLEAN | Whether a WAL table is suspended (false for non-WAL tables) |
| rowCount | LONG | Approximate row count at last tracked write |
| pendingRowCount | LONG | Rows written to WAL but not yet applied to the table |
| dedupeRowCount | LONG | Cumulative count of rows removed by deduplication (since server start) |
| lastWriteTimestamp | TIMESTAMP | Approximate timestamp of the last TableWriter commit |
| writerTxn | LONG | TableWriter transaction number at last tracked write |
| sequencerTxn | LONG | WAL sequencer transaction number (WAL tables only) |
| lastWalTimestamp | TIMESTAMP | Max data timestamp from the last WAL commit (WAL tables only) |
| memoryPressureLevel | INT | Memory pressure level: 0 (none), 1 (reduced parallelism), 2 (backoff); null for non-WAL tables |
| txnCount | LONG | Total number of WAL transactions recorded (since server start) |
| txnSizeP50 | LONG | 50th percentile (median) WAL transaction size in rows |
| txnSizeP90 | LONG | 90th percentile WAL transaction size in rows |
| txnSizeP99 | LONG | 99th percentile WAL transaction size in rows |
| txnSizeMax | LONG | Maximum WAL transaction size in rows |
| writeAmplificationCount | LONG | Total number of write amplification samples recorded |
| writeAmplificationP50 | DOUBLE | 50th percentile (median) write amplification ratio |
| writeAmplificationP90 | DOUBLE | 90th percentile write amplification ratio |
| writeAmplificationP99 | DOUBLE | 99th percentile write amplification ratio |
| writeAmplificationMax | DOUBLE | Maximum write amplification ratio |
| mergeThroughputCount | LONG | Total number of merge throughput samples recorded |
| mergeThroughputP50 | LONG | 50th percentile (median) WAL merge throughput in rows/second |
| mergeThroughputP90 | LONG | Throughput that 90% of merge jobs exceeded (slow tail) |
| mergeThroughputP99 | LONG | Throughput that 99% of merge jobs exceeded (slowest 1%) |
| mergeThroughputMax | LONG | Maximum WAL merge throughput in rows/second |
| replicaBatchCount | LONG | Total number of replica download batches (replica only) |
| replicaBatchSizeP50 | LONG | 50th percentile replica download batch size in rows |
| replicaBatchSizeP90 | LONG | 90th percentile replica download batch size in rows |
| replicaBatchSizeP99 | LONG | 99th percentile replica download batch size in rows |
| replicaBatchSizeMax | LONG | Maximum replica download batch size in rows |
| replicaMorePending | BOOLEAN | true if the last download batch was limited and more data is available |
| maxUncommittedRows | INT | Table's maxUncommittedRows setting |
| o3MaxLag | LONG | Table's o3MaxLag setting in microseconds |

Throughput percentile semantics

For throughput metrics (where higher = better), percentiles are inverted to show the slow tail:

  • mergeThroughputP99 = throughput that 99% of jobs exceeded (i.e., the slowest 1%)
  • mergeThroughputP90 = throughput that 90% of jobs exceeded (i.e., the slowest 10%)

This is consistent with latency percentiles where P99 shows the "worst" case.
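The inversion described above can be sketched as follows. This is an illustration only (the actual implementation uses HDR histograms): for a higher-is-better metric, reading the conventional (100 − p)th percentile yields the value that p% of samples exceeded.

```java
import java.util.Arrays;

// Illustrative only: shows the percentile inversion used for throughput
// metrics. "Inverted P99" is the value that 99% of samples exceeded,
// i.e. the conventional 1st percentile.
public class InvertedPercentile {
    // Returns the throughput that `pct` percent of samples exceeded.
    public static long invertedPercentile(long[] samples, double pct) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        // (100 - pct)th conventional percentile == inverted pct-th percentile
        int idx = (int) Math.ceil((100.0 - pct) / 100.0 * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }
}
```

With ten samples 100..1000 rows/s, the inverted P99 is the slowest sample (100), while the inverted P50 is the median (500).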

Replica columns

The replicaBatchCount, replicaBatchSize*, and replicaMorePending columns are populated on replicas only via recordReplicaDownload(). On primaries, these columns will be 0 or false. This separation prevents replica download batches (which may contain multiple transactions) from corrupting the txnSize* histogram statistics.

Column order

The full column order for tables() is now:

id, table_name, designatedTimestamp, partitionBy, walEnabled, suspended, dedup, 
ttlValue, ttlUnit, matView, rowCount, pendingRowCount, dedupeRowCount, lastWriteTimestamp, 
writerTxn, sequencerTxn, lastWalTimestamp, directoryName, memoryPressureLevel,
txnCount, txnSizeP50, txnSizeP90, txnSizeP99, txnSizeMax,
writeAmplificationCount, writeAmplificationP50, writeAmplificationP90, writeAmplificationP99, 
writeAmplificationMax, mergeThroughputCount, mergeThroughputP50, mergeThroughputP90, 
mergeThroughputP99, mergeThroughputMax, replicaBatchCount, replicaBatchSizeP50, replicaBatchSizeP90, 
replicaBatchSizeP99, replicaBatchSizeMax, replicaMorePending, maxUncommittedRows, o3MaxLag

Nullability and precision

These values are approximations, not precise real-time metrics:

  • Null when not tracked: Values are null for tables that haven't been written to since server start, or that have been evicted from the tracker due to capacity limits
  • Updated on pool return: Statistics are captured when writers return to the pool, not on every commit. A writer held for a long time won't update the tracker until it is released
  • WAL tracking is real-time: pendingRowCount, dedupeRowCount, sequencerTxn, lastWalTimestamp, histogram stats, write amplification, and merge throughput are updated on every WAL commit/apply
  • LRU eviction: Tracker maintains bounded memory (default 1000 tables). Least recently written tables are evicted when capacity is exceeded
  • Startup hydration: On startup, values are hydrated from table metadata (TxReader), but diverge as writes occur

Non-WAL tables: sequencerTxn, lastWalTimestamp, pendingRowCount, dedupeRowCount, memoryPressureLevel, histogram columns, write amplification, and merge throughput columns are always null or 0. suspended is always false.

WAL tables: All columns populated when tracked. lastWalTimestamp reflects the max data timestamp from the WAL transaction, not wall-clock time.

Example usage:

-- Find recently written tables
SELECT table_name, rowCount, lastWriteTimestamp 
FROM tables() 
WHERE lastWriteTimestamp IS NOT NULL
ORDER BY lastWriteTimestamp DESC;

-- Check WAL apply lag (sequencer ahead of writer)
SELECT table_name, sequencerTxn - writerTxn AS pending_txns
FROM tables() 
WHERE walEnabled AND sequencerTxn IS NOT NULL;

-- Find suspended WAL tables
SELECT table_name, suspended, memoryPressureLevel
FROM tables()
WHERE walEnabled AND suspended;

-- Check tables under memory pressure
SELECT table_name, memoryPressureLevel
FROM tables()
WHERE memoryPressureLevel > 0;

-- Check pending WAL rows and dedup activity
SELECT table_name, pendingRowCount, dedupeRowCount
FROM tables()
WHERE walEnabled AND pendingRowCount > 0;

-- Identify tables sending duplicate data
SELECT table_name, dedupeRowCount
FROM tables()
WHERE dedupeRowCount > 0
ORDER BY dedupeRowCount DESC;

-- Analyze WAL transaction size distribution
SELECT table_name, txnCount, txnSizeP50, txnSizeP90, txnSizeP99, txnSizeMax
FROM tables()
WHERE walEnabled AND txnCount > 0
ORDER BY txnSizeP99 DESC;

-- Find tables with large transaction spikes
SELECT table_name, txnSizeP50, txnSizeMax, txnSizeMax - txnSizeP50 AS spike_delta
FROM tables()
WHERE walEnabled AND txnSizeMax > txnSizeP50 * 10;

-- Analyze write amplification (O3 merge overhead)
SELECT table_name, writeAmplificationCount, writeAmplificationP50, writeAmplificationP99, writeAmplificationMax
FROM tables()
WHERE walEnabled AND writeAmplificationCount > 0
ORDER BY writeAmplificationP99 DESC;

-- Find tables with high write amplification (potential O3 issues)
SELECT table_name, writeAmplificationP50, writeAmplificationMax
FROM tables()
WHERE walEnabled AND writeAmplificationP50 > 2.0;

-- Analyze WAL merge throughput (P99 shows slow tail)
SELECT table_name, mergeThroughputCount, mergeThroughputP50, mergeThroughputP99, mergeThroughputMax
FROM tables()
WHERE walEnabled AND mergeThroughputCount > 0
ORDER BY mergeThroughputP99 ASC;  -- Lowest P99 = worst performing tables

-- Find tables with slow merge performance (P99 is the slow tail)
SELECT table_name, mergeThroughputP99 AS slowest_throughput, mergeThroughputP50 AS median_throughput
FROM tables()
WHERE walEnabled AND mergeThroughputP99 < 100000;  -- Slowest 1% under 100K rows/s

-- Check replica download batch statistics
SELECT table_name, replicaBatchCount, replicaBatchSizeP50, replicaBatchSizeP90, replicaBatchSizeMax, replicaMorePending
FROM tables()
WHERE replicaBatchCount > 0;

-- Find replicas that are still catching up
SELECT table_name, replicaMorePending, pendingRowCount
FROM tables()
WHERE replicaMorePending = true;

Implementation

RecentWriteTracker

New concurrent data structure (io.questdb.cairo.pool.RecentWriteTracker) optimized for:

  • Lock-free reads: Uses ConcurrentHashMap for zero-contention lookups
  • Minimal allocation on write path: Reuses WriteStats objects via atomic field updates
  • Bounded memory: Lazy eviction at 2x capacity, evicts based on max(writerTimestamp, walTimestamp)
  • WAL support: CAS-based updates for sequencerTxn and walTimestamp (highest value wins)
  • Contention-free counters: Uses LongAdder for pendingRowCount and dedupeRowCount
  • Transaction size histogram: HDR Histogram tracks distribution of WAL transaction sizes with SimpleReadWriteLock for concurrent access
  • Write amplification histogram: DoubleHistogram tracks ratio of physical rows written to logical rows (O3 merge overhead)
  • Merge throughput histogram: HDR Histogram tracks WAL merge throughput in rows/second (percentiles inverted to show slow tail)
  • Batch size histogram: HDR Histogram tracks replica download batch sizes (populated via recordReplicaDownload())
  • Replica more pending flag: AtomicBoolean tracks whether download batch was limited
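The "highest value wins" CAS updates mentioned above can be sketched like this. This is a minimal illustration of the pattern, not the RecentWriteTracker source: concurrent writers may report sequencerTxn or walTimestamp values out of order, so an update is applied only when it advances the stored value.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a "highest value wins" CAS loop, as used conceptually for
// sequencerTxn and walTimestamp. Stale (lower) updates are dropped.
public class HighestWins {
    private final AtomicLong value = new AtomicLong(Long.MIN_VALUE);

    // Returns true if the candidate advanced the stored value.
    public boolean updateMax(long candidate) {
        long current;
        do {
            current = value.get();
            if (candidate <= current) {
                return false; // stale update: keep the higher value
            }
        } while (!value.compareAndSet(current, candidate));
        return true;
    }

    public long get() {
        return value.get();
    }
}
```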

WAL Row Tracking

  • pendingRowCount: Incremented on every WAL commit, decremented when WAL is applied to table
  • dedupeRowCount: Accumulated when WAL apply detects fewer committed rows than WAL rows (deduplication occurred)
  • Floor mechanism: On restart, a floor seqTxn is set to prevent negative pending counts from WAL transactions that existed before tracking started
  • Transaction size histogram: Records row count of each WAL transaction, exposes percentiles (p50, p90, p99) and max
  • Write amplification histogram: Records physicalRowsAdded / rowsAdded ratio after WAL apply, exposes percentiles and max
  • Merge throughput histogram: Records rowsAdded * 1000000 / insertTimespan (rows/second) after WAL merge, exposes inverted percentiles (P99 = 1st percentile = slow tail)
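The accounting above can be sketched in a few lines. The method names mirror the PR description (recordWalWrite, recordWalProcessed, the floor seqTxn), but the signatures and fields here are illustrative assumptions, not the actual QuestDB API:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

// Hedged sketch of pending-row and dedup accounting. LongAdder gives
// contention-free counters; the floor seqTxn suppresses decrements for
// WAL transactions that predate tracking (e.g. after a restart).
public class PendingRows {
    private final LongAdder pendingRowCount = new LongAdder();
    private final LongAdder dedupeRowCount = new LongAdder();
    private final AtomicLong floorSeqTxn = new AtomicLong(-1);

    // Set on restart: WAL txns at or below this seqTxn predate tracking.
    public void setFloorSeqTxn(long seqTxn) {
        floorSeqTxn.set(seqTxn);
    }

    // Called on every WAL commit.
    public void recordWalWrite(long seqTxn, long rows) {
        pendingRowCount.add(rows);
    }

    // Called when the WAL txn is applied to the table.
    public void recordWalProcessed(long seqTxn, long walRows, long committedRows) {
        if (seqTxn > floorSeqTxn.get()) {
            pendingRowCount.add(-walRows); // no decrement below the floor
        }
        // Fewer committed rows than WAL rows means dedup removed the difference.
        if (committedRows < walRows) {
            dedupeRowCount.add(walRows - committedRows);
        }
    }

    // rows/second from a timespan in microseconds, per the formula above.
    public static long mergeThroughput(long rowsAdded, long insertTimespanMicros) {
        return rowsAdded * 1_000_000L / insertTimespanMicros;
    }

    public long pending() { return pendingRowCount.sum(); }
    public long deduped() { return dedupeRowCount.sum(); }
}
```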

Replica-specific tracking

  • recordReplicaDownload(): Separate method for replica download tracking that:
    • Accumulates walRowCount (same as recordWalWrite)
    • Records to batchSizeHistogram (NOT txnSizeHistogram)
    • Updates sequencerTxn and walTimestamp (highest wins)
    • Sets replicaMorePending flag indicating if more data is available
  • This prevents replica download batches from corrupting transaction size statistics
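The routing split can be sketched as below. The lists stand in for the histograms and the names are assumptions taken from the PR text, not the real signatures:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative routing only: replica download batches may span several
// transactions, so they go into a separate batch-size histogram instead
// of the per-transaction histogram.
public class ReplicaRouting {
    final List<Long> txnSizes = new ArrayList<>();   // stand-in for txnSizeHistogram
    final List<Long> batchSizes = new ArrayList<>(); // stand-in for batchSizeHistogram
    boolean replicaMorePending;

    // Primary path: one WAL commit == one transaction-size sample.
    void recordWalWrite(long rows) {
        txnSizes.add(rows);
    }

    // Replica path: batch sample only; the txn histogram stays clean.
    void recordReplicaDownload(long rows, boolean morePending) {
        batchSizes.add(rows);
        replicaMorePending = morePending;
    }
}
```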

Integration points

  • WriterPool: Records stats when TableWriter returns to pool
  • WalWriter: Records WAL writes with sequencer txn, max timestamp, and row count on every commit
  • ApplyWal2TableJob: Decrements pending rows, tracks dedup, records write amplification and merge throughput when WAL is applied
  • CheckWalTransactionsJob: Sets floor seqTxn on startup for tables with pending WALs
  • CairoEngine: Hydrates tracker from TxReader on startup
  • TableSequencerAPI: Provides suspended status and memoryPressureLevel via SeqTxnTracker

Configuration

cairo.recent.write.tracker.capacity (default: 1000) - maximum number of tables tracked
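If the default needs changing, the property would go in server.conf like this (the 5000 value is purely illustrative):

```
# bound the number of tables tracked by RecentWriteTracker (default: 1000)
cairo.recent.write.tracker.capacity=5000
```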

Test plan

  • Unit tests for RecentWriteTracker (concurrent updates, eviction, CAS semantics)
  • Integration tests for WAL and non-WAL tables
  • SQL tests for new tables() columns
  • JMH benchmark for write path performance

🤖 Generated with Claude Code


coderabbitai bot commented Dec 28, 2025

Important

Review skipped

Auto reviews are disabled on this repository.


Walkthrough

Introduces RecentWriteTracker, a concurrent metrics system for tracking recently written tables with per-table statistics including timestamps, row counts, WAL metrics, merge stats, and replica tracking. Integrates tracker throughout the system: CairoEngine, writer pools, WalWriter, and ServerMain. Significantly expands the tables() function schema with 30+ new columns. Extends Function-related interfaces with new accessors and lifecycle hooks.

Changes

  • RecentWriteTracker Core (benchmarks/src/main/java/org/questdb/RecentWriteTrackerBenchmark.java, core/src/main/java/io/questdb/cairo/pool/RecentWriteTracker.java): New benchmark suite and comprehensive concurrent write tracker with per-table metrics (timestamps, row counts, WAL stats, merge amplification, replica metrics), lazy eviction, and hydration support.
  • CairoEngine & Configuration Integration (core/src/main/java/io/questdb/CairoEngine.java, core/src/main/java/io/questdb/PropServerConfiguration.java, core/src/main/java/io/questdb/PropertyKey.java, core/src/main/java/io/questdb/cairo/CairoConfiguration.java, core/src/main/java/io/questdb/cairo/CairoConfigurationWrapper.java, core/src/main/java/io/questdb/cairo/DefaultCairoConfiguration.java): Adds the recentWriteTrackerCapacity configuration property, exposes RecentWriteTracker via CairoEngine, integrates hydration during startup, manages tracker lifecycle on table drop.
  • Writer Pool Integration (core/src/main/java/io/questdb/cairo/pool/WriterPool.java, core/src/main/java/io/questdb/cairo/pool/WalWriterPool.java, core/src/main/java/io/questdb/cairo/wal/WalWriter.java): Passes RecentWriteTracker to writer constructors, records WAL writes/updates with metrics (timestamps, row counts, transaction IDs), tracks write amplification and throughput.
  • WAL Jobs & Processing (core/src/main/java/io/questdb/cairo/wal/ApplyWal2TableJob.java, core/src/main/java/io/questdb/cairo/wal/CheckWalTransactionsJob.java): Records merge stats (amplification, throughput) and WAL processing metrics via RecentWriteTracker after batch completion; synchronizes floor sequence transactions during WAL initialization.
  • Tables() Metadata Expansion (core/src/main/java/io/questdb/griffin/engine/functions/catalogue/TablesFunctionFactory.java): Expands the tables() schema from ~10 columns to ~42 columns, integrating RecentWriteTracker metrics (row counts, timestamps, WAL metrics, merge stats, replica metrics, memory pressure, suspension state).
  • Function Interface Extensions (core/src/main/java/io/questdb/cairo/sql/Function.java, core/src/main/java/io/questdb/cairo/sql/FunctionExtension.java, core/src/main/java/io/questdb/griffin/FunctionFactory.java): Adds geohash/geo accessors, array/interval/long256 getters, non-determinism probing, lifecycle hooks (cursorClosed, extendedOps); updates the FunctionFactory newInstance signature with position and context.
  • Unary/Binary Function Abstractions (core/src/main/java/io/questdb/griffin/engine/functions/UnaryFunction.java, core/src/main/java/io/questdb/griffin/engine/functions/BinaryFunction.java, core/src/main/java/io/questdb/griffin/engine/functions/SymbolFunction.java): Adds documentation and new methods (getArg, getLeft/getRight, isSymbolTableStatic) to base function classes for operator access and type introspection.
  • Cast Function Families (core/src/main/java/io/questdb/griffin/engine/functions/cast/Abstract*Function.java, 20+ files): Consistently adds getArg(), toPlan(), and related accessors (isThreadSafe, store, isSymbolTableStatic, init, newSymbolTable) across all cast function base classes.
  • GroupBy & Vector Aggregate Functions (core/src/main/java/io/questdb/griffin/engine/functions/groupby/Abstract*GroupByFunction.java, core/src/main/java/io/questdb/griffin/engine/groupby/vect/Abstract*VectorAggregateFunction.java): Adds function argument fields, constructors, and documentation to group-by base classes; expands the VectorAggregateFunction signature with frameRowCount, getColumnIndex(), pushValueTypes().
  • Partition & RecordCursor APIs (core/src/main/java/io/questdb/cairo/AbstractFullPartitionFrameCursor.java, core/src/main/java/io/questdb/cairo/AbstractRecordCursorFactory.java, core/src/main/java/io/questdb/cairo/sql/RecordCursorFactory.java): Adds a FullTablePartitionFrame inner class with accessor methods, introduces of(TableReader) initialization, adds a close() hook, expands the RecordCursorFactory API with changePageFrameSizes, getBaseFactory, supportsPageFrameCursor.
  • Date & Series Generation (core/src/main/java/io/questdb/griffin/engine/functions/date/Abstract*IntervalFunction.java, core/src/main/java/io/questdb/griffin/engine/functions/date/AbstractGenerateSeriesRecordCursorFactory.java): Adds timezone-aware interval calculation, introduces calculateInterval with TZ offset handling, expands generate_series with function argument fields and cursor initialization.
  • Join Cursor & TreeSet Factories (core/src/main/java/io/questdb/griffin/engine/join/AbstractAsOfJoinFastRecordCursor.java, core/src/main/java/io/questdb/griffin/engine/table/AbstractTreeSetRecordCursorFactory.java, core/src/main/java/io/questdb/griffin/engine/table/AbstractDeferredTreeSetRecordCursorFactory.java, core/src/main/java/io/questdb/griffin/engine/table/AbstractPageFrameRecordCursorFactory.java): Adds a scaleTimestamp utility, of() initialization with TimeFrameCursor, introduces a new TreeSet cursor factory with DirectLongList rows management, extends page frame support.
  • Utility & Hash Classes (core/src/main/java/io/questdb/std/AbstractCharSequenceHashSet.java, core/src/main/java/io/questdb/std/str/AbstractCharSequence.java, core/src/main/java/io/questdb/std/datetime/DateFormat.java): Implements hash set initialization, probing, and removal logic; adds a date parsing overload; extends char sequence utilities with documentation.
  • Miscellaneous Updates (core/src/main/java/io/questdb/ServerMain.java, core/src/main/java/io/questdb/DefaultBootstrapConfiguration.java, core/src/main/java/io/questdb/client/ArraySender.java, core/src/main/java/io/questdb/cutlass/http/client/AbstractChunkedResponse.java, core/src/main/java/io/questdb/cutlass/line/array/AbstractArray.java, core/src/main/java/io/questdb/cairo/wal/WalReader.java, core/src/main/java/io/questdb/cairo/wal/WalMetrics.java, core/src/main/java/io/questdb/cairo/wal/seq/SeqTxnTracker.java, core/src/main/java/io/questdb/cairo/mv/WalTxnRangeLoader.java): Replaces BANNER with a text block, executes concurrent hydration callbacks, renames WalReader.getEventCursor to getWalEventCursor, updates the WalMetrics.addRowsWritten parameter name, minor formatting.
  • Test Coverage (core/src/test/java/io/questdb/test/cairo/pool/RecentWriteTrackerTest.java, core/src/test/java/io/questdb/test/cairo/pool/RecentWriteTrackerIntegrationTest.java, core/src/test/java/io/questdb/test/ServerMainTest.java, core/src/test/java/io/questdb/test/griffin/engine/functions/catalogue/TablesFunctionFactoryTest.java, core/src/test/java/* (12+ test files)): Comprehensive unit and integration tests for RecentWriteTracker, WAL/non-WAL table metrics tracking, suspension/memory pressure columns, hydration on restart; updates existing tests for the expanded tables() schema.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes


Suggested labels

SQL, Core, New feature

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 65.52%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

  • Title check ✅ Passed: The title 'feat(sql): add rowCount, txn and timestamp columns to tables()' clearly describes the main change: adding new columns to the tables() SQL function.
  • Description check ✅ Passed: The description is comprehensive and directly related to the changeset, detailing new columns, their types, usage examples, implementation details, and test coverage.


bluestreak01 changed the title from "feat(sql): make writer statistics available on SQL query from tables()" to "feat(core): track recent table writes and expose via tables()" on Dec 28, 2025
bluestreak01 changed the title from "feat(core): track recent table writes and expose via tables()" to "feat(sql): add rowCount, txn and timestamp columns to tables()" on Dec 28, 2025
Member Author

@CodeRabbit review


coderabbitai bot commented Jan 1, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
core/src/main/java/io/questdb/cutlass/line/array/AbstractArray.java (1)

78-109: Initialize memA after applyShape() to match reshape pattern.

The constructor calls array.applyShape() at line 107 but doesn't reinitialize memA afterward. All reshape methods (lines 211, 236, 263, 293) consistently call memA = array.startMemoryA() after applyShape(), indicating that applyShape() changes array state in a way that requires refreshing the memory appender.

Since memA is initialized at field declaration (line 76) before the array is properly configured, it may be stale after the constructor completes.

🔎 Proposed fix
         array.applyShape();
         flatLength = array.getFlatViewLength();
+        memA = array.startMemoryA();
     }
core/src/main/java/io/questdb/cutlass/http/client/AbstractChunkedResponse.java (1)

68-83: Add parameter validation and improve documentation.

The method accepts raw memory addresses without validation. While current callers in HttpClient manage buffers correctly, the public API lacks defensive checks:

  1. Missing bounds validation: No verification that lo and hi fall within [bufLo, bufHi]
  2. Missing ordering validation: No check that lo <= hi
  3. Insufficient documentation: JavaDoc omits preconditions (valid address ranges, when safe to call) and thread-safety guarantees

A caller could pass reversed or out-of-bounds addresses, causing memory corruption in downstream operations.

🔎 Suggested validation improvements
 /**
  * Begins processing a new chunk of response data.
+ * This method resets the internal state and should only be called
+ * before starting a new response or after completing the previous one.
+ * Not thread-safe: concurrent calls or calls during active processing
+ * will corrupt internal state.
  *
- * @param lo the low address of the data
- * @param hi the high address of the data
+ * @param lo the low address of the data (must be >= bufLo and <= bufHi)
+ * @param hi the high address of the data (must be >= lo and <= bufHi)
+ * @throws IllegalArgumentException if parameters are out of valid range
  */
 public void begin(long lo, long hi) {
+    if (lo < bufLo || lo > bufHi) {
+        throw new IllegalArgumentException("lo out of buffer range [bufLo=" + bufLo + ", bufHi=" + bufHi + ", lo=" + lo + "]");
+    }
+    if (hi < lo || hi > bufHi) {
+        throw new IllegalArgumentException("hi out of buffer range or less than lo [bufLo=" + bufLo + ", bufHi=" + bufHi + ", lo=" + lo + ", hi=" + hi + "]");
+    }
     this.dataLo = lo;
     this.dataHi = hi;
     this.state = STATE_CHUNK_SIZE;
     this.receive = hi == lo;
     this.endOfChunk = false;
     size = 0;
     available = 0;
     consumed = 0;
 }
🧹 Nitpick comments (24)
core/src/main/java/io/questdb/std/datetime/AbstractDateFormat.java (1)

30-32: Good: Javadoc added to abstract class.

The documentation is clear and syntactically correct. Consider expanding it with implementation guidance or behavioral notes for subclasses if needed—e.g., details on parse semantics or thread-safety expectations.

core/src/main/java/io/questdb/cairo/AbstractFullPartitionFrameCursor.java (1)

34-128: Documentation additions look good.

The Javadoc comments added throughout this file are accurate and follow standard conventions. While they are somewhat terse (mostly restating field/method names), they provide a baseline level of documentation that improves code readability.

Optional: Consider enriching documentation for better maintainability

If you'd like to enhance these docs further, consider adding:

  • Context and purpose: Explain what partition frames represent and how they're used in the cursor lifecycle
  • Invariants: Document relationships between fields (e.g., partitionIndex range relative to partitionHi)
  • Thread safety: Note any concurrency expectations for readers using this cursor
  • Lifecycle: Clarify when fields are initialized and their validity scope

Example for partitionHi:

 /**
- * The partition high boundary.
+ * The partition high boundary (exclusive upper bound).
+ * Represents the total number of partitions in the reader.
+ * Valid after of(TableReader) is called; updated on reload().
  */
 protected int partitionHi;

This is purely optional and can be deferred to future documentation improvements.

core/src/main/java/io/questdb/griffin/engine/functions/groupby/AbstractCountGroupByFunction.java (1)

38-48: Documentation added for class and fields.

The JavaDoc additions follow standard conventions and improve code maintainability. The documentation is functional but could be more descriptive to provide additional context (e.g., explaining what kind of function arg represents in count operations, or which map valueIndex refers to).

core/src/main/java/io/questdb/cutlass/http/client/Fragment.java (1)

27-43: Consider enhancing the documentation with additional context.

The added Javadoc is clear but minimal. For better developer experience, consider documenting:

  • What the memory addresses represent (raw pointers, buffer offsets, etc.)
  • Thread-safety guarantees
  • Lifecycle and validity (when is the Fragment valid? who owns the memory?)
  • Relationship to Response (since Fragment is returned by Response.recv())
📝 Example enhanced documentation
 /**
- * Represents a fragment of data with low and high memory addresses.
+ * Represents a contiguous fragment of HTTP response data in native memory.
+ * The fragment is valid only until the next call to Response.recv() or until
+ * the Response is closed. The memory addresses are raw pointers to native memory
+ * managed by the HTTP client.
+ * <p>
+ * This interface is not thread-safe and should only be accessed from the thread
+ * that received it from Response.recv().
  */
 public interface Fragment {
     /**
-     * Returns the high address of this fragment.
+     * Returns the exclusive upper bound address of this fragment in native memory.
      *
-     * @return the high address
+     * @return the high address (exclusive)
      */
     long hi();

     /**
-     * Returns the low address of this fragment.
+     * Returns the inclusive lower bound address of this fragment in native memory.
      *
-     * @return the low address
+     * @return the low address (inclusive)
      */
     long lo();
 }
core/src/main/java/io/questdb/cutlass/http/client/Response.java (1)

31-44: Consider enhancing the method documentation.

The documentation for both recv() methods could be more comprehensive. Consider adding:

  • What the "default timeout" value is for recv()
  • What happens when a timeout occurs (exception thrown? null returned? special Fragment value?)
  • Whether timeout values of 0 or negative numbers have special meaning
  • Thread-safety guarantees
  • Whether Fragment can be null on error conditions
📝 Example enhanced documentation
     /**
-     * Receives the next fragment of response data using the default timeout.
+     * Receives the next fragment of response data using the default timeout (5000ms).
+     * Blocks until data is available or the timeout expires.
      *
-     * @return the received fragment
+     * @return the received fragment, never null
+     * @throws io.questdb.cairo.HttpException if the timeout expires or connection fails
      */
     Fragment recv();

     /**
-     * Receives the next fragment of response data with the specified timeout.
+     * Receives the next fragment of response data with the specified timeout.
+     * Blocks until data is available or the timeout expires.
      *
-     * @param timeout the timeout in milliseconds
-     * @return the received fragment
+     * @param timeout the timeout in milliseconds (0 = no timeout, negative = default)
+     * @return the received fragment, never null
+     * @throws io.questdb.cairo.HttpException if the timeout expires or connection fails
+     * @throws IllegalArgumentException if timeout is negative and not a special value
      */
     Fragment recv(int timeout);
core/src/main/java/io/questdb/griffin/engine/groupby/vect/AbstractCountVectorAggregateFunction.java (1)

57-61: Consider using private final fields or adding null checks for protected fields used by subclasses.

The fields distinctFunc and keyValueFunc are declared in this abstract class but never referenced in its methods. Subclasses like CountIntVectorAggregateFunction assign these fields in their constructors and then use them directly in aggregate() without null checks. While the current Count* subclasses properly initialize these fields in all code paths, this pattern is error-prone: if a subclass forgets to initialize either field, an NPE will occur at runtime with no way to detect it in the abstract class. Other similar aggregate functions in this package use private final fields instead, which is more explicit and prevents accidental misuse by subclasses.

core/src/main/java/io/questdb/std/AbstractCharSequenceHashSet.java (1)

120-128: Consider defensive validation in keyAt.

The method assumes index is negative (as documented), but passing a non-negative index would cause incorrect array access. Adding a precondition check would improve safety:

public CharSequence keyAt(int index) {
    assert index < 0 : "index must be negative";
    return keys[-index - 1];
}

However, this is a performance tradeoff since the negative-index convention is already documented.

core/src/test/java/io/questdb/test/ServerMainTest.java (1)

127-202: Well-structured hydration test with minor robustness suggestion.

The test correctly validates RecentWriteTracker hydration across server restarts. The setup with wait_wal_table() ensures deterministic state, and setting the hydration callback before start() is the correct order.

One minor robustness concern: hydrationLatch.await() on line 181 has no timeout, which could cause the test to hang indefinitely if hydration fails unexpectedly. Consider adding a timeout variant if SOCountDownLatch supports it.

🔎 Optional: Add timeout to prevent test hanging

If SOCountDownLatch supports a timed await, consider:

// Wait for hydration with timeout to prevent indefinite hang
boolean completed = hydrationLatch.await(30, TimeUnit.SECONDS);
Assert.assertTrue("Hydration should complete within timeout", completed);
core/src/main/java/io/questdb/cairo/wal/ApplyWal2TableJob.java (2)

522-522: Simplify redundant double rounding.

Line 522 applies Numbers.roundUp twice to the same value, which is redundant. The expression Numbers.roundUp(Numbers.roundUp(100.0 * physicalRowsAdded / rowsAdded, 2) / 100.0, 2) can be simplified.

🔎 Proposed simplification
-                    double amplification = rowsAdded > 0 ? Numbers.roundUp(Numbers.roundUp(100.0 * physicalRowsAdded / rowsAdded, 2) / 100.0, 2) : 0;
+                    double amplification = rowsAdded > 0 ? Numbers.roundUp((double) physicalRowsAdded / rowsAdded, 2) : 0;
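To see why the double rounding is redundant, here is a small standalone check, assuming roundUp(v, 2) means "round up to two decimal places" (the real io.questdb.std.Numbers.roundUp may differ in edge cases):

```java
public final class RoundUpSketch {
    // Assumed semantics: round up to 2 decimal places; this is a stand-in
    // for io.questdb.std.Numbers.roundUp, not the actual implementation.
    static double roundUp2(double v) {
        return Math.ceil(v * 100.0) / 100.0;
    }

    // mirrors the original expression: round a percentage, scale back, round again
    static double doubleRounded(long physicalRows, long totalRows) {
        return roundUp2(roundUp2(100.0 * physicalRows / totalRows) / 100.0);
    }

    // mirrors the proposed simplification: round the plain ratio once
    static double singleRounded(long physicalRows, long totalRows) {
        return roundUp2((double) physicalRows / totalRows);
    }
}
```

For sample inputs both paths agree, since ceil(ceil(x)/m) equals ceil(x/m) for a positive integer m.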

660-662: Consider wrapping tracker call in try-catch for consistency.

The recordWalProcessed call is not wrapped in a try-catch block, unlike the similar recordWrite call in WriterPool.java (lines 577-582). If tracking fails here, it could disrupt WAL processing.

🔎 Proposed defensive wrapping
-                // Decrement pending WAL row count and track dedup after successful processing
-                engine.getRecentWriteTracker().recordWalProcessed(writer.getTableToken(), lastCommittedSeqTxn, lastCommittedRows, rowsCommitted);
-
+                // Decrement pending WAL row count and track dedup after successful processing
+                try {
+                    engine.getRecentWriteTracker().recordWalProcessed(writer.getTableToken(), lastCommittedSeqTxn, lastCommittedRows, rowsCommitted);
+                } catch (Throwable th) {
+                    LOG.error().$("failed to track WAL processing [table=").$(writer.getTableToken())
+                            .$(", error=").$(th).I$();
+                }
+
core/src/main/java/io/questdb/PropServerConfiguration.java (1)

391-391: RecentWriteTracker capacity wiring looks consistent; optional validation

The new recentWriteTrackerCapacity field is read from CAIRO_RECENT_WRITE_TRACKER_CAPACITY with a default of 1000 and exposed via PropCairoConfiguration.getRecentWriteTrackerCapacity(), which aligns with the tracker’s configuration needs.

If RecentWriteTracker does not explicitly handle non‑positive capacities, consider validating (e.g., > 0) here and throwing a ServerConfigurationException on invalid values to fail fast with a clearer message rather than deferring to deeper runtime failures.

Also applies to: 1416-1417, 3658-3661
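A hedged sketch of such fail-fast validation (the helper is hypothetical, and IllegalArgumentException stands in for whatever configuration exception the real code would throw):

```java
public final class CapacityValidationSketch {
    // Validate at configuration-load time so a bad value fails fast with a
    // clear message instead of surfacing as a deeper runtime failure.
    static int validateCapacity(String key, int value) {
        if (value <= 0) {
            throw new IllegalArgumentException(
                    "invalid " + key + ": must be > 0, but was " + value);
        }
        return value;
    }
}
```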

core/src/main/java/io/questdb/griffin/engine/functions/BinaryFunction.java (1)

35-157: BinaryFunction default behavior is solid; be mindful of child contracts and commutativity

Centralizing binary-function lifecycle and state (init/close/memoize/toPlan/etc.) here is a good cleanup and should significantly reduce duplication across binary function implementations.

Two soft-contract points to keep in mind for implementors:

  • getLeft() / getRight() are implicitly assumed non‑null and independently closeable; if any implementation ever reuses the same Function instance for both sides, that child will see close() / cursorClosed() twice. Either avoid such sharing or ensure those children are idempotent.
  • isEquivalentTo is structurally order-sensitive; for truly commutative operators that should treat a op b equivalent to b op a, consider overriding isEquivalentTo in those specific implementations.

Otherwise, the defaults (especially for constantness, runtime-constant detection, and parallelism flags) look consistent with existing Function semantics.
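For the commutativity point, an order-insensitive isEquivalentTo override could look like this sketch (Expr, Leaf, and CommutativeOp are hypothetical stand-ins, not the QuestDB Function API):

```java
public final class CommutativeEquivalenceSketch {
    // Minimal expression interface for illustration only.
    interface Expr {
        boolean isEquivalentTo(Expr other);
    }

    static final class Leaf implements Expr {
        final String name;

        Leaf(String name) {
            this.name = name;
        }

        @Override
        public boolean isEquivalentTo(Expr other) {
            return other instanceof Leaf && name.equals(((Leaf) other).name);
        }
    }

    static final class CommutativeOp implements Expr {
        final Expr left;
        final Expr right;

        CommutativeOp(Expr left, Expr right) {
            this.left = left;
            this.right = right;
        }

        @Override
        public boolean isEquivalentTo(Expr other) {
            if (!(other instanceof CommutativeOp)) {
                return false;
            }
            CommutativeOp o = (CommutativeOp) other;
            // accept both operand orders: a op b is equivalent to b op a
            return (left.isEquivalentTo(o.left) && right.isEquivalentTo(o.right))
                    || (left.isEquivalentTo(o.right) && right.isEquivalentTo(o.left));
        }
    }
}
```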

core/src/main/java/io/questdb/griffin/engine/functions/eq/AbstractEqBinaryFunction.java (1)

32-73: AbstractEqBinaryFunction cleanly integrates equality functions with BinaryFunction

The new base class correctly encapsulates left/right operands, exposes them via getLeft()/getRight(), and provides a negation-aware toPlan that yields a=b or a!=b as expected. This should simplify equality-function implementations and make them benefit from BinaryFunction’s shared lifecycle and state-handling defaults without extra boilerplate.

If you later want to rely on BinaryFunction’s generic operator rendering, you could override isOperator() to return true and delegate to the default toPlan, but the dedicated toPlan here is perfectly fine and explicit.

core/src/test/java/io/questdb/test/cairo/pool/RecentWriteTrackerTest.java (1)

207-227: Eviction test assertion message vs bound is slightly inconsistent

In testEviction, the comment and failure message say “size should be around capacity (5)” / “<= capacity after eviction”, but the assertion uses tracker.size() <= 10, effectively allowing up to 2 * capacity to match the current implementation’s eviction threshold.

To avoid confusion (and future maintenance mistakes if the eviction policy changes), consider:

  • Updating the comment and assertion message to explicitly say “<= 2x capacity”, or
  • Deriving the bound from the configured capacity (e.g., 2 * capacity) instead of the literal 10.
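For the second option, a sketch of deriving the bound from the capacity (the 2x threshold is taken from the discussion above and is an assumption about the current eviction policy):

```java
public final class EvictionBoundSketch {
    static final int CAPACITY = 5; // mirrors the test's configured capacity

    // Express the bound in terms of capacity so the assertion stays correct
    // if the capacity (or the 2x eviction threshold) ever changes.
    static boolean withinEvictionBound(int size) {
        return size <= 2 * CAPACITY;
    }
}
```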
benchmarks/src/main/java/org/questdb/RecentWriteTrackerBenchmark.java (1)

207-221: Consider adding a note clarifying the simplified WriteStats design.

The benchmark's WriteStats is intentionally simplified compared to the production RecentWriteTracker.WriteStats (which uses AtomicLong, LongAdder, histograms, and locks). Since the benchmark targets lambda allocation patterns rather than full stat tracking semantics, this is appropriate, but a brief comment would help future readers understand this design choice.

Suggested documentation
-    // Simple WriteStats class for benchmarking
+    // Simplified WriteStats class for benchmarking allocation patterns.
+    // The production RecentWriteTracker.WriteStats uses atomic fields, histograms,
+    // and locks - omitted here since we're only measuring lambda capture overhead.
     public static class WriteStats {
core/src/test/java/io/questdb/test/griffin/engine/functions/catalogue/TablesFunctionFactoryTest.java (2)

218-226: Multiline string literal formatting could be clearer.

The embedded writerTxn value concatenation within the multiline string creates unusual formatting. Consider using String.format or a simpler concatenation approach for better readability.

🔎 Suggested improvement
-            assertSql(
-                    """
-                            table_name\twriterTxn\tsequencerTxn\tlastWalTimestamp
-                            test_non_wal\t""" + writerTxn + """
-                            \tnull\t
-                            """,
-                    "select table_name, writerTxn, sequencerTxn, lastWalTimestamp from tables() where table_name = 'test_non_wal'"
-            );
+            assertSql(
+                    "table_name\twriterTxn\tsequencerTxn\tlastWalTimestamp\n" +
+                    "test_non_wal\t" + writerTxn + "\tnull\t\n",
+                    "select table_name, writerTxn, sequencerTxn, lastWalTimestamp from tables() where table_name = 'test_non_wal'"
+            );

374-382: Similar multiline string formatting issue.

Same readability concern with the embedded values in multiline string literals.

core/src/test/java/io/questdb/test/cairo/pool/RecentWriteTrackerIntegrationTest.java (2)

48-76: Test lacks meaningful assertions.

This test serves as documentation but only asserts that the tracker is not null. Consider either adding substantive assertions or converting this to actual documentation/Javadoc.


320-321: Thread.sleep(1) is unreliable for timestamp differentiation.

Sub-millisecond sleeps are not guaranteed by the JVM and may not produce different timestamps on fast systems. The test assertion uses >= which handles this, but the comment suggests the intent is to ensure different timestamps.

🔎 Alternative approach

Consider using a test clock or explicitly advancing time instead of relying on Thread.sleep(1):

// Small delay to ensure different timestamp
// Note: Using >= assertion since Thread.sleep(1) may not guarantee different microsecond timestamps

Or use a more robust approach that doesn't depend on timing at all.
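One such timing-independent approach is a manually advanced clock; this sketch is illustrative and does not use QuestDB's actual clock interfaces:

```java
import java.util.concurrent.atomic.AtomicLong;

public final class TestClockSketch {
    private final AtomicLong nowMicros;

    public TestClockSketch(long startMicros) {
        this.nowMicros = new AtomicLong(startMicros);
    }

    public long getTicks() {
        return nowMicros.get();
    }

    // tests advance time explicitly, so distinct timestamps are guaranteed
    // without depending on Thread.sleep or wall-clock resolution
    public void advanceMicros(long delta) {
        nowMicros.addAndGet(delta);
    }
}
```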

core/src/main/java/io/questdb/cairo/pool/RecentWriteTracker.java (2)

534-542: Recursive quicksort may cause StackOverflowError for large datasets.

With default capacity of 1000 and eviction at 2x (2000 entries), a worst-case sorted input could require ~2000 stack frames. While typical JVM stacks can handle this, consider using an iterative approach or Java's built-in sorting for robustness.

🔎 Alternative using iterative or built-in sort

For robustness, consider using an iterative quicksort with an explicit stack, or sort using indices into an array:

// Option: Use indices array and Arrays.sort with comparator
int[] indices = new int[size];
for (int i = 0; i < size; i++) indices[i] = i;
// Sort indices by timestamps descending
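A self-contained sketch of the explicit-stack variant (descending order, matching the "timestamps descending" comment above; this is a generic quicksort, not the tracker's actual code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public final class IterativeQuickSortSketch {
    // Sorts the array in descending order using an explicit stack instead of
    // recursion, bounding stack usage regardless of input order.
    public static void sortDescending(long[] a) {
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{0, a.length - 1});
        while (!stack.isEmpty()) {
            int[] range = stack.pop();
            int lo = range[0], hi = range[1];
            if (lo >= hi) {
                continue;
            }
            long pivot = a[(lo + hi) >>> 1];
            int i = lo, j = hi;
            while (i <= j) {
                while (a[i] > pivot) i++; // descending: larger values first
                while (a[j] < pivot) j--;
                if (i <= j) {
                    long tmp = a[i];
                    a[i] = a[j];
                    a[j] = tmp;
                    i++;
                    j--;
                }
            }
            // push sub-ranges instead of recursing
            stack.push(new int[]{lo, j});
            stack.push(new int[]{i, hi});
        }
    }
}
```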

494-508: O(n*k) eviction algorithm may cause latency spikes.

When evicting from 2x capacity down to capacity, this iterates the entire map k times (where k = capacity). For the default capacity of 1000, this means 1000 full iterations over 2000 entries = 2M operations. Consider using a min-heap or sorting once.

🔎 More efficient eviction approach
// Instead of k iterations finding minimum each time:
// 1. Collect all entries with timestamps
// 2. Sort once
// 3. Remove oldest k entries

List<Map.Entry<TableToken, WriteStats>> entries = new ArrayList<>(writeStats.entrySet());
entries.sort(Comparator.comparingLong(e -> e.getValue().getMaxTimestamp()));
for (int i = 0; i < toEvict && i < entries.size(); i++) {
    writeStats.remove(entries.get(i).getKey());
}
core/src/main/java/io/questdb/griffin/engine/functions/catalogue/TablesFunctionFactory.java (3)

266-282: Assert statement may cause unexpected failures.

Line 280 uses assert col == MAX_UNCOMMITTED_ROWS_COLUMN after several if statements. If a new INT column is added but not handled, this will fail with assertions enabled. Consider using an explicit else if with a default case.

🔎 Safer pattern
 @Override
 public int getInt(int col) {
     if (col == ID_COLUMN) {
         return table.getId();
     }
     if (col == TTL_VALUE_COLUMN) {
         return getTtlValue(table.getTtlHoursOrMonths());
     }
     if (col == MEMORY_PRESSURE_LEVEL_COLUMN) {
         if (!table.isWalEnabled()) {
             return Numbers.INT_NULL;
         }
         SeqTxnTracker tracker = tableSequencerAPI.getTxnTracker(table.getTableToken());
         return tracker.getMemPressureControl().getMemoryPressureLevel();
     }
-    assert col == MAX_UNCOMMITTED_ROWS_COLUMN;
-    return table.getMaxUncommittedRows();
+    if (col == MAX_UNCOMMITTED_ROWS_COLUMN) {
+        return table.getMaxUncommittedRows();
+    }
+    return Numbers.INT_NULL;
 }

284-296: Inconsistent return value for null writeStats.

When writeStats == null, the method returns 0.0, but the default case returns Double.NaN. This inconsistency could confuse consumers. Consider returning Double.NaN consistently for missing data.

🔎 Consistent null handling
 @Override
 public double getDouble(int col) {
     if (writeStats == null) {
-        return 0.0;
+        return Double.NaN;
     }
     return switch (col) {

303-313: Complex null-handling condition is hard to maintain.

The long chain of || conditions for determining which columns return 0 vs Numbers.LONG_NULL is error-prone when adding new columns. Consider grouping related columns or using a Set for cleaner logic.

🔎 Alternative approach using column groups
// Define sets of columns that return 0 when writeStats is null
private static final IntHashSet ZERO_WHEN_NULL_COLUMNS = new IntHashSet();
static {
    ZERO_WHEN_NULL_COLUMNS.add(PENDING_ROW_COUNT_COLUMN);
    ZERO_WHEN_NULL_COLUMNS.add(DEDUPE_ROW_COUNT_COLUMN);
    ZERO_WHEN_NULL_COLUMNS.add(TXN_COUNT_COLUMN);
    // ... etc
}

// Then in getLong:
if (writeStats == null) {
    return ZERO_WHEN_NULL_COLUMNS.contains(col) ? 0 : Numbers.LONG_NULL;
}
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 031ff0f and 2e0a9f8.

📒 Files selected for processing (97)
  • benchmarks/src/main/java/org/questdb/RecentWriteTrackerBenchmark.java
  • core/src/main/java/io/questdb/DefaultBootstrapConfiguration.java
  • core/src/main/java/io/questdb/PropServerConfiguration.java
  • core/src/main/java/io/questdb/PropertyKey.java
  • core/src/main/java/io/questdb/ServerMain.java
  • core/src/main/java/io/questdb/cairo/AbstractFullPartitionFrameCursor.java
  • core/src/main/java/io/questdb/cairo/AbstractRecordCursorFactory.java
  • core/src/main/java/io/questdb/cairo/CairoConfiguration.java
  • core/src/main/java/io/questdb/cairo/CairoConfigurationWrapper.java
  • core/src/main/java/io/questdb/cairo/CairoEngine.java
  • core/src/main/java/io/questdb/cairo/DefaultCairoConfiguration.java
  • core/src/main/java/io/questdb/cairo/TableWriter.java
  • core/src/main/java/io/questdb/cairo/mv/WalTxnRangeLoader.java
  • core/src/main/java/io/questdb/cairo/pool/RecentWriteTracker.java
  • core/src/main/java/io/questdb/cairo/pool/WalWriterPool.java
  • core/src/main/java/io/questdb/cairo/pool/WriterPool.java
  • core/src/main/java/io/questdb/cairo/sql/Function.java
  • core/src/main/java/io/questdb/cairo/sql/FunctionExtension.java
  • core/src/main/java/io/questdb/cairo/sql/PartitionFrameCursor.java
  • core/src/main/java/io/questdb/cairo/sql/RecordCursorFactory.java
  • core/src/main/java/io/questdb/cairo/sql/SymbolTableSource.java
  • core/src/main/java/io/questdb/cairo/wal/ApplyWal2TableJob.java
  • core/src/main/java/io/questdb/cairo/wal/CheckWalTransactionsJob.java
  • core/src/main/java/io/questdb/cairo/wal/WalMetrics.java
  • core/src/main/java/io/questdb/cairo/wal/WalReader.java
  • core/src/main/java/io/questdb/cairo/wal/WalWriter.java
  • core/src/main/java/io/questdb/cairo/wal/seq/SeqTxnTracker.java
  • core/src/main/java/io/questdb/client/ArraySender.java
  • core/src/main/java/io/questdb/cutlass/http/client/AbstractChunkedResponse.java
  • core/src/main/java/io/questdb/cutlass/http/client/Fragment.java
  • core/src/main/java/io/questdb/cutlass/http/client/Response.java
  • core/src/main/java/io/questdb/cutlass/line/array/AbstractArray.java
  • core/src/main/java/io/questdb/griffin/FunctionFactory.java
  • core/src/main/java/io/questdb/griffin/engine/functions/BinaryFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/GroupByFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/SymbolFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/UnaryFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToBooleanFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToByteFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToCharFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToDateFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToDecimal64Function.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToDecimalFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToDoubleFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToFloatFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToGeoHashFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToIPv4Function.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToIntFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToLong256Function.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToLongFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToShortFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToStrFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToSymbolFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToTimestampFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToUuidFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/cast/AbstractCastToVarcharFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/catalogue/AbstractEmptyCatalogueFunctionFactory.java
  • core/src/main/java/io/questdb/griffin/engine/functions/catalogue/TablesFunctionFactory.java
  • core/src/main/java/io/questdb/griffin/engine/functions/date/AbstractDayIntervalFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/date/AbstractDayIntervalWithTimezoneFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/date/AbstractGenerateSeriesRecordCursorFactory.java
  • core/src/main/java/io/questdb/griffin/engine/functions/decimal/ToDecimal64Function.java
  • core/src/main/java/io/questdb/griffin/engine/functions/decimal/ToDecimalFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/eq/AbstractEqBinaryFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/groupby/AbstractCountDistinctIntGroupByFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/groupby/AbstractCountGroupByFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/groupby/AbstractCovarGroupByFunction.java
  • core/src/main/java/io/questdb/griffin/engine/functions/math/AbsDecimalFunctionFactory.java
  • core/src/main/java/io/questdb/griffin/engine/functions/math/AbsDoubleFunctionFactory.java
  • core/src/main/java/io/questdb/griffin/engine/functions/math/AbsIntFunctionFactory.java
  • core/src/main/java/io/questdb/griffin/engine/functions/math/AbsLongFunctionFactory.java
  • core/src/main/java/io/questdb/griffin/engine/functions/math/AbsShortFunctionFactory.java
  • core/src/main/java/io/questdb/griffin/engine/groupby/vect/AbstractCountVectorAggregateFunction.java
  • core/src/main/java/io/questdb/griffin/engine/groupby/vect/VectorAggregateFunction.java
  • core/src/main/java/io/questdb/griffin/engine/join/AbstractAsOfJoinFastRecordCursor.java
  • core/src/main/java/io/questdb/griffin/engine/table/AbstractDeferredTreeSetRecordCursorFactory.java
  • core/src/main/java/io/questdb/griffin/engine/table/AbstractPageFrameRecordCursorFactory.java
  • core/src/main/java/io/questdb/griffin/engine/table/AbstractTreeSetRecordCursorFactory.java
  • core/src/main/java/io/questdb/std/AbstractCharSequenceHashSet.java
  • core/src/main/java/io/questdb/std/Mutable.java
  • core/src/main/java/io/questdb/std/datetime/AbstractDateFormat.java
  • core/src/main/java/io/questdb/std/datetime/DateFormat.java
  • core/src/main/java/io/questdb/std/str/AbstractCharSequence.java
  • core/src/main/java/io/questdb/std/str/CloneableMutable.java
  • core/src/test/java/io/questdb/test/ServerMainForeignTableTest.java
  • core/src/test/java/io/questdb/test/ServerMainTest.java
  • core/src/test/java/io/questdb/test/cairo/MetadataCacheTest.java
  • core/src/test/java/io/questdb/test/cairo/pool/RecentWriteTrackerIntegrationTest.java
  • core/src/test/java/io/questdb/test/cairo/pool/RecentWriteTrackerTest.java
  • core/src/test/java/io/questdb/test/cairo/pool/WriterPoolTest.java
  • core/src/test/java/io/questdb/test/cairo/wal/WalTableSqlTest.java
  • core/src/test/java/io/questdb/test/cairo/wal/WalWriterTest.java
  • core/src/test/java/io/questdb/test/cutlass/pgwire/PGJobContextTest.java
  • core/src/test/java/io/questdb/test/griffin/KeywordAsTableNameTest.java
  • core/src/test/java/io/questdb/test/griffin/ShowTablesTest.java
  • core/src/test/java/io/questdb/test/griffin/engine/functions/catalogue/TablesBootstrapTest.java
  • core/src/test/java/io/questdb/test/griffin/engine/functions/catalogue/TablesFunctionFactoryTest.java
💤 Files with no reviewable changes (2)
  • core/src/main/java/io/questdb/cairo/wal/seq/SeqTxnTracker.java
  • core/src/main/java/io/questdb/cairo/mv/WalTxnRangeLoader.java
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-11-10T14:28:48.329Z
Learnt from: mtopolnik
Repo: questdb/questdb PR: 0
File: :0-0
Timestamp: 2025-11-10T14:28:48.329Z
Learning: In AsOfJoinDenseRecordCursorFactoryBase.java, the `backwardScanExhausted` flag is intentionally NOT reset in `toTop()` because backward scan results are reusable across cursor rewinds. The backward scan caches historical matches that remain valid when the cursor is rewound.

Applied to files:

  • core/src/test/java/io/questdb/test/griffin/ShowTablesTest.java
  • core/src/main/java/io/questdb/cairo/sql/RecordCursorFactory.java
  • core/src/main/java/io/questdb/griffin/engine/functions/catalogue/TablesFunctionFactory.java
  • core/src/main/java/io/questdb/griffin/engine/join/AbstractAsOfJoinFastRecordCursor.java
📚 Learning: 2025-11-07T00:59:31.522Z
Learnt from: bluestreak01
Repo: questdb/questdb PR: 0
File: :0-0
Timestamp: 2025-11-07T00:59:31.522Z
Learning: In QuestDB's Cairo engine, transaction (_txn) files have a strong invariant: they are never truncated below TX_BASE_HEADER_SIZE. Once created, they are either fully formed (size >= header size) or completely removed along with the entire table directory when the table is dropped.

Applied to files:

  • core/src/main/java/io/questdb/cairo/wal/CheckWalTransactionsJob.java
🧬 Code graph analysis (7)
core/src/test/java/io/questdb/test/ServerMainTest.java (6)
core/src/main/java/io/questdb/PropBootstrapConfiguration.java (1)
  • PropBootstrapConfiguration (29-44)
core/src/main/java/io/questdb/ServerMain.java (1)
  • ServerMain (64-521)
core/src/main/java/io/questdb/cairo/TableToken.java (1)
  • TableToken (38-192)
core/src/main/java/io/questdb/cairo/pool/RecentWriteTracker.java (1)
  • RecentWriteTracker (73-1139)
core/src/main/java/io/questdb/cairo/security/AllowAllSecurityContext.java (1)
  • AllowAllSecurityContext (35-209)
core/src/main/java/io/questdb/std/ObjList.java (1)
  • ObjList (34-395)
benchmarks/src/main/java/org/questdb/RecentWriteTrackerBenchmark.java (2)
core/src/main/java/io/questdb/cairo/TableToken.java (1)
  • TableToken (38-192)
core/src/main/java/io/questdb/cairo/pool/RecentWriteTracker.java (1)
  • WriteStats (567-1138)
core/src/main/java/io/questdb/griffin/engine/functions/GroupByFunction.java (1)
core/rust/qdbr/src/allocator.rs (1)
  • allocator (351-357)
core/src/main/java/io/questdb/cairo/wal/ApplyWal2TableJob.java (1)
core/src/main/java/io/questdb/std/Numbers.java (1)
  • Numbers (44-3523)
core/src/main/java/io/questdb/cairo/wal/WalWriter.java (2)
core/src/main/java/io/questdb/cairo/pool/RecentWriteTracker.java (1)
  • RecentWriteTracker (73-1139)
core/src/main/java/io/questdb/std/Numbers.java (1)
  • Numbers (44-3523)
core/src/test/java/io/questdb/test/cutlass/pgwire/PGJobContextTest.java (1)
compat/src/test/php/runner.php (1)
  • executeQuery (36-89)
core/src/main/java/io/questdb/griffin/engine/functions/catalogue/TablesFunctionFactory.java (5)
core/src/main/java/io/questdb/cairo/pool/RecentWriteTracker.java (1)
  • RecentWriteTracker (73-1139)
core/src/main/java/io/questdb/cairo/wal/seq/SeqTxnTracker.java (1)
  • SeqTxnTracker (34-193)
core/src/main/java/io/questdb/cairo/wal/seq/TableSequencerAPI.java (1)
  • TableSequencerAPI (55-564)
core/src/main/java/io/questdb/std/Numbers.java (1)
  • Numbers (44-3523)
core/src/main/java/io/questdb/std/ObjList.java (1)
  • ObjList (34-395)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (35)
  • GitHub Check: New pull request (Coverage Report Coverage Report)
  • GitHub Check: New pull request (Hosted Running tests on windows-other-2)
  • GitHub Check: New pull request (Hosted Running tests on windows-other-1)
  • GitHub Check: New pull request (Hosted Running tests on windows-pgwire)
  • GitHub Check: New pull request (Hosted Running tests on windows-cairo-2)
  • GitHub Check: New pull request (Hosted Running tests on windows-cairo-1)
  • GitHub Check: New pull request (Hosted Running tests on windows-fuzz2)
  • GitHub Check: New pull request (SelfHosted Running tests with cover on linux-other)
  • GitHub Check: New pull request (Hosted Running tests on windows-fuzz1)
  • GitHub Check: New pull request (SelfHosted Running tests with cover on linux-pgwire)
  • GitHub Check: New pull request (Hosted Running tests on windows-griffin-sub)
  • GitHub Check: New pull request (SelfHosted Running tests with cover on linux-cairo-sub)
  • GitHub Check: New pull request (Hosted Running tests on windows-griffin-base)
  • GitHub Check: New pull request (SelfHosted Running tests with cover on linux-cairo-root)
  • GitHub Check: New pull request (Hosted Running tests on mac-other)
  • GitHub Check: New pull request (SelfHosted Running tests with cover on linux-fuzz2)
  • GitHub Check: New pull request (Hosted Running tests on mac-pgwire)
  • GitHub Check: New pull request (SelfHosted Running tests with cover on linux-fuzz1)
  • GitHub Check: New pull request (Hosted Running tests on mac-cairo-fuzz)
  • GitHub Check: New pull request (SelfHosted Running tests with cover on linux-griffin-sub)
  • GitHub Check: New pull request (Hosted Running tests on mac-cairo)
  • GitHub Check: New pull request (SelfHosted Running tests with cover on linux-griffin-root)
  • GitHub Check: New pull request (Rust Test and Lint on linux-jdk17)
  • GitHub Check: New pull request (Hosted Running tests on mac-griffin)
  • GitHub Check: New pull request (SelfHosted Griffin tests on linux-x86-graal)
  • GitHub Check: New pull request (SelfHosted Griffin tests on linux-arm64)
  • GitHub Check: New pull request (SelfHosted Cairo tests on linux-x64-zfs)
  • GitHub Check: New pull request (SelfHosted Griffin tests on linux-x64-zfs)
  • GitHub Check: New pull request (SelfHosted Cairo tests on linux-x86-graal)
  • GitHub Check: New pull request (SelfHosted Cairo tests on linux-arm64)
  • GitHub Check: New pull request (SelfHosted Other tests on linux-x86-graal)
  • GitHub Check: New pull request (SelfHosted Other tests on linux-arm64)
  • GitHub Check: New pull request (SelfHosted Other tests on linux-x64-zfs)
  • GitHub Check: New pull request (Trigger Enterprise CI Trigger Enterprise Pipeline)
  • GitHub Check: New pull request (Check Changes Check changes)

@glasstiger
Contributor

[PR Coverage check]

😍 pass : 414 / 553 (74.86%)

file detail

path  covered lines  new lines  coverage
🔵 io/questdb/cairo/pool/RecentWriteTracker.java 174 276 63.04%
🔵 io/questdb/griffin/engine/functions/catalogue/TablesFunctionFactory.java 87 114 76.32%
🔵 io/questdb/cairo/CairoEngine.java 38 44 86.36%
🔵 io/questdb/cairo/TableWriter.java 64 68 94.12%
🔵 io/questdb/cairo/wal/CheckWalTransactionsJob.java 2 2 100.00%
🔵 io/questdb/cairo/DefaultCairoConfiguration.java 1 1 100.00%
🔵 io/questdb/cairo/O3PartitionJob.java 3 3 100.00%
🔵 io/questdb/PropertyKey.java 1 1 100.00%
🔵 io/questdb/std/Vect.java 3 3 100.00%
🔵 io/questdb/cairo/pool/WalWriterPool.java 3 3 100.00%
🔵 io/questdb/cairo/CairoConfigurationWrapper.java 1 1 100.00%
🔵 io/questdb/PropServerConfiguration.java 2 2 100.00%
🔵 io/questdb/cairo/wal/WalWriter.java 11 11 100.00%
🔵 io/questdb/cairo/wal/WalMetrics.java 1 1 100.00%
🔵 io/questdb/ServerMain.java 4 4 100.00%
🔵 io/questdb/cairo/pool/WriterPool.java 6 6 100.00%
🔵 io/questdb/cairo/wal/ApplyWal2TableJob.java 6 6 100.00%
🔵 io/questdb/cairo/wal/WalReader.java 6 6 100.00%
🔵 io/questdb/griffin/engine/groupby/vect/AbstractCountVectorAggregateFunction.java 1 1 100.00%

@bluestreak01 bluestreak01 merged commit 5c395e1 into master Jan 4, 2026
43 checks passed
@bluestreak01 bluestreak01 deleted the vi_writer_stats branch January 4, 2026 19:19
