feat(sql): upgrades to querying external parquet files#6369
Draft
Conversation
This reverts commit 0bef6a1.
# Conflicts:
#   core/src/main/java/io/questdb/griffin/engine/functions/catalogue/FilesFunctionFactory.java
#   core/src/main/java/io/questdb/griffin/engine/functions/regex/GlobStrFunctionFactory.java
#   core/src/main/java/io/questdb/griffin/engine/functions/table/GlobFilesFunctionFactory.java
#   core/src/test/java/io/questdb/test/griffin/engine/functions/table/GlobFilesFunctionFactoryTest.java
#   core/src/test/java/io/questdb/test/griffin/engine/functions/table/GlobFilesIntegrationTest.java
# Conflicts:
#   core/src/main/java/io/questdb/griffin/SqlOptimiser.java
puzpuzpuz reviewed on Dec 16, 2025
```java
fromParquetColumnIndexes.setAll(metadataIndex, -1);
for (int i = 0, n = addressCache.getColumnCount(); i < n; i++) {
    final int columnIndex = addressCache.getColumnIndexes().getQuick(i);
    final int parquetColumnIndex = toParquetColumnIndexes.getQuick(columnIndex);
```
Table reader column indexes and parquet column indexes may not match. That's why we have this additional indirection via toParquetColumnIndexes.
WIP for the Parquet usability roadmap. Not ready for review; lots of refactoring required.
Closes #5280
Projection
Lack of projection causes high memory usage and slow queries when working with large Parquet files. This PR addresses that by pushing projections down from higher query models directly to the record and page frame cursors.
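As an illustrative sketch of what projection pushdown buys here (the file name is hypothetical; `read_parquet` is QuestDB's function for querying external Parquet files):

```sql
-- Only 'ts' and 'price' need to be decoded from the file once the
-- projection is pushed down to the cursor; every other column in
-- trades.parquet is never read or materialized.
SELECT ts, price
FROM read_parquet('trades.parquet')
WHERE price > 100;
```

Without pushdown, the whole row group would be decoded even though the query touches only two columns.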
Q1 Clickbench and hits.parquet
Note: this PR changes some of the permissiveness around Parquet metadata validation. Previously, the file always had to be fully decoded and its metadata had to match exactly; now only the projected columns must be readable from the file. As a result, changing the type of an underlying column throws an exception, while an extra column added to the schema is simply ignored.
Glob/hive-partitioned reads
Large Parquet datasets generally come in a partitioned form. Querying the files one by one is unergonomic, so a means to query them with a wildcard pattern is usually provided. For example:
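A hypothetical sketch of such a wildcard read over a hive-style partitioned layout (the directory structure and glob pattern are illustrative, not the final API):

```sql
-- Read every Parquet file under a hive-partitioned directory tree,
-- e.g. trades/year=2025/month=01/part-0.parquet and so on.
SELECT *
FROM read_parquet('trades/year=*/month=*/*.parquet');
```

The glob expands to the matching files, which are then scanned as a single logical dataset.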
Min/max statistics, incl. timestamp intrinsics
Changelist
Out of scope: min/max stats etc.; a separate holistic PR is required.