Conversation

@vustef (Contributor) commented Oct 27, 2025

Based on #7307.

Which issue does this PR close?

Rationale for this change

We need row numbers for many downstream features, e.g. computing a unique row identifier in Iceberg.

What changes are included in this PR?

New API to get row numbers as a virtual column:

```rust
let file = File::open(path).unwrap();
let row_number_field =
    Field::new("row_number", ArrowDataType::Int64, false).with_extension_type(RowNumber);
let options = ArrowReaderOptions::new().with_virtual_columns(vec![row_number_field]);
let reader = ParquetRecordBatchReaderBuilder::try_new_with_options(file, options)
    .unwrap()
    .build()
    .expect("Could not create reader");
reader
    .collect::<Result<Vec<_>, _>>()
    .expect("Could not read");
```

This column is defined as an extension type.
Parquet metadata is propagated to the array builder so it can compute each row group's first row index.
A new Virtual column kind is included alongside Primitive and Group.
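As a rough illustration of that third column kind (a hedged sketch with simplified names, not the parquet crate's actual types), a reader-side schema element might distinguish virtual columns like so:

```rust
// Sketch only: a third column kind alongside primitive and group columns.
// Virtual columns have no data pages in the file; the reader synthesizes
// their values (e.g. row numbers) from row-group metadata instead.
#[derive(Debug, PartialEq)]
enum ColumnKind {
    Primitive, // backed by physical column chunks
    Group,     // nested struct of child columns
    Virtual,   // synthesized by the reader, e.g. row numbers
}

// Only non-virtual columns require I/O against the file's column chunks.
fn needs_file_io(kind: &ColumnKind) -> bool {
    !matches!(kind, ColumnKind::Virtual)
}
```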

Are these changes tested?

Yes

Are there any user-facing changes?

This is a user-facing feature, and docstrings have been added.
There should be no breaking changes; I tried to avoid them by adding a duplicate of the relevant public method that takes the extra parameters.

```rust
}

fn skip_records(&mut self, num_records: usize) -> Result<usize> {
    // TODO: Use advance_by when it stabilizes to improve performance
```
Contributor Author:
TODO from original PR

```rust
}

fn row_groups(&self) -> Box<dyn Iterator<Item = &RowGroupMetaData> + '_> {
    Box::new(std::iter::once(self.metadata.row_group(self.row_group_idx)))
```
Contributor Author:

this duplicates a lot, not sure if anything can be done here

```rust
/// - If nullable: def_level = parent_def_level + 1
/// - If required: def_level = parent_def_level
/// - rep_level = parent_rep_level (virtual fields are not repeated)
fn convert_virtual_field(
```
Contributor Author:

the name used here is not aligned with what the other convert_ functions do

@github-actions bot added the parquet (Changes to the parquet crate) label Oct 27, 2025
@etseidl (Contributor) left a comment:

Quick first pass seems ok, just a couple of comments.

```diff
 }

-// decrypt column chunk info
+// decrypt column chunk info and handle ordinal assignment
```
Contributor:

I guess this doesn't hurt, but it shouldn't be necessary. The row groups already passed through the OrdinalAssigner in parquet_metadata_from_bytes, and row_group_from_encrypted_thrift re-uses the ordinal from the RowGroupMetaData passed in.

Contributor Author:

I reverted the change here.

Comment on lines 899 to 901:

```rust
if actual_ordinal == 0 {
    self.first_has_ordinal = rg_has_ordinal;
}
```
Contributor:

I wonder if first_has_ordinal should be an option, and then set it if it's None. If/when we implement row group selection the first ordinal seen may not be 0.

Contributor Author:

But then enumeration won't work at all, and we'd have to rely on the ordinal in the metadata? Actually, the row numbers feature won't work at all, since it relies on having information (sizes in rows) about all row groups to figure out the first row index of the row group it reads. Unless some trick that I'm not aware of is used.

Note that row numbers have to be stable across queries (i.e. independent of whether there was filtering), otherwise we would've implemented them on the client side by just enumerating rows.
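That stability property follows from how a first row index is derived (a minimal sketch, not the crate's code): each row group's first row number is the sum of the row counts of all row groups before it in the file, so it does not depend on which row groups a particular query happens to read.

```rust
// Sketch: compute the first row index of every row group in a file from the
// per-row-group row counts in the footer metadata. Because the prefix sums
// run over ALL row groups in file order, the resulting row numbers are
// stable across queries, regardless of filtering or row group selection.
fn first_row_indexes(row_group_num_rows: &[i64]) -> Vec<i64> {
    let mut next = 0i64;
    row_group_num_rows
        .iter()
        .map(|n| {
            let first = next;
            next += n;
            first
        })
        .collect()
}
```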

Contributor:

Yes, but it seems this code is executed regardless of the presence of a virtual column. It looks to me like skipping the first rowgroup will result in an error if ordinals are present.

Contributor Author:

This code should only error out if an ordinal is present in the metadata for some row groups but not for others. That shouldn't happen with row group skipping, i.e. even if we skip, all the remaining row groups that we iterate through would fulfill this condition, right?

What I'd like though, is that the build_row_number_reader fails if skipping occurs. Or that it ensures that there's no row skipping if it's invoked. Not sure how to ensure that at this point though.

Contributor:

If I skip row groups 0 and 1 and go straight to row group 2 (think externally cached statistics with point look up), which has ordinal set to 2 in the footer, self.first_has_ordinal is false and rg_has_ordinal is true. L904 will evaluate false, L906 will evaluate true and return an error.

I'll take a closer look at the rest of this PR to see if there's a way to make skipping and row numbers mutually exclusive.

Contributor Author:

I see why we think differently. I was assuming that row group 2 would then be the one to set self.first_has_ordinal, because, for example, the loop that invokes ensure would iterate for ordinal in 0..list_ident.size, where size would be based only on the row groups remaining after skipping.

Is that not going to happen?
Perhaps some changes will need to be made once row group skipping is implemented; I'm not sure how best to guard against the intended behaviour breaking without us catching it.

Contributor:

Because for example the loop that invokes ensure would iterate for ordinal in 0..list_ident.size, where size would be based only on the row groups remaining after skipping.

Ah, but here we're in a function whose input is the entire encoded footer. list_ident.size will always be the unfiltered number of row groups. So I think we'll either:

  • loop over ordinal in 0..list_ident.size, but skip decoding if ordinal not in a list
  • assuming an index, loop over a list of ordinals, decode the row group using a range from the index, and pass ordinal as actual_ordinal

In either event, we'd pass 2 as actual_ordinal in my scenario above. But yeah, that's pretty far down the road. Guess we can solve it then.

Contributor Author:

Now I see why you proposed to use Option. I pushed a change for that.
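A minimal sketch of the Option-based check under discussion (hypothetical names, not the crate's code): remember whether the first row group actually visited carried an ordinal, then require every later one to agree, without assuming the first ordinal seen is 0.

```rust
// Sketch: consistency check for ordinal presence across row groups. Using
// Option means the baseline is set by the first row group actually visited,
// which keeps the check valid even if earlier row groups were skipped.
#[derive(Default)]
struct OrdinalCheck {
    first_has_ordinal: Option<bool>,
}

impl OrdinalCheck {
    fn observe(&mut self, rg_has_ordinal: bool) -> Result<(), String> {
        match self.first_has_ordinal {
            None => {
                // First row group seen establishes the baseline.
                self.first_has_ordinal = Some(rg_has_ordinal);
                Ok(())
            }
            Some(first) if first == rg_has_ordinal => Ok(()),
            Some(_) => Err("inconsistent ordinal presence across row groups".into()),
        }
    }
}
```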

@adriangb (Contributor):

How is this going to integrate into DataFusion? I feel like FileScanConfig will have to lie and say it has this column, etc.

@vustef (Contributor Author) commented Nov 13, 2025

How is this going to integrate into DataFusion? I feel like FileScanConfig will have to lie and say it has this column, etc.

I'll defer that to @alamb . I only thought about how to integrate it in iceberg-rust, and perhaps there are similarities. There the user of the library would have to request this column, and then we'd propagate this information to the underlying arrow reader.

Remove unused import
@alamb (Contributor) commented Nov 13, 2025

How is this going to integrate into DataFusion? I feel like FileScanConfig will have to lie and say it has this column, etc.

I don't have any real plan for the DataFusion integration. FWIW, I think the first use case for these numbers will be various Iceberg / other table format integrations to support delete vectors.

In order to support "virtual columns" in DataFusion, I suspect we will need to update ListingTable to have some notion of virtual columns (in addition to partition columns).

Other virtual columns people have talked about are file names, for example, which wouldn't come from the parquet reader but instead would come from the file opening machinery

@etseidl (Contributor) left a comment:

Impressive. Thanks @vustef and @alamb.

````rust
/// # Ok(())
/// # }
/// ```
pub fn with_virtual_columns(self, virtual_columns: Vec<FieldRef>) -> Self {
````
Contributor:

@vustef I think this is where we'd be able to detect row group filtering. There would be a with_row_group_selection() or some such function added to control skipping, and a check could be added both here and in the new function to disallow setting both.

Contributor Author:

Thanks for figuring this out. I guess there's no action that we can take right now then, please let me know if it's otherwise.

@vustef (Contributor Author) commented Nov 14, 2025

Impressive. Thanks @vustef and @alamb.

Thank you @etseidl for the review. @alamb, now that we've got another approval, are we good to merge this before the Monday release?

@alamb (Contributor) commented Nov 14, 2025

Also thanks to @jkylling who started this project

@alamb (Contributor) commented Nov 14, 2025

Impressive. Thanks @vustef and @alamb.

Thank you @etseidl for the review. @alamb, now that we've got another approval, are we good to merge this before the Monday release?

Yeah, I don't see any reason to hold off merging. Let's do it!

@alamb (Contributor) commented Nov 14, 2025

Once the CI is green I'll merge this PR. Thank you @vustef

@alamb (Contributor) commented Nov 14, 2025

gogoogogogogo!!!

@alamb merged commit 3d5428d into apache:main Nov 14, 2025
16 checks passed
@alamb (Contributor) commented Nov 14, 2025

The 57.1.0 patch release may be the most epic minor release we have ever had

@alamb (Contributor) commented Nov 14, 2025

Thanks again @jkylling @vustef and @etseidl

@vustef (Contributor Author) commented Nov 14, 2025

Thanks again @jkylling @vustef and @etseidl

It was my pleasure, thanks to you all from me as well.

@scovich (Contributor) left a comment:

Post-merge drive by review

Comment on lines +63 to +64:

```rust
// Sort ranges by ordinal to maintain original row group order
ranges.sort_by_key(|(ordinal, _)| *ordinal);
```
Contributor:

I don't understand this part? The row groups were supplied in some particular order (by the row_groups iterator), and we're reordering by row group ordinal instead? Wouldn't that cause row number mismatches with other columns that continue reading in the original order? It seems like we actually need:

```rust
let selected_ordinals: HashMap<i16, usize> = row_groups
    .enumerate()
    .map(...)
    .collect::<Result<_>>()?;
```

and then ranges needs to use that enumeration ordinal (not the row group ordinal):

```rust
if let Some(i) = selected_ordinals.get(&ordinal) {
    ranges.push((i, ...));
}
```

... so that the sorted ranges match the original row_group iterator's order?
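A runnable sketch of the direction suggested above (names and signatures hypothetical, not the crate's actual code): map each selected row group ordinal to its position in the caller's iteration order, then sort by that position rather than by the footer ordinal, so the row number ranges line up with the other columns.

```rust
use std::collections::HashMap;
use std::ops::Range;

// Sketch: reorder (ordinal, byte-range) pairs so they follow the order in
// which the caller requested the row groups, not the file's ordinal order.
fn order_by_request(
    requested_ordinals: &[i16],
    found: Vec<(i16, Range<u64>)>,
) -> Vec<(i16, Range<u64>)> {
    // Position of each requested ordinal in the caller's iteration order.
    let position: HashMap<i16, usize> = requested_ordinals
        .iter()
        .enumerate()
        .map(|(i, &ord)| (ord, i))
        .collect();
    let mut ranges: Vec<_> = found
        .into_iter()
        .filter(|(ord, _)| position.contains_key(ord))
        .collect();
    // Sorting by request position (not by ordinal) preserves the caller's order.
    ranges.sort_by_key(|(ord, _)| position[ord]);
    ranges
}
```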

Contributor Author:

I messed this up when I started computing first row indexes... thank you for catching this. Will follow up shortly with tests and the fix.

```rust
    .extension_type_name()
    .map(|name| name.starts_with(VIRTUAL_PREFIX!()))
    .unwrap_or(false)
}
```
Contributor:

Suggested change:

```rust
.map_or(false, |name| name.starts_with(VIRTUAL_PREFIX!()))
```

Comment on lines +628 to +629:

```rust
if !is_virtual_column(field) {
    panic!(
```
Contributor:

not a fan of panics, but if we're going to panic why not just

```rust
assert!(
    is_virtual_column(field),
    "...",
    field.name()
);
```

Contributor Author:

Me neither, but it seemed like an unwritten rule that all these with_ methods in ArrowReaderOptions return Self rather than a Result. Please comment on the new PR if I should change that behaviour.

@alamb also checking for your opinion on this.

Contributor:

Yes, the builder-like with_ functions don't give much opportunity for validity checking. We could probably use a proper ArrowReaderOptionsBuilder and do that kind of checking in build(). Or just change this one to a setter (so it's obvious it can't be chained) and return a Result<()>.

Contributor:

I think it is ok to return errors when trying to build options, like

```rust
let options = options.with_virtual_columns(cols)?;
```

If there is an error it is unlikely the code wants to continue configuring the options anyways
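A minimal sketch of that shape (all names hypothetical, not the crate's API): a with_ method that validates its input and returns Result<Self>, so the ? operator ends configuration early on bad input instead of panicking.

```rust
// Sketch: a fallible builder-style method. Callers chain it with `?`
// rather than risking a panic inside the builder.
#[derive(Default)]
struct ReaderOpts {
    virtual_columns: Vec<String>,
}

impl ReaderOpts {
    fn with_virtual_columns(mut self, columns: Vec<String>) -> Result<ReaderOpts, String> {
        for c in &columns {
            // Hypothetical validity rule for the sketch: virtual columns
            // must use a reserved name prefix.
            if !c.starts_with("virtual.") {
                return Err(format!("{c} is not a virtual column"));
            }
        }
        self.virtual_columns = columns;
        Ok(self)
    }
}
```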

alamb added a commit that referenced this pull request Nov 18, 2025
# Which issue does this PR close?

Closes #8864.

# Rationale for this change

#8715 introduced the row numbers feature last week. However, it had a bug, which luckily for us @scovich pointed out soon after the merge. The issue is that row numbers were produced in the ordinal-based order of the row groups instead of the user-requested order. The former is wrong, and is fixed here by switching to the user-requested order.

# What changes are included in this PR?

Just fixing the bug as explained above, and adding test. Also addressing
two small comments from post-merge review:
#8715 (review)

# Are these changes tested?

Yes.

# Are there any user-facing changes?

No, this wasn't released yet.

---------

Co-authored-by: Andrew Lamb <[email protected]>
alamb added a commit that referenced this pull request Nov 19, 2025
…ic on invalid input (#8867)

# Which issue does this PR close?

- Follow on to #8715
- related to #8863



# Rationale for this change

per #8715 (comment),
@scovich rightly says

> not a fan of panics

It is much better for a user-facing API to return an error on invalid
input than panic

# What changes are included in this PR?

1. Make `ArrowReaderOptions::with_virtual_columns` error rather than
panic on invalid input
2.  Update tests to match

# Are these changes tested?
Yes by CI


# Are there any user-facing changes?
While this is an API change it was introduced in
#8715 which has not yet been
released. Therefore, we can make this change without breaking semver.
Successfully merging this pull request may close these issues:

[Parquet] Support file row number in Parquet reader