
Coerce parquet int96 timestamps to microsecond precision #5655

@ion-elgreco

Description


Is your feature request related to a problem or challenge? Please describe what you are trying to do.
Spark, annoyingly, writes the deprecated int96 timestamps by default, even when the source data has microsecond precision.

Arrow-rs coerces these to nanosecond precision during reads. PyArrow does the same by default; however, its parquet reader exposes a flag that coerces int96 timestamps from nanosecond to microsecond precision.

Describe the solution you'd like
Add an option to the parquet reader that handles this coercion to microsecond precision for int96 timestamps.

Describe alternatives you've considered

Additional context

This would help delta-rs read Spark-created parquet tables and would prevent schema mismatches.


Labels: enhancement (any new improvement worthy of an entry in the changelog)

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests

    Issue actions