Unit tests fail due to switch to Parquet #1157

@PGijsbers

Description


I wanted to investigate which unit tests fail and why (and how to get them working again).
One of the changes that broke multiple tests was the switch from ARFF files to Parquet. For example, an attribute that was boolean in ARFF would have been loaded as a categorical type with two allowed values, whereas from Parquet it is simply a pandas boolean dtype.
In another example (see below), the Parquet file stored numeric data as float64, but our tests expect uint8 (because there are no missing values and casting the column to uint8 loses no data).
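The dtype mismatch described above can be illustrated with a minimal pandas sketch (the values are made up for illustration; the point is only the dtype difference between the two loaders):

```python
import pandas as pd

# ARFF loaders represent a boolean attribute as a two-valued categorical...
arff_style = pd.Series(["true", "false", "true"], dtype="category")

# ...while Parquet round-trips it as a native pandas boolean dtype.
parquet_style = pd.Series([True, False, True], dtype="bool")

print(arff_style.dtype)     # category
print(parquet_style.dtype)  # bool

# A unit test that asserts the old categorical dtype now fails:
assert arff_style.dtype != parquet_style.dtype
```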

[screenshot: failing test output where the Parquet column is float64 but the test expects uint8]

How do we proceed with this? I propose updating the unit tests/expected behavior where reasonable (for example, embracing the bool dtype instead of holding on to a two-valued categorical). In some cases it might make more sense to change the Parquet file instead (as in the example in the image), but I don't know whether the Parquet readers in all languages can deal with the different types. Either way, we could also update our Parquet loading logic to check whether type conversions are possible, so that our unit tests are robust to these changes.
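The "check whether type conversions are possible" idea could look something like the sketch below. The helper name `can_downcast` is hypothetical, not part of openml-python; it just tests that a cast round-trips without losing values (and refuses columns with missing values, which an integer dtype cannot hold):

```python
import numpy as np
import pandas as pd

def can_downcast(series: pd.Series, dtype: str) -> bool:
    """Return True if casting `series` to `dtype` loses no information.

    Hypothetical helper sketching the proposed loading-logic check.
    """
    if series.isna().any():
        return False  # e.g. uint8 cannot represent missing values
    cast = series.astype(dtype)
    # The cast is lossless iff every value survives the round trip.
    return bool(np.array_equal(series.to_numpy(), cast.to_numpy()))

col = pd.Series([0.0, 1.0, 255.0])              # stored as float64 in Parquet
print(can_downcast(col, "uint8"))               # True: values fit, no NaNs
print(can_downcast(pd.Series([0.5]), "uint8"))  # False: 0.5 would truncate
```

With a check like this in the loader, the tests could assert on the *downcast* dtype regardless of how the Parquet file stores the column.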

side note: I am not entirely sure why the data is stored as float64, since AFAIK the openml-python module was used to convert the data.
