Should we use double precision for all calculations? #943

@Peter9192

Description

In #685 (followed up in #940) we noticed coincidentally that calculations in single precision (float32, i.e. the default dtype for all CMOR data) can produce inaccurate results. This is known and documented behaviour in numpy, e.g. here:

Note that for floating-point input, the mean is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-precision accumulator using the dtype keyword can alleviate this issue.
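The effect described in the numpy documentation can be reproduced with a small sketch (array contents are illustrative, loosely following the example in the `numpy.mean` docs):

```python
import numpy as np

# Half the values are 1.0 and half are 0.1, so the exact mean is 0.55.
a = np.zeros((2, 512 * 512), dtype=np.float32)
a[0, :] = 1.0
a[1, :] = 0.1

mean32 = a.mean()                  # accumulated in float32; may drift from 0.55
mean64 = a.mean(dtype=np.float64)  # higher-precision accumulator via the dtype keyword

print(mean32, mean64)
```

The `dtype=np.float64` variant keeps the input in float32 but accumulates the sum in double precision, which is exactly the mitigation the numpy docs suggest.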

The question thus arises whether we should explicitly enforce a higher precision before doing any calculations on the data. This could lead to more accurate results, but it might also increase memory use.
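As a rough sketch of the memory trade-off mentioned above (array shape and names are illustrative): upcasting the data themselves to float64 doubles the memory footprint, whereas passing a higher-precision accumulator via `dtype=` does not.

```python
import numpy as np

data32 = np.zeros((1000, 1000), dtype=np.float32)
data64 = data32.astype(np.float64)  # full upcast: every element now takes 8 bytes

print(data32.nbytes)  # 4 bytes per element
print(data64.nbytes)  # 8 bytes per element, i.e. twice the memory
```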
