Description
%a should output "Hexadecimal floating point, lowercase"
After fixing #7362, we still see some issues.
It seems like GNU coreutils prefers "shifting" the output so that there is a single hex digit between 0x1 and 0xf before the decimal point, while uutils always picks 0x1 and pads the fraction with zeros.
```
$ cargo run printf "%a %a\n" 15.125 16.125
0x1.e400000000000p+3 0x1.0200000000000p+4
$ printf "%a %a\n" 15.125 16.125
0xf.2p+0 0x8.1p+1
```
The value is technically correct though:
```
0x1.e400000000000p+3 = (1 + 14/16 + 4/256) * 2**3 = 15.125
0x1.0200000000000p+4 = (1 + 2/256)  * 2**4 = 16.125
```
(Note: be careful to prefix `printf` with `env`, as some shells provide a built-in `printf`.)
Also, the behaviour differs across platforms. Running `LANG=C env printf '%a %.6a\n' 0.12544 0.12544` in various Docker images (gist):
| arch | %a | %.6a |
|---|---|---|
| linux-386/linux-amd64 | 0x8.07357e670e2c12bp-6 | 0x8.07357ep-6 |
| linux-arm-v5/linux-arm-v7 | 0x1.00e6afcce1c58p-3 | 0x1.00e6b0p-3 |
| linux-arm64-v8/linux-mips64le/linux-ppc64le/linux-s390x | 0x1.00e6afcce1c58255b035bd512ec7p-3 | 0x1.00e6b0p-3 |
According to https://en.cppreference.com/w/c/io/fprintf: "The default precision is sufficient for exact representation of the value."
On x86, at most 16 nibbles = 64 bits are printed, including the integer part. That corresponds to the internal x86 80-bit floating point, the `long double` type. printf shifts 3 of the fraction bits into the integer part before the `.`, so that the whole 64-bit significand fits neatly in 16 nibbles when printed. Interestingly, this behaviour is preserved when a precision is specified (e.g. `%.6a`).
On arm64 (and several other archs), 28 nibbles = 112 bits are printed after the decimal point. That corresponds to a quad-precision 128-bit float, which is `long double` there.
On arm32, 13 nibbles = 52 bits are printed. That's a double-precision 64-bit float, which is also what `long double` is on that platform.