Fix for NumPy 1.25 #6970
Conversation
|
Most of the failures are related to changes in NumPy's cos/sin implementation. These changes affect many other packages in the ecosystem as well, so I presume it will be discussed at tomorrow's triage meeting @seberg? If we are guaranteed that the new implementation is more accurate than the previous one across the board, then I'd be okay with fixing all those tests. But if it varies from place to place, the change may warrant further discussion? |
|
For reference, we already dealt with this in Matplotlib: matplotlib/matplotlib#25813 and matplotlib/matplotlib#25789. |
|
fwiw I do think the change in NumPy should be reverted (agree with all the comments in the mailing list thread arguing for that, including the point about teaching), but at the same time, maybe we should also fix our tests to not depend on exact matches for those special values... |
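For what it's worth, here is a minimal, hypothetical sketch (not from this PR) of loosening an exact-match assertion on a trig-derived value to a tolerance-based comparison:

import numpy as np
from numpy.testing import assert_allclose

angle = np.deg2rad(30)              # hypothetical test parameter
result = np.cos(angle)              # last bits may differ across NumPy versions / SIMD paths
# instead of asserting an exact float (e.g. result == 0.8660254037844387),
# compare against the analytical value within a small absolute tolerance
assert_allclose(result, np.sqrt(3) / 2, rtol=0, atol=1e-12)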
|
@jni The way many of these tests were constructed is to take an image, transform it with some parameters (which affect the output shape), and confirm that the result is correct. Then that gets enshrined in the test suite. Now the result is still correct, but the shapes are slightly off :/ And testing for shape ±1 on all dimensions is a bit rough. So, how do we do it? Perhaps choose better angles for the transforms? |
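To make the shape sensitivity concrete, here is a rough sketch (hypothetical numbers, not skimage's actual shape computation) of how a rotated image's bounding box depends on cos/sin, so a last-bit change can push a product across an integer boundary and shift the ceil'd shape by one:

import numpy as np

h, w = 512, 512                          # hypothetical input shape
theta = np.deg2rad(33.0)                 # hypothetical rotation angle
# bounding box of the rotated image; np.ceil amplifies tiny differences
# in cos/sin into a +/- 1 change of the output shape
out_h = int(np.ceil(h * abs(np.cos(theta)) + w * abs(np.sin(theta))))
out_w = int(np.ceil(h * abs(np.sin(theta)) + w * abs(np.cos(theta))))
print((out_h, out_w))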
|
Do you see any flakiness of these same tests across architectures / platforms? |
|
I think our CI is limited that way (chip instruction sets), but at least across numpy versions during the transition we'll have problems. |
|
For Matplotlib we occasionally get reports from the Debian packager of test failures on exotic architecture / endianness / bit-size systems (most of them are RGBA values changing by ±1, presumably due to float rounding issues). I was wondering if skimage gets the same, and if these tests were the ones that also had issues there. If that were the case, it might be extra motivation to make these tests less brittle. |
|
|
Can we just close this now that NumPy reverted those changes? Or do we want to keep this as an improvement? |
|
I think this needs to be merged to remove the skip of NumPy 1.25 on Azure? |
|
We still need to sort out what to do about the 7 failing tests. |
|
I just noticed linux-cp3.11-pre has |
|
|
Enabling stable sorting in |
This reverts commit e57c23d.
NumPy enabled optimizations for quicksort on AVX-512 enabled processors [1]. We sort based on neighbor distances which can be equal. The default quicksort is not stable and could have different behavior on AVX-512. Test this assumption. [1] https://numpy.org/devdocs/release/1.25.0-notes.html#faster-np-argsort-on-avx-512-enabled-processors
after updating to stable sort internally.
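As a small illustration of the tie issue (hypothetical values, not the PR's data): with equal sort keys, "quicksort" is free to order the tied entries differently across NumPy versions or CPUs, while "stable" keeps their original order:

import numpy as np

distances = np.array([1.0, 2.0, 1.0, 2.0, 1.0])         # hypothetical neighbor distances with ties
order_stable = np.argsort(distances, kind="stable")     # ties keep input order: [0, 2, 4, 1, 3]
order_quick = np.argsort(distances, kind="quicksort")   # tie order is unspecified
# any array indexed with this permutation is only reproducible across
# platforms if the tie order is deterministic, i.e. with kind="stable"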
…umpy-1.25
# Conflicts:
#   skimage/morphology/_util.py
See if it does anything.
_, indices = np.unique(sorted_raveled_offsets, return_index=True)
sorted_raveled_offsets = sorted_raveled_offsets[np.sort(indices)]
sorted_distances = sorted_distances[np.sort(indices)]
indices = np.sort(indices, kind="stable")
sorted_raveled_offsets = sorted_raveled_offsets[indices]
indices contains only unique values, so "quicksort" and "stable" should have the same behavior.
These sorts probably don't differ by much, but why specify stable here if it's not necessary?
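A quick sanity check (hypothetical data) of the point above: when the values are all unique, as np.unique guarantees for these indices, the sort kind cannot change the result:

import numpy as np

indices = np.array([7, 3, 9, 1])   # unique values, like the output of np.unique(..., return_index=True)
assert np.array_equal(np.sort(indices, kind="quicksort"),
                      np.sort(indices, kind="stable"))
# with no ties there is only one correct ordering, so every kind agrees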
See if it does anything.
|
Okay, fixed the two remaining failing tests by switching to stable sorting at the appropriate place. Going forward, we should probably consider stable sorting as the default and use quicksort only if there is enough benefit to warrant the trade-off of unstable sorting.

Details

Subject: [PATCH] Use stable sort everywhere
---
Index: skimage/util/_regular_grid.py
===================================================================
diff --git a/skimage/util/_regular_grid.py b/skimage/util/_regular_grid.py
--- a/skimage/util/_regular_grid.py (revision 8a59eda7bb4609636ba8e0c9ecfa122f0b4bad7c)
+++ b/skimage/util/_regular_grid.py (date 1692119211772)
@@ -60,7 +60,7 @@
"""
ar_shape = np.asanyarray(ar_shape)
ndim = len(ar_shape)
- unsort_dim_idxs = np.argsort(np.argsort(ar_shape))
+ unsort_dim_idxs = np.argsort(np.argsort(ar_shape, kind="stable"))
sorted_dims = np.sort(ar_shape)
space_size = float(np.prod(ar_shape))
if space_size <= n_points:
Index: skimage/transform/tests/test_hough_transform.py
===================================================================
diff --git a/skimage/transform/tests/test_hough_transform.py b/skimage/transform/tests/test_hough_transform.py
--- a/skimage/transform/tests/test_hough_transform.py (revision 8a59eda7bb4609636ba8e0c9ecfa122f0b4bad7c)
+++ b/skimage/transform/tests/test_hough_transform.py (date 1692119211772)
@@ -260,7 +260,7 @@
min_ydistance=1, threshold=None,
num_peaks=np.inf,
total_num_peaks=np.inf)
- s = np.argsort(out[3]) # sort by radii
+ s = np.argsort(out[3], kind="stable") # sort by radii
assert_equal(out[1][s], np.array([y_0, y_1]))
assert_equal(out[2][s], np.array([x_0, x_1]))
assert_equal(out[3][s], np.array([rad_0, rad_1]))
Index: skimage/transform/hough_transform.py
===================================================================
diff --git a/skimage/transform/hough_transform.py b/skimage/transform/hough_transform.py
--- a/skimage/transform/hough_transform.py (revision 8a59eda7bb4609636ba8e0c9ecfa122f0b4bad7c)
+++ b/skimage/transform/hough_transform.py (date 1692119211772)
@@ -354,9 +354,9 @@
cy = np.array(cy)
accum = np.array(accum)
if normalize:
- s = np.argsort(accum / r)
+ s = np.argsort(accum / r, kind="stable")
else:
- s = np.argsort(accum)
+ s = np.argsort(accum, kind="stable")
accum_sorted, cx_sorted, cy_sorted, r_sorted = \
accum[s][::-1], cx[s][::-1], cy[s][::-1], r[s][::-1]
Index: skimage/feature/peak.py
===================================================================
diff --git a/skimage/feature/peak.py b/skimage/feature/peak.py
--- a/skimage/feature/peak.py (revision 8a59eda7bb4609636ba8e0c9ecfa122f0b4bad7c)
+++ b/skimage/feature/peak.py (date 1692119211772)
@@ -404,7 +404,7 @@
xcoords_peaks = np.array(xcoords_peaks)
if num_peaks < len(img_peaks):
- idx_maxsort = np.argsort(img_peaks)[::-1][:num_peaks]
+ idx_maxsort = np.argsort(img_peaks, kind="stable")[::-1][:num_peaks]
img_peaks = img_peaks[idx_maxsort]
ycoords_peaks = ycoords_peaks[idx_maxsort]
xcoords_peaks = xcoords_peaks[idx_maxsort]
Index: skimage/transform/radon_transform.py
===================================================================
diff --git a/skimage/transform/radon_transform.py b/skimage/transform/radon_transform.py
--- a/skimage/transform/radon_transform.py (revision 8a59eda7bb4609636ba8e0c9ecfa122f0b4bad7c)
+++ b/skimage/transform/radon_transform.py (date 1692119211772)
@@ -338,7 +338,7 @@
"""
interval = 180
- remaining_indices = list(np.argsort(theta)) # indices into theta
+ remaining_indices = list(np.argsort(theta, kind="stable")) # indices into theta
# yield an arbitrary angle to start things off
angle = theta[remaining_indices[0]]
yield remaining_indices.pop(0)
Index: skimage/morphology/grayreconstruct.py
===================================================================
diff --git a/skimage/morphology/grayreconstruct.py b/skimage/morphology/grayreconstruct.py
--- a/skimage/morphology/grayreconstruct.py (revision 8a59eda7bb4609636ba8e0c9ecfa122f0b4bad7c)
+++ b/skimage/morphology/grayreconstruct.py (date 1692119211772)
@@ -179,7 +179,8 @@
images = images.reshape(-1)
# Erosion goes smallest to largest; dilation goes largest to smallest.
- index_sorted = np.argsort(images).astype(signed_int_dtype, copy=False)
+ index_sorted = np.argsort(images, kind="stable")
+ index_sorted = index_sorted.astype(signed_int_dtype, copy=False)
if method == 'dilation':
index_sorted = index_sorted[::-1]
Though, now we get a memory corruption or something like that on pre. It looks reproducible. 🤔 See the windows job for more details. |
|
Well done figuring out the problem! |
|
Okay, after some digging I am still clueless:
Details
|
|
For local debugging with NumPy v1.26.0b1:

import numpy as np
from skimage.feature import multiscale_basic_features

channel_axis = 0
num_channels = 5
shape_spatial = (10, 10)
ndim = len(shape_spatial)
shape = tuple(
    np.insert(shape_spatial, channel_axis % (ndim + 1), num_channels)
)
img = np.zeros(shape)
img[:10] = 1
img += 0.05 * np.random.randn(*img.shape)
features = multiscale_basic_features(img, sigma_min=1, sigma_max=2)

Is it possible that
to 1, the error disappears... |
Follow-up to #6969 and #6973. This re-enables the tests for NumPy 1.25. We still need to sort out what to do about the 7 failing tests.