Align to coord #561
Conversation
Walkthrough

Adds a new alignment implementation `dascore.proc.align.align_to_coord`, exports it from `dascore.proc`, and aliases it on `Patch`; updates coords to handle multi-dimensional inputs and out-of-bounds indexing; adds tests and benchmarks; minor PR template wording fix.
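For orientation, a minimal usage sketch assembled from the docstring examples quoted later in this review; the example patch, the `my_shifts` coordinate name, and the shift values are illustrative, not part of the PR itself:

```python
import numpy as np
import dascore as dc

patch = dc.get_example_patch()
# Attach a non-dimensional coordinate holding one shift (in samples) per distance channel.
shifts = np.arange(len(patch.get_array("distance")))
patch = patch.update_coords(my_shifts=("distance", shifts))
# Shift every trace along time by its entry in "my_shifts"; "full" pads so nothing is lost.
aligned = patch.align_to_coord(time="my_shifts", samples=True, mode="full")
```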
Codecov Report

✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
## master #561 +/- ##
========================================
Coverage 99.92% 99.92%
========================================
Files 127 128 +1
Lines 10616 10747 +131
========================================
+ Hits 10608 10739 +131
Misses 8 8
Flags with carried forward coverage won't be shown.
Actionable comments posted: 4
🧹 Nitpick comments (1)
tests/test_proc/test_align.py (1)
199-224: Optional: Static analysis notes on unused arguments. Static analysis flags unused arguments `patch` (lines 201, 217) and `shift_coord` (lines 204, 217) in helper methods. These arguments provide a consistent signature across validators and enable future enhancements. If you prefer to suppress these warnings, consider adding `# noqa: ARG002` comments or use `_patch`, `_shift_coord` to signal intentionally unused parameters.
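If suppression is preferred, either form below works; the class and helper names are only a sketch of the test helpers mentioned above:

```python
class TestAlignToCoord:
    # Option 1: silence the rule explicitly.
    def _test_mode_full(self, patch, shifted_patch, shift_dim="time", shift_coord="shift_time"):  # noqa: ARG002
        ...

    # Option 2: rename to mark the arguments as intentionally unused.
    def _test_valid_mode(self, _patch, shifted_patch, shift_dim="time", _shift_coord="shift_time"):
        ...
```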
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
.github/pull_request_template.md (1 hunks)
benchmarks/test_align_benchmarks.py (1 hunks)
benchmarks/test_patch_benchmarks.py (1 hunks)
dascore/core/coords.py (2 hunks)
dascore/core/patch.py (1 hunks)
dascore/proc/__init__.py (1 hunks)
dascore/proc/align.py (1 hunks)
tests/test_core/test_coords.py (2 hunks)
tests/test_proc/test_align.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (7)
dascore/proc/__init__.py (1)
dascore/proc/align.py (1)
align_to_coord(193-284)
tests/test_core/test_coords.py (1)
dascore/core/coords.py (5)
CoordArray (1297-1425), values (986-990), values (1245-1260), data (606-608), get_coord (1519-1710)
dascore/core/coords.py (2)
dascore/compat.py (1)
is_array (40-43)
dascore/core/coordmanager.py (3)
min (1060-1062), max (1064-1066), ndim (807-809)
benchmarks/test_patch_benchmarks.py (4)
dascore/examples.py (1)
get_example_patch (659-706)
dascore/proc/coords.py (1)
update_coords (228-253)
dascore/core/patch.py (3)
coords (255-269), dims (212-226), Patch (28-511)
dascore/proc/align.py (1)
align_to_coord(193-284)
dascore/proc/align.py (3)
dascore/exceptions.py (1)
ParameterError (26-27)
dascore/utils/patch.py (1)
patch_function (180-286)
dascore/core/coords.py (13)
data (606-608), ndim (468-470), get_coord (1519-1710), change_length (879-894), change_length (1034-1039), change_length (1273-1284), update (742-755), update (969-971), values (986-990), values (1245-1260), get_next_index (795-859), index (702-721), new (592-603)
tests/test_proc/test_align.py (4)
dascore/exceptions.py (1)
ParameterError (26-27)
dascore/examples.py (1)
get_example_patch (659-706)
dascore/proc/coords.py (1)
update_coords (228-253)
dascore/proc/align.py (1)
align_to_coord(193-284)
dascore/core/patch.py (1)
dascore/proc/align.py (1)
align_to_coord(193-284)
🪛 Ruff (0.13.2)
dascore/proc/align.py
94-94: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
tests/test_proc/test_align.py
201-201: Unused method argument: patch
(ARG002)
204-204: Unused method argument: shift_coord
(ARG002)
217-217: Unused method argument: shift_coord
(ARG002)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: test_code (windows-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.10)
- GitHub Check: test_code (macos-latest, 3.11)
- GitHub Check: test_code_min_deps (macos-latest, 3.12)
- GitHub Check: test_code (ubuntu-latest, 3.13)
- GitHub Check: test_code (macos-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.11)
- GitHub Check: test_code (ubuntu-latest, 3.10)
- GitHub Check: test_code (windows-latest, 3.12)
- GitHub Check: test_code_min_deps (windows-latest, 3.12)
- GitHub Check: test_code (ubuntu-latest, 3.12)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.13)
- GitHub Check: test_code (macos-latest, 3.12)
- GitHub Check: test_code (ubuntu-latest, 3.11)
- GitHub Check: test_code_min_deps (macos-latest, 3.13)
- GitHub Check: test_code_min_deps (windows-latest, 3.13)
- GitHub Check: Run benchmarks
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.12)
🔇 Additional comments (20)
.github/pull_request_template.md (1)
6-6: LGTM! Grammar improvement. The wording change improves readability and correctness.
benchmarks/test_align_benchmarks.py (1)
1-3: Placeholder benchmark file detected. This file contains only a docstring and import but no benchmark functions. Since `benchmarks/test_patch_benchmarks.py` already includes a `TestAlignBenchmarks` class with concrete benchmark tests, consider either:
- Adding benchmark implementations here if you intend to separate alignment benchmarks, or
- Removing this placeholder file and keeping all alignment benchmarks in `test_patch_benchmarks.py`.
Clarify the intent for this file to avoid maintenance confusion.
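If the file is kept, a minimal benchmark in the spirit of the existing patch benchmarks might look like the sketch below; the `benchmark` fixture-style runner and the coordinate name are assumptions, not the project's actual setup:

```python
import numpy as np
import dascore as dc


def test_align_2d_full(benchmark):
    """Hypothetical benchmark: time a 2D alignment in 'full' mode."""
    patch = dc.get_example_patch()
    shifts = np.arange(len(patch.get_array("distance")))
    patch = patch.update_coords(shift_time=("distance", shifts))
    benchmark(lambda: patch.align_to_coord(time="shift_time", samples=True, mode="full"))
```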
tests/test_core/test_coords.py (2)
21-21: LGTM! Public API exposure for CoordArray. CoordArray is now accessible for tests, aligning with the broader alignment functionality that supports multi-dimensional coordinate input.
1955-1960: LGTM! Validates 2D datetime64 handling. This test confirms that `get_coord` correctly returns a `CoordArray` for 2D datetime64 input, supporting the alignment functionality across multiple dimensions.
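Roughly, the behavior being verified looks like the sketch below; the exact `get_coord` keyword is an assumption here, not a statement of the real signature:

```python
import numpy as np
from dascore.core.coords import CoordArray, get_coord

# A 2 x 3 datetime64 array: no single monotonic, evenly-sampled 1D axis.
times = np.datetime64("2023-01-01") + np.arange(6).reshape(2, 3) * np.timedelta64(1, "s")
coord = get_coord(data=times)  # keyword assumed; see the actual test for the real call
assert isinstance(coord, CoordArray) and coord.ndim == 2
```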
dascore/core/coords.py (2)
677-681: LGTM! Array-aware relative value computation. The updated logic correctly handles both scalar and array inputs by vectorizing the computation with `np.where` for arrays while preserving the original scalar behavior.
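The pattern being described, as a standalone sketch (function and variable names are assumptions, not the actual dascore code):

```python
import numpy as np

def relative_to_absolute(values, start, stop):
    """Treat non-negative values as offsets from start, negative ones as offsets from stop."""
    values = np.asarray(values)
    if values.ndim:  # array input: vectorize the branch with np.where
        return np.where(values >= 0, start + values, stop + values)
    # scalar input: keep the simple ternary behavior
    return start + values if values >= 0 else stop + values
```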
1630-1632: LGTM! Handles multi-dimensional input correctly. The early return for `data.ndim > 1` prevents incorrect monotonic/even sampling checks on multi-dimensional arrays, ensuring they are treated as `CoordArray` instances.
benchmarks/test_patch_benchmarks.py (3)
316-326: LGTM! Correctly constructs 2D alignment benchmark fixture. The fixture creates a realistic alignment scenario with a 1D shift coordinate based on the distance dimension.
328-350: LGTM! Correctly constructs 4D alignment benchmark fixture. The fixture properly extends the patch to 4D and creates multi-dimensional shift coordinates for comprehensive benchmark coverage.
352-374: LGTM! Comprehensive alignment benchmark coverage. The benchmark tests cover both 2D and 4D alignment scenarios with multiple modes (full, valid, same), providing thorough performance measurement of the alignment functionality.
dascore/proc/__init__.py (1)
19-19: LGTM! Public API augmentation. The import exposes `align_to_coord` as part of the `dascore.proc` public API, enabling direct usage of the new alignment functionality.
dascore/core/patch.py (1)
426-426: LGTM! Exposes align_to_coord on Patch. The alias enables `patch.align_to_coord(...)` usage, consistent with the existing pattern of exposing processing functions as Patch methods.
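The aliasing pattern being referenced, roughly (illustrative only; the real `Patch` class body differs):

```python
import dascore.proc.align as _align

class Patch:
    ...
    # Exposing the processing function as a method-style alias, so
    # patch.align_to_coord(...) forwards to dascore.proc.align.align_to_coord.
    align_to_coord = _align.align_to_coord
```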
tests/test_proc/test_align.py (6)
14-29: LGTM! Comprehensive 2D alignment fixture. The fixture creates a patch with multiple shift coordinate representations (absolute, samples, relative) for thorough alignment testing.
32-62: LGTM! Comprehensive 4D alignment fixture. The fixture creates a 4D patch with shift coordinates of varying dimensionalities (1D, 2D, 3D) for comprehensive multi-dimensional alignment testing.
65-110: LGTM! Well-designed fixtures for alignment correctness tests. The fixtures create predictable test scenarios (zero shifts, known positive shifts, mixed shifts) with constant-value traces, enabling precise validation of alignment behavior.
113-158: LGTM! Comprehensive validation tests. The validation tests thoroughly cover argument handling for `align_to_coord`, ensuring proper error handling for invalid inputs.
161-296: LGTM! Comprehensive alignment behavioral tests. The tests thoroughly cover alignment behavior across different modes (full, same, valid) and coordinate representations (absolute, samples, relative) for both 2D and 4D patches.
297-411: LGTM! Thorough correctness validation. The correctness tests use predictable constant-value traces to validate exact data values, NaN placements, and overlap semantics across different shift scenarios and modes.
dascore/proc/align.py (3)
192-284: Well-structured implementation with clear separation of concerns. The function is well-organized with clear helper functions, comprehensive documentation, and good use of assertions. The requirement for evenly-sampled coordinates (line 267) appropriately addresses the `change_length` constraint identified in helper functions. Strengths:
- Clear separation into validation, calculation, and application phases
- Comprehensive docstring with visual examples
- Type hints and default values
- Assertion to verify dimensionality preservation
- Proper use of the `@patch_function()` decorator for integration with the Patch API
155-190: No action needed: all coordinate types inherit `change_length` from `BaseCoord`.
267-270: Resolved — `get_next_index` preserves input shape for array-like (including multi-dimensional) values.
267-270: Resolved — get_next_index preserves input shape for array-like (including multi-dimensional) values.BaseCoord.get_next_index uses np.atleast_1d and returns an array when the input is array-like; both BaseCoord._get_index (np.round/elementwise math) and CoordMonotonicArray._get_index (np.searchsorted) operate elementwise and return indices with the same shape as the provided value (tests exercise array inputs in tests/test_core/test_coords.py).
dascore/proc/align.py
Outdated
```python
def _calculate_mode_parameters(mode, shifts, n_samples):
    """
    Calculate output size and offsets based on alignment mode.

    Parameters
    ----------
    mode
        One of "full", "valid", or "same".
    shifts
        Array of shift amounts.
    n_samples
        Original number of samples along the dimension.

    Returns
    -------
    dict with keys:
        output_size: Size of output dimension
        adjusted_shifts: Shifts adjusted for the output coordinate frame
        start_offset_samples: Offset in samples from original start
    """
    shifts = np.asarray(shifts, dtype=np.int64)
    min_shift, max_shift = shifts.min(), shifts.max()
    if mode == "full":
        # Expand dimension to fit all shifts
        output_size = n_samples + max_shift - min_shift
        adjusted_shifts = shifts - min_shift
        start_offset_samples = -min_shift
    elif mode == "same":
        # Keep original size
        output_size = n_samples
        adjusted_shifts = shifts
        start_offset_samples = 0
    else:  # mode == "valid"
        # Find overlapping region
        output_size = n_samples - (max_shift - min_shift)
        adjusted_shifts = shifts - min_shift
        start_offset_samples = max_shift - min_shift
    return {
        "output_size": output_size,
        "adjusted_shifts": adjusted_shifts,
        "start_offset_samples": start_offset_samples,
    }
```
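A quick numeric check of the three modes using the helper exactly as shown above (shift values picked arbitrarily for illustration):

```python
import numpy as np

shifts, n_samples = np.array([-2, 0, 3]), 10
_calculate_mode_parameters("full", shifts, n_samples)["output_size"]   # 10 + 3 - (-2) = 15
_calculate_mode_parameters("same", shifts, n_samples)["output_size"]   # 10
_calculate_mode_parameters("valid", shifts, n_samples)["output_size"]  # 10 - (3 - (-2)) = 5
```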
Add input validation for edge cases.
The function does not validate inputs and could fail or produce invalid results in edge cases:
- Empty `shifts` array would crash on `shifts.min()` / `shifts.max()`
- `n_samples <= 0` could produce invalid output sizes
- In "valid" mode, `output_size` could become negative when `max_shift - min_shift > n_samples`
Consider adding validation at the start:
```diff
 def _calculate_mode_parameters(mode, shifts, n_samples):
     """
     Calculate output size and offsets based on alignment mode.
     Parameters
     ----------
     mode
         One of "full", "valid", or "same".
     shifts
         Array of shift amounts.
     n_samples
         Original number of samples along the dimension.
     Returns
     -------
     dict with keys:
         output_size: Size of output dimension
         adjusted_shifts: Shifts adjusted for the output coordinate frame
         start_offset_samples: Offset in samples from original start
     """
     shifts = np.asarray(shifts, dtype=np.int64)
+    if shifts.size == 0:
+        msg = "shifts array cannot be empty"
+        raise ParameterError(msg)
+    if n_samples <= 0:
+        msg = f"n_samples must be positive, got {n_samples}"
+        raise ParameterError(msg)
     min_shift, max_shift = shifts.min(), shifts.max()
     if mode == "full":
         # Expand dimension to fit all shifts
         output_size = n_samples + max_shift - min_shift
         adjusted_shifts = shifts - min_shift
         start_offset_samples = -min_shift
     elif mode == "same":
         # Keep original size
         output_size = n_samples
         adjusted_shifts = shifts
         start_offset_samples = 0
     else:  # mode == "valid"
         # Find overlapping region
         output_size = n_samples - (max_shift - min_shift)
+        if output_size <= 0:
+            msg = (
+                f"No valid overlap: n_samples={n_samples}, "
+                f"shift_range={max_shift - min_shift}"
+            )
+            raise ParameterError(msg)
         adjusted_shifts = shifts - min_shift
         start_offset_samples = max_shift - min_shift
     return {
         "output_size": output_size,
         "adjusted_shifts": adjusted_shifts,
         "start_offset_samples": start_offset_samples,
     }
```
CodSpeed Performance Report
Merging #561 will not alter performance.
Actionable comments posted: 1
♻️ Duplicate comments (3)
dascore/proc/align.py (3)
191-290: Consider adding runtime mode validation. The function has a `Literal["full", "valid", "same"]` type hint for `mode`, but no runtime validation. Invalid modes will fail deep in helper functions rather than at the entry point with a clear error message. Add validation after the docstring:

```diff
     >>> out = patch_coord.align_to_coord(time="shift_time", mode="full")
     >>>
     >>> # Example 2: Round-trip alignment with reverse parameter
     >>> shifts = np.array([0, 5, 10, 15, 20])  # shifts in samples
     >>> patch_shift = patch.update_coords(my_shifts=("distance", shifts))
     >>> # Forward alignment
     >>> aligned = patch_shift.align_to_coord(
     ...     time="my_shifts", samples=True, mode="full"
     ... )
     >>> # Reverse to get back original (after dropping NaN padding)
     >>> reversed_patch = aligned.align_to_coord(
     ...     time="my_shifts", samples=True, mode="full", reverse=True
     ... )
     >>> original = reversed_patch.dropna("time")
     >>> assert original.equals(patch_shift)
     >>>
     >>> # Example 3: Use 'valid' mode to extract only overlapping region
     >>> valid_aligned = patch_shift.align_to_coord(
     ...     time="my_shifts", samples=True, mode="valid"
     ... )
     >>> # Result contains no NaN values, only the overlapping data
     >>> assert not np.isnan(valid_aligned).any()
     """
+    # Validate mode parameter
+    valid_modes = ("full", "valid", "same")
+    if mode not in valid_modes:
+        msg = f"mode must be one of {valid_modes}, got {mode!r}"
+        raise ParameterError(msg)
+
     # Validate inputs and get coordinates.
     dim_name, coord_name = _validate_alignment_inputs(patch, kwargs)
```
109-126: Add `strict=True` to `zip()` call. As flagged by static analysis and a previous review comment, the `zip()` call at line 104 should use `strict=True` to ensure `coord_axes` and `idx` have matching lengths, catching potential bugs early. Apply this diff:

```diff
-    for i, ax in zip(idx, coord_axes):
+    for i, ax in zip(idx, coord_axes, strict=True):
         out[ax] = i
```
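For context, the difference the flag makes in plain Python (3.10+):

```python
list(zip([1, 2, 3], ["a", "b"]))                 # [(1, 'a'), (2, 'b')] -- silently drops 3
# list(zip([1, 2, 3], ["a", "b"], strict=True))  # raises ValueError on the length mismatch
```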
128-161: Fix function name in error messages. The function is named `align_to_coord` (singular), but all error messages refer to `align_to_coords` (plural). This inconsistency could confuse users. Apply this diff to correct the error messages:

```diff
     if len(kwargs) != 1:
-        msg = "align_to_coords requires exactly one keyword argument"
+        msg = "align_to_coord requires exactly one keyword argument"
         raise ParameterError(msg)
     (dim_name, coord_name) = dict(kwargs).popitem()
     if not isinstance(coord_name, str):
-        msg = "align_to_coords requires keyword name and value to be strings."
+        msg = "align_to_coord requires keyword name and value to be strings."
         raise ParameterError(msg)
     # key must be a dimension and value a non-dimensional coord name.
     if dim_name not in patch.dims:
         msg = (
-            f"align_to_coords requires the keyword to be a dimension, but "
+            f"align_to_coord requires the keyword to be a dimension, but "
             f"{dim_name} is not one of the patch dimensions: {patch.dims}"
         )
         raise ParameterError(msg)
     if coord_name not in patch.coords.coord_map or coord_name in patch.dims:
         msg = (
-            f"align_to_coords requires the value to be the name of a non-dimensional "
+            f"align_to_coord requires the value to be the name of a non-dimensional "
             f"coordinate but '{coord_name}' is either not a coordinate or also "
             f"a dimension. Patch dimensions are: {patch.dims} and "
             f"patch coords: {patch.coords}"
         )
         raise ParameterError(msg)
     # Value must not depend on selected dim.
     value_dims = patch.coords.dim_map[coord_name]
     if dim_name in value_dims:
         msg = (
-            f"align_to_coords requires the align coord not depend on the selected"
+            f"align_to_coord requires the align coord not depend on the selected"
             f" dimension but {dim_name} does."
         )
         raise ParameterError(msg)
```
🧹 Nitpick comments (1)
tests/test_proc/test_align.py (1)
164-230: Consider removing unused parameters for clarity. The test helper methods `_test_mode_full`, `_test_mode_same`, and `_test_valid_mode` have unused parameters (`patch` and/or `shift_coord`) that are flagged by static analysis. While these might be kept for interface consistency, consider whether they're actually needed. If the parameters aren't needed for future extensibility, apply this diff:

```diff
     def _test_mode_same(
         self,
-        patch,
         shifted_patch,
         shift_dim="time",
         shift_coord="shift_time",
     ):
-        """Helper function to test mode=same."""
-        assert shifted_patch.shape == patch.shape
+        """Helper function to test mode=same."""
         # The shift value should be the same as the number of NaN in each
         # trace (due to how the shift_time coord was built).
         time = shifted_patch.get_coord(shift_dim)
         axis = shifted_patch.get_axis(shift_dim)
         nan_count = np.isnan(shifted_patch.data).sum(axis=axis, keepdims=True)
         shift_values = shifted_patch.get_array(shift_coord)
         # Handle different shift coordinate types
         if shift_coord.endswith("_samples"):
             # Shift coord is already in samples
             shift = shift_values
         elif shift_coord.endswith("_relative"):
             # Shift coord is relative to start, just divide by step
             shift = shift_values / time.step
         else:
             # Shift coord is in absolute units, convert to samples
             shift = (shift_values - time.min()) / time.step
         # Need to broadcast up shift.
         if nan_count.ndim > shift.ndim:
             dims = shifted_patch.coords.dim_map[shift_coord]
             missing = set(shifted_patch.dims) - set(dims)
             args = [slice(None)] * nan_count.ndim
             for dim in missing:
                 args[shifted_patch.get_axis(dim)] = None
             shift = shift[tuple(args)]
         assert np.allclose(nan_count, shift)

     def _test_mode_full(
         self,
-        patch,
         shifted_patch,
         shift_dim="time",
-        shift_coord="shift_time",
     ):
         """Tests for full mode shifts."""
         # The number of NaN should be the same for all channels.
         axis = shifted_patch.get_axis(shift_dim)
         nan_count = np.isnan(shifted_patch.data).sum(axis=axis)
         assert len(np.unique(nan_count)) == 1

     def _test_valid_mode(
         self,
-        patch,
         shifted_patch,
         shift_dim="time",
-        shift_coord="shift_time",
     ):
         """Tests for when the valid mode is used."""
         # Valid mode should produce smaller or equal sized output
         axis = shifted_patch.get_axis(shift_dim)
-        assert shifted_patch.shape[axis] <= patch.shape[axis]
         # There should be no NaN values in valid mode
         assert np.isnan(shifted_patch.data).sum() == 0
```

Then update the validator calls to match the new signatures (remove the `patch` argument from all calls, and remove `shift_coord` from the `_test_mode_full` and `_test_valid_mode` calls).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
dascore/proc/align.py (1 hunks)
tests/test_proc/test_align.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
tests/test_proc/test_align.py (5)
dascore/exceptions.py (1)
ParameterError (26-27)
dascore/examples.py (1)
get_example_patch (659-706)
dascore/proc/coords.py (1)
update_coords (228-253)
dascore/proc/align.py (1)
align_to_coord (192-315)
dascore/proc/basic.py (1)
dropna(438-494)
dascore/proc/align.py (5)
dascore/exceptions.py (1)
ParameterError (26-27)
dascore/utils/misc.py (1)
iterate (257-269)
dascore/utils/patch.py (1)
patch_function (180-286)
dascore/core/patch.py (6)
dtype (318-320), shape (286-299), ndim (229-231), data (272-283), dims (212-226), coords (255-269)
dascore/core/coords.py (15)
min (438-440), max (443-445), ndim (468-470), data (606-608), get_coord (1519-1710), update (742-755), update (969-971), change_length (879-894), change_length (1034-1039), change_length (1273-1284), get_next_index (795-859), values (986-990), values (1245-1260), index (702-721), new (592-603)
🪛 Ruff (0.13.3)
tests/test_proc/test_align.py
201-201: Unused method argument: patch
(ARG002)
204-204: Unused method argument: shift_coord
(ARG002)
217-217: Unused method argument: shift_coord
(ARG002)
dascore/proc/align.py
55-55: Unused function argument: reverse
(ARG001)
104-104: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: test_code (windows-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.11)
- GitHub Check: test_code (windows-latest, 3.13)
- GitHub Check: test_code (macos-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.10)
- GitHub Check: test_code (macos-latest, 3.11)
- GitHub Check: test_code (macos-latest, 3.10)
- GitHub Check: test_code (ubuntu-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.12)
- GitHub Check: test_code (ubuntu-latest, 3.11)
- GitHub Check: test_code (ubuntu-latest, 3.10)
- GitHub Check: test_code_min_deps (windows-latest, 3.13)
- GitHub Check: test_code_min_deps (windows-latest, 3.12)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.12)
- GitHub Check: test_code_min_deps (macos-latest, 3.12)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.13)
- GitHub Check: test_code_min_deps (macos-latest, 3.13)
- GitHub Check: Run benchmarks
🔇 Additional comments (10)
tests/test_proc/test_align.py (4)
14-111: LGTM! Well-structured test fixtures. The fixtures provide comprehensive coverage of alignment scenarios including 2D/4D patches, various coordinate representations (absolute, sampled, relative), and specific shift patterns (zero, positive, mixed). This sets up a solid foundation for thorough testing.
113-159: LGTM! Comprehensive validation tests. The validation tests properly verify all input constraints including:
- Exactly one keyword argument requirement
- Valid dimension and coordinate names
- Prohibition of dimensional coordinates
- Independence from the shift dimension
- Type checks for string arguments
232-296: LGTM! Comprehensive parametrized tests. The parametrized tests effectively verify alignment behavior across multiple dimensions (2D/4D patches), coordinate representations (absolute/sampled/relative), and modes (full/same/valid).
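The parametrization pattern in question looks roughly like this; the fixture and coordinate names below are placeholders, not the actual test code:

```python
import pytest


@pytest.mark.parametrize("mode", ["full", "same", "valid"])
@pytest.mark.parametrize(
    "coord", ["shift_time", "shift_time_samples", "shift_time_relative"]
)
def test_alignment_modes(patch_with_shift_coords, mode, coord):
    """Hypothetical sketch: every mode/coordinate combination should align cleanly."""
    out = patch_with_shift_coords.align_to_coord(
        time=coord, mode=mode, samples=coord.endswith("_samples")
    )
    assert out is not None
```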
297-447: Excellent correctness verification tests. These tests thoroughly validate the correctness of the alignment implementation by:
- Verifying data preservation with zero shifts
- Checking proper NaN placement with positive shifts in same mode
- Validating data positioning in full mode
- Confirming no NaN values in valid mode
- Testing round-trip alignment with the reverse parameter
The tests use concrete, predictable data which makes verification straightforward and debuggable.
dascore/proc/align.py (6)
19-30: LGTM! Clean immutable dataclass design. The frozen dataclass provides a clean, type-safe way to package shift metadata and ensures immutability, which is appropriate for metadata that shouldn't be modified.
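For reference, the shape of that container implied by the `_ShiftInfo(...)` construction shown later on this page; the type annotations here are assumptions:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class _ShiftInfo:
    """Immutable bundle of shift metadata (field names taken from the constructor call)."""
    mode: str
    output_length: int
    start_shift: int
    end_shift: int
    new_shape: tuple
    source_slice: object
    dest_slice: object
```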
32-47: LGTM! Clear index calculation helpers. The `_get_source_indices` and `_get_dest_indices` functions cleanly separate the logic for calculating source and destination array indices with proper boundary handling using `np.where` and `np.maximum`.
100-107: LGTM! Clean slice indexer creation. The `_create_slice_indexer` function provides a clear, focused helper for creating slice indexers from start/stop indices.
164-174: LGTM! Clean coordinate update logic. The `_get_aligned_coords` function cleanly updates the dimension coordinate to reflect the alignment by adjusting the minimum value and changing the length based on the shift metadata.
176-189: LGTM! Clear shift index calculation. The `_get_shift_indices` function properly handles the conversion of coordinate values to indices, including support for samples, relative coordinates, and reverse alignment. The normalization to make the minimum shift zero is a good design choice.
291-315: LGTM! Clean main function implementation. The main function body is well-structured with clear steps:
- Input validation and coordinate extraction
- Axis and dimension mapping
- Shift calculation
- Data transformation
- Coordinate updates
The assertion at line 311 provides a good sanity check that dimensionality is preserved.
Actionable comments posted: 0
🧹 Nitpick comments (1)
dascore/proc/align.py (1)
328-352: Consider replacing assert with explicit validation. The implementation is well-structured and correctly orchestrates the alignment process. However, Line 348 uses an `assert` statement in production code. Since assertions can be disabled with Python's `-O` flag, consider replacing it with an explicit check and raising an appropriate exception if the invariant is violated. Apply this diff to use explicit validation:

```diff
-    assert shifted_data.ndim == patch.data.ndim, "dimensionality changed"
+    if shifted_data.ndim != patch.data.ndim:
+        msg = (
+            f"Alignment changed data dimensionality: "
+            f"expected {patch.data.ndim}, got {shifted_data.ndim}"
+        )
+        raise ValueError(msg)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
dascore/proc/align.py (1 hunks)
tests/test_proc/test_align.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
tests/test_proc/test_align.py (5)
dascore/exceptions.py (1)
ParameterError (26-27)
dascore/examples.py (1)
get_example_patch (659-706)
dascore/proc/coords.py (1)
update_coords (228-253)
dascore/proc/align.py (1)
align_to_coord (202-352)
dascore/proc/basic.py (1)
dropna(438-494)
dascore/proc/align.py (4)
dascore/exceptions.py (1)
ParameterError (26-27)
dascore/utils/misc.py (1)
iterate (257-269)
dascore/utils/patch.py (1)
patch_function (180-286)
dascore/core/coords.py (14)
min (438-440), max (443-445), ndim (468-470), data (606-608), get_coord (1519-1710), values (986-990), values (1245-1260), update (742-755), update (969-971), change_length (879-894), change_length (1034-1039), change_length (1273-1284), get_next_index (795-859), index (702-721)
🪛 Ruff (0.13.3)
tests/test_proc/test_align.py
201-201: Unused method argument: patch
(ARG002)
204-204: Unused method argument: shift_coord
(ARG002)
217-217: Unused method argument: shift_coord
(ARG002)
dascore/proc/align.py
103-103: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: test_code (windows-latest, 3.10)
- GitHub Check: test_code (macos-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.11)
- GitHub Check: test_code (macos-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.12)
- GitHub Check: test_code (ubuntu-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.12)
- GitHub Check: test_code (macos-latest, 3.11)
- GitHub Check: test_code (ubuntu-latest, 3.11)
- GitHub Check: test_code (ubuntu-latest, 3.10)
- GitHub Check: test_code_min_deps (windows-latest, 3.12)
- GitHub Check: test_code_min_deps (windows-latest, 3.13)
- GitHub Check: test_code_min_deps (macos-latest, 3.13)
- GitHub Check: test_code_min_deps (macos-latest, 3.12)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.12)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.13)
- GitHub Check: Run benchmarks
🔇 Additional comments (12)
tests/test_proc/test_align.py (5)
1-111: Well-designed test fixtures. The fixture hierarchy is comprehensive and provides good coverage for different alignment scenarios (2D/4D patches, various shift patterns). The progressive complexity from simple to complex patches facilitates clear test organization.
113-159: Comprehensive validation tests. The validation tests thoroughly cover edge cases and error conditions, ensuring robust input handling. Good use of pytest.raises with match patterns for precise error checking.
164-230: Static analysis warnings are false positives. The helper methods (`_test_mode_same`, `_test_mode_full`, `_test_valid_mode`) share a common signature for polymorphic dispatch via the `_validators` dictionary. Although not all parameters are used in every helper, maintaining the common interface ensures consistent calling conventions and is a valid design pattern for test validators.
232-283: Excellent test parametrization. The parametrized tests efficiently cover multiple modes (full, same, valid) and coordinate representations (absolute, samples, relative) across different patch dimensionalities. This ensures comprehensive validation without code duplication.
284-370: Robust round-trip validation. The round-trip tests with the `reverse` parameter provide strong validation of alignment correctness. Testing with various shift patterns (zero, positive, mixed) and modes ensures the implementation handles edge cases and maintains data integrity through forward and reverse operations.
dascore/proc/align.py (7)
1-30: Clean module structure. The module organization is clear with a well-defined `_ShiftInfo` dataclass that appropriately uses `frozen=True` for immutability. The fields capture all necessary shift metadata for downstream operations.
32-47: Correct index calculation logic. The helper functions `_get_source_indices` and `_get_dest_indices` correctly compute array indices with appropriate boundary handling using numpy operations. The logic aligns with the three alignment modes.
49-97: Correct shift calculation logic. The function correctly implements the three alignment modes (full, same, valid) with appropriate index calculations and boundary handling. The logic matches the documented behavior and examples.
99-125: Correct data shifting implementation. The functions correctly implement the data shifting logic, iterating over all necessary indices and copying data from source to destination slices with proper fill value handling.
127-161: Comprehensive input validation. The validation function thoroughly checks all input requirements, ensuring exactly one keyword argument, correct types, valid dimension and coordinate names, and proper independence between the dimension and alignment coordinate.
163-199: Correct coordinate handling. The coordinate update logic correctly implements the three modes (absolute, relative, samples) with appropriate calculations for the new coordinate range. The reverse parameter is properly handled by inverting shifts while maintaining the reference point.
201-327: Excellent documentation. The docstring is comprehensive and well-structured, with clear parameter descriptions, practical examples demonstrating key use cases, and detailed notes explaining the alignment semantics. The visual ASCII diagrams effectively illustrate the mode parameter behavior.
Actionable comments posted: 1
🧹 Nitpick comments (1)
dascore/proc/align.py (1)
358-358: Replace assertion with proper error handling. The assertion on Line 358 is informal for production code. Consider replacing it with a proper error check that raises a descriptive exception if the dimensionality changes unexpectedly. Apply this diff:

```diff
-    assert shifted_data.ndim == patch.data.ndim, "dimensionality changed"
+    if shifted_data.ndim != patch.data.ndim:
+        msg = (
+            f"Internal error: dimensionality changed from {patch.data.ndim} "
+            f"to {shifted_data.ndim} during alignment"
+        )
+        raise RuntimeError(msg)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
dascore/core/coords.py (4 hunks)
dascore/proc/align.py (1 hunks)
tests/test_core/test_coords.py (4 hunks)
tests/test_proc/test_align.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/test_core/test_coords.py
- tests/test_proc/test_align.py
🔇 Additional comments (12)
dascore/core/coords.py (4)
674-681: LGTM! Good vectorization approach. The conditional logic correctly handles both array and scalar inputs. The vectorized `np.where` path for arrays is efficient, while scalar inputs retain the simpler ternary expression. The treatment of zero as positive (offset from start) is correct for relative indexing.
837-840: LGTM! Correct negative indexing implementation. The wrap-around logic correctly implements Python-style negative indexing for samples mode. The conditional guard ensures this only applies when `allow_out_of_bounds` is False, which is appropriate for backward compatibility.
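The wrap-around being described, as a standalone sketch (the helper name is an assumption):

```python
import numpy as np

def wrap_negative_samples(indices, n_samples):
    """Map Python-style negative sample indices onto the end of the axis."""
    indices = np.asarray(indices)
    return np.where(indices < 0, indices + n_samples, indices)

wrap_negative_samples([-1, 0, 2], 10)  # -> array([9, 0, 2])
```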
1644-1646: LGTM! Reasonable guard for unsupported input. The early return for multi-dimensional arrays is appropriate. Returning `(None, None, None, False)` clearly signals that start/stop/step cannot be determined for multi-dimensional data, bypassing the monotonic/even sampling detection logic designed for 1D arrays.
857-860: Verify edge case: division by zero risk. The index calculation `(array - min_val) / self.step` on line 859 could fail if `self.step` is zero or very close to zero. While the check `self.step is not None` prevents None values, a zero step would cause a division by zero error. Additionally, the truncation via `.astype(np.int64)` rather than rounding may introduce slight precision issues when values are not exactly on sample points. If this is intentional for the alignment use case, consider documenting this behavior.
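A small illustration of the truncation concern (values chosen only for illustration):

```python
import numpy as np

array, min_val, step = np.array([0.0, 0.49, 0.51]), 0.0, 1.0
((array - min_val) / step).astype(np.int64)           # truncation -> [0, 0, 0]
np.round((array - min_val) / step).astype(np.int64)   # rounding   -> [0, 0, 1]
# A zero (or near-zero) step would also need a guard before the division.
```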
dascore/proc/align.py (8)
19-30: LGTM! The frozen dataclass is an appropriate choice for immutable shift metadata, and all fields are well-typed.
32-46: LGTM! The boundary handling logic using `np.where` correctly prevents out-of-bounds indexing for both source and destination arrays.
104-110: LGTM! The addition of `strict=True` to the `zip()` call on Line 108 ensures `idx` and `coord_axes` have matching lengths, addressing the previous review comment.
113-129: LGTM! The shift application logic correctly handles multi-dimensional slicing and properly fills gaps with the specified `fill_value`.
132-165: LGTM! The input validation is comprehensive and error messages now correctly reference "align_to_coord" (singular), addressing the previous review comment.
168-188: LGTM! The coordinate alignment logic correctly distinguishes between absolute positioning (using minimum shift coordinate values) and offset-based modes (using calculated start_shift).
191-208: LGTM! The shift index calculation correctly allows out-of-bounds indices in absolute mode (where shift values define new absolute positions) and properly handles the `reverse` flag for undoing alignments.
211-362: LGTM! The main function is well-structured with comprehensive documentation and clear examples. The implementation correctly handles the three alignment modes (full, same, valid) and the different interpretation modes (absolute, relative, samples). The `reverse` parameter appropriately supports round-trip alignment operations.
```python
def _calculate_shift_info(
    mode,
    shifts,
    dim,
    dim_axis,
    patch_shape,
):
    """
    Calculate metadata about the shift.
    """
    # Validate mode parameter
    valid_modes = ("full", "same", "valid")
    if mode not in valid_modes:
        msg = f"mode must be one of {valid_modes} but got '{mode}'"
        raise ParameterError(msg)

    # Convert shifts to array
    shifts = np.asarray(shifts, dtype=np.int64)
    n_samples = len(dim)
    min_shift, max_shift = shifts.min(), shifts.max()

    if mode == "full":
        # Shift value to left/right most index in reference to stationary trace.
        start_shift = min_shift
        end_shift = max_shift
    elif mode == "same":
        # Keep original size (and dimension)
        start_shift = 0
        end_shift = 0
    else:
        # mode == "valid"
        # Find overlapping region
        start_shift = max_shift if max_shift > 0 else 0
        end_shift = min_shift if min_shift < 0 else 0

    output_size = n_samples + end_shift - start_shift
    # get slices for start/stop
    source_inds = _get_source_indices(shifts, start_shift, end_shift, n_samples)
    dest_inds = _get_dest_indices(shifts, start_shift, n_samples, output_size)
    # determine shape
    shape = list(patch_shape)
    shape[dim_axis] = output_size
    # Package up and return.
    out = _ShiftInfo(
        mode=mode,
        output_length=output_size,
        start_shift=start_shift,
        end_shift=end_shift,
        new_shape=tuple(shape),
        source_slice=source_inds,
        dest_slice=dest_inds,
    )
    return out
```
Add input validation for edge cases.
The previous review comment about missing input validation still applies. The function can fail or produce invalid results in edge cases:
- Empty `shifts` array crashes on Line 68 with `shifts.min()` / `shifts.max()`
- `n_samples <= 0` produces invalid output sizes
- In "valid" mode, `output_size` on Line 84 becomes negative when `max_shift - min_shift >= n_samples`
Apply this diff to add validation:
```diff
 def _calculate_shift_info(
     mode,
     shifts,
     dim,
     dim_axis,
     patch_shape,
 ):
     """
     Calculate metadata about the shift.
     """
     # Validate mode parameter
     valid_modes = ("full", "same", "valid")
     if mode not in valid_modes:
         msg = f"mode must be one of {valid_modes} but got '{mode}'"
         raise ParameterError(msg)
     # Convert shifts to array
     shifts = np.asarray(shifts, dtype=np.int64)
+    if shifts.size == 0:
+        msg = "shifts array cannot be empty"
+        raise ParameterError(msg)
     n_samples = len(dim)
+    if n_samples <= 0:
+        msg = f"n_samples must be positive, got {n_samples}"
+        raise ParameterError(msg)
     min_shift, max_shift = shifts.min(), shifts.max()
     if mode == "full":
         # Shift value to left/right most index in reference to stationary trace.
         start_shift = min_shift
         end_shift = max_shift
     elif mode == "same":
         # Keep original size (and dimension)
         start_shift = 0
         end_shift = 0
     else:
         # mode == "valid"
         # Find overlapping region
         start_shift = max_shift if max_shift > 0 else 0
         end_shift = min_shift if min_shift < 0 else 0
     output_size = n_samples + end_shift - start_shift
+    if mode == "valid" and output_size <= 0:
+        msg = (
+            f"No valid overlap: n_samples={n_samples}, "
+            f"shift_range={max_shift - min_shift}"
+        )
+        raise ParameterError(msg)
     # get slices for start/stop
```

Based on learnings
Due to the validation logic which runs before this, the conditions leading to failure cannot occur. Adding these checks just leads to unreachable code.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find where _calculate_shift_info is called and examine the validation logic
cd dascore/proc
# Show the full align.py file to see the complete flow
cat align.py
Length of output: 13500
🏁 Script executed:
#!/bin/bash
# Check if there are tests covering edge cases like empty patches or extreme shifts
rg -n -C3 "n_samples|empty|size.*0|len.*0" tests/test_proc/test_align.py
Length of output: 999
@d-chambers, thank you for the clarification. You're right—if the upstream validation in _validate_alignment_inputs and the patch data model guarantee that coordinates exist with valid values and dimensions are non-zero length, then these edge case checks would indeed be unreachable.
I appreciate you explaining the validation architecture. Since you have the domain knowledge about what the patch data model guarantees, I'll defer to your judgment here.
I am going to merge this for now, then do some more testing on Monday where I need this functionality for work. As long as we catch any bugs before this ships in a release we should be ok.
Description
This PR implements a new patch method called `align_to_coord`, which gives fine-grained control over shifting parts of a patch over arbitrary dimensions.
Checklist
I have (if applicable):
Summary by CodeRabbit
New Features
Improvements
Tests
Benchmarks
Documentation