Implement Hilbert Transform, envelope, and phase weighted stacking patch methods #539
Conversation
Walkthrough: Adds DIM_REDUCE_DOCS and moves dim-reduction behavior into BaseCoord.reduce_coord; introduces CoordManager.get_axis / Patch.get_axis and replaces many dims.index lookups across code and tests; adds Hilbert-based transforms (hilbert, envelope, phase_weighted_stack) with tests and an example; minor docs, CI, and dependency updates.
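For context on the new transforms: the envelope is conventionally obtained from the analytic signal. A minimal NumPy-only sketch of that construction (not DASCore's implementation; `scipy.signal.hilbert` provides the same result):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the one-sided spectrum: zero negative
    frequencies, double positive ones, keep DC (and Nyquist)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1 : n // 2] = 2
    else:
        h[1 : (n + 1) // 2] = 2
    return np.fft.ifft(spec * h)

# Amplitude-modulated carrier: the envelope should recover the modulation.
t = np.linspace(0, 1, 500, endpoint=False)
modulation = 1 + 0.5 * np.sin(2 * np.pi * 3 * t)
signal = modulation * np.cos(2 * np.pi * 40 * t)
envelope = np.abs(analytic_signal(signal))
```

The real part of the analytic signal equals the input, and for a slowly modulated carrier the magnitude tracks the modulation.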
Possibly related PRs
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
dascore/utils/array.py (1)
301-309: Axis can become stale after a prior reduction; recompute per iteration. Using precomputed axes from get_dim_axis_value can target the wrong axis after a preceding squeeze/drop. Recompute the axis on each loop iteration from the current patch.dims.
Apply this diff:

```diff
-    dfo = get_dim_axis_value(patch, args=dims, allow_multiple=True)
-    # Iter all specified dimensions.
-    for dim, axis, _ in dfo:
-        new_coord = patch.get_coord(dim).reduce_coord(dim_reduce=dim_reduce)
+    # Iterate specified dims; recompute axis each time to reflect prior changes.
+    for dim in dims:
+        axis = patch.dims.index(dim)
+        new_coord = patch.get_coord(dim).reduce_coord(dim_reduce=dim_reduce)
         if new_coord is None:
             coords = patch.coords.drop_coords(dim)[0]
             data = func(data, axis=axis)
         else:
             coords = patch.coords.update(**{dim: new_coord})
             data = np.expand_dims(func(data, axis=axis), axis)
         patch = patch.new(data=data, coords=coords)
```
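The staleness hazard is easy to reproduce with plain NumPy: once a reduction drops an axis, indices computed against the original dims no longer point at the same dimension. A small sketch with hypothetical dim names:

```python
import numpy as np

data = np.ones((3, 4, 5))          # dims ("a", "b", "c")
dims = ["a", "b", "c"]
# Precomputed, "c" is axis 2. After squeezing "a" that index is stale.
data = data.sum(axis=dims.index("a"))   # shape (4, 5); dims now ("b", "c")
dims.remove("a")
axis_c = dims.index("c")                # recomputed per iteration -> 1
data = data.sum(axis=axis_c)            # shape (4,), each value 3 * 5 = 15
```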
🧹 Nitpick comments (10)
dascore/constants.py (1)
205-214: Fix doc typos and align “empty” behavior with implementation.
- Grammar: “returns and empty coord” → “returns an empty coord”.
- Behavior: current implementation returns a length‑1 partial coord (not truly empty). Clarify to avoid user confusion.
Apply this diff:

```diff
 DIM_REDUCE_DOCS = """
 dim_reduce
     How to reduce the dimensional coordinate associated with the
     aggregated axis. Can be the name of any valid aggregator, a callable,
-    "empty" (the default) - which returns and empty coord, or "squeeze"
-    which drops the coordinate. For dimensions with datetime or timedelta
-    datatypes, if the operation fails it will automatically be applied
-    to the coordinates converted to floats then the output converted back
-    to the appropriate time type.
+    "empty" (the default) — returns a length-1 partial coordinate (no values),
+    or "squeeze" which drops the coordinate.
+    For dimensions with datetime or timedelta dtypes, if the operation fails
+    the function is applied to float-encoded coordinates and the result is
+    converted back to the appropriate time type.
 """
```

dascore/examples.py (2)
355-373: Preserve natural dimension order; avoid lexicographic misordering for dim_10+. sorted(coords) will order 'dim_10' before 'dim_2'. Define dims explicitly to match numeric order and build coords from that, preventing surprises in tests and usages.
Apply this diff:

```diff
-@register_func(EXAMPLE_PATCHES, key="nd_patch")
-def nd_patch(dim_count=3, coord_lens=10):
+@register_func(EXAMPLE_PATCHES, key="nd_patch")
+def nd_patch(dim_count=3, coord_lens=10):
 @@
-    ran = np.random.RandomState(42)
-    coords = {f"dim_{x + 1}": np.arange(coord_lens) for x in range(dim_count)}
-    dims = sorted(coords)
-    shape = tuple(len(coords[x]) for x in dims)
+    ran = np.random.RandomState(42)
+    dims = tuple(f"dim_{i + 1}" for i in range(dim_count))
+    coords = {d: np.arange(coord_lens) for d in dims}
+    shape = tuple(len(coords[d]) for d in dims)
     data = ran.randn(*shape)
     return dc.Patch(data=data, coords=coords, dims=dims)
```
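The pitfall here is plain Python string sorting, which orders 'dim_10' before 'dim_2':

```python
# Lexicographic ordering compares character by character, so the
# two-digit suffix sorts between "dim_1" and "dim_2".
names = [f"dim_{i + 1}" for i in range(10)]
print(sorted(names)[:3])  # ['dim_1', 'dim_10', 'dim_2']
```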
413-417: Good guard against non-finite delays; optionally silence divide-by-zero warnings. The isfinite guards avoid bad shifts when velocity is 0 or NaN. To also suppress runtime warnings from dist_to_source / velocity, wrap the division with np.errstate.
Apply this diff:

```diff
-    for ind, dist in enumerate(distance):
+    for ind, dist in enumerate(distance):
         dist_to_source = np.abs(dist - source_distance)
-        shift = dist_to_source / velocity
+        with np.errstate(divide="ignore", invalid="ignore"):
+            shift = dist_to_source / velocity
         actual_shift = shift if np.isfinite(shift) else 0
         time_delay = peak_time + actual_shift
```

Also applies to: 429-434
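np.errstate scopes warning suppression to just the risky division. A quick sketch with a zero velocity (the values are illustrative):

```python
import numpy as np

velocity = 0.0
dist_to_source = np.array([0.0, 100.0])
with np.errstate(divide="ignore", invalid="ignore"):
    # 0/0 -> nan (invalid), 100/0 -> inf (divide); no RuntimeWarning emitted.
    shift = dist_to_source / velocity
# Replace non-finite shifts with zero, matching the isfinite guard.
actual_shift = np.where(np.isfinite(shift), shift, 0.0)
```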
tests/test_transform/test_hilbert.py (2)
70-94: Tiny typo in fixture docstring: "moduled" → "modulated".

```diff
-    """Return a moduled patch"""
+    """Return a modulated patch"""
```
136-153: Add a squeeze case to lock in coord/data semantics. Please add a test asserting that dim_reduce="squeeze" drops the stack_dim and the data squeezes the axis.

```diff
+    def test_phase_weighted_stack_squeeze(self, random_patch):
+        out = random_patch.phase_weighted_stack(
+            stack_dim="distance", transform_dim="time", dim_reduce="squeeze"
+        )
+        assert "distance" not in out.dims
```

dascore/transform/hilbert.py (5)
37-46: Docstring typo: "represneting" → "representing".

```diff
-        A patch with a complex data array represneting the analytic signal.
+        A patch with a complex data array representing the analytic signal.
```
96-103: Handle the 1D stack case explicitly. If the patch only has stack_dim (no other dims), next(iter(dims)) will raise StopIteration. Raise a clear ParameterError instead.

```diff
 def __infer_transform_dim(patch, stack_dim):
     """Try to infer transform dimension."""
     dims = set(patch.dims) - {stack_dim}
     if len(dims) > 1:
         msg = "Patch has more than two dimensions, cant infer transform dim."
         raise ParameterError(msg)
+    if len(dims) == 0:
+        msg = "Patch has only the stack dimension; specify transform_dim."
+        raise ParameterError(msg)
     return next(iter(dims))
```
185-201: Nit: wording: "instable" → "unstable".

```diff
-    # Get unit phasors. We use eps here to avoid instable division by 0.
+    # Get unit phasors. Use eps to avoid unstable division by 0.
```
99-101: Optional: grammar in error message."cant" → "can't". If you change this, update the test expectation.
- msg = "Patch has more than two dimensions, cant infer transform dim." + msg = "Patch has more than two dimensions, can't infer transform dim."
111-113: Optional: validate power. Consider rejecting negative values to avoid surprising amplification.

```diff
 def phase_weighted_stack(
 @@
-    power: float = 2.0,
+    power: float = 2.0,
 @@
 ) -> PatchType:
 @@
+    if power < 0:
+        raise ParameterError("power must be >= 0")
```
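For context, the phase-weighted stack scales a linear stack by inter-trace phase coherence; power controls how sharply incoherent energy is suppressed, and power=0 recovers a plain mean. A standalone sketch on precomputed analytic signals (`pws` is a hypothetical helper, not the DASCore implementation):

```python
import numpy as np

def pws(analytic, power=2.0):
    """Phase-weighted stack of complex analytic signals, shape (traces, samples)."""
    eps = np.finfo(analytic.real.dtype).eps
    phasors = analytic / (np.abs(analytic) + eps)   # eps avoids division by 0
    coherence = np.abs(phasors.mean(axis=0)) ** power
    return analytic.real.mean(axis=0) * coherence

t = np.linspace(0, 1, 256, endpoint=False)
# Five identical traces: fully coherent, weight ~ 1 everywhere.
coherent = np.tile(np.exp(2j * np.pi * 10 * t), (5, 1))
# Fifty traces with random phases: coherence collapses toward zero.
rng = np.random.default_rng(0)
incoherent = np.exp(2j * np.pi * rng.random((50, 256)))
```

Coherent input passes through essentially unchanged, while the incoherent stack is strongly attenuated.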
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- dascore/constants.py (1 hunks)
- dascore/core/coords.py (2 hunks)
- dascore/core/patch.py (1 hunks)
- dascore/examples.py (3 hunks)
- dascore/proc/aggregate.py (1 hunks)
- dascore/transform/__init__.py (1 hunks)
- dascore/transform/hilbert.py (1 hunks)
- dascore/utils/array.py (3 hunks)
- docs/references.bib (1 hunks)
- tests/test_transform/test_hilbert.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
- GitHub Check: test_code (macos-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.11)
- GitHub Check: test_code_min_deps (windows-latest, 3.12)
- GitHub Check: test_code_min_deps (macos-latest, 3.13)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.13)
- GitHub Check: test_code_min_deps (windows-latest, 3.13)
- GitHub Check: test_code (macos-latest, 3.12)
- GitHub Check: test_code_min_deps (macos-latest, 3.12)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.10)
- GitHub Check: test_code (ubuntu-latest, 3.11)
- GitHub Check: test_code (ubuntu-latest, 3.13)
🔇 Additional comments (8)
dascore/proc/aggregate.py (1)
9-9: LGTM: centralized docstring content via DIM_REDUCE_DOCS. This keeps the docs DRY and in sync with coord reduction semantics. Once DIM_REDUCE_DOCS is clarified (see the constants.py comment), these docs will read correctly here as well.
Also applies to: 14-22
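The two shape-changing modes that DIM_REDUCE_DOCS describes can be sketched with plain NumPy (`reduce_dim` is a hypothetical helper, not DASCore code):

```python
import numpy as np

def reduce_dim(data, axis, agg=np.mean, dim_reduce="empty"):
    # "squeeze" drops the reduced axis entirely; "empty" keeps it at
    # length 1 so the result retains its dimensionality with a
    # placeholder coordinate.
    reduced = agg(data, axis=axis)
    if dim_reduce == "squeeze":
        return reduced
    return np.expand_dims(reduced, axis)

data = np.arange(12.0).reshape(3, 4)
```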
docs/references.bib (1)
60-69: LGTM: add canonical PWS reference. The entry is well-formed and matches the implementation notes.
dascore/transform/__init__.py (1)
12-12: LGTM: export hilbert, envelope, phase_weighted_stack. Public API exposure matches the new transform module and Patch bindings.
dascore/core/patch.py (1)
399-401: LGTM: Patch bindings for hilbert/envelope/PWS. This aligns Patch's API with the new transforms and test usage.
dascore/core/coords.py (1)
892-931: Fix integer-reduction shape handling — wrap scalar results and return a length-1 partial coord for "empty". A repo-wide grep found no calls that pass an integer scalar to get_coord(data=...), so the proposed diff is safe.
```diff
-@compose_docstring(dim_reduce=DIM_REDUCE_DOCS)
-def reduce_coord(self, dim_reduce="empty"):
+@compose_docstring(dim_reduce=DIM_REDUCE_DOCS)
+def reduce_coord(self, dim_reduce: str | Callable = "empty") -> Self | None:
 @@
-    if dim_reduce == "empty":
-        new_coord = self.update(shape=(1,), start=None, stop=None, data=None)
+    if dim_reduce == "empty":
+        # Return a length-1 partial coordinate (no values) to preserve axis.
+        new_coord = get_coord(shape=(1,), units=self.units, dtype=self.dtype)
     elif dim_reduce == "squeeze":
         return None
     elif (func := _AGG_FUNCS.get(dim_reduce)) or callable(dim_reduce):
         func = dim_reduce if callable(dim_reduce) else func
         coord_data = self.data
-        if dtype_time_like(coord_data):
+        if dtype_time_like(coord_data):
             result = _maybe_handle_datatypes(func, coord_data)
         else:
-            result = func(self.data)
-        new_coord = self.update(data=result)
+            result = np.atleast_1d(func(self.data))
+        new_coord = self.update(data=result)
     else:
         msg = "dim_reduce must be 'empty', 'squeeze' or valid aggregator."
         raise ParameterError(msg)
     return new_coord
```

```diff
-from collections.abc import Sized
+from collections.abc import Sized, Callable
```

dascore/utils/array.py (2)
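The np.atleast_1d change matters because NumPy aggregators return 0-d scalars, which cannot serve as a length-1 coordinate array:

```python
import numpy as np

coord = np.arange(5.0)
reduced = np.mean(coord)            # 0-d result: shape ()
wrapped = np.atleast_1d(reduced)    # shape (1,), usable as a length-1 coord
```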
14-14: LGTM on import reshuffle. Importing PatchType from constants is consistent with the rest of the module.
398-401: Confirm the default dim-reduce behavior is intended. _reassemble_patch always uses reduce_coord() with the default "empty". If callers expect "squeeze" semantics for NumPy reduces, we need a way to pass that through.
tests/test_transform/test_hilbert.py (1)
47-65: Verified — example fixture exists and signature matches the test. dascore/examples.py registers "sin_wav" (sin_wave_patch) and get_example_patch(...) dispatches to EXAMPLE_PATCHES; sin_wave_patch accepts sample_rate and frequency, so the test call is valid. Locations: dascore/examples.py:190-193 (sin_wave_patch), dascore/examples.py:658-705 (get_example_patch).
dascore/transform/hilbert.py
Outdated
```python
# Create new coord and coord manager, put patch back and return.
new_coord = stack_coord.reduce_coord(dim_reduce=dim_reduce)
cm = patch.coords.update(**{stack_dim: new_coord})
return patch.new(data=stacked_data, coords=cm)
```
dim_reduce='squeeze' breaks: coords drop but data keeps the axis.
reduce_coord returns None for "squeeze", but stacked_data is computed with keepdims=True and coords.update is called with None. This can error or yield coord/data mismatch.
Apply this diff:

```diff
-    new_coord = stack_coord.reduce_coord(dim_reduce=dim_reduce)
-    cm = patch.coords.update(**{stack_dim: new_coord})
-    return patch.new(data=stacked_data, coords=cm)
+    new_coord = stack_coord.reduce_coord(dim_reduce=dim_reduce)
+    if new_coord is None:
+        # drop the coord and remove the singleton axis from data
+        cm = patch.coords.drop_coords(stack_dim)[0]
+        stacked_data = np.squeeze(stacked_data, axis=stack_axis)
+    else:
+        cm = patch.coords.update(**{stack_dim: new_coord})
+    return patch.new(data=stacked_data, coords=cm)
```
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Additional details and impacted files:

```
@@            Coverage Diff            @@
##           master     #539    +/-   ##
========================================
  Coverage   99.88%   99.88%
========================================
  Files         123      124     +1
  Lines       10256    10336    +80
========================================
+ Hits        10244    10324    +80
  Misses         12       12
```
Flags with carried forward coverage won't be shown.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
dascore/proc/basic.py (1)
325-339: Max-norm should use absolute value (per the docstring). The code currently uses np.max(data, ...). This fails for negative signals and contradicts "maximum of the absolute value".
Apply:

```diff
-    elif norm == "max":
-        norm_values = np.max(data, axis=axis)
+    elif norm == "max":
+        norm_values = np.max(np.abs(data), axis=axis)
```

tests/test_proc/test_basic.py (2)
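A quick numeric check of the abs-max pitfall described above (illustrative values): when the largest magnitude is negative, plain max divides by the wrong value.

```python
import numpy as np

data = np.array([-4.0, -2.0, 1.0])
plain = data / np.max(data)           # divides by 1.0 -> values escape [-1, 1]
absmax = data / np.max(np.abs(data))  # divides by 4.0 -> bounded in [-1, 1]
```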
447-451: Shape assertion mixes units; use sample count, not step multiplication.
new_shape is in samples. Multiplying by distance_step (a coordinate increment) is dimensionally wrong unless the step is 1. Compute pad samples via the coord. Apply this diff:
```diff
-    distance_axis = random_patch.get_axis("distance")
-    ch_spacing = random_patch.attrs["distance_step"]
-    assert (
-        new_shape[distance_axis] == original_shape[distance_axis] + 14 * ch_spacing
-    )
+    distance_axis = random_patch.get_axis("distance")
+    dist_coord = random_patch.get_coord("distance")
+    pad_count = dist_coord.get_sample_count(7)
+    assert new_shape[distance_axis] == original_shape[distance_axis] + 2 * pad_count
```
463-468: Same unit-mismatch in expand-coords case.
Use sample counts derived from the coordinate. Apply this diff:
```diff
-    distance_axis = random_patch.get_axis("distance")
-    ch_spacing = random_patch.attrs["distance_step"]
-    dist_max = random_patch.attrs["distance_max"]
-    assert (
-        new_shape[distance_axis] == original_shape[distance_axis] + 8 * ch_spacing
-    )
+    distance_axis = random_patch.get_axis("distance")
+    dist_coord = random_patch.get_coord("distance")
+    pad_count = dist_coord.get_sample_count(4)
+    dist_max = random_patch.attrs["distance_max"]
+    assert new_shape[distance_axis] == original_shape[distance_axis] + 2 * pad_count
```
🧹 Nitpick comments (5)
dascore/core/coordmanager.py (1)
724-739: Nit: chain the original exception when raising CoordError (ruff B904). Improve traceability by chaining the caught exception.
```diff
-        try:
-            return self.dims.index(dim)
-        except (ValueError, IndexError):
-            msg = f"Patch has no dimension: {dim}. Its dimensions are: {self.dims}"
-            raise CoordError(msg)
+        try:
+            return self.dims.index(dim)
+        except (ValueError, IndexError) as err:
+            msg = f"Patch has no dimension: {dim}. Its dimensions are: {self.dims}"
+            raise CoordError(msg) from err
```

tests/test_proc/test_taper.py (1)
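The `raise ... from err` form preserves the original failure as `__cause__`, so tracebacks show both errors. A generic self-contained sketch (KeyError stands in for CoordError here):

```python
def get_axis(dims, dim):
    try:
        return dims.index(dim)
    except ValueError as err:
        # Chaining keeps the original lookup failure visible in tracebacks.
        raise KeyError(f"no dimension {dim!r}; dims are {dims}") from err
```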
120-120: Potential variable name confusion. The variable `dim_len` is assigned the result of `patch.get_axis("distance")`, which returns an axis index (an integer), not a dimension length. Consider renaming for clarity:

```diff
-    dim_len = patch.get_axis("distance")
-    mid_dim = dim_len // 2
+    axis = patch.get_axis("distance")
+    mid_dim = axis // 2
```
385-391: Guard against division by zero in standardize. Constant slices yield std == 0, producing inf/NaN. Use np.divide with where to produce zeros instead.
```diff
-    new_data = (data - mean) / std
+    new_data = np.divide(
+        data - mean, std, out=np.zeros_like(data), where=std != 0
+    )
```
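np.divide with out/where leaves unselected entries at their out value, so constant rows standardize to zero instead of NaN:

```python
import numpy as np

data = np.array([[1.0, 1.0], [0.0, 2.0]])
mean = data.mean(axis=1, keepdims=True)
std = data.std(axis=1, keepdims=True)   # first row is constant: std == 0
# Where std == 0 the division is skipped and the zero from `out` remains.
z = np.divide(data - mean, std, out=np.zeros_like(data), where=std != 0)
```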
351-353: Avoid hard-coding axis numbers; tie to dims instead.
Hard-coding 1 is brittle. Compare to dims.index("time") so the test stays valid if dim order changes. Apply this diff:
```diff
-    axis = patch_with_null.get_axis("time")
-    assert axis == 1
+    axis = patch_with_null.get_axis("time")
+    assert axis == patch_with_null.dims.index("time")
```
364-366: Same here: remove the magic axis number.
Anchor the assertion to the dim order. Apply this diff:
```diff
-    axis = patch_with_null.get_axis("distance")
-    assert axis == 0
+    axis = patch_with_null.get_axis("distance")
+    assert axis == patch_with_null.dims.index("distance")
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (36)
- dascore/constants.py (1 hunks)
- dascore/core/coordmanager.py (2 hunks)
- dascore/core/patch.py (2 hunks)
- dascore/examples.py (4 hunks)
- dascore/proc/basic.py (4 hunks)
- dascore/proc/coords.py (1 hunks)
- dascore/proc/correlate.py (1 hunks)
- dascore/proc/detrend.py (1 hunks)
- dascore/proc/filter.py (2 hunks)
- dascore/proc/whiten.py (1 hunks)
- dascore/transform/dispersion.py (1 hunks)
- dascore/transform/fft.py (1 hunks)
- dascore/transform/fourier.py (1 hunks)
- dascore/transform/hilbert.py (1 hunks)
- dascore/utils/patch.py (5 hunks)
- dascore/viz/wiggle.py (1 hunks)
- tests/test_core/test_coordmanager.py (8 hunks)
- tests/test_core/test_patch.py (3 hunks)
- tests/test_core/test_patch_chunk.py (1 hunks)
- tests/test_proc/test_aggregate.py (2 hunks)
- tests/test_proc/test_basic.py (12 hunks)
- tests/test_proc/test_correlate.py (3 hunks)
- tests/test_proc/test_detrend.py (1 hunks)
- tests/test_proc/test_proc_coords.py (1 hunks)
- tests/test_proc/test_resample.py (7 hunks)
- tests/test_proc/test_rolling.py (3 hunks)
- tests/test_proc/test_taper.py (3 hunks)
- tests/test_transform/test_differentiate.py (3 hunks)
- tests/test_transform/test_fourier.py (2 hunks)
- tests/test_transform/test_hilbert.py (1 hunks)
- tests/test_transform/test_integrate.py (2 hunks)
- tests/test_transform/test_strain.py (1 hunks)
- tests/test_utils/test_array_utils.py (3 hunks)
- tests/test_utils/test_coordmanager_utils.py (2 hunks)
- tests/test_utils/test_patch_utils.py (2 hunks)
- tests/test_viz/test_waterfall.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- dascore/transform/dispersion.py
🚧 Files skipped from review as they are similar to previous changes (4)
- dascore/transform/hilbert.py
- dascore/examples.py
- tests/test_transform/test_hilbert.py
- dascore/constants.py
🪛 Ruff (0.12.2)
dascore/core/coordmanager.py
739-739: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: test_code (windows-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.11)
- GitHub Check: test_code (windows-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.10)
- GitHub Check: test_code (macos-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.10)
- GitHub Check: test_code (macos-latest, 3.13)
- GitHub Check: test_code (macos-latest, 3.10)
- GitHub Check: test_code (ubuntu-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.12)
- GitHub Check: test_code (ubuntu-latest, 3.11)
- GitHub Check: test_code_min_deps (windows-latest, 3.13)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.13)
- GitHub Check: test_code_min_deps (macos-latest, 3.12)
- GitHub Check: test_code_min_deps (windows-latest, 3.12)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.12)
- GitHub Check: test_code_min_deps (macos-latest, 3.13)
- GitHub Check: test_build_docs
🔇 Additional comments (50)
dascore/proc/correlate.py (1)
69-72: Axis resolution change LGTM. Using `patch.get_axis(dim)` aligns with the new public API and preserves behavior in `fftshift`.
dascore/core/coordmanager.py (1)
373-376: Consistent axis lookup. Using `self.get_axis(dim)` in sorting keeps axis resolution centralized. Good change.
dascore/proc/detrend.py (1)
34-36: LGTM on get_axis migration. `patch.get_axis(dim)` keeps detrend aligned with the new API; behavior unchanged.
dascore/viz/wiggle.py (1)
20-26: LGTM: centralized axis resolution. Switching to `patch.get_axis(dim)` is correct and consistent with the refactor.
tests/test_viz/test_waterfall.py (1)
105-105: LGTM: axis lookup via get_axis. Matches the new public API and avoids brittle dims.index usage.
tests/test_proc/test_proc_coords.py (1)
244-244: LGTM: use get_axis for deleted axis check. Clearer failure when dim is missing; test intent unchanged.
dascore/proc/filter.py (1)
134-134: LGTM: adopt get_axis in filters. Axis resolution now routes through CoordManager; behavior unchanged.
Also applies to: 177-177
dascore/transform/fft.py (1)
36-36: LGTM: replace dims.index with get_axis. Keeps deprecated rfft aligned with the rest of the API.
tests/test_transform/test_strain.py (1)
18-19: LGTM: axis obtained via get_axis. Simplifies later broadcasting logic.
tests/test_transform/test_integrate.py (1)
41-41: LGTM: consistent axis retrieval in integration tests. Improves clarity and robustness of indexer construction and assertions.
Also applies to: 89-89
tests/test_proc/test_correlate.py (1)
37-37: LGTM: get_axis in correlation tests. Removes direct coupling to dims ordering; semantics preserved.
Also applies to: 75-75, 94-94
tests/test_core/test_patch.py (3)
189-189: LGTM! Improved axis resolution. The change from `patch.dims.index("time")` to `patch.get_axis("time")` follows the new public API introduced across the codebase for axis lookups. This is more robust, as it provides better error handling through the underlying `CoordManager.get_axis` method.
247-247: LGTM! Consistent use of new axis API. The replacement of `patch.dims.index("time")` with `patch.get_axis("time")` maintains consistency with the PR-wide migration to the public axis resolution API.
858-858: LGTM! Improved axis resolution for ufunc operations. The change from `random_patch.dims.index("time")` to `random_patch.get_axis("time")` is consistent with the broader migration pattern and provides better error handling when the dimension is not found.
tests/test_proc/test_aggregate.py (3)
23-23: LGTM! Cleaner axis resolution. The change from `random_patch.dims.index("distance")` to `random_patch.get_axis("distance")` aligns with the new public API for axis lookups, providing better error handling and consistency across the codebase.
30-30: LGTM! Consistent axis lookup usage. The replacement follows the same pattern as other test files in adopting the new `get_axis()` method for dimension-to-axis resolution.
57-57: LGTM! Enhanced test assertion. The addition of `assert out.ndim == 1` strengthens the test by explicitly verifying that the squeeze operation correctly reduces the dimensionality, which is the expected behavior when `dim_reduce="squeeze"`.
tests/test_proc/test_taper.py (2)
36-36: LGTM! Improved axis resolution. The change from direct dimension index lookup to using `patch.get_axis(dim)` follows the consistent pattern throughout the PR and provides better error handling.
180-180: LGTM! Consistent API usage. The change from `patch_ones.dims.index("time")` to `patch_ones.get_axis("time")` maintains consistency with the PR-wide migration to the new axis resolution API.
dascore/proc/whiten.py (1)
151-151: LGTM! Consistent axis resolution. The change from `fft_patch.dims.index(fft_dim)` to `fft_patch.get_axis(fft_dim)` follows the established pattern throughout the PR and provides better error handling when the dimension is not found.
tests/test_utils/test_coordmanager_utils.py (2)
30-30: LGTM! Improved axis resolution. The change from `cm_basic.dims.index("distance")` to `cm_basic.get_axis("distance")` follows the consistent migration pattern and leverages the new public API for coordinate manager axis lookups.
96-96: LGTM! Consistent API adoption. The replacement of direct dimension indexing with `cm1.get_axis("time")` maintains consistency with the broader codebase changes and provides better error handling.
tests/test_proc/test_rolling.py (3)
135-135: LGTM! Improved axis resolution. The change from `out.dims.index("time")` to `out.get_axis("time")` follows the consistent pattern of adopting the new public axis resolution API.

148-148: LGTM! Consistent API usage. The replacement maintains consistency with the PR-wide migration to the `get_axis` method for dimension-to-axis resolution.

193-193: LGTM! Clean axis lookup. The change from `random_patch.dims.index("time")` to `random_patch.get_axis("time")` provides better error handling and follows the established migration pattern.

tests/test_proc/test_detrend.py (1)

16-16: LGTM! Consistent axis resolution. The change from `det.dims.index("time")` to `det.get_axis("time")` aligns with the PR-wide adoption of the new public axis resolution API, providing better error handling and consistency.

dascore/utils/patch.py (6)
415-415: LGTM! Improved axis resolution in merge function. The change from `patches[0].dims.index(merge_dim)` to `patches[0].get_axis(merge_dim)` follows the established pattern and provides better error handling when the dimension is not found.

605-605: LGTM! Documentation example updated. The docstring example correctly demonstrates the new `patch.get_axis("time")` API, which is more user-friendly than the internal `dims.index("time")` approach.

609-609: LGTM! Consistent documentation update. The second example in the docstring properly uses `patch.get_axis("time")`, maintaining consistency with the API changes throughout the codebase.

726-726: LGTM! Improved axis resolution in utility function. The replacement of `patch.dims.index(dim_)` with `patch.get_axis(dim_)` follows the established pattern and provides better error handling for dimension lookups.

1221-1221: LGTM! Consistent axis lookup for single dimension. The change from `patch.dims.index(dim)` to `patch.get_axis(dim)` provides better error handling and follows the established migration pattern.

1231-1231: LGTM! Proper sequence handling. The replacement correctly handles sequences of dimensions by using `patch.get_axis(d)` for each dimension in the sequence, maintaining consistency with the new API.

tests/test_proc/test_resample.py (1)
22-22: Good migration to public axis API. Replacing dims.index(...) with patch.get_axis(...) improves readability and centralizes error handling in CoordManager.
Also applies to: 32-32, 49-49, 143-143, 159-159, 174-174, 191-191
tests/test_core/test_patch_chunk.py (1)
276-276: Consistent axis resolution via get_axis. Keeps tests aligned with the new public API.
tests/test_transform/test_differentiate.py (1)
38-38: Axis lookup changes look good. Uniformly using get_axis(...) matches production code paths.
Also applies to: 67-67, 101-102
tests/test_core/test_coordmanager.py (1)
352-352: Solid refactor to CoordManager.get_axis. These updates exercise the new API across drop/select/decimate paths.
Also applies to: 416-417, 429-429, 541-542, 571-571, 893-893, 1031-1031
tests/test_utils/test_patch_utils.py (1)
661-663: Tests updated to use get_axis are correct. Assertions now rely on the same axis resolution as library code.
Also applies to: 691-691
dascore/proc/basic.py (2)
437-456: Dropna axis handling aligns with new API. The boolean mask reduction and coord update look correct; associated coords should trim via CoordManager.update.
If any 2D/non-dim coords share the dropped dim, confirm they trim as expected by running the existing TestDrop tests.
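The dropna pattern under review reduces a NaN mask over every axis except the target one, then keeps only the slices that still contain data. A self-contained numpy sketch of that idea (toy data, not the library's actual code):

```python
import numpy as np

data = np.array([[1.0, np.nan, 3.0],
                 [np.nan, np.nan, np.nan],
                 [7.0, 8.0, 9.0]])

axis = 0  # drop rows whose entries are all NaN
other_axes = tuple(i for i in range(data.ndim) if i != axis)

# Reduce the NaN mask over every axis except the target one.
keep = ~np.all(np.isnan(data), axis=other_axes)

cleaned = np.take(data, np.flatnonzero(keep), axis=axis)
print(cleaned.shape)  # (2, 3): the all-NaN row is gone
```

Rows that are only partially NaN survive; only slices that are entirely NaN along the reduced axes are removed.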
56-75: Patch.get_axis alias present; no action required.
Verified `get_axis` is exposed on Patch (dascore/core/patch.py:284: `get_axis = dascore.proc.get_axis`).

tests/test_transform/test_fourier.py (1)
87-87: Axis lookup via get_axis is correct here. Keeps DFT tests consistent across transformed dims.
Also applies to: 125-125
tests/test_utils/test_array_utils.py (1)
161-161: Axis refactor matches library use. These changes improve robustness where dims order may vary.
Also applies to: 237-237, 250-250
tests/test_proc/test_basic.py (7)
14-14: Importing CoordError is correct and used below.
No action needed.
253-259: Good switch to the public axis API.
Using patch.get_axis("time") removes brittle dim index assumptions.
376-379: Nice: shape check uses resolved axis.
This is robust across dim orders.
436-438: Looks good: pad length verified along resolved time axis.
No changes needed.
479-484: Good: axis resolution via get_axis in mixed-dim pad test.
Assertions read clearly.
508-514: FFT pad checks are correct and axis-agnostic.
No changes needed.
517-522: Correlate pad target length computed correctly.
Using get_axis keeps it stable.dascore/core/patch.py (2)
284-285: Solid addition: expose get_axis on Patch.
This aligns tests and usage around a single public API for axis resolution.
398-401: New transform aliases look good, but verification failed; confirm exports/imports. `rg` returned "No such file or directory" for `**/transform/**`; couldn't locate dascore/transform. Confirm hilbert, envelope, and phase_weighted_stack are defined in dascore/transform/hilbert.py, re-exported from dascore/transform/__init__.py, and that numpy/scipy imports exist. Run:
```bash
#!/bin/bash
# locate defs
rg -nP '^\s*def\s+(hilbert|envelope|phase_weighted_stack)\b' -C1 -S
# check re-exports in transform __init__ files
rg -nP '(hilbert|envelope|phase_weighted_stack)' -g '**/transform/__init__.py' -n -S
# check numpy/scipy imports
rg -n 'import numpy as np' -S
rg -nP '\bscipy\.signal\b' -S
```
tests/test_proc/test_basic.py
Outdated
```python
class TestGetAxis:
    """Tests for helper function to get get axis of dimension."""

    def test_index_comparable(self, random_patch):
        """Ensure get_axis is the same as patch.dims.index."""
        for dim in random_patch.dims:
            axis = random_patch.get_axis(dim)
            assert axis == random_patch.get_axis(dim)
```
Fix the test to actually compare against dims.index and clean up the docstring.
Currently it compares get_axis to itself (always true). Align with the docstring intent and fix the typo.
Apply this diff:
```diff
-class TestGetAxis:
-    """Tests for helper function to get get axis of dimension."""
+class TestGetAxis:
+    """Tests for helper function to get the axis of a dimension."""

     def test_index_comparable(self, random_patch):
-        """Ensure get_axis is the same as patch.dims.index."""
-        for dim in random_patch.dims:
-            axis = random_patch.get_axis(dim)
-            assert axis == random_patch.get_axis(dim)
+        """Ensure get_axis matches patch.dims.index."""
+        for dim in random_patch.dims:
+            assert random_patch.get_axis(dim) == random_patch.dims.index(dim)
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
class TestGetAxis:
    """Tests for helper function to get the axis of a dimension."""

    def test_index_comparable(self, random_patch):
        """Ensure get_axis matches patch.dims.index."""
        for dim in random_patch.dims:
            assert random_patch.get_axis(dim) == random_patch.dims.index(dim)
```
🤖 Prompt for AI Agents
In tests/test_proc/test_basic.py around lines 603 to 611, the test's docstring
contains a typo and the assertion wrongly compares get_axis to itself; update
the docstring to "Tests for helper function to get axis of a dimension." and
change the assertion to compare the returned axis against the dimension index:
call random_patch.dims.index(dim) and assert axis ==
random_patch.dims.index(dim).
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
dascore/examples.py (1)
424-436: Validate velocity and source_channel; avoid silent "0-shift" fallback. Non-positive/NaN velocity currently yields zero shift; negative indices pass the assert. Prefer explicit validation and simplify the shift.
Apply:

```diff
 time = np.arange(0, duration + time_step, time_step)
 distance = np.arange(channel_count) * distance_step
-assert source_channel < len(distance)
+if not (0 <= source_channel < len(distance)):
+    raise ValueError(f"source_channel must be in [0, {len(distance)})")
+if not np.isfinite(velocity) or velocity <= 0:
+    raise ValueError("velocity must be a finite, positive number")
 source_distance = distance[source_channel]
 data = np.zeros((len(time), len(distance)))
@@
-with np.errstate(divide="ignore", invalid="ignore"):
-    shift = dist_to_source / velocity
-    actual_shift = shift if np.isfinite(shift) else 0
-    time_delay = peak_time + actual_shift
+shift = dist_to_source / velocity
+time_delay = peak_time + shift
```

dascore/proc/basic.py (1)
309-317: Normalize 'max' should use max(abs(x)) per docstring. Current code divides by `np.max(data, axis=axis)`, which is wrong for negative/complex values and contradicts the docstring ("maximum of the absolute value"). Fix as below.

```diff
-    elif norm == "max":
-        norm_values = np.max(data, axis=axis)
+    elif norm == "max":
+        norm_values = np.max(np.abs(data), axis=axis)
```
🧹 Nitpick comments (7)
dascore/examples.py (2)
16-16: Side-effect import: annotate or remove. If this import is only to register coords helpers (e.g., get_axis), annotate it to avoid "unused import" warnings; otherwise remove it.
Suggested tweak:

```diff
-import dascore.proc.coords
+import dascore.proc.coords  # noqa: F401  # side-effect: register coords helpers (e.g., get_axis)
```
356-374: nd_patch: add arg validation; minor RNG tidy. Guard against invalid shapes and use a clearer RNG variable.
Apply:

```diff
 @register_func(EXAMPLE_PATCHES, key="nd_patch")
 def nd_patch(dim_count=3, coord_lens=10):
@@
-    ran = np.random.RandomState(42)
+    if dim_count < 1:
+        raise ValueError("dim_count must be >= 1")
+    if coord_lens < 1:
+        raise ValueError("coord_lens must be >= 1")
+    rng = np.random.RandomState(42)
@@
-    data = ran.randn(*shape)
+    data = rng.randn(*shape)
```

dascore/proc/basic.py (2)
364-369: Zero-variance guard to avoid inf/NaN during standardize. With constant slices along `axis`, `std` is zero and yields infinities. Use `np.divide(..., where=std != 0)` for a stable result.

```diff
-    new_data = (data - mean) / std
+    centered = data - mean
+    new_data = np.divide(centered, std, out=np.zeros_like(centered), where=std != 0)
```
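A standalone sketch of the `np.divide(..., out=..., where=...)` pattern on toy data (not the library's actual implementation): the `where` mask skips the zero-std slices and the `out` array supplies zeros for them instead of NaN.

```python
import numpy as np

data = np.array([[1.0, 1.0, 1.0],    # constant slice: std == 0
                 [1.0, 2.0, 3.0]])

mean = np.mean(data, axis=1, keepdims=True)
std = np.std(data, axis=1, keepdims=True)
centered = data - mean

# Naive division would warn and produce nan for the constant row.
safe = np.divide(centered, std, out=np.zeros_like(centered), where=std != 0)

assert np.isfinite(safe).all()
assert np.all(safe[0] == 0.0)  # constant row maps to zeros, not nan
```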
416-425: Deterministic axis-reduction order (tiny nit).
`axes = set(...)` then passing a tuple introduces non-deterministic ordering. While `any`/`all` are commutative, prefer a stable order for clarity.

```diff
-    axes = set(range(len(patch.shape))) - {axis}
-    to_drop = func(to_drop, axis=tuple(axes))
+    axes = tuple(i for i in range(len(patch.shape)) if i != axis)
+    to_drop = func(to_drop, axis=axes)
```

tests/test_proc/test_detrend.py (1)
16-16: Compute axis on the output patch. Use `det.get_axis("time")` to avoid coupling to the input patch's dims.

```diff
-    means = np.mean(det.data, axis=random_patch.get_axis("time"))
+    means = np.mean(det.data, axis=det.get_axis("time"))
```

tests/test_proc/test_basic.py (2)
351-353: Don't hard-code axis indices in tests. Asserting `axis == 1` is brittle. Assert the semantic mapping instead.

```diff
-    axis = patch_with_null.get_axis("time")
-    assert axis == 1
+    axis = patch_with_null.get_axis("time")
+    assert patch_with_null.dims[axis] == "time"
```
364-366: Same here: avoid assuming a fixed axis number. Prefer checking the dimension name at the computed index.

```diff
-    axis = patch_with_null.get_axis("distance")
-    assert axis == 0
+    axis = patch_with_null.get_axis("distance")
+    assert patch_with_null.dims[axis] == "distance"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
dascore/examples.py(5 hunks)dascore/proc/basic.py(3 hunks)dascore/proc/coords.py(3 hunks)dascore/transform/hilbert.py(1 hunks)dascore/utils/patch.py(5 hunks)tests/test_core/test_coordmanager.py(9 hunks)tests/test_proc/test_basic.py(10 hunks)tests/test_proc/test_detrend.py(1 hunks)tests/test_proc/test_proc_coords.py(3 hunks)tests/test_proc/test_rolling.py(4 hunks)tests/test_proc/test_taper.py(4 hunks)tests/test_transform/test_differentiate.py(4 hunks)tests/test_transform/test_fourier.py(3 hunks)tests/test_transform/test_integrate.py(3 hunks)tests/test_utils/test_array_utils.py(4 hunks)tests/test_utils/test_coordmanager_utils.py(2 hunks)
🚧 Files skipped from review as they are similar to previous changes (11)
- tests/test_core/test_coordmanager.py
- dascore/transform/hilbert.py
- tests/test_proc/test_taper.py
- tests/test_transform/test_differentiate.py
- tests/test_proc/test_proc_coords.py
- dascore/utils/patch.py
- tests/test_proc/test_rolling.py
- tests/test_transform/test_fourier.py
- dascore/proc/coords.py
- tests/test_utils/test_array_utils.py
- tests/test_utils/test_coordmanager_utils.py
🧰 Additional context used
🧬 Code graph analysis (5)
tests/test_proc/test_basic.py (2)
dascore/proc/coords.py (1)
get_axis (743-761)
dascore/core/coordmanager.py (1)
get_axis (724-739)
dascore/proc/basic.py (2)
dascore/proc/coords.py (1)
get_axis (743-761)
dascore/core/coordmanager.py (1)
get_axis (724-739)
tests/test_transform/test_integrate.py (3)
dascore/core/patch.py (1)
coords (216-218)
dascore/proc/coords.py (1)
get_axis (743-761)
dascore/core/coordmanager.py (1)
get_axis (724-739)
tests/test_proc/test_detrend.py (2)
dascore/proc/coords.py (1)
get_axis (743-761)
dascore/core/coordmanager.py (1)
get_axis (724-739)
dascore/examples.py (4)
dascore/core/patch.py (4)
coords (216-218)
dims (196-198)
shape (226-228)
Patch (28-424)
dascore/utils/misc.py (1)
register_func (34-57)
dascore/core/coordmanager.py (2)
shape (761-767)
get_axis (724-739)
dascore/proc/coords.py (1)
get_axis (743-761)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: test_code (macos-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.11)
- GitHub Check: test_code (macos-latest, 3.11)
- GitHub Check: test_code (macos-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.10)
- GitHub Check: test_code (ubuntu-latest, 3.11)
- GitHub Check: test_code (ubuntu-latest, 3.12)
- GitHub Check: test_code (ubuntu-latest, 3.10)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.12)
- GitHub Check: test_code_min_deps (windows-latest, 3.12)
- GitHub Check: test_code_min_deps (windows-latest, 3.13)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.13)
- GitHub Check: test_code_min_deps (macos-latest, 3.13)
- GitHub Check: test_code_min_deps (macos-latest, 3.12)
- GitHub Check: test_build_docs
🔇 Additional comments (14)
dascore/examples.py (2)
416-417: LGTM: safe guard for non-finite delay. The fallback to zero avoids NaNs propagating in _ricker.

534-541: Good move to axis-based indexing. Using patch.get_axis avoids fragile dims.index lookups and matches the new API.
dascore/proc/basic.py (1)
304-304: Good switch to public axis resolver. Using `self.get_axis(dim)` centralizes axis resolution and harmonizes error semantics.

tests/test_transform/test_integrate.py (3)

42-43: LGTM: use get_axis in indefinite integration test. This aligns tests with the public resolver and avoids reliance on dims ordering.

90-93: LGTM: use get_axis in definite integration test. Correctly checks the collapsed dimension length via the public API.
9-9: Remove side-effect import (`import dascore.proc.coords`). Patch.get_axis is already bound to `dascore.proc.get_axis` in dascore/core/patch.py (line 284), so this import is not needed to expose get_axis. Remove:

```diff
-import dascore.proc.coords
```

tests/test_proc/test_basic.py (8)
253-259: LGTM: public axis resolver in broadcasting test. Using `random_patch.get_axis("time")` improves clarity and durability.

376-379: LGTM: public axis resolver in 3D dropna test. Consistent with the new API.
436-441: LGTM: axis lookup via get_axis in pad(time) test.
447-451: LGTM: axis lookup via get_axis in pad(distance) test.
463-471: LGTM: axis lookup via get_axis in pad(distance, expand_coords) test.
479-484: LGTM: axis lookups for multi-dim padding.
508-514: LGTM: axis lookup for FFT padding.
517-522: LGTM: axis lookup for correlate padding.
Actionable comments posted: 0
🧹 Nitpick comments (6)
dascore/examples.py (3)
355-373: Make nd_patch more robust: validate inputs, allow per-dim lengths, modern RNG. Guard against zero/negative sizes, optionally accept a sequence for per-dimension lengths, and prefer numpy's Generator API.

```diff
 @register_func(EXAMPLE_PATCHES, key="nd_patch")
 def nd_patch(dim_count=3, coord_lens=10):
@@
-    ran = np.random.RandomState(42)
-    dims = tuple(f"dim_{i + 1}" for i in range(dim_count))
-    coords = {d: np.arange(coord_lens) for d in dims}
-    shape = tuple(len(coords[d]) for d in dims)
-    data = ran.randn(*shape)
+    if dim_count < 1:
+        raise ValueError("dim_count must be >= 1")
+    lens = (
+        (coord_lens,) * dim_count
+        if isinstance(coord_lens, int)
+        else tuple(coord_lens)
+    )
+    if len(lens) != dim_count or any(l <= 0 for l in lens):
+        raise ValueError("coord_lens must have length dim_count and all > 0")
+    rng = np.random.default_rng(42)
+    dims = tuple(f"dim_{i + 1}" for i in range(dim_count))
+    coords = {d: np.arange(n) for d, n in zip(dims, lens)}
+    shape = tuple(lens)
+    data = rng.standard_normal(shape)
     return dc.Patch(data=data, coords=coords, dims=dims)
```
431-435: Avoid assert for user input; simplify safe divide for shift. Use explicit validation (asserts can be stripped) and handle zero/non-finite velocity without warnings.

```diff
-    assert source_channel < len(distance)
+    if not (0 <= source_channel < len(distance)):
+        raise ValueError(f"source_channel must be in [0, {len(distance) - 1}]")
@@
-    with np.errstate(divide="ignore", invalid="ignore"):
-        shift = dist_to_source / velocity
-        actual_shift = shift if np.isfinite(shift) else 0
-        time_delay = peak_time + actual_shift
+    if not np.isfinite(velocity) or velocity == 0:
+        time_delay = peak_time
+    else:
+        time_delay = peak_time + (dist_to_source / velocity)
```

Also applies to: 425-426
535-536: Minor: avoid repeated get_axis lookup and shadowing 'shape'. Slightly tidier and avoids reusing the name 'shape'.

```diff
-    index = tuple(
-        shape[patch.get_axis(dimension)] // 2 if dimension in used_dims else 0
-        for dimension in patch.dims
-    )
+    sizes = patch.shape
+    index = tuple(
+        (sizes[i] // 2) if dim in used_dims else 0
+        for i, dim in enumerate(patch.dims)
+    )
```

dascore/utils/patch.py (3)
598-601: Fix doctest/import noise in Examples block. Two stray lines mix non-doctest text with prompts. Remove them to keep the doctest clean.

```diff
-    import dascore.proc.coord
     >>> import dascore as dc
-    import dascore.proc.coords
     >>> from dascore.utils.patch import get_dim_axis_value
```
641-642: Use get_axis here for consistency with the new axis API. This keeps behavior/messages uniform and avoids raw index lookups.

```diff
-    out = tuple(_DimAxisValue(x, dims.index(x), kwargs.get(x)) for x in dim_out)
+    out = tuple(_DimAxisValue(x, patch.get_axis(x), kwargs.get(x)) for x in dim_out)
```
1216-1233: Good API switch; drop redundant inner imports and tighten example.
`ParameterError` is already imported at module top; the inner imports are unnecessary.

```diff
-    if dim not in patch.dims:
-        from dascore.exceptions import ParameterError
-
+    if dim not in patch.dims:
         msg = f"Dimension '{dim}' not found in patch dimensions {patch.dims}"
         raise ParameterError(msg)
@@
-        if d not in patch.dims:
-            from dascore.exceptions import ParameterError
-
+        if d not in patch.dims:
             msg = f"Dimension '{d}' not found in patch dimensions {patch.dims}"
             raise ParameterError(msg)
```
axis: 1; assert againstpatch.get_axis("time")instead.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
dascore/examples.py(4 hunks)dascore/utils/patch.py(5 hunks)tests/test_proc/test_basic.py(10 hunks)tests/test_proc/test_detrend.py(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/test_proc/test_basic.py
- tests/test_proc/test_detrend.py
🧰 Additional context used
🧬 Code graph analysis (2)
dascore/utils/patch.py (2)
dascore/proc/coords.py (1)
get_axis (743-761)
dascore/core/coordmanager.py (1)
get_axis (724-739)
dascore/examples.py (3)
dascore/utils/misc.py (1)
register_func (34-57)
dascore/core/coordmanager.py (1)
get_axis (724-739)
dascore/proc/coords.py (1)
get_axis (743-761)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.12)
- GitHub Check: test_code_min_deps (windows-latest, 3.13)
- GitHub Check: test_code_min_deps (windows-latest, 3.12)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.13)
- GitHub Check: test_code_min_deps (macos-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.11)
- GitHub Check: test_code (windows-latest, 3.12)
- GitHub Check: test_code (macos-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.10)
- GitHub Check: test_code (macos-latest, 3.11)
- GitHub Check: test_code (macos-latest, 3.10)
- GitHub Check: test_code (ubuntu-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.11)
- GitHub Check: test_code (macos-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.10)
- GitHub Check: test_build_docs
🔇 Additional comments (3)
dascore/examples.py (1)
415-415: Finite-delay guard: LGTM. Prevents NaNs/Infs from propagating into the wavelet.
dascore/utils/patch.py (2)
724-728: LGTM on axis resolution in spacing helper.
`patch.get_axis(dim_)` is the right call here.
413-421: Approve: get_axis usage is correct; verify no remaining dims.index calls. Switching to `patches[0].get_axis(merge_dim)` aligns with the public API and improves error messages. Check repo-wide for direct axis lookups:
```bash
rg -nP '\bdims\.index\s*\(' -C2 -g '!**/test/**'
```
If rg reports "No files were searched", rerun without the -g filter:
```bash
rg -nP '\bdims\.index\s*\(' -C2
```
Or run rg --debug to diagnose ignored files.
Actionable comments posted: 0
🧹 Nitpick comments (2)
docs/.gitignore (2)
1-4: Deduplicate `.quarto/` patterns. Both `.quarto/` and `/.quarto/` in a subdirectory .gitignore target the same path (the latter scoped to this dir). Keep one for clarity.
Suggested minimal cleanup:

```diff
-.quarto/
 _site/*
-/.quarto/
+.quarto/
```
6-6: Docs .gitignore verified; optionally broaden Quarto/Jupyter ignores. docs/.gitignore contains .quarto/ and **/*.quarto_ipynb (underscore); no .ipynb files or _freeze dirs found. Add these optional ignores to be defensive:

```diff
+_freeze/
+**/.ipynb_checkpoints/
+# If notebooks in docs are generated, ignore them:
+**/*.ipynb
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
dascore/transform/hilbert.py(1 hunks)dascore/utils/patch.py(5 hunks)docs/.gitignore(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- dascore/utils/patch.py
- dascore/transform/hilbert.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.13)
- GitHub Check: test_code_min_deps (windows-latest, 3.12)
- GitHub Check: test_code_min_deps (windows-latest, 3.13)
- GitHub Check: test_code_min_deps (macos-latest, 3.12)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.11)
- GitHub Check: test_code (macos-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.10)
- GitHub Check: test_code (macos-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.12)
- GitHub Check: test_code (macos-latest, 3.10)
- GitHub Check: test_code (macos-latest, 3.11)
- GitHub Check: test_code (ubuntu-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.10)
- GitHub Check: test_code (ubuntu-latest, 3.11)
- GitHub Check: test_build_docs
🔇 Additional comments (1)
docs/.gitignore (1)
5-5: No concerns.
✅ Documentation built.
Actionable comments posted: 0
♻️ Duplicate comments (1)
dascore/transform/hilbert.py (1)
197-214: Fix coord/data mismatch when dim_reduce='squeeze' and avoid passing None into coords.update. `reduce_coord` returns None for "squeeze", but `coords.update(**{stack_dim: None})` is invalid and data is currently computed with `keepdims=True`. Align keepdims with dim_reduce and drop the coord when squeezing.

```diff
-    mean_phasor = np.mean(unit_phasors, axis=stack_axis, keepdims=True)
+    keep = dim_reduce != "squeeze"
+    mean_phasor = np.mean(unit_phasors, axis=stack_axis, keepdims=keep)
@@
-    stacked_data = np.mean(data, axis=stack_axis, keepdims=True) * weights
+    stacked_data = np.mean(data, axis=stack_axis, keepdims=keep) * weights
@@
-    new_coord = stack_coord.reduce_coord(dim_reduce=dim_reduce)
-    cm = patch.coords.update(**{stack_dim: new_coord})
-    if dim_reduce == "squeeze":
-        stacked_data = np.squeeze(stacked_data, axis=stack_axis)
-    return patch.new(data=stacked_data, coords=cm)
+    new_coord = stack_coord.reduce_coord(dim_reduce=dim_reduce)
+    if new_coord is None:
+        # drop the coord when squeezing
+        cm = patch.coords.drop_coords(stack_dim)[0]
+    else:
+        cm = patch.coords.update(**{stack_dim: new_coord})
+    return patch.new(data=stacked_data, coords=cm)
```
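For context, phase-weighted stacking weights the linear stack by how well the instantaneous phases agree across the stacked traces. A standalone numpy/scipy sketch of the idea (the synthetic data and the weight exponent of 2 are illustrative assumptions, not DASCore's actual code):

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200, endpoint=False)
coherent = np.sin(2 * np.pi * 5 * t)
traces = coherent + 0.5 * rng.standard_normal((20, t.size))  # 20 noisy copies

analytic = hilbert(traces, axis=-1)
phasors = analytic / np.abs(analytic)       # unit phasors exp(i*phi), one per trace
coherence = np.abs(phasors.mean(axis=0))    # ~1 where phases agree, ~0 where random

linear = traces.mean(axis=0)                # ordinary (linear) stack
pws = linear * coherence**2                 # phase-weighted stack, exponent nu = 2

assert np.all((coherence >= 0) & (coherence <= 1 + 1e-12))
```

Samples where the traces' phases are incoherent get coherence near zero and are suppressed in the weighted stack relative to the plain mean.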
🧹 Nitpick comments (6)
dascore/transform/hilbert.py (6)
34-38: Docstring nit: stray period. "analytic signal.." has a double period.

```diff
-        A patch with a complex data array representing the analytic signal..
+        A patch with a complex data array representing the analytic signal.
```
49-56: Prefer enforcing even sampling for Hilbert. The FFT-based Hilbert transform assumes uniform sampling. Guard on the coord to fail fast on uneven sampling.

```diff
-    # Get axis for the dimension
-    axis = patch.get_axis(dim)
+    # Ensure uniform sampling and get axis for the dimension
+    _ = patch.get_coord(dim, require_evenly_sampled=True)
+    axis = patch.get_axis(dim)
```
89-95: Same even-sampling guard for envelope. Mirror the check here.

```diff
-    # Get the analytic signal
-    axis = patch.get_axis(dim)
+    # Get the analytic signal (ensure uniform sampling)
+    _ = patch.get_coord(dim, require_evenly_sampled=True)
+    axis = patch.get_axis(dim)
```
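Independent of the patch API, the underlying analytic-signal computation these transforms wrap is `scipy.signal.hilbert`. A minimal sketch of envelope and instantaneous phase on uniformly sampled data, which is exactly the assumption the guard above enforces:

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0
t = np.arange(0, 1, 1 / fs)          # uniform sampling -- the FFT method assumes it
signal = np.sin(2 * np.pi * 5 * t)   # 5 Hz unit-amplitude tone

analytic = hilbert(signal)           # complex analytic signal
envelope = np.abs(analytic)          # instantaneous amplitude
phase = np.angle(analytic)           # instantaneous phase in radians

# An integer number of cycles keeps the FFT clean: the envelope of a unit sine is ~1.
assert np.allclose(envelope, 1.0, atol=1e-6)
```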
99-103: Grammar nit in error message. "cant" → "can't".

```diff
-    if len(dims) > 1:
-        msg = "Patch has more than two dimensions, cant infer transform dim."
+    if len(dims) > 1:
+        msg = "Patch has more than two dimensions; can't infer transform dim."
```
147-152: Doc inconsistency: squeeze case removes the dimension. When dim_reduce="squeeze", the stack dimension is removed, not length 1.

```diff
-        A patch with the phase-weighted stack along the specified dimension.
-        The specified dimension will have length 1.
+        A patch with the phase-weighted stack along the specified dimension.
+        The specified dimension will have length 1 unless `dim_reduce="squeeze"`,
+        in which case the dimension is removed.
```
183-190: Defensive check: transform_dim should differ from stack_dim. Accidentally using the same dimension degrades PWS semantics. Fail fast.

```diff
     if transform_dim is None:
         transform_dim = __infer_transform_dim(patch, stack_dim)
+    if transform_dim == stack_dim:
+        raise ParameterError("transform_dim must differ from stack_dim.")
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
dascore/transform/hilbert.py(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
dascore/transform/hilbert.py (6)
dascore/exceptions.py (1)
ParameterError (26-27)
dascore/utils/docs.py (1)
compose_docstring (45-97)
dascore/utils/patch.py (1)
patch_function (179-285)
dascore/proc/coords.py (3)
get_axis (743-761)
get_coord (87-139)
squeeze (652-669)
dascore/core/coordmanager.py (5)
get_axis (724-739)
new (431-449)
get_coord (1007-1026)
update (230-286)
squeeze (894-924)
dascore/core/coords.py (1)
reduce_coord (893-930)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.13)
- GitHub Check: test_code_min_deps (windows-latest, 3.13)
- GitHub Check: test_code_min_deps (macos-latest, 3.13)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.12)
- GitHub Check: test_code_min_deps (windows-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.10)
- GitHub Check: test_code (ubuntu-latest, 3.11)
- GitHub Check: test_code (macos-latest, 3.10)
- GitHub Check: test_code (windows-latest, 3.12)
- GitHub Check: test_build_docs
- GitHub Check: test_code (macos-latest, 3.12)
- GitHub Check: test_code (ubuntu-latest, 3.10)
- GitHub Check: test_code (windows-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.11)
- GitHub Check: test_code (macos-latest, 3.11)
- GitHub Check: test_code (macos-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.12)
🔇 Additional comments (1)
dascore/transform/hilbert.py (1)
191-195: Minor stability nit: use analytic dtype for eps. You can pull eps directly from the complex dtype's real component via `analytic_data.real.dtype` (already done); looks good. No change needed.
If large float32 inputs are expected, consider casting the envelope/weights to the patch's float dtype to avoid inadvertent upcasting to float64 where memory is tight.
Actionable comments posted: 0
🧹 Nitpick comments (1)
pyproject.toml (1)
76-84: Deduplicate pytest-cov in test extras; keep a single pinned entry.
`pytest-cov` appears twice (once pinned, once unpinned). This is redundant and can confuse resolvers. Keep only the pinned spec.
Apply this diff:

```diff
 [project.optional-dependencies]
 test = [
     "coverage>=7.4,<8",
     "pytest-cov>=4",
     "pre-commit",
     "pytest",
-    "pytest-codeblocks",
-    "pytest-cov",
+    "pytest-codeblocks",
     "twine",
 ]
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
pyproject.toml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.13)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.12)
- GitHub Check: test_code_min_deps (windows-latest, 3.12)
- GitHub Check: test_code_min_deps (macos-latest, 3.13)
- GitHub Check: test_code_min_deps (windows-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.11)
- GitHub Check: test_code (windows-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.10)
- GitHub Check: test_code (ubuntu-latest, 3.11)
- GitHub Check: test_code (macos-latest, 3.12)
- GitHub Check: test_code (macos-latest, 3.10)
- GitHub Check: test_code (macos-latest, 3.13)
- GitHub Check: test_code (macos-latest, 3.11)
- GitHub Check: test_code (ubuntu-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.12)
- GitHub Check: test_code (ubuntu-latest, 3.10)
- GitHub Check: test_build_docs
🔇 Additional comments (1)
pyproject.toml (1)
76-79: Confirm Python 3.13 compatibility for test extras and CI — pyproject.toml (lines 76–79)
- coverage>=7.4,<8 — coverage.py 7.x declares Python 3.13 support.
- pytest-cov>=4 — 4.x releases do not explicitly claim Python 3.13 support; verify upstream (PyPI/changelog) or upgrade/pin to a release that explicitly supports 3.13.
- Coveralls & duplicate optional-deps — sandbox scans failed to access .github/workflows, tox, nox and produced no output; run the ripgrep/Python checks from the original comment locally to confirm whether coveralls was intentionally removed and to detect duplicate optional-dependencies.
Actionable comments posted: 0
🧹 Nitpick comments (2)
.github/workflows/run_min_dep_tests.yml (1)
15-19: Optional: set MPLCONFIGDIR and cap BLAS threads to stabilize CI.
Helps avoid cache/permission quirks and noisy oversubscription on runners.
 env:
   # Ensure matplotlib doesn't try to show figures in CI
   MPLBACKEND: Agg
   QT_QPA_PLATFORM: offscreen
+  MPLCONFIGDIR: ${{ runner.temp }}/mpl
+  OPENBLAS_NUM_THREADS: "1"
+  OMP_NUM_THREADS: "1"
+  MKL_NUM_THREADS: "1"
+  NUMEXPR_NUM_THREADS: "1"
.github/workflows/runtests.yml (1)
18-20: Optional: mirror MPLCONFIGDIR and BLAS thread caps here too.
Keeps both workflows in lockstep and reduces flakiness.
 CACHE_NUMBER: 1
 # Ensure matplotlib doesn't try to show figures in CI
 MPLBACKEND: Agg
 QT_QPA_PLATFORM: offscreen
+MPLCONFIGDIR: ${{ runner.temp }}/mpl
+OPENBLAS_NUM_THREADS: "1"
+OMP_NUM_THREADS: "1"
+MKL_NUM_THREADS: "1"
+NUMEXPR_NUM_THREADS: "1"
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
.github/workflows/run_min_dep_tests.yml (1 hunks)
.github/workflows/runtests.yml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: test_code (macos-latest, 3.12)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.12)
- GitHub Check: test_code_min_deps (macos-latest, 3.13)
- GitHub Check: test_code_min_deps (macos-latest, 3.12)
- GitHub Check: test_code_min_deps (ubuntu-latest, 3.13)
- GitHub Check: test_code_min_deps (windows-latest, 3.12)
- GitHub Check: test_code (windows-latest, 3.10)
- GitHub Check: test_code_min_deps (windows-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.11)
- GitHub Check: test_code (ubuntu-latest, 3.13)
- GitHub Check: test_code (windows-latest, 3.12)
- GitHub Check: test_code (macos-latest, 3.10)
- GitHub Check: test_code (macos-latest, 3.11)
- GitHub Check: test_code (windows-latest, 3.13)
- GitHub Check: test_code (ubuntu-latest, 3.11)
- GitHub Check: test_code (ubuntu-latest, 3.10)
- GitHub Check: test_build_docs
- GitHub Check: test_code (ubuntu-latest, 3.12)
🔇 Additional comments (2)
.github/workflows/run_min_dep_tests.yml (1)
15-19: LGTM: headless Matplotlib/Qt env for CI.
MPLBACKEND=Agg and QT_QPA_PLATFORM=offscreen are appropriate for headless runs across the matrix.
.github/workflows/runtests.yml (1)
18-20: LGTM: unify headless env in the full test workflow.
Same settings as the min-deps workflow; good for consistency.
Description
This PR implements the patch methods mentioned in the title. It also refactors the coordinate-reduction logic used by patch reductions into a coordinate method rather than a private helper function.
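For context, the signal processing behind the new methods can be sketched with NumPy/SciPy. This is an illustrative reimplementation of the math, not the actual dascore patch API; function names and argument choices here are assumptions:

```python
import numpy as np
from scipy.signal import hilbert

def envelope(data, axis=-1):
    """Instantaneous amplitude: magnitude of the analytic signal."""
    return np.abs(hilbert(data, axis=axis))

def phase_weighted_stack(data, stack_axis=0, time_axis=-1, power=2.0):
    """Stack traces, weighting each sample by inter-trace phase coherence.

    The weight is the magnitude of the mean unit phasor across traces:
    1 where phases align, near 0 where they are incoherent.
    """
    analytic = hilbert(data, axis=time_axis)
    phasors = analytic / np.abs(analytic)  # unit phasors e^{i*phi}
    coherence = np.abs(phasors.mean(axis=stack_axis))
    linear_stack = data.mean(axis=stack_axis)
    return linear_stack * coherence**power

# Identical traces are perfectly coherent, so the phase-weighted
# stack reduces to the plain linear stack.
t = np.linspace(0, 1, 500)
trace = np.sin(2 * np.pi * 10 * t)
stacked = phase_weighted_stack(np.stack([trace] * 5))
```

With incoherent (random-phase) traces, the coherence term suppresses the stack toward zero, which is the point of phase-weighted stacking over a plain mean.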
Checklist
I have (if applicable):
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Tests
Chores