Add cached version for normalize_chunks
#11650
Conversation
`normalize_chunks`
Unit Test Results
See test report for an extended history of previous test failures. This is useful for diagnosing flaky tests.
15 files ±0, 15 suites ±0, 4h 29m 11s ⏱️ -2s
Results for commit f4c0011. ± Comparison against base commit 7393a77.
This pull request removes 1 and adds 2 tests. Note that renamed tests count towards both.
Yes, decent impact. Now the big issue is `tokenize`.
Diff context:

    return i

    @functools.lru_cache
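The diff above applies `functools.lru_cache` to the chunk-normalization path. Since `lru_cache` keys on the call arguments, those arguments must be hashable (tuples rather than lists). A minimal sketch of the idea, with a toy stand-in for the real normalization logic (the function names and the normalization body here are illustrative, not dask's actual implementation):

```python
import functools


def normalize_chunks_plain(chunks, shape):
    # Toy stand-in for the real normalization: expand a per-axis chunk
    # size into an explicit tuple of block sizes for that dimension.
    out = []
    for size, dim in zip(chunks, shape):
        n, rem = divmod(dim, size)
        out.append((size,) * n + ((rem,) if rem else ()))
    return tuple(out)


@functools.lru_cache(maxsize=1024)
def normalize_chunks_cached(chunks, shape):
    # lru_cache keys on the (hashable) tuple arguments, so repeated
    # calls with the same chunks/shape skip the recomputation entirely.
    return normalize_chunks_plain(chunks, shape)


# Example: axes of length 100 split into blocks of 30.
print(normalize_chunks_cached((30, 30), (100, 100)))
# → ((30, 30, 30, 10), (30, 30, 30, 10))
```

This is why callers must convert any list-of-ints chunk specs to tuples before the cached lookup; passing a list would raise `TypeError: unhashable type`.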
I bet you could use this function internally if you wrote a caching decorator that cached both by id and hash. That way you check id first, and then with the hash if that fails.
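The two-level lookup suggested above could be sketched as a decorator along these lines (a hypothetical sketch, not code from this PR; the decorator name and single-argument restriction are assumptions for illustration):

```python
import functools


def cache_by_id_then_hash(func):
    # Hypothetical decorator: try a cheap lookup keyed on the object's
    # id() first, and only fall back to the (potentially expensive)
    # hash-based cache on an id miss.
    id_cache = {}
    hash_cache = {}

    @functools.wraps(func)
    def wrapper(obj):
        key = id(obj)
        if key in id_cache:
            return id_cache[key]
        try:
            hkey = hash(obj)
        except TypeError:
            # Unhashable input: compute without populating the hash cache.
            result = func(obj)
        else:
            if hkey in hash_cache:
                result = hash_cache[hkey]
            else:
                result = hash_cache[hkey] = func(obj)
        id_cache[key] = result
        return result

    return wrapper


@cache_by_id_then_hash
def expensive(x):
    return sum(x)
```

One caveat: keying on `id()` is only safe while the cached object stays alive, since CPython reuses ids after garbage collection; a production version would need to hold a reference (or use weakrefs) to guard against that.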
Oh, this is interesting. I'll take a look at this, but will merge that one for now.
This is the issue: for 300 variables, that alone takes 16s. Somehow using a cache with EDIT: The tuples within
Yeah, this is tricky. I'll see if we can do something here.
Can you import from `dask.array.api` when you add this to xarray?
`pre-commit run --all-files`
cc @dcherian: is this suitable for xarray?