Add chunks='auto' support for cftime datasets #10527
@@ -18,11 +18,13 @@
     get_chunked_array_type,
     guess_chunkmanager,
 )
+from xarray.namedarray.utils import fake_target_chunksize

 if TYPE_CHECKING:
     from xarray.core.dataarray import DataArray
     from xarray.core.dataset import Dataset
     from xarray.core.types import T_ChunkDim
     from xarray.core.variable import IndexVariable, Variable

 MissingCoreDimOptions = Literal["raise", "copy", "drop"]


@@ -83,8 +85,15 @@ def _get_chunk(var: Variable, chunks, chunkmanager: ChunkManagerEntrypoint):
         for dim, preferred_chunk_sizes in zip(dims, preferred_chunk_shape, strict=True)
     )

+    limit = chunkmanager.get_auto_chunk_size()
+    limit, var_dtype = fake_target_chunksize(var, limit)
Review thread on the added fake_target_chunksize call (see the sketch after the diff):

Comment: Don't we need to check if …

Reply: This is related to what I was getting at yesterday with the no-op bit - reverting b5933ed would put that back in. With that said, the logic doesn't change meaningfully depending on it. Currently, if we put e.g. a 300 MiB limit in for a var which is an f64, we tell dask to compute the chunks based on those numbers. If we put in an f32 with the same limit, it'll currently tell the dask chunking mechanism to compute chunks for an f64 with a 150 MiB limit, which gets us the exact same chunk sizes back (based on my tests). Actually, one of the side effects of the current implementation (no …

Reply: I guess with the current implementation …
+
     chunk_shape = chunkmanager.normalize_chunks(
-        chunk_shape, shape=shape, dtype=var.dtype, previous_chunks=preferred_chunk_shape
+        chunk_shape,
+        shape=shape,
+        dtype=var_dtype,
+        limit=limit,
+        previous_chunks=preferred_chunk_shape,
     )

     # Warn where requested chunks break preferred chunks, provided that the variable
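The helper imported above, fake_target_chunksize, is not shown in this diff, but the review discussion gives its rough contract: take a Variable and a byte limit and return a (limit, dtype) pair that the chunk manager's normalize_chunks can reason about, since dask cannot size object-dtype (cftime) elements on its own. Below is a minimal sketch of one way such a helper could behave; the function body, the getsizeof-based element-size estimate, and the float64 stand-in dtype are illustrative assumptions, not the PR's actual implementation.

```python
import sys

import numpy as np


def fake_target_chunksize_sketch(var, limit: int) -> tuple[int, np.dtype]:
    """Illustrative stand-in for a helper like fake_target_chunksize.

    For object-dtype (e.g. cftime) variables, estimate the real memory cost of
    one element, then rescale the byte limit so that auto-chunking with a
    float64 stand-in dtype yields chunks holding the same number of elements.
    """
    if var.dtype != object:
        # Numeric dtypes already have a well-defined itemsize; nothing to fake.
        return limit, var.dtype

    # Estimate bytes per element from a sample value (e.g. a cftime.DatetimeNoLeap).
    # The real helper would presumably avoid materialising lazy data like this.
    sample = np.asarray(var.values).ravel()[0]
    bytes_per_element = sys.getsizeof(sample)

    fake_dtype = np.dtype("float64")
    n_elements = max(limit // bytes_per_element, 1)   # elements that fit in `limit`
    scaled_limit = n_elements * fake_dtype.itemsize   # same element count at f64 itemsize
    return scaled_limit, fake_dtype
```

normalize_chunks is then asked for float64 chunks under the scaled limit, so the chunk shapes it returns correspond to roughly `limit` bytes of actual cftime objects.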
Review thread:

Comment: I guess this can be deleted.

Reply: Had a play and I don't think I can fully get rid of it; I've reused as much of the abstracted logic as possible, though.
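For context, this is roughly the user-facing behaviour the PR is after. The example below is a hypothetical usage sketch; the test data and the description of the pre-PR failure mode are assumptions for illustration, not taken from the diff.

```python
import numpy as np
import xarray as xr

# A dataset holding object-dtype cftime values; dask cannot infer a reliable
# bytes-per-element figure for object arrays, so auto-chunking such variables
# has historically been problematic.
times = xr.cftime_range("2000-01-01", periods=365, calendar="noleap")
ds = xr.Dataset(
    {
        "tas": (("time",), np.random.rand(365)),
        "time_copy": (("time",), np.asarray(times, dtype=object)),
    }
)

# With this PR, chunks="auto" should pick chunk sizes for the object-dtype
# variable from an estimated per-element memory footprint.
ds_auto = ds.chunk("auto")
print(ds_auto.chunks)
```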