Optimising interp for large datasets #5472
-
So I was working with xarray's interp function for multidimensional (3-D) interpolation, and I came across the following hurdle: the new points where I want interpolated values are supposed to be 1-D arrays, one per dimension, but the interpolation is done not just for the points obtained by combining the i-th elements of each array (i.e. (x1, y1, z1), (x2, y2, z2), ...) but for every possible combination of them, which takes a huge toll on memory (see the sketch below). Right now I'm passing a single point as an argument every time, which does the job but is slow. Is there a way around this without fiddling with the source code? Any help would be appreciated. Thanks!
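For concreteness, here is a minimal sketch of the behaviour being described; the array names, sizes, and coordinates are illustrative, not from the original post:

```python
import numpy as np
import xarray as xr

# A toy 3-D DataArray standing in for the real dataset.
da = xr.DataArray(
    np.random.rand(50, 50, 50),
    dims=("x", "y", "z"),
    coords={"x": np.arange(50), "y": np.arange(50), "z": np.arange(50)},
)

# Passing plain 1-D arrays makes interp target the full outer grid:
# with n points per dimension, the output holds n**3 values.
xi = np.linspace(0, 49, 200)
big = da.interp(x=xi, y=xi, z=xi)  # shape (200, 200, 200): ~64 MB of float64
```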
Replies: 1 comment 1 reply
-
Hello, can you try adapting the examples in the documentation, please? https://xarray.pydata.org/en/stable/user-guide/interpolation.html#advanced-interpolation
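The linked "advanced interpolation" section covers exactly this case: wrapping the target coordinates in DataArrays that share a new common dimension makes interp work pointwise along that dimension instead of over the outer product. A short sketch of that pattern, with illustrative names (the dimension name "points" is arbitrary):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.random.rand(10, 10, 10),
    dims=("x", "y", "z"),
    coords={"x": np.arange(10), "y": np.arange(10), "z": np.arange(10)},
)

# Target points (x1, y1, z1), (x2, y2, z2), ...: each 1-D coordinate array
# is wrapped in a DataArray with the same new dimension.
x = xr.DataArray([1.5, 2.5, 3.5], dims="points")
y = xr.DataArray([0.5, 1.5, 2.5], dims="points")
z = xr.DataArray([4.5, 5.5, 6.5], dims="points")

# Because x, y, and z share the "points" dimension, the result has that
# single dimension: one interpolated value per target point, not n**3.
result = da.interp(x=x, y=y, z=z)
print(result.dims, result.shape)  # ('points',) (3,)
```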