In #24 I removed the (very rudimentary) caching mechanism from the code. To not lose the knowledge, here's some background and code.
Reasoning for removing the code (for now)
For caching to become useful, it would have to be smarter than what we have now. Further, because we have the low-res slices, the need for caching is lower.
After we've implemented more features that affect how the data is managed (like contrast limits), we should revisit whether the additional complexity of a caching mechanism is worthwhile.
Outline
To perform caching, we'd keep a per-slicer client-side dictionary (e.g. on the `slicer_state` object that #24 introduced). At the moment the server callback that serves slices uses `index.data` as an input. We'd need an extra store `req-index` that copies over `index.data`, except when the cache already has data for that index. Something like this:
```python
self._clientside_callback(
    """
    function update_req_index(index) {
        let slicer_state;  // filled in
        slicer_state.cache = slicer_state.cache || {};
        // On a cache hit, suppress the server round-trip.
        return slicer_state.cache[index] ? dash_clientside.no_update : index;
    }
    """,
    Output(self._req_index.id, "data"),
    [Input(self._index.id, "data")],
)
```
Then when a new slice is received from the server, store the incoming slice:
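The second callback was not preserved here, but the combined behavior of both callbacks can be sketched in plain Python. `SliceCache`, `NO_UPDATE`, and the method names below are hypothetical illustrations, not dash-slicer API:

```python
NO_UPDATE = object()  # stand-in for dash_clientside.no_update


class SliceCache:
    """Hypothetical sketch of the per-slicer cache kept on slicer_state."""

    def __init__(self):
        self.cache = {}  # index -> slice data

    def request_index(self, index):
        # Mirrors update_req_index: on a cache hit, return no_update so
        # the server callback is not triggered; otherwise pass the index on.
        return NO_UPDATE if index in self.cache else index

    def store_slice(self, index, slice_data):
        # Mirrors the "store the incoming slice" callback: remember the
        # slice that came back from the server under its index.
        self.cache[index] = slice_data
```

Once a slice is stored, a later request for the same index short-circuits instead of reaching the server.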
This will restore more or less the caching mechanism that we had earlier. But more features should be added to make it useful, e.g. obtaining neighboring slices, and purging slices from the cache if needed to preserve memory.
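For the purging part, one option (an assumption on my side, not something the earlier code did) is a size-bounded LRU cache; `max_slices` and the neighbor-prefetch helper below are hypothetical names:

```python
from collections import OrderedDict


class BoundedSliceCache:
    """Sketch of a size-bounded slice cache with LRU eviction."""

    def __init__(self, max_slices=32):
        self.max_slices = max_slices
        self._cache = OrderedDict()  # index -> slice data, oldest first

    def store(self, index, slice_data):
        self._cache[index] = slice_data
        self._cache.move_to_end(index)
        # Purge the least-recently-used slices to preserve memory.
        while len(self._cache) > self.max_slices:
            self._cache.popitem(last=False)

    def get(self, index):
        if index in self._cache:
            self._cache.move_to_end(index)  # mark as recently used
        return self._cache.get(index)

    def neighbor_indices(self, index, radius=1):
        # Indices worth prefetching around the current slice
        # (bounds-clamping against the volume would happen elsewhere).
        return [i for i in range(index - radius, index + radius + 1) if i != index]
```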