Prune queued reconstructions when a slice is removed #3
There are multiple ways to tackle this issue.
I would be in favor of optimizing the real-time user experience by focusing on real-time plugins. Maybe we should decrease the slice resolutions first, instead of complicating the reconstruction pipeline. I do agree that the system could be made more robust with respect to slow actors (both computationally and on the network). But I am not sure 'pruning' is the way to go: if a slice gets requested, the user wants it to be reconstructed and visualized. In my opinion, the problem is the plugins that are not able to keep up. I think moving away from REP/REQ and using PUB/SUB and PUSH/PULL more is a good idea in any case.
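For concreteness, a minimal sketch of what a PUSH/PULL hand-off could look like with pyzmq; the port and message layout below are made up for illustration and are not the actual SliceRecon wiring:

```python
import zmq

# Illustrative PUSH/PULL sketch (assumed port and message format, not the
# real SliceRecon protocol). Unlike REQ/REP, the pushing side does not wait
# for a reply, so a slow consumer no longer stalls the producer.
context = zmq.Context()

producer = context.socket(zmq.PUSH)   # e.g. the reconstruction node
producer.bind("tcp://*:5555")

consumer = context.socket(zmq.PULL)   # e.g. a (possibly slow) plugin
consumer.connect("tcp://localhost:5555")

producer.send_json({"slice_id": 1, "data": [0.0, 1.0, 2.0, 3.0]})
print(consumer.recv_json())           # picked up whenever the plugin is ready
```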
I just thought of another way: add timestamps to the requests, and ignore requests that are too far in the past. Then the plugin itself can decide whether it should pass along the slice or postprocess it. Just to reiterate, the reconstruction should always be fast enough, since the backprojections are practically free. The queuing happens at slow plugins, so I think we should solve the problem there.
I didn't check the source (use the force, read the source), but I suppose that RECAST3D is making requests at too high a rate. Why not make RECAST3D wait to receive slice data before making new requests? We could include some kind of buffer so that the bottleneck will always be at SliceRecon or the plugin.
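To illustrate, a small sketch of such client-side throttling; the helper names are hypothetical and not existing RECAST3D functions:

```python
# Hypothetical throttling on the RECAST3D side: at most one reconstruction
# request per slice is in flight at any time, so a slow server or plugin
# cannot cause requests to pile up.
outstanding = set()  # slice ids with a request currently in flight

def request_slice(slice_id, send_request):
    """Send a new request only if no request for this slice is pending."""
    if slice_id not in outstanding:
        outstanding.add(slice_id)
        send_request(slice_id)

def on_slice_data(slice_id):
    """Called when reconstructed slice data arrives; frees the slot."""
    outstanding.discard(slice_id)
```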
Naturally, I would like the plugin to be fast as well. But I also think it is inevitable that there will be bottlenecks at reconstructions or plugins, so maybe this is a good time to future-proof the system.
To put some numbers to it: requests for 1000x1000 reconstructions are fulfilled in 60 ms using a single GPU, and I think it is reasonable to expect about one 'refresh' every second (as new projection data comes in). Roughly 16 reconstructions therefore fit in each refresh interval, so there is a lot of room to change slices, even if you do it as fast as you can by hand. The whole system is built around this premise: you change a slice and immediately see the reconstructed result. There is no bottleneck at the reconstruction. I would definitely be against changing the way RECAST3D requests slices because of slow plugins. If we want to change anything at all, I think detecting a slow plugin (or making the plugin itself responsible for handling the packets fast enough) is the way to go. I would personally just perform the postprocessing on smaller resolutions until we are able to optimize the plugin further. In conclusion, I would say this is not a bug; the whole system is built around the idea of nodes handling requests in real time. If nodes detect that they are unable to keep up, it is their responsibility to ignore the packets or pass them on.
I'm not saying this is a bug, but I do think it is an important use case! Of course the design of the methodology is up to you :) So let's timestamp the packets, and I will make sure to drop them if they are outdated?
OK. Then we need the following changes:
In particular, the Python plugin callbacks will change from:

```python
def callback(shape, xs, idx):
    xs = np.array(xs).reshape(shape)
    # ... do something with xs
    return [shape, xs.ravel().tolist()]
```

to working directly on the packets:

```python
def callback(packet):
    xs = np.array(packet.data).reshape(packet.slice_size)
    if not too_far_in_the_past(packet.timestamp):
        # ... do something with xs
        packet.data = xs.ravel().tolist()
    return packet
```
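The `too_far_in_the_past` check above is not defined yet; one possible sketch, assuming (for illustration) that the packet carries a Unix timestamp in seconds and that half a second is an acceptable age:

```python
import time

# Hypothetical staleness check: the 0.5 s threshold and the assumption that
# packet.timestamp is a Unix time in seconds are illustrative choices only.
MAX_AGE_SECONDS = 0.5

def too_far_in_the_past(timestamp, max_age=MAX_AGE_SECONDS):
    return (time.time() - timestamp) > max_age
```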
When a slice is (re)moved in RECAST3D, quite a few reconstructions for that slice remain scheduled. This already happens when a plugin is only slightly slow (e.g. ~200 ms). As a consequence, RECAST3D receives tons of useless updates on inactive slices, and it takes quite a while before the reconstruction of the moved plane reaches the top of the queue and gets visualized.
How can we remove to-be-reconstructed slices from the queue in the SliceRecon server when we receive a RemoveSlicePacket?