
Prune queued reconstructions when a slice is removed #3

Open
adriaangraas opened this issue Nov 23, 2018 · 7 comments

@adriaangraas

When a slice is (re)moved in RECAST3D, quite a few reconstructions scheduled for that slice remain in the queue. This already happens when a plugin is only slightly slow (i.e. ~200 ms). As a consequence, RECAST3D receives tons of useless updates on inactive slices, and it takes quite a while before the reconstruction of the moved plane reaches the top of the queue and gets visualized.

How can we remove to-be-reconstructed slices from the queue in the SliceRecon server when we receive a RemoveSlicePacket?
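
To make the idea concrete, here is a minimal sketch of the kind of bookkeeping I have in mind, assuming the server keeps its pending work keyed by slice index (the ReconstructionQueue below is illustrative, not the actual SliceRecon internals):

import threading

class ReconstructionQueue:
    """Illustrative pending-work queue, keyed by slice index."""

    def __init__(self):
        self._lock = threading.Lock()
        self._pending = {}  # slice_id -> latest requested orientation

    def push(self, slice_id, orientation):
        # A newer request for the same slice replaces the older one,
        # so a fast-moving slice never piles up stale work.
        with self._lock:
            self._pending[slice_id] = orientation

    def prune(self, slice_id):
        # Called when a RemoveSlicePacket arrives: discard all pending
        # work for that slice.
        with self._lock:
            self._pending.pop(slice_id, None)

    def pop(self):
        # Called by the reconstruction worker; returns None when idle.
        with self._lock:
            if not self._pending:
                return None
            slice_id = next(iter(self._pending))
            return slice_id, self._pending.pop(slice_id)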

@adriaangraas adriaangraas changed the title Prune reconstructions when slice is removed Prune queued reconstructions when slice is removed Nov 23, 2018
@adriaangraas adriaangraas changed the title Prune queued reconstructions when slice is removed Prune queued reconstructions when a slice is removed Nov 23, 2018
@jwbuurlage
Member

jwbuurlage commented Nov 23, 2018

There are multiple ways to tackle this issue:

  • First and foremost: I would like to require plugins to be fast enough to keep up with the reconstructions. However, I am surprised that you already experience these problems at 200 ms/slice. The number of slices requested by default should be quite limited, and 200 ms (times three) is slow, but should still be just about enough to keep up.
  • Move away from REP/REQ patterns as much as possible, and queue tasks at the reconstruction server, where they can be modified (pruned) when 'remove slice' messages are received asynchronously.
  • Already visualize the reconstructed slices in RECAST3D, and let plugins overwrite the data once they are done with it. Since the delay is limited, this may just look like 'texture popping', which is undesirable.

I would be in favor of optimizing the real-time user experience by focusing on real-time plugins. Maybe we should decrease the slice resolutions first, instead of complicating the reconstruction pipeline.

I do agree that the system can be made more robust with respect to slow actors (both computationally and on the network). But I am not sure 'pruning' is the way to go: if a slice gets requested, the user wants it to be reconstructed and visualized. In my opinion, the problem is the plugins that are not able to keep up.

I think moving away from REP/REQ and using PUB/SUB and PUSH/PULL more is a good idea in any case.
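
As a rough illustration of that shape (a minimal pyzmq sketch; the endpoint and message format here are made up, not the actual RECAST3D/SliceRecon protocol):

import zmq

# Unlike REQ/REP there is no lock-step request/reply cycle, so the
# receiving side is free to drain its queue and drop or prune messages
# asynchronously.
context = zmq.Context()

# Server side (e.g. SliceRecon): pull incoming slice requests.
pull = context.socket(zmq.PULL)
pull.bind("tcp://*:5555")

# Client side (e.g. RECAST3D): fire-and-forget slice requests.
push = context.socket(zmq.PUSH)
push.connect("tcp://localhost:5555")
push.send_json({"type": "set_slice", "slice_id": 3})

request = pull.recv_json()  # the server decides what to keep or prune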

@jwbuurlage
Member

jwbuurlage commented Nov 23, 2018

I just thought of another way: add timestamps to requests, and ignore them if they are too far in the past. Then the plugin itself can decide whether it should pass the slice along or postprocess it.

Just to reiterate, the reconstruction should always be fast enough, since the backprojections are practically free. The queuing happens at slow plugins, so I think we should solve the problem there.

@adriaangraas
Author

I didn't check the source (use the force, read the source), but I suppose that RECAST3D is making requests at too high a rate. Why not make RECAST3D wait to receive slice data before making new requests? We could include some kind of buffer, so that the bottleneck will always be at SliceRecon or the plugin.
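
As a sketch of what I mean (the RequestThrottle below is hypothetical, not the actual RECAST3D code):

class RequestThrottle:
    """Illustrative client-side cap on outstanding slice requests."""

    def __init__(self, send, max_in_flight=3):
        self._send = send          # any function that ships a request
        self._max = max_in_flight  # size of the 'buffer' of open requests
        self._in_flight = 0

    def request(self, slice_id, orientation):
        if self._in_flight >= self._max:
            return False  # drop it; a newer request will supersede it
        self._in_flight += 1
        self._send(slice_id, orientation)
        return True

    def on_slice_data(self):
        # Call this when the corresponding slice data arrives.
        self._in_flight = max(0, self._in_flight - 1)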

@adriaangraas
Author

Naturally, I would like the plugins to be fast as well. But I also think it is inevitable that there will be bottlenecks at the reconstructions or the plugins, so maybe this is a good time to future-proof the system as well.

@jwbuurlage
Member

To put some numbers to it: requests for 1000x1000 reconstructions are fulfilled in 60 ms on a single GPU. I think it is reasonable to expect about one 'refresh' every second (as new projection data comes in). This means there is a lot of room to change slices, even if you do it as fast as you can by hand.

The whole system is built around this premise: you change a slice and immediately see the reconstructed result. There is no bottleneck at the reconstruction.

I would definitely be against changing the way RECAST3D requests slices because of slow plugins. I think detecting a slow plugin (or making the plugin itself responsible for handling the packets fast enough) is the way to go, if we want to change anything at all.

I would personally just perform the postprocessing at smaller resolutions until we are able to optimize the plugin further.

In conclusion, I would say it is not a bug: the whole system is built around the idea of nodes handling requests in real time. If nodes detect that they are unable to keep up, it is their responsibility to ignore the packets or pass them on.

@adriaangraas
Author

I'm not saying this is a bug, but I do think it is an important use case! Of course the design of the methodology is up to you :) So let's timestamp the packets, and I will make sure to drop them if they are outdated?

@jwbuurlage
Member

jwbuurlage commented Nov 23, 2018

OK. Then we need the following changes:

  • The SetSlicePacket and SliceDataPacket will receive an additional field timestamp (milliseconds since the UNIX epoch). Maybe it would make sense to add this to other packets as well.
  • Callbacks will no longer have arguments corresponding to the packet fields, but will instead have access to the actual packet (to make the API stable w.r.t. future packet changes).

In particular, the Python plugin callbacks will change from:

def callback(shape, xs, idx):
    # Arguments correspond to individual packet fields: slice shape,
    # flat slice data, and slice index.
    xs = np.array(xs).reshape(shape)
    # ... do something with xs
    return [shape, xs.ravel().tolist()]

to working directly on the packets:

def callback(packet):
    # Skip the postprocessing entirely for stale packets.
    if not too_far_in_the_past(packet.timestamp):
        xs = np.array(packet.data).reshape(packet.slice_size)
        # ... do something with xs
        packet.data = xs.ravel().tolist()
    return packet
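
Here too_far_in_the_past is not part of the API; a possible definition, assuming the timestamp is in milliseconds since the UNIX epoch as proposed above, would be:

import time

def too_far_in_the_past(timestamp_ms, max_age_ms=500):
    # Compare the packet timestamp against the current wall clock;
    # the 500 ms cutoff is an arbitrary choice.
    return time.time() * 1000 - timestamp_ms > max_age_ms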

@jwbuurlage jwbuurlage self-assigned this Nov 23, 2018