Is there a way to adapt the preprocessing step to datasets that contain depth map sequences, such as the NTU RGB+D dataset? Currently the pipeline has no support for depth map data, which limits its compatibility with multimodal settings like this.
I think it would be helpful to implement a new data pipeline that processes depth map sequences alongside RGB sequences. It should take two input paths, one for the RGB clip and one for the depth map clip, and produce two corresponding tensors: one for the RGB sequence and one for the depth map sequence. Does anyone have an idea of how to tweak the existing code to support this?
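For what it's worth, here is a minimal sketch of what such a paired pipeline could look like as a PyTorch `Dataset`. Everything here is an assumption, not the repo's actual API: the class name `RGBDClipDataset` is hypothetical, and I'm assuming clips are already extracted to per-frame files (JPG for RGB, 16-bit PNG for depth, as NTU RGB+D ships its depth maps). It takes the two clip paths and returns the two tensors you describe:

```python
import numpy as np
import torch
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset
from torchvision.transforms import functional as TF


class RGBDClipDataset(Dataset):
    """Hypothetical paired loader: one RGB clip dir + one depth clip dir
    per sample, yielding an (rgb, depth) tensor pair."""

    def __init__(self, rgb_clip_paths, depth_clip_paths, num_frames=16):
        assert len(rgb_clip_paths) == len(depth_clip_paths)
        self.rgb_clip_paths = rgb_clip_paths
        self.depth_clip_paths = depth_clip_paths
        self.num_frames = num_frames

    def _sample_indices(self, n):
        # Uniformly sample num_frames frame indices across the clip.
        return np.linspace(0, n - 1, self.num_frames).astype(int)

    def __len__(self):
        return len(self.rgb_clip_paths)

    def __getitem__(self, idx):
        # Assumes frame files sort into temporal order by name.
        rgb_files = sorted(Path(self.rgb_clip_paths[idx]).glob("*.jpg"))
        depth_files = sorted(Path(self.depth_clip_paths[idx]).glob("*.png"))
        indices = self._sample_indices(min(len(rgb_files), len(depth_files)))

        # RGB sequence: (T, 3, H, W), values scaled to [0, 1].
        rgb = torch.stack([
            TF.to_tensor(Image.open(rgb_files[i]).convert("RGB"))
            for i in indices
        ])

        # Depth sequence: NTU RGB+D depth maps are 16-bit PNGs in
        # millimetres; keep the raw values as float32 and add a channel
        # dimension -> (T, 1, H, W).
        depth = torch.stack([
            torch.from_numpy(
                np.array(Image.open(depth_files[i]), dtype=np.float32)
            ).unsqueeze(0)
            for i in indices
        ])
        return rgb, depth
```

Usage would be the standard pattern, e.g. `DataLoader(RGBDClipDataset(rgb_paths, depth_paths), batch_size=8)`, with each batch giving you the RGB tensor and the depth tensor side by side. You'd still need to align this with whatever resizing/normalization the existing RGB pipeline applies, and decide how (or whether) to normalize the millimetre depth values for your model.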