I want to use IBVS or PBVS to drive a robotic arm to a preset reference position. Can I achieve this with Megapose, or with traditional visual features or deep-learning-based image feature matching? Is a camera extrinsic calibration needed? I am currently using a RealSense D415 camera, and I am not sure what level of position and rotation error this would allow the arm to reach: sub-millimeter or millimeter?
In my scene there is a positional and angular deviation between the current viewpoint and the reference viewpoint.
I don't know much about visual servoing, and I hope to get your reply. Thank you!
Sure, IBVS and PBVS are dedicated to this kind of application. You can use classical 2D visual features (points, lines...) or even 3D visual features extracted from the pose given by Megapose or any other model-based tracking algorithm. You will find a couple of examples on our YouTube channel, or provided by the community.
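For instance, with the current pose cMo estimated by Megapose and the pose cdMo recorded at your reference position, you can build 3D translation and theta-u rotation features. A minimal sketch with the ViSP API (the function name and gain value are mine, to adapt to your setup):

```cpp
#include <visp3/core/vpColVector.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/visual_features/vpFeatureThetaU.h>
#include <visp3/visual_features/vpFeatureTranslation.h>
#include <visp3/vs/vpServo.h>

// PBVS on 3D features extracted from the estimated pose:
// cMo = current object pose (e.g. from Megapose), cdMo = desired (reference) pose.
vpColVector computePbvsVelocity(const vpHomogeneousMatrix &cMo, const vpHomogeneousMatrix &cdMo)
{
  vpHomogeneousMatrix cdMc = cdMo * cMo.inverse(); // transform between desired and current camera frames

  // 3D visual features built from the pose error: translation and theta-u rotation
  vpFeatureTranslation t(vpFeatureTranslation::cdMc);
  vpFeatureThetaU tu(vpFeatureThetaU::cdRc);
  t.buildFrom(cdMc);
  tu.buildFrom(cdMc);

  vpServo task;
  task.setServo(vpServo::EYEINHAND_CAMERA);
  task.setInteractionMatrixType(vpServo::CURRENT);
  task.setLambda(0.5);  // control gain
  task.addFeature(t);   // desired feature defaults to zero: frames aligned
  task.addFeature(tu);

  return task.computeControlLaw(); // 6-dof camera velocity (vx, vy, vz, wx, wy, wz)
}
```

In a real application the vpServo task is created once and computeControlLaw() is called at each iteration of the servo loop, with the features rebuilt from the latest pose estimate.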
You will need the camera extrinsics obtained by calibration (tutorial).
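The eMc transformation from that calibration is what lets you express the camera velocity computed by the servo in the end-effector frame that the robot controller expects, roughly like this (the helper name is mine):

```cpp
#include <visp3/core/vpColVector.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpVelocityTwistMatrix.h>

// eMc: camera frame expressed in the end-effector frame (hand-eye calibration result).
// v_c: 6-dof camera velocity returned by vpServo::computeControlLaw().
vpColVector cameraToEndEffectorVelocity(const vpHomogeneousMatrix &eMc, const vpColVector &v_c)
{
  vpVelocityTwistMatrix eVc(eMc); // 6x6 velocity twist matrix built from the extrinsics
  return eVc * v_c;               // velocity to send to the robot in its end-effector frame
}
```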
The last post here, related to ABB's final trim and assembly solution that automates door assembly, shows that the precision can be sub-millimetric.
Thank you for your reply.
I watched the videos showcased on your YouTube channel; they are very impressive.
I currently have a question: what does "visual features extracted from the pose" mean?
How exactly should this be done?
Is it about obtaining some 3D points of the target object from the pose and then using them for visual servoing? (See my guess in the sketch at the end of this message.)
As I understand it, PBVS relies more on the camera extrinsics, whereas IBVS is less dependent on them.
If the camera extrinsics are inaccurate, will the visual servoing result still be good?
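For instance, is it something like the sketch below, where a few 3D model points are projected with both the current pose and the reference pose to build 2D point features? (This is just my guess at the usage, based on the ViSP documentation; the helper function is hypothetical.)

```cpp
#include <vector>
#include <visp3/core/vpColVector.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpPoint.h>
#include <visp3/visual_features/vpFeaturePoint.h>
#include <visp3/vs/vpServo.h>

// IBVS on 2D point features derived from the estimated pose:
// oP  = 3D points expressed in the object frame,
// cMo = current pose (e.g. from Megapose), cdMo = reference pose.
vpColVector computeIbvsVelocity(const std::vector<vpPoint> &oP,
                                const vpHomogeneousMatrix &cMo,
                                const vpHomogeneousMatrix &cdMo)
{
  vpServo task;
  task.setServo(vpServo::EYEINHAND_CAMERA);
  task.setInteractionMatrixType(vpServo::CURRENT);
  task.setLambda(0.5);

  std::vector<vpFeaturePoint> s(oP.size()), sd(oP.size());
  for (size_t i = 0; i < oP.size(); ++i) {
    vpPoint p = oP[i];
    p.project(cMo);                                  // projection with the current pose
    s[i].buildFrom(p.get_x(), p.get_y(), p.get_Z()); // normalized image coords (x, y) and depth Z

    p = oP[i];
    p.project(cdMo);                                 // projection with the reference pose
    sd[i].buildFrom(p.get_x(), p.get_y(), p.get_Z());

    task.addFeature(s[i], sd[i]); // drive each current point toward its desired location
  }
  return task.computeControlLaw(); // 6-dof camera velocity
}
```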