
Can target tracking or MegaPose pose estimation based on image point/line features be used for visual servoing? #1592

Open
GavinYang5 opened this issue Feb 20, 2025 · 3 comments


@GavinYang5

I want to use IBVS or PBVS to drive a robotic arm to a preset reference position. Can I achieve this with MegaPose, other traditional visual features, or deep-learning-based image feature matching? Are camera extrinsics required? I'm currently using a RealSense D415 camera. I'm also not sure what level of positioning and rotation error this would allow the arm to achieve: sub-millimeter or millimeter?
In my scene, there is a positional and angular deviation between the current view and the reference view.
I don't know much about visual servoing, so I hope to get your reply. Thank you!

@fspindle
Contributor

Sure, IBVS and PBVS are dedicated to this kind of application. You can use classical 2D visual features (points, lines...) or even 3D visual features extracted from the pose given by MegaPose or any other model-based tracking algorithm. You will find a couple of examples on our YouTube channel, or provided by the community.
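For the 2D case, a minimal IBVS sketch along the lines of ViSP's tutorial-ibvs-4pts could look like this (the two poses and the 20 cm square of points are illustrative placeholders; in practice cMo would be updated from your tracker each iteration):

```cpp
// Minimal IBVS sketch with four 2D point features, adapted from ViSP's
// tutorial-ibvs-4pts. Poses and point coordinates are placeholders.
#include <visp3/core/vpColVector.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpMath.h>
#include <visp3/core/vpPoint.h>
#include <visp3/visual_features/vpFeatureBuilder.h>
#include <visp3/visual_features/vpFeaturePoint.h>
#include <visp3/vs/vpServo.h>

int main()
{
  vpHomogeneousMatrix cdMo(0, 0, 0.75, 0, 0, 0);                   // desired camera-object pose
  vpHomogeneousMatrix cMo(0.15, -0.1, 1.0, vpMath::rad(10), 0, 0); // current camera-object pose

  // Four 3D points on the object (a 20 cm square, purely illustrative)
  vpPoint point[4] = { vpPoint(-0.1, -0.1, 0), vpPoint(0.1, -0.1, 0),
                       vpPoint(0.1, 0.1, 0), vpPoint(-0.1, 0.1, 0) };

  vpServo task;
  task.setServo(vpServo::EYEINHAND_CAMERA);
  task.setInteractionMatrixType(vpServo::CURRENT);
  task.setLambda(0.5);

  vpFeaturePoint p[4], pd[4];
  for (unsigned int i = 0; i < 4; i++) {
    point[i].track(cdMo);                      // project point with the desired pose
    vpFeatureBuilder::create(pd[i], point[i]); // desired 2D feature s*
    point[i].track(cMo);                       // project point with the current pose
    vpFeatureBuilder::create(p[i], point[i]);  // current 2D feature s
    task.addFeature(p[i], pd[i]);
  }

  // In the real servo loop: update cMo from the tracker, rebuild p[i], then
  vpColVector v = task.computeControlLaw();    // camera velocity to apply
  (void)v;
  return 0;
}
```

Lines can be handled the same way with vpFeatureLine instead of vpFeaturePoint.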

You will need the camera extrinsics obtained by calibration (tutorial).

The last post here, related to ABB's final trim and assembly solution that automates door assembly, shows that the precision can be sub-millimetric.

@GavinYang5
Author

Thank you for your reply.
I watched the videos showcased on your YouTube channel; they are very impressive.
I currently have a question: what does "visual features extracted from the pose" mean?
How exactly should this be done?
Is it about obtaining some 3D points of the target object from the pose and then using visual servoing?
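For reference, building visual features directly from a pose in ViSP looks roughly like this: rather than extracting 3D points, the pose error itself becomes the feature. A minimal PBVS sketch using vpFeatureTranslation and vpFeatureThetaU (cMo and cdMo are placeholders; cMo would come from MegaPose or another tracker, cdMo from the recorded reference):

```cpp
// Minimal PBVS sketch: the translation and theta-u rotation of
// cdMc = cdMo * cMo^-1 serve as the visual features to be driven to zero.
#include <visp3/core/vpColVector.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/visual_features/vpFeatureThetaU.h>
#include <visp3/visual_features/vpFeatureTranslation.h>
#include <visp3/vs/vpServo.h>

int main()
{
  vpHomogeneousMatrix cMo;                      // current pose from the tracker (placeholder)
  vpHomogeneousMatrix cdMo(0, 0, 0.5, 0, 0, 0); // desired (reference) pose

  // Remaining motion between the desired and current camera frames
  vpHomogeneousMatrix cdMc = cdMo * cMo.inverse();

  vpFeatureTranslation t(vpFeatureTranslation::cdMc);
  vpFeatureThetaU tu(vpFeatureThetaU::cdRc);
  t.buildFrom(cdMc);
  tu.buildFrom(cdMc);

  vpServo task;
  task.setServo(vpServo::EYEINHAND_CAMERA);
  task.setInteractionMatrixType(vpServo::CURRENT);
  task.setLambda(0.5);
  task.addFeature(t);  // drive the translation error to zero
  task.addFeature(tu); // drive the rotation error to zero

  vpColVector v = task.computeControlLaw(); // 6-dof camera velocity
  (void)v;
  return 0;
}
```

In a real loop you would recompute cdMc from each new pose estimate and call buildFrom again before computeControlLaw.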

@GavinYang5
Author

As I understand it, PBVS relies more heavily on the camera extrinsics, whereas IBVS is less dependent on them.
If the camera extrinsics are inaccurate, will the result of visual servoing still be good?
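To make the question concrete, here is a rough sketch of where the extrinsics enter the loop in an eye-in-hand setup (eMc is a placeholder that would come from hand-eye calibration): the servo task outputs a velocity expressed in the camera frame, and eMc is used to re-express it in the end-effector frame. IBVS is generally reported to be fairly robust to coarse calibration, since calibration errors mainly distort the direction of the motion rather than the equilibrium.

```cpp
// Sketch of the role of the hand-eye extrinsics (eMc assumed from calibration;
// identity here as a placeholder).
#include <visp3/core/vpColVector.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpVelocityTwistMatrix.h>

int main()
{
  vpHomogeneousMatrix eMc;        // end-effector to camera transform (placeholder)
  vpVelocityTwistMatrix eVc(eMc); // maps a camera-frame velocity to the end-effector frame

  vpColVector v_c(6);             // camera velocity from vpServo::computeControlLaw()
  vpColVector v_e = eVc * v_c;    // velocity to send to the robot controller
  (void)v_e;
  return 0;
}
```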
