Description
I have been looking at the camera_calibration class a lot lately, and I was wondering what the reason was for choosing a camera distortion model that only incorporates a single degree of distortion.
Is there a good reason tangential distortion is ignored, or was it simply found too computationally intensive, or has there just been no time to work on it so far? Using a single degree of radial distortion seems quite minimalistic too. I am just learning how cameras and camera calibration work, so I'd love a better explanation if you have one. Although the lower orders dominate, there could still be significant improvements here, which would for example minimize the calibration error between two cameras at the middle line (see the sketch below).
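For reference, here is a minimal sketch of what an extended Brown-Conrady style model could look like, with two radial terms and two tangential terms applied to normalized image coordinates. The struct and parameter names (k1, k2, p1, p2, distort) are hypothetical and not taken from the existing camera_calibration class:

```cpp
#include <cmath>

// Hypothetical extended distortion parameters (illustrative only).
struct Distortion {
  double k1, k2;  // radial coefficients
  double p1, p2;  // tangential coefficients
};

// Map an undistorted normalized point (x, y) to its distorted position.
inline void distort(const Distortion& d, double x, double y,
                    double& xd, double& yd) {
  const double r2 = x * x + y * y;
  const double radial = 1.0 + d.k1 * r2 + d.k2 * r2 * r2;
  xd = x * radial + 2.0 * d.p1 * x * y + d.p2 * (r2 + 2.0 * x * x);
  yd = y * radial + d.p1 * (r2 + 2.0 * y * y) + 2.0 * d.p2 * x * y;
}
```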
In particular, since I see that #148 discusses adjusting the distortion model to better support negative distortion, I thought it would be a good idea to also mention this possibility. It would not affect runtime performance significantly, because finding the roots of the distortion function is only necessary during calibration and when visualizing the calibration in the SSL-vision interface.
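Since a higher-order polynomial has no closed-form inverse, undistortion would indeed need a small root-finding step, but as noted that only runs during calibration and visualization. A minimal sketch of a fixed-point iteration for the radial part of the hypothetical model above (ignoring the tangential terms for brevity):

```cpp
// Invert the radial distortion by fixed-point iteration.
// Only needed during calibration / visualization, so the extra cost
// per point should be negligible for normal operation.
inline void undistort_radial(const Distortion& d, double xd, double yd,
                             double& x, double& y, int iterations = 10) {
  x = xd;
  y = yd;
  for (int i = 0; i < iterations; ++i) {
    const double r2 = x * x + y * y;
    const double radial = 1.0 + d.k1 * r2 + d.k2 * r2 * r2;
    x = xd / radial;
    y = yd / radial;
  }
}
```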
You can view this as a 'feature request'.