Currently, we get very good results building maps from visual-inertial information alone. However, when we enable submapping, bundle adjustment never converges. The system uses a triple-camera setup with non-overlapping FOVs. So at the moment, the trajectory with submapping disabled comes out better.
Our LiDAR is a solid-state, front-facing module with roughly a 90° FOV. For the Super8 submapping to work well and produce meaningful factors, does it need to be a 360° LiDAR? I don't see any unusual logs when submapping is enabled, except that BA never converges.
What would be the best way to debug this? I tried tuning se2-lidar.yaml based on the hilti2022 dataset, but without success: not all of the parameters there are easy to understand.
Thanks for doing such amazing work and open-sourcing it!