[Proposal] vision-based localization with vector map #3484
-
Awesome work! I see you use the center line of road markings from the HD map for the association (matching). I wonder if it would be helpful for your algorithm if you also knew the width of those road markings from the HD map. In Lanelet2, I think road markings such as solid lines and dashed lines are normally represented by the center spline without width, so you might not be able to get the width information. Although line types can be specified using tags, e.g., thin_solid, I'm not sure whether these tags have specified widths in meters (or centimeters). On the other hand, I see that the crosswalks in the map you used contain the locations of "stripes". Do you think it would work well if crosswalks were only defined by their boundaries (the outline of the crosswalk without the stripes inside)?
-
Very impressive!
Why not use all of them at the same time? EKF localization gets better as it receives more and more diverse data, no?
-
Here's the latest update. 📣
If you're interested in running Autoware with YabLoc mode, I recommend trying the sample dataset provided in the "Tests performed" section of this PR.
-
Hi @KYabuuchi Yabuuchi-san, really great work!
-
Hi! I have developed YabLoc, a vision-based localization method. Here is a demonstration video.
On behalf of TIER IV, I am pleased to announce a proposal to integrate YabLoc.
Currently, Autoware uses NDT for localization, relying mainly on LiDAR and point cloud maps. YabLoc, on the other hand, requires neither LiDAR nor a point cloud map.
YabLoc extracts road surface markings from camera images and estimates the self-position by matching them with vector maps (Lanelet2). It estimates 3D position (x, y, z) and yaw angle, but not pitch and roll.
YabLoc will offer two initial pose estimation modes. One mode automatically estimates the orientation from the road markings, and the other uses RViz’s 2D Pose Estimate to specify the position.
Except for the automatic initial pose estimation, YabLoc does not use a DNN, so no GPU is required.
Once YabLoc is merged into Autoware, Autoware users will be able to choose whether to use NDT or YabLoc (or Eagleye) for localization. I believe this new localization method will enhance the functionality and versatility of Autoware.
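For readers who would rather script the second initial-pose mode than click in RViz: the "2D Pose Estimate" tool simply publishes a geometry_msgs/PoseWithCovarianceStamped on /initialpose, so the same thing can be done programmatically. The sketch below is only a generic ROS 2 example; whether YabLoc reads /initialpose directly or goes through Autoware's pose initializer is an assumption here, and the pose values are placeholders.

```python
# Minimal sketch: publish an initial pose the same way RViz's "2D Pose Estimate"
# tool does (geometry_msgs/PoseWithCovarianceStamped on /initialpose).
# The position and yaw below are illustrative placeholder values.
import math
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseWithCovarianceStamped


def main():
    rclpy.init()
    node = Node("initial_pose_publisher")
    pub = node.create_publisher(PoseWithCovarianceStamped, "/initialpose", 1)

    msg = PoseWithCovarianceStamped()
    msg.header.frame_id = "map"
    msg.header.stamp = node.get_clock().now().to_msg()
    msg.pose.pose.position.x = 100.0  # map-frame position (placeholder)
    msg.pose.pose.position.y = 200.0
    yaw = math.radians(30.0)          # heading about the z axis (placeholder)
    msg.pose.pose.orientation.z = math.sin(yaw / 2.0)
    msg.pose.pose.orientation.w = math.cos(yaw / 2.0)

    pub.publish(msg)
    rclpy.spin_once(node, timeout_sec=0.5)  # give the message time to go out
    node.destroy_node()
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```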
Method
Input & Output
YabLoc does not need LiDAR or a PCD map; instead, it requires camera images.
It is also important that the Lanelet2 map includes road surface markings.
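As a quick way to sanity-check a map before trying YabLoc, the sketch below counts the `type` tags of the linestrings in a Lanelet2 file using the lanelet2 Python bindings, so you can see whether road-marking elements (stop lines, crosswalk stripes, and so on) are present at all. The UTM origin and the expectation that markings carry a `type` tag are assumptions of this sketch, not part of YabLoc.

```python
# Sketch: list which linestring types a Lanelet2 map contains, to verify that
# road surface markings are actually mapped. Origin coordinates are placeholders.
import collections

import lanelet2
from lanelet2.projection import UtmProjector


def count_linestring_types(osm_path, origin_lat=35.0, origin_lon=139.0):
    projector = UtmProjector(lanelet2.io.Origin(origin_lat, origin_lon))
    lanelet_map = lanelet2.io.load(osm_path, projector)
    counts = collections.Counter()
    for ls in lanelet_map.lineStringLayer:
        # Linestrings without a "type" tag are grouped under "untyped".
        ls_type = ls.attributes["type"] if "type" in ls.attributes else "untyped"
        counts[ls_type] += 1
    return counts


if __name__ == "__main__":
    for ls_type, n in count_linestring_types("lanelet2_map.osm").items():
        print(f"{ls_type}: {n}")
```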
The basic principle of YabLoc
YabLoc uses a particle filter. The basic principle is illustrated in the diagram below. It extracts road markings by masking line segments with the road area obtained from graph-based segmentation. The red lines in the center-top of the diagram represent the line segments identified as road surface markings. YabLoc then transforms these segments according to each particle's pose and determines the particle's weight by comparing them with the cost map generated from Lanelet2.
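To make the weighting step concrete, here is a minimal sketch (not the actual YabLoc implementation) of how a particle's weight could be computed: the detected road-marking segments are transformed by the particle's (x, y, yaw) hypothesis and sampled against a cost map rasterized from the Lanelet2 markings. The cost-map resolution, origin, and sampling step are illustrative assumptions.

```python
# Sketch of particle weighting: project detected road-marking segments with a
# particle's pose hypothesis and accumulate the Lanelet2-derived cost map values
# along them. All parameters (resolution, origin, step) are illustrative.
import numpy as np


def transform_segments(segments_xy, pose):
    """Transform Nx2x2 line segments from the vehicle frame into the map frame
    using a particle pose (x, y, yaw)."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    return segments_xy @ rot.T + np.array([x, y])


def weight_particle(segments_xy, pose, cost_map, resolution, origin, step=0.2):
    """Score one particle by sampling the cost map along its transformed
    road-marking segments (higher mean cost = better agreement with the map)."""
    segs = transform_segments(segments_xy, pose)
    total, count = 0.0, 0
    for p0, p1 in segs:
        length = np.linalg.norm(p1 - p0)
        n = max(int(length / step), 1)
        for t in np.linspace(0.0, 1.0, n):
            px, py = p0 + t * (p1 - p0)
            col = int((px - origin[0]) / resolution)
            row = int((py - origin[1]) / resolution)
            if 0 <= row < cost_map.shape[0] and 0 <= col < cost_map.shape[1]:
                total += cost_map[row, col]
                count += 1
    return total / count if count else 0.0


# Usage: weights = [weight_particle(segments, p, cost_map, 0.1, (0.0, 0.0))
#                   for p in particles]
```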
Localization node diagram when YabLoc is merged
The following diagram shows the node configuration. The blue blocks represent existing Autoware nodes, while the yellow blocks represent YabLoc nodes. YabLoc acts as a pose estimator in place of NDT.
(Some nodes are omitted for clarity.)
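At the interface level, "acting as a pose estimator" roughly means consuming camera images and publishing a pose with covariance for the downstream EKF localizer, as in the skeleton below. The topic names follow common Autoware conventions but are assumptions here, and the actual estimation pipeline is deliberately omitted.

```python
# Skeleton only: shows the pose-estimator interface (image in, pose with
# covariance out). Topic names are assumed, and the estimation itself is omitted.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseWithCovarianceStamped


class CameraPoseEstimator(Node):
    def __init__(self):
        super().__init__("camera_pose_estimator")
        self.pose_pub = self.create_publisher(
            PoseWithCovarianceStamped,
            "/localization/pose_estimator/pose_with_covariance", 10)
        self.image_sub = self.create_subscription(
            Image, "/sensing/camera/camera0/image_raw", self.on_image, 10)

    def on_image(self, msg: Image) -> None:
        pose = PoseWithCovarianceStamped()
        pose.header.stamp = msg.header.stamp
        pose.header.frame_id = "map"
        # Segment extraction, particle update, and weighting would run here;
        # this skeleton publishes an empty pose just to show the data flow.
        self.pose_pub.publish(pose)


def main():
    rclpy.init()
    rclpy.spin(CameraPoseEstimator())


if __name__ == "__main__":
    main()
```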
Limitation
Evaluation Environment
I tested YabLoc on several datasets at TIER IV.
Although I cannot share images or videos for certain reasons, I also verified autonomous driving at low speed on a closed test course.
Autoware Integration Plan
I plan to merge it as follows, following the approach taken for Eagleye:
I plan to create pull requests to autoware.universe, autoware_launch, and autoware soon.
In the future, if it becomes difficult to maintain YabLoc in a separate repository, or if it becomes commonly used, I plan to move YabLoc directly into autoware.universe.
Please let me know if you have an opinion about how to integrate 🙏
Current Status
Thank you for your support, and I look forward to your feedback!