Limo Simulation Activity
The goal of this tutorial is to make you familiar with the LIMO robot simulation and ROS2. The simulation is built for ROS2 Humble and only runs on Ubuntu 22.04.
The nice thing about ROS is that both simulated robots and real robots work in the same way and use the same software packages for the main functions, meaning any code you write for the simulated robots can be used on the real robots.
The simulation runs in a Docker container with Ubuntu, ROS2 and the LIMO robot drivers all pre-installed. After the initial deployment the simulator can be used straight away, and it runs on "any" machine: Windows, Mac or Linux.
- Start Docker: On the lab PC, start up the provided container using the following instructions.
  - Download the docker-compose file.
  - Log into the docker registry.
  - Run the docker file.
  - Open the remote desktop in the browser.
  - Go to https://localhost:6901/ in your browser (Chrome works best). Use the username kasm_user and the password "password" to log in.
- Once you have opened the remote desktop, you should see the container's Ubuntu desktop.
Remember that all of the following steps should be undertaken in the dockerised environment accessible through the browser window or through VS Code once connected to the docker container (not in the local PC terminal!).
- Start the simulator: In the dockerised environment, open the terminal and execute the following command:
ros2 launch limo_gazebosim limo_gazebo_diff.launch.py
This launches the LIMO robot simulation in Gazebo.
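To check that the simulator has started correctly, you can list the active topics from a second terminal (a standard ros2 CLI call; the exact list depends on the simulation):
ros2 topic list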
- Inspection and control: In another terminal window, open rviz2 with the config file src/limo_gazebosim/rviz/urdf.rviz using the command:
rviz2 -d /opt/ros/lcas_addons/src/limo_ros2/src/limo_gazebosim/rviz/urdf.rviz
This opens RViz2, the robot visualiser, which shows the robot's sensor output.
- Inspect the robot's nodes and topics using the ros2 node and ros2 topic commands. When you type a command without any additional arguments, you should see all available options. Display and compare the format of the following topics: /odom, /scan, /tf and /camera/color/image_raw. You might also want to refer to the official node and topic tutorials.
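For example, you can inspect a topic's message type and print a single sample of its data with the standard ros2 topic subcommands (shown here for /odom; the same should work for the other topics):
ros2 topic info /odom
ros2 topic echo --once /odom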
- Now, let us return to the graphical visualiser RViz to look at the robot and its sensor topics. If it is not still running from the earlier step, start it again by typing
rviz2 -d /opt/ros/lcas_addons/src/limo_ros2/src/limo_gazebosim/rviz/urdf.rviz
which uses a pre-defined configuration file; you should see the interface with a robot model and its sensor data displayed. To get familiar with the interface, adjust the laser scan visualisation options and see how they affect the output. Try adding a new visualisation for a sensor not included in the provided configuration (e.g. odometry).
- Teleoperation: leave the simulation running. In a new terminal, start the keyboard teleoperation node with
ros2 run teleop_twist_keyboard teleop_twist_keyboard
and drive the robot around using the keyboard.
- Let's now send some basic robot control commands using ROS topics. The robot's speed can be controlled via the /cmd_vel topic. Use the ros2 topic pub functionality to send a single Twist message (a linear and angular velocity command). As the robot has a differential drive, it only accepts linear X and angular Z values, to drive forward/reverse and turn left/right. Units are metres per second (linear) and radians per second (angular), where pi (3.14) radians == 180 degrees. An example:
ros2 topic pub --once /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.0, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.5}}"
Now, adjust the linear components of the Twist message and see the resulting trajectory.
- Using your knowledge of topic publishing, issue a series of commands that will drive the robot:
- in a circle with a radius of 0.5 m;
- in a 1 m square.
Try using as few commands as possible.
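Hint: for a differential drive, the turning radius is the linear speed divided by the angular speed (r = v / ω). Assuming the simulated robot keeps executing the last command it received, a single message like the following should produce the 0.5 m circle:
ros2 topic pub --once /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.25, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.5}}"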
Make a fork of the LCAS ROB1001 GitHub repository (if you haven't already) and clone your fork to the home directory of the simulation environment. You may need to fetch the latest updates from the LCAS ROB1001 GitHub repository into your fork.
Use the scripts in the ROB1001 repository: they subscribe to different topics, and some publish movement messages to the robot. Run the scripts, see what they do, and try to understand how they work.
Instructions on how to run the Python scripts are here: Run-the-ROB1001-ROS-scripts.
- Use the two scripts move_square.py and displacement.py as examples and create a new Python script which subscribes to the odometry topic and publishes movement commands (cmd_vel) to move the robot in a specific shape and size (e.g. a 2x2 m square or a circle of 2 m radius). A possible starting point is sketched below.
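As a minimal sketch of the structure (not the provided move_square.py, and simplified to a single straight segment of a hypothetical 2 m length), a node that reads /odom and publishes /cmd_vel could look like this; turning by 90 degrees using the odometry orientation and repeating four sides is the actual exercise:
```python
import math

import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist


class OdomMover(Node):
    def __init__(self):
        super().__init__('odom_mover')
        self.start = None            # first odometry position received
        self.target_distance = 2.0   # hypothetical segment length in metres
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.create_subscription(Odometry, '/odom', self.odom_callback, 10)

    def odom_callback(self, msg):
        position = msg.pose.pose.position
        if self.start is None:
            self.start = position
        # distance travelled from the starting position
        travelled = math.hypot(position.x - self.start.x,
                               position.y - self.start.y)
        cmd = Twist()
        # drive forward until the target distance is reached, then stop
        cmd.linear.x = 0.2 if travelled < self.target_distance else 0.0
        self.pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(OdomMover())


if __name__ == '__main__':
    main()
```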
- Use the two scripts move_square.py and lidar_distance.py as examples to create a Python script that subscribes to the lidar scan and/or the depth camera and moves the robot forward in a straight line while avoiding obstacles (e.g. by stopping, waiting until the obstacle has moved, or moving around it). A minimal stop-on-obstacle sketch follows below.
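A sketch of the simplest behaviour, assuming the lidar /scan topic and a hypothetical 0.5 m safety distance (the waiting and moving-around variants build on the same structure):
```python
import math

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist


class ObstacleStopper(Node):
    def __init__(self):
        super().__init__('obstacle_stopper')
        self.safety_distance = 0.5   # hypothetical threshold in metres
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.create_subscription(LaserScan, '/scan', self.scan_callback, 10)

    def scan_callback(self, msg):
        # discard invalid readings (inf/nan) before taking the minimum
        valid = [r for r in msg.ranges if math.isfinite(r)]
        nearest = min(valid) if valid else float('inf')
        cmd = Twist()
        # drive forward only while the nearest obstacle is far enough away
        cmd.linear.x = 0.2 if nearest > self.safety_distance else 0.0
        self.pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(ObstacleStopper())


if __name__ == '__main__':
    main()
```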
Make sure you commit any changes you make and push them to GitHub so you don't lose any work.