This repository builds on Lerobot and the Leju Kuavo robot, providing complete example code for data format conversion (rosbag → parquet), imitation learning (IL) training, simulator testing, and real-robot deployment verification.
- Data format conversion module (rosbag → Lerobot parquet)
- IL model training framework (Diffusion Policy, ACT)
- Mujoco simulation support
- Real-robot verification and deployment
- System: Ubuntu 20.04 recommended (on 22.04 / 24.04, using a Docker container is suggested)
- Python: Python 3.10 recommended
- ROS: ROS Noetic + Kuavo robot ROS patches (installing inside a Docker container is fine)
- Dependencies: Docker, NVIDIA CUDA Toolkit (if GPU acceleration is needed)
Ubuntu 20.04 + NVIDIA CUDA Toolkit + Docker is recommended.
Detailed steps, for reference only:
```bash
sudo apt update
sudo apt upgrade -y
ubuntu-drivers devices
# Tested and verified version is 535; newer versions may also work (do not use the server branch)
sudo apt install nvidia-driver-535
# Reboot the computer
sudo reboot
# Verify driver installation
nvidia-smi
```
To use GPU acceleration inside Docker images, the NVIDIA runtime library must be loaded, so the NVIDIA Container Toolkit needs to be installed:
```bash
sudo apt install curl
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
sudo apt-get install -y nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}
```
```bash
sudo apt update
sudo apt install git
sudo apt install docker.io
# Configure the NVIDIA runtime in Docker
nvidia-ctk --version  # confirm the nvidia-ctk CLI is available
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
sudo docker info | grep -i runtime
# The output should include the "nvidia" runtime
```
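To confirm that containers can actually see the GPU, you can run `nvidia-smi` inside a CUDA base image; the image tag below is only an example, and any recent CUDA base image for Ubuntu 20.04 should work:

```bash
# End-to-end GPU check inside a container (example image tag)
sudo docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu20.04 nvidia-smi
```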
Both Kuavo Mujoco simulation and real-robot operation are based on the ROS Noetic environment. Since the real Kuavo robot runs Ubuntu 20.04 + ROS Noetic (not in Docker), it is recommended to install ROS Noetic directly. If ROS Noetic cannot be installed because of a newer Ubuntu version, Docker can be used.
a. Direct System Installation of ROS Noetic (Recommended)
- Official Guide: ROS Noetic Installation
- Recommended Chinese mirror source: 小鱼ROS
Installation example:
```bash
wget http://fishros.com/install -O fishros && . fishros
# Menu selection: 5 Configure system sources → 2 Change sources and clean third-party sources → 1 Add ROS sources
wget http://fishros.com/install -O fishros && . fishros
# Menu selection: 1 One-click installation → 2 Install without changing sources → Select ROS1 Noetic Desktop
```
Test the ROS installation (run each command in a separate terminal):
```bash
roscore
rosrun turtlesim turtlesim_node
rosrun turtlesim turtle_teleop_key
```
b. Install ROS Noetic Using Docker
- First, it is best to change the Docker registry mirror source:
```bash
sudo vim /etc/docker/daemon.json
```
- Then write some mirror sources into this JSON file:
```json
{
    "registry-mirrors": [
        "https://docker.m.daocloud.io",
        "https://docker.imgdb.de",
        "https://docker-0.unsee.tech",
        "https://docker.hlmirror.com",
        "https://docker.1ms.run",
        "https://func.ink",
        "https://lispy.org",
        "https://docker.xiaogenban1993.com"
    ]
}
```
- Save the file and exit, then restart the Docker service:
```bash
sudo systemctl daemon-reload && sudo systemctl restart docker
```
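You can confirm Docker picked up the mirrors (the exact output format varies slightly across Docker versions):

```bash
# "Registry Mirrors" should list the entries written to daemon.json
sudo docker info | grep -iA 8 "registry mirrors"
```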
- Now create the image. First, create the Dockerfile:
```bash
mkdir /path/to/save/docker/ros/image
cd /path/to/save/docker/ros/image
vim Dockerfile
```
Then write the following content into the Dockerfile:
```dockerfile
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN sed -i 's|http://archive.ubuntu.com/ubuntu/|http://mirrors.tuna.tsinghua.edu.cn/ubuntu/|g' /etc/apt/sources.list && \
    sed -i 's|http://security.ubuntu.com/ubuntu/|http://mirrors.tuna.tsinghua.edu.cn/ubuntu/|g' /etc/apt/sources.list
RUN apt-get update && apt-get install -y locales tzdata gnupg lsb-release
RUN locale-gen en_US.UTF-8
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8
# Set ROS debian sources
RUN sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
# Add ROS keys
RUN apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
# Install ROS Noetic
# Set keyboard layout to Chinese if necessary
RUN apt-get update && \
    apt-get install -y keyboard-configuration apt-utils && \
    echo 'keyboard-configuration keyboard-configuration/layoutcode string cn' | debconf-set-selections && \
    echo 'keyboard-configuration keyboard-configuration/modelcode string pc105' | debconf-set-selections && \
    echo 'keyboard-configuration keyboard-configuration/variant string ' | debconf-set-selections && \
    apt-get install -y ros-noetic-desktop-full && \
    apt-get install -y python3-rosdep python3-rosinstall python3-rosinstall-generator python3-wstool build-essential && \
    rm -rf /var/lib/apt/lists/*
# Initialize rosdep
RUN rosdep init
```
After writing, save and exit. Build the Ubuntu 20.04 + ROS Noetic image:
```bash
sudo docker build -t ubt2004_ros_noetic .
```
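After the build completes, you can confirm the image exists locally:

```bash
# The ubt2004_ros_noetic image should appear in the list
sudo docker images | grep ubt2004_ros_noetic
```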
To start the container from the image for the first time:
```bash
sudo docker run -it --name ubuntu_ros_container ubt2004_ros_noetic /bin/bash
# Or launch with GPU support (recommended)
sudo docker run -it --gpus all --runtime nvidia --name ubuntu_ros_container ubt2004_ros_noetic /bin/bash
# Optional: mount local directory paths, etc.
# sudo docker run -it --gpus all --runtime nvidia --name ubuntu_ros_container -v /path/to/your/code:/root/code ubt2004_ros_noetic /bin/bash
```
For subsequent launches:
```bash
sudo docker start ubuntu_ros_container
sudo docker exec -it ubuntu_ros_container /bin/bash
```
After entering the container, initialize the ROS environment variables, then start roscore:
```bash
source /opt/ros/noetic/setup.bash
roscore
```
If everything is correct, the Docker configuration for Ubuntu 20.04 + ROS Noetic is complete.
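As an optional sanity check inside the container:

```bash
# Should print the ROS distribution name: noetic
rosversion -d
```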
Clone the repository:
```bash
# SSH
git clone --depth=1 git@github.com:LejuRobotics/kuavo_data_challenge.git
# Or HTTPS
git clone --depth=1 https://github.com/LejuRobotics/kuavo_data_challenge.git
```
Update the lerobot submodule under third_party:
```bash
cd kuavo_data_challenge
git submodule init
git submodule update --recursive
```
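Optionally, confirm the submodule checked out correctly:

```bash
# Each submodule line should show a commit hash without a leading "-" or "+"
git submodule status
```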
Use conda (recommended) or Python venv to create a virtual environment (Python 3.10 recommended):
```bash
conda create -n kdc python=3.10
conda activate kdc
```
Or: install Python 3.10 first, then use venv to create the virtual environment:
```bash
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install -y python3.10 python3.10-venv python3.10-dev
python3.10 -m venv kdc
source kdc/bin/activate
```
Check and ensure correct installation:
```bash
python  # Check the Python version; confirm the output is 3.10.x (usually 3.10.18)
# Example output:
# Python 3.10.18 (main, Jun  5 2025, 13:14:17) [GCC 11.2.0] on linux
# Type "help", "copyright", "credits" or "license" for more information.
# >>>
pip --version  # Check the pip version; confirm it reports python 3.10
# Example output: pip 25.1 from /path/to/your/env/python3.10/site-packages/pip (python 3.10)
```
Install dependencies:
```bash
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple  # Recommended: switch the index first to speed up downloads
pip install -r requirements_ilcode.txt  # No ROS Noetic required, but only the kuavo_train imitation-learning code can be used; kuavo_data (data conversion) and kuavo_deploy (deployment) both depend on ROS
# Or
pip install -r requirements_total.txt  # Requires ROS Noetic installed (recommended)
```
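Optionally, verify the environment before training. This assumes PyTorch is pulled in by the requirements files (it is needed for IL training):

```bash
# Prints the torch version and whether CUDA is visible (True/False)
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```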
If you encounter ffmpeg or torchcodec errors when running:
```bash
conda install ffmpeg==6.1.1
# Or
pip uninstall torchcodec
```
Convert Kuavo native rosbag data to the parquet format used by the Lerobot framework:
```bash
python kuavo_data/CvtRosbag2Lerobot.py \
    --config-path=../configs/data/ \
    --config-name=KuavoRosbag2Lerobot.yaml \
    rosbag.rosbag_dir=/path/to/rosbag \
    rosbag.lerobot_dir=/path/to/lerobot_data
```
Description:
- `rosbag.rosbag_dir`: Path to the original rosbag data
- `rosbag.lerobot_dir`: Path where the converted lerobot-parquet data is saved. A subfolder named `lerobot` is usually created in this directory
- `configs/data/KuavoRosbag2Lerobot.yaml`: Review this file to select which cameras to enable and whether to use depth images, as needed
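After conversion, a quick spot-check of the output directory helps catch path mistakes early. The exact layout depends on the lerobot version, but a `lerobot` subfolder should exist under the save path:

```bash
# Spot-check the converted dataset; typical lerobot datasets contain
# data/ (parquet episodes), meta/ (info and stats), and videos/
ls /path/to/lerobot_data/lerobot
```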
Use the converted data for imitation learning training:
```bash
python kuavo_train/train_policy.py \
    --config-path=../configs/policy/ \
    --config-name=diffusion_config.yaml \
    task=your_task_name \
    method=your_method_name \
    root=/path/to/lerobot_data/lerobot \
    training.batch_size=128 \
    policy_name=diffusion
```
Description:
- `task`: Custom task name (preferably matching the task definition used in data conversion), e.g., `pick and place`
- `method`: Custom method name used to distinguish different training runs, e.g., `diffusion_bs128_usedepth_nofuse`, etc.
- `root`: Local path to the training data. Note that it must include `lerobot` and should match the data-conversion save path from step 1: `/path/to/lerobot_data/lerobot`
- `training.batch_size`: Batch size; adjust according to GPU memory
- `policy_name`: Policy to use, for policy instantiation. Currently supports `diffusion` and `act`
- For other parameters, refer to the yaml file documentation. It is recommended to modify the yaml file directly to avoid command-line input errors
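For reference, a run with the ACT policy only swaps the config name and `policy_name`. The config filename `act_config.yaml` below is an assumption; use whichever ACT config actually exists under `configs/policy/`:

```bash
# Hypothetical ACT training run; act_config.yaml is an assumed filename
python kuavo_train/train_policy.py \
    --config-path=../configs/policy/ \
    --config-name=act_config.yaml \
    task=your_task_name \
    method=act_bs64 \
    root=/path/to/lerobot_data/lerobot \
    training.batch_size=64 \
    policy_name=act
```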
After training is complete, you can start the Mujoco simulator and call the deployment code for evaluation:
a. Start the Mujoco simulator: for details, see the readme for the simulator
b. Call the deployment code
- Configuration files are located in `./configs/deploy/`:
  - `kuavo_sim_env.yaml`: Simulator running configuration
  - `kuavo_real_env.yaml`: Real robot running configuration
- Review the yaml file and modify the `# inference configs` related parameters (model loading), etc.
- Start automated inference deployment:
  ```bash
  bash kuavo_deploy/eval_kuavo.sh
  ```
- Follow the prompts; generally, finally select "8. Auto-test model in simulation, execute eval_episodes times:". For details on this operation, see kuavo deploy
Follow the same steps as in step 3, changing the specified configuration file to `kuavo_real_env.yaml` to deploy and test on the real robot.
Simulation Environment:

| Topic Name | Description |
|---|---|
| `/cam_h/color/image_raw/compressed` | Top camera RGB color image |
| `/cam_h/depth/image_raw/compressedDepth` | Top camera depth image |
| `/cam_l/color/image_raw/compressed` | Left camera RGB color image |
| `/cam_l/depth/image_rect_raw/compressedDepth` | Left camera depth image |
| `/cam_r/color/image_raw/compressed` | Right camera RGB color image |
| `/cam_r/depth/image_rect_raw/compressedDepth` | Right camera depth image |
| `/gripper/command` | Simulated rq2f85 gripper control command |
| `/gripper/state` | Simulated rq2f85 gripper current state |
| `/joint_cmd` | Control commands for all joints, including legs |
| `/kuavo_arm_traj` | Robot arm trajectory control |
| `/sensors_data_raw` | Raw data from all sensors |
Real Robot Environment:

| Topic Name | Description |
|---|---|
| `/cam_h/color/image_raw/compressed` | Top camera RGB color image |
| `/cam_h/depth/image_raw/compressedDepth` | Top camera depth image (RealSense) |
| `/cam_l/color/image_raw/compressed` | Left camera RGB color image |
| `/cam_l/depth/image_rect_raw/compressedDepth` | Left camera depth image (RealSense) |
| `/cam_r/color/image_raw/compressed` | Right camera RGB color image |
| `/cam_r/depth/image_rect_raw/compressedDepth` | Right camera depth image (RealSense) |
| `/control_robot_hand_position` | Dexterous hand joint angle control command |
| `/dexhand/state` | Dexterous hand current joint angle state |
| `/leju_claw_state` | Leju claw current joint angle state |
| `/leju_claw_command` | Leju claw joint angle control command |
| `/joint_cmd` | Control commands for all joints, including legs |
| `/kuavo_arm_traj` | Robot arm trajectory control |
| `/sensors_data_raw` | Raw data from all sensors |
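With the simulator or real robot running, standard ROS tooling can verify that these topics are live (run in a terminal with the ROS environment sourced):

```bash
# List the camera topics currently advertised
rostopic list | grep cam
# Print one message from the raw sensor stream to confirm data is flowing
rostopic echo -n 1 /sensors_data_raw
# Check the publish rate of the top RGB camera
rostopic hz /cam_h/color/image_raw/compressed
```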
```
outputs/
├── train/<task>/<method>/run_<timestamp>/   # Training models and parameters
└── eval/<task>/<method>/run_<timestamp>/    # Test logs and videos
```
```
KUAVO_DATA_CHALLENGE/
├── configs/                 # Configuration files
├── kuavo_data/              # Data processing and conversion module
├── kuavo_deploy/            # Deployment scripts (simulator/real robot)
├── kuavo_train/             # Imitation learning training code
├── lerobot_patches/         # Lerobot runtime patches
├── outputs/                 # Models and results
├── third_party/             # Lerobot dependencies
├── requirements_xxx.txt     # Dependency lists
└── README.md                # Documentation
```
This directory contains compatibility patches for Lerobot, with main features including:
- Extend `FeatureType` to support RGB and Depth images
- Customize `compute_episode_stats` and `create_stats_buffers` for statistical calculations (min, max, mean, std, etc.) over image and depth data
- Modify `dataset_to_policy_features` to ensure correct mapping of Kuavo RGB + Depth `FeatureType`
If you need Lerobot-based customizations such as depth data, new FeatureTypes, or normalization methods, you can add them yourself. When using them, import the patches at the very beginning of the entry script (such as kuavo_train/train_policy.py and other training entry scripts):

```python
import lerobot_patches.custom_patches  # Ensure custom patches are applied, DON'T REMOVE THIS LINE!
```

This project is extended from Lerobot. Thanks to the HuggingFace team for developing the open-source robot learning framework, which provides an important foundation for this project.