Nicosoh/Hybrid_Learning_MPC

Simulator - Controller for Model Predictive Control

Overview

This repository provides a modular framework for simulating and controlling robotic systems (e.g., cartpole, pendulum, manipulators) using Model Predictive Control (MPC) with MuJoCo as the simulator and acados as the controller. Below are some example simulations.

Block diagram overview of the framework. The simulator and controller are two distinct blocks, each independent of the other, so the simulator can be replaced by an actual robot. The simulator uses MuJoCo while the controller uses acados. Pinocchio provides efficient rigid-body dynamics algorithms, which are converted to CasADi symbolic expressions to integrate with acados. PyTorch is used as the deep learning framework, and its models are likewise converted into CasADi symbolic expressions via L4CasADi. Finally, if inverse kinematics is required, PINK is used. Dotted lines signify optional connections.
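
The separation described above can be sketched in a few lines of plain Python: the controller only receives a state and returns a control, so the simulator block can be swapped for a real robot without touching the controller. All class and method names below are illustrative stand-ins (a double integrator instead of MuJoCo, a PD law instead of the acados MPC), not the repository's actual API.

```python
class DoubleIntegratorSim:
    """Stand-in for the MuJoCo simulator block (state = [pos, vel])."""
    def __init__(self, dt=0.01):
        self.dt = dt
        self.state = [1.0, 0.0]  # start displaced from the origin

    def step(self, u):
        pos, vel = self.state
        vel += u * self.dt          # acceleration = control input
        pos += vel * self.dt
        self.state = [pos, vel]
        return self.state


class PDController:
    """Stand-in for the acados controller block: maps a state to a control."""
    def __init__(self, kp=10.0, kd=4.0):
        self.kp, self.kd = kp, kd

    def compute_control(self, state):
        pos, vel = state
        return -self.kp * pos - self.kd * vel


# The loop only couples the two blocks through states and controls,
# which is what makes the simulator replaceable by hardware.
sim = DoubleIntegratorSim()
ctrl = PDController()
for _ in range(1000):
    u = ctrl.compute_control(sim.state)
    state = sim.step(u)

print(state)  # the state is driven toward [0, 0]
```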

System Architecture

System architecture diagram showing module interactions, configuration inputs, and data flow between the simulator, controller, neural-network components, and robot-model utilities. Dotted lines signify optional connections.

Installation & Usage

The codebase uses pixi to manage the environment and run all tasks. This guide assumes pixi is already installed; if not, see the pixi installation instructions.

  1. Clone the repository

    git clone git@github.com:Nicosoh/mpc_MuJoCo.git
    
  2. Enter the project directory and install dependencies

    cd mpc_MuJoCo
    pixi install
    
  3. Build and install ACADOS from source

    pixi run acados_full
    
  4. Verify ACADOS installation

    pixi run minimal_example
    
  5. Run the Cartpole model

    pixi run main_cartpole
    

    or

    pixi shell
    python main.py cartpole
    

Results are saved in the data folder: a video, the configuration used, and a graph. If render is set to false, the video is not recorded.
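
As a rough sketch, the render flag would live in the run configuration along the following lines. All key names here are hypothetical; consult the repository's actual config files for the real schema.

```yaml
# Hypothetical excerpt of a run configuration -- key names are
# illustrative, not the repository's actual schema.
simulation:
  robot_model: cartpole
  render: false        # set to true to record a video
  duration: 10.0       # seconds
output:
  data_dir: data       # videos, configs, and graphs are written here
```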

Data Collection

To collect data for a robot model:

pixi run data_collector [robot_model]

And to visualize the data:

pixi run data_viz [robot_model] [folder_name]

For example with the pendulum:

pixi run data_collector pendulum
pixi run data_viz pendulum 2025-12-02_17-59-3e5_pendulum_data_collection

Plots are saved in [folder_name]/plots.

Training a Neural Network

To train a model, the dataset and the DL model need to be defined in the train config.

pixi run train_model [train_config]

To evaluate the model, the test dataset and the model weights need to be defined in the test config.

pixi run evaluate_model [test_config]
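
A train config of this kind typically pairs a dataset location with a model definition and training hyperparameters. The sketch below is entirely hypothetical; the repository's own config files define the real keys.

```yaml
# Hypothetical train config -- all keys and paths are illustrative,
# not the repository's actual schema.
dataset:
  path: data/my_pendulum_dataset   # hypothetical dataset folder
model:
  type: mlp
  hidden_layers: [64, 64]
training:
  epochs: 100
  batch_size: 256
  learning_rate: 1.0e-3
  checkpoint_dir: checkpoints/pendulum_mlp   # weights referenced by the test config
```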

Configurations

Each module has its own configuration file.

About

Test bed for learning-based MPC that supports multiple robot models.
