<h1 align="center"> MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts </h1>

<p align="center">
  <a href="https://openreview.net/forum?id=lsQnneYa8p"><img src="https://img.shields.io/static/v1?label=OpenReview&message=Forum&color=green&style=flat-square" alt="Paper"></a> <a href=""><img alt="Venue" src="https://img.shields.io/static/v1?label=ICML'24&message=Vienna&color=9cf&style=flat-square"></a> <a href="https://github.com/RoyalSkye/Routing-MVMoE/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/static/v1?label=License&message=MIT&color=orange&style=flat-square"></a>
</p>

The PyTorch implementation of *ICML 2024 Poster -- [MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts](https://openreview.net/forum?id=lsQnneYa8p)*. MVMoE is a unified neural solver that copes with 16 VRP variants simultaneously, handling 10 of them in a zero-shot manner. Concretely, the training tasks include `CVRP`, `OVRP`, `VRPB`, `VRPL`, `VRPTW`, and `OVRPTW`. The test tasks include `OVRPB`, `OVRPL`, `VRPBL`, `VRPBTW`, `VRPLTW`, `OVRPBL`, `OVRPBTW`, `OVRPLTW`, `VRPBLTW`, and `OVRPBLTW`.

* ☺️ *We will attend ICML 2024 in person. You are welcome to stop by our poster for a discussion.*

<p align="center"><img src="./assets/mvmoe.png" width=98%></p>

## Dependencies

* Python >= 3.8
* PyTorch >= 1.12
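* Optionally, the snippet below (an illustrative sketch, not part of the repository) checks that the installed versions meet these requirements:

```python
# Optional environment sanity check (illustrative sketch, not part of the repository).
import sys
import torch

assert sys.version_info >= (3, 8), "Python >= 3.8 is required"
assert tuple(map(int, torch.__version__.split("+")[0].split(".")[:2])) >= (1, 12), "PyTorch >= 1.12 is required"
print(f"Python {sys.version.split()[0]} | PyTorch {torch.__version__} | CUDA available: {torch.cuda.is_available()}")
```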
## How to Run

<details>
    <summary><strong>Train</strong></summary>

```shell
# Default: --problem_size=100 --pomo_size=100 --gpu_id=0
# 0. POMO
python train.py --problem={PROBLEM} --model_type=SINGLE

# 1. POMO-MTL
python train.py --problem=Train_ALL --model_type=MTL

# 2. MVMoE/4E
python train.py --problem=Train_ALL --model_type=MOE --num_experts=4 --routing_level=node --routing_method=input_choice

# 3. MVMoE/4E-L
python train.py --problem=Train_ALL --model_type=MOE_LIGHT --num_experts=4 --routing_level=node --routing_method=input_choice
```

</details>
<details>
    <summary><strong>Evaluation</strong></summary>

```shell
# 0. POMO
python test.py --problem={PROBLEM} --model_type=SINGLE --checkpoint={MODEL_PATH}

# 1. POMO-MTL
python test.py --problem=ALL --model_type=MTL --checkpoint={MODEL_PATH}

# 2. MVMoE/4E
python test.py --problem=ALL --model_type=MOE --num_experts=4 --routing_level=node --routing_method=input_choice --checkpoint={MODEL_PATH}

# 3. MVMoE/4E-L
python test.py --problem=ALL --model_type=MOE_LIGHT --num_experts=4 --routing_level=node --routing_method=input_choice --checkpoint={MODEL_PATH}

# 4. Evaluation on CVRPLIB
python test.py --problem=CVRP --model_type={MODEL_TYPE} --checkpoint={MODEL_PATH} --test_set_path=../data/CVRP-LIB
```

</details>
<details>
    <summary><strong>Baseline</strong></summary>

```shell
# 0. LKH3 - supports ["CVRP", "OVRP", "VRPL", "VRPTW"]
python LKH_baseline.py --problem={PROBLEM} --datasets={DATASET_PATH} -n=1000 --cpus=32 -runs=1 -max_trials=10000

# 1. HGS - supports ["CVRP", "VRPTW"]
python HGS_baseline.py --problem={PROBLEM} --datasets={DATASET_PATH} -n=1000 --cpus=32 -max_iteration=20000

# 2. OR-Tools - supports all 16 VRP variants
python OR-Tools_baseline.py --problem={PROBLEM} --datasets={DATASET_PATH} -n=1000 --cpus=32 -timelimit=20
```

</details>

## How to Customize MoE

MoEs can easily be used in Transformer-based models by replacing a Linear/MLP layer with an MoE layer that keeps the same input and output dimensions as the original layer. Below, we provide two examples of how to customize MoEs.
```python
# 0. Our implementation based on https://github.com/davidmrau/mixture-of-experts
# Supported routing levels: node/instance/problem
# Supported routing methods: input_choice/expert_choice/soft_moe/random (only for node/instance routing levels)
from MOELayer import MoE
moe_layer = MoE(input_size={INPUT_DIM}, output_size={OUTPUT_DIM}, hidden_size={HIDDEN_DIM},
                num_experts={NUM_EXPERTS}, k=2, T=1.0, noisy_gating=True,
                routing_level="node", routing_method="input_choice", moe_model="MLP")

# 1. tutel - https://github.com/microsoft/tutel
import torch
from tutel import moe as tutel_moe
moe_layer = tutel_moe.moe_layer(
    gate_type={'type': 'top', 'k': 2},
    model_dim={INPUT_DIM},
    experts={'type': 'ffn', 'count_per_node': {NUM_EXPERTS},
             'hidden_size_per_expert': {HIDDEN_DIM},
             'activation_fn': lambda x: torch.nn.functional.relu(x)},
    )
```
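For orientation, the sketch below shows where such an MoE layer typically sits inside a Transformer-style encoder block and how it is called. It is only illustrative: the encoder structure, the dimensions, and the assumption that the layer returns `(output, aux_loss)` (as in the referenced davidmrau implementation) are ours and may differ from the actual model code in this repository.

```python
import torch
import torch.nn as nn
from MOELayer import MoE  # repo module, constructed as in example 0 above


class EncoderLayerWithMoE(nn.Module):
    """Transformer-style encoder layer whose feed-forward sublayer is an MoE (illustrative)."""

    def __init__(self, embed_dim=128, n_heads=8, hidden_dim=512, num_experts=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(embed_dim)
        self.norm2 = nn.LayerNorm(embed_dim)
        # The MoE keeps the same input/output dimensions as the MLP it replaces.
        self.moe = MoE(input_size=embed_dim, output_size=embed_dim, hidden_size=hidden_dim,
                       num_experts=num_experts, k=2, T=1.0, noisy_gating=True,
                       routing_level="node", routing_method="input_choice", moe_model="MLP")

    def forward(self, x):  # x: (batch, num_nodes, embed_dim)
        h, _ = self.attn(x, x, x)
        x = self.norm1(x + h)
        # Assumption: the MoE layer returns (output, aux_loss), where aux_loss is the
        # load-balancing term to be added to the training objective.
        moe_out, aux_loss = self.moe(x)
        return self.norm2(x + moe_out), aux_loss


# Usage sketch: 64 instances, 100 node embeddings each.
layer = EncoderLayerWithMoE()
out, aux_loss = layer(torch.randn(64, 100, 128))
```

In short, the MoE acts as a drop-in replacement for a dense layer with the same input/output dimensionality; the rest of the model is left unchanged.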
## Citation

```tex
@inproceedings{zhou2024mvmoe,
  title={MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts},
  author={Jianan Zhou and Zhiguang Cao and Yaoxin Wu and Wen Song and Yining Ma and Jie Zhang and Chi Xu},
  booktitle={International Conference on Machine Learning},
  year={2024}
}
```
## Acknowledgments

* [ICML 2024 Review](https://github.com/RoyalSkye/Routing-MVMoE/blob/main/assets/Reviews_ICML24.md)
* https://github.com/yd-kwon/POMO
* https://github.com/davidmrau/mixture-of-experts