Awesome-LLM-Robotics

This repo contains a curated list of papers using Large Language/Multi-Modal Models for Robotics/RL. Template adapted from awesome-Implicit-NeRF-Robotics.

Please feel free to send pull requests or email me to add papers!

If you find this repository useful, please consider citing and starring this list. Feel free to share it with others!
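When submitting a pull request, new entries follow the same one-line format as the rest of the list. A minimal sketch (the paper name, venue, date, and all URLs below are placeholders, not real references):

```markdown
* **ShortName**: "Full Paper Title", arXiv, May 2023. [[Paper](https://arxiv.org/abs/xxxx.xxxxx)] [[Pytorch Code](https://github.com/user/repo)] [[Website](https://example.com)]
```

Please add new papers under the most relevant section heading, keeping each section in reverse chronological order.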


Overview


Reasoning

  • Instruct2Act: "Mapping Multi-modality Instructions to Robotic Actions with Large Language Model", arXiv, May 2023. [Paper] [Pytorch Code]

  • TidyBot: "Personalized Robot Assistance with Large Language Models", arXiv, May 2023. [Paper] [Pytorch Code] [Website]

  • PaLM-E: "PaLM-E: An Embodied Multimodal Language Model", arXiv, Mar 2023. [Paper] [Website]

  • RT-1: "RT-1: Robotics Transformer for Real-World Control at Scale", arXiv, Dec 2022. [Paper] [GitHub] [Website]

  • ProgPrompt: "Generating Situated Robot Task Plans using Large Language Models", arXiv, Sept 2022. [Paper] [GitHub] [Website]

  • Code-As-Policies: "Code as Policies: Language Model Programs for Embodied Control", arXiv, Sept 2022. [Paper] [Colab] [Website]

  • Say-Can: "Do As I Can, Not As I Say: Grounding Language in Robotic Affordances", arXiv, Apr 2022. [Paper] [Colab] [Website]

  • Socratic: "Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language", arXiv, Apr 2022. [Paper] [Pytorch Code] [Website]

  • PIGLeT: "PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World", ACL, Jun 2021. [Paper] [Pytorch Code] [Website]


Planning

  • LLM+P: "LLM+P: Empowering Large Language Models with Optimal Planning Proficiency", arXiv, Apr 2023. [Paper] [Code]

  • "Foundation Models for Decision Making: Problems, Methods, and Opportunities", arXiv, Mar 2023. [Paper]

  • PromptCraft: "ChatGPT for Robotics: Design Principles and Model Abilities", Blog, Feb 2023. [Paper] [Website]

  • Text2Motion: "Text2Motion: From Natural Language Instructions to Feasible Plans", arXiv, Mar 2023. [Paper] [Website]

  • ChatGPT-Prompts: "ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application", arXiv, Apr 2023. [Paper] [Code/Prompts]

  • LM-Nav: "Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action", arXiv, July 2022. [Paper] [Pytorch Code] [Website]

  • InnerMonologue: "Inner Monologue: Embodied Reasoning through Planning with Language Models", arXiv, July 2022. [Paper] [Website]

  • Housekeep: "Housekeep: Tidying Virtual Households using Commonsense Reasoning", arXiv, May 2022. [Paper] [Pytorch Code] [Website]

  • LID: "Pre-Trained Language Models for Interactive Decision-Making", arXiv, Feb 2022. [Paper] [Pytorch Code] [Website]

  • ZSP: "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents", ICML, Jan 2022. [Paper] [Pytorch Code] [Website]


Manipulation

  • ProgramPort: "Programmatically Grounded, Compositionally Generalizable Robotic Manipulation", ICLR, Apr 2023. [Paper] [Website](https://progport.github.io/)

  • CoTPC: "Chain-of-Thought Predictive Control", arXiv, Apr 2023. [Paper] [Code]

  • DIAL: "Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models", arXiv, Nov 2022. [Paper] [Website]

  • CLIP-Fields: "CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory", arXiv, Oct 2022. [Paper] [PyTorch Code] [Website]

  • VIMA: "VIMA: General Robot Manipulation with Multimodal Prompts", arXiv, Oct 2022. [Paper] [Pytorch Code] [Website]

  • Perceiver-Actor:"A Multi-Task Transformer for Robotic Manipulation", CoRL, Sep 2022. [Paper] [Pytorch Code] [Website]

  • LaTTe: "LaTTe: Language Trajectory TransformEr", arXiv, Aug 2022. [Paper] [TensorFlow Code] [Website]

  • Robots Enact Malignant Stereotypes: "Robots Enact Malignant Stereotypes", FAccT, Jun 2022. [Paper] [Pytorch Code] [Website] [Washington Post] [Wired] (code access on request)

  • ATLA: "Leveraging Language for Accelerated Learning of Tool Manipulation", CoRL, Jun 2022. [Paper]

  • ZeST: "Can Foundation Models Perform Zero-Shot Task Specification For Robot Manipulation?", L4DC, Apr 2022. [Paper]

  • LSE-NGU: "Semantic Exploration from Language Abstractions and Pretrained Representations", arXiv, Apr 2022. [Paper]

  • Embodied-CLIP: "Simple but Effective: CLIP Embeddings for Embodied AI", CVPR, Nov 2021. [Paper] [Pytorch Code]

  • CLIPort: "CLIPort: What and Where Pathways for Robotic Manipulation", CoRL, Sept 2021. [Paper] [Pytorch Code] [Website]


Instructions and Navigation

  • ADAPT: "ADAPT: Vision-Language Navigation with Modality-Aligned Action Prompts", CVPR, May 2022. [Paper]

  • "The Unsurprising Effectiveness of Pre-Trained Vision Models for Control", ICML, Mar 2022. [Paper] [Pytorch Code] [Website]

  • CoW: "CLIP on Wheels: Zero-Shot Object Navigation as Object Localization and Exploration", arXiv, Mar 2022. [Paper]

  • Recurrent VLN-BERT: "A Recurrent Vision-and-Language BERT for Navigation", CVPR, Jun 2021. [Paper] [Pytorch Code]

  • VLN-BERT: "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web", ECCV, Apr 2020. [Paper] [Pytorch Code]

  • "Interactive Language: Talking to Robots in Real Time", arXiv, Oct 2022. [Paper] [Website]


Simulation Frameworks

  • MineDojo: "MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge", arXiv, Jun 2022. [Paper] [Code] [Website] [Open Database]
  • Habitat 2.0: "Habitat 2.0: Training Home Assistants to Rearrange their Habitat", NeurIPS, Dec 2021. [Paper] [Code] [Website]
  • BEHAVIOR: "BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments", CoRL, Nov 2021. [Paper] [Code] [Website]
  • iGibson 1.0: "iGibson 1.0: a Simulation Environment for Interactive Tasks in Large Realistic Scenes", IROS, Sep 2021. [Paper] [Code] [Website]
  • ALFRED: "ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks", CVPR, Jun 2020. [Paper] [Code] [Website]
  • BabyAI: "BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning", ICLR, May 2019. [Paper] [Code]

Citation

If you find this repository useful, please consider citing this list:

@misc{kira2022llmroboticspaperslist,
    title = {Awesome-LLM-Robotics},
    author = {Zsolt Kira},
    howpublished = {\url{https://github.com/GT-RIPL/Awesome-LLM-Robotics}},
    year = {2022},
}