From f48b4480451faddd9b05b32887abe040c9d4c241 Mon Sep 17 00:00:00 2001
From: Jiafei Duan <51585075+jiafei1224@users.noreply.github.com>
Date: Wed, 2 Oct 2024 16:07:11 -0700
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index fcdae9b..01c8164 100755
--- a/README.md
+++ b/README.md
@@ -101,7 +101,7 @@ If you find this repository useful, please consider [citing](#citation) and STAR

 ---
 ## Manipulation
-* ** Manipulate-Anything**: "Manipulate-Anything: Automating Real-World Robots using Vision-Language Models", *CoRL, Nov 2024*. [[Paper](https://robot-ma.github.io/MA_paper.pdf)] [[Code](https://robot-ma.github.io/)] [[Website]([link](https://robot-ma.github.io/))]
+* **Manipulate-Anything**: "Manipulate-Anything: Automating Real-World Robots using Vision-Language Models", *CoRL, Nov 2024*. [[Paper](https://robot-ma.github.io/MA_paper.pdf)] [[Code](https://robot-ma.github.io/)] [[Website](https://robot-ma.github.io/)]
 * **Plan-Seq-Learn**:"Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks", *ICLR, May 2024*. [[Paper](https://arxiv.org/pdf/2405.01534)], [[PyTorch Code](https://github.com/mihdalal/planseqlearn)] [[Website](https://mihdalal.github.io/planseqlearn/)]
 * **ManipVQA**:"ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models", *arXiv, Mar 2024*, [[Paper](https://arxiv.org/abs/2403.11289)] [[PyTorch Code](https://github.com/SiyuanHuang95/ManipVQA)]
 * **BOSS**: "Bootstrap Your Own Skills: Learning to Solve New Tasks with LLM Guidance", *CoRL, Nov 2023*. [[Paper](https://openreview.net/forum?id=a0mFRgadGO)] [[Website](https://clvrai.github.io/boss/)]