From e0bed915b506e7bcbf29a53287ad8c5c049e61a1 Mon Sep 17 00:00:00 2001
From: Bernard Tan <30761156+thkkk@users.noreply.github.com>
Date: Thu, 17 Aug 2023 09:34:20 +0800
Subject: [PATCH 1/2] add paper: Language to Rewards for Robotic Skill
 Synthesis

---
 README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README.md b/README.md
index 52eef19..22b3b05 100755
--- a/README.md
+++ b/README.md
@@ -67,6 +67,9 @@ If you find this repository useful, please consider [citing](#citation) and STAR
 * "Collaborating with language models for embodied reasoning", *NeurIPS, Feb 2022*. [[Paper](https://arxiv.org/abs/2302.00763v1)]
 * **LLM-Brain**: "LLM as A Robotic Brain: Unifying Egocentric Memory and Control", arXiv, Apr 2023. [[Paper](https://arxiv.org/abs/2304.09349v1)]
 * **Co-LLM-Agents**: "Building Cooperative Embodied Agents Modularly with Large Language Models", *arXiv, Jul 2023*. [[Paper](https://arxiv.org/abs/2307.02485)] [[Code](https://github.com/UMass-Foundation-Model/Co-LLM-Agents)] [[Website](https://vis-www.cs.umass.edu/Co-LLM-Agents/)]
+* **LLM-Reward**: "Language to Rewards for Robotic Skill Synthesis
+", *arXiv, Jun 2023*. [[Paper](https://language-to-reward.github.io/assets/l2r.pdf)] [[Website](https://language-to-reward.github.io/)]
+
 ---
 
 ## Manipulation

From c9b7a04313e9cc194e030f62a9ed5e1c27c68e82 Mon Sep 17 00:00:00 2001
From: Bernard Tan <30761156+thkkk@users.noreply.github.com>
Date: Thu, 17 Aug 2023 09:37:27 +0800
Subject: [PATCH 2/2] format

---
 README.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 22b3b05..5347edb 100755
--- a/README.md
+++ b/README.md
@@ -67,8 +67,7 @@ If you find this repository useful, please consider [citing](#citation) and STAR
 * "Collaborating with language models for embodied reasoning", *NeurIPS, Feb 2022*. [[Paper](https://arxiv.org/abs/2302.00763v1)]
 * **LLM-Brain**: "LLM as A Robotic Brain: Unifying Egocentric Memory and Control", arXiv, Apr 2023. [[Paper](https://arxiv.org/abs/2304.09349v1)]
 * **Co-LLM-Agents**: "Building Cooperative Embodied Agents Modularly with Large Language Models", *arXiv, Jul 2023*. [[Paper](https://arxiv.org/abs/2307.02485)] [[Code](https://github.com/UMass-Foundation-Model/Co-LLM-Agents)] [[Website](https://vis-www.cs.umass.edu/Co-LLM-Agents/)]
-* **LLM-Reward**: "Language to Rewards for Robotic Skill Synthesis
-", *arXiv, Jun 2023*. [[Paper](https://language-to-reward.github.io/assets/l2r.pdf)] [[Website](https://language-to-reward.github.io/)]
+* **LLM-Reward**: "Language to Rewards for Robotic Skill Synthesis", *arXiv, Jun 2023*. [[Paper](https://language-to-reward.github.io/assets/l2r.pdf)] [[Website](https://language-to-reward.github.io/)]
 
 ---
 
@@ -87,7 +86,7 @@ If you find this repository useful, please consider [citing](#citation) and STAR
 * **ATLA**: "Leveraging Language for Accelerated Learning of Tool Manipulation", *CoRL, Jun 2022*. [[Paper](https://arxiv.org/abs/2206.13074)]
 * **ZeST**: "Can Foundation Models Perform Zero-Shot Task Specification For Robot Manipulation?", *L4DC, Apr 2022*. [[Paper](https://arxiv.org/abs/2204.11134)]
 * **LSE-NGU**: "Semantic Exploration from Language Abstractions and Pretrained Representations", *arXiv, Apr 2022*. [[Paper](https://arxiv.org/abs/2204.05080)]
- * **Embodied-CLIP**: "Simple but Effective: CLIP Embeddings for Embodied AI ", *CVPR, Nov 2021*. [[Paper](https://arxiv.org/abs/2111.09888)] [[Pytorch Code](https://github.com/allenai/embodied-clip)]
+ * **Embodied-CLIP**: "Simple but Effective: CLIP Embeddings for Embodied AI", *CVPR, Nov 2021*. [[Paper](https://arxiv.org/abs/2111.09888)] [[Pytorch Code](https://github.com/allenai/embodied-clip)]
 * **CLIPort**: "CLIPort: What and Where Pathways for Robotic Manipulation", *CoRL, Sept 2021*. [[Paper](https://arxiv.org/abs/2109.12098)] [[Pytorch Code](https://github.com/cliport/cliport)] [[Website](https://cliport.github.io/)]
 * **TIP**: "Multimodal Procedural Planning via Dual Text-Image Prompting", *arXiV, May 2023*, [[Paper](https://arxiv.org/abs/2305.01795)]
 * **VLaMP**: "Pretrained Language Models as Visual Planners for Human Assistance", *arXiV, Apr 2023*, [[Paper](https://arxiv.org/abs/2304.09179)]