From 4ad3acc025489ed74febeae8ab8d2f9b8aee3b37 Mon Sep 17 00:00:00 2001
From: Andrew Hundt
Date: Mon, 29 Jul 2024 19:02:18 -0400
Subject: [PATCH] Update README.md, add 3 papers on safety and risks of
 LLM-powered robots to list.

Added 3 papers to the list, all focusing on safety, risks, and ethical
concerns regarding the deployment of LLMs and VLMs in robotics. The papers
cover topics like discrimination, violence, and unlawful actions that
LLM-driven robots could potentially enact under red-teamed prompting, as
well as highlighting broader safety concerns.
---
 README.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/README.md b/README.md
index 1be1cc9..1dff8d0 100755
--- a/README.md
+++ b/README.md
@@ -164,6 +164,14 @@ If you find this repository useful, please consider [citing](#citation) and STAR
 * **ALFRED**: "ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks", *CVPR, Jun 2020*. [[Paper](https://arxiv.org/abs/1912.01734)] [[Code](https://github.com/askforalfred/alfred)] [[Website](https://askforalfred.com/)]
 * **BabyAI**: "BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning", *ICLR, May 2019*. [[Paper](https://openreview.net/pdf?id=rJeXCo0cYX)] [[Code](https://github.com/mila-iqia/babyai/tree/iclr19)]
 
+---
+## Safety, Risks, Red Teaming, and Adversarial Testing
+
+* **LLM-Driven Robots Risk Enacting Discrimination, Violence, and Unlawful Actions**: *arXiv, Jun 2024*. [[Paper](https://arxiv.org/abs/2406.08824)]
+
+* **Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics**: *arXiv, Feb 2024*. [[Paper](https://arxiv.org/abs/2402.10340)]
+
+* **Robots Enact Malignant Stereotypes**: *FAccT, Jun 2022*. [[arXiv](https://arxiv.org/abs/2207.11569)] [[DOI](https://doi.org/10.1145/3531146.3533138)] [[Code](https://github.com/ahundt/RobotsEnactMalignantStereotypes)] [[Website](https://sites.google.com/view/robots-enact-stereotypes/home)]
 
 ----
 ## Citation