Update README.md: add 3 papers on safety and risks of LLM-powered robots to the list.
Added 3 papers to the list, all focusing on safety, risks, and ethical concerns regarding the deployment of LLMs and VLMs in robotics. The papers cover topics such as the discrimination, violence, and unlawful actions that LLM-driven robots could potentially enact under red-teaming prompts, as well as broader safety concerns.
This commit is contained in:
parent
21e2a84f16
commit
4ad3acc025
@@ -164,6 +164,14 @@ If you find this repository useful, please consider [citing](#citation) and STAR
* **ALFRED**: "ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks", *CVPR, Jun 2020*. [[Paper](https://arxiv.org/abs/1912.01734)] [[Code](https://github.com/askforalfred/alfred)] [[Website](https://askforalfred.com/)]
* **BabyAI**: "BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning", *ICLR, May 2019*. [[Paper](https://openreview.net/pdf?id=rJeXCo0cYX)] [[Code](https://github.com/mila-iqia/babyai/tree/iclr19)]
---
## Safety, Risks, Red Teaming, and Adversarial Testing
* **LLM-Driven Robots Risk Enacting Discrimination, Violence, and Unlawful Actions**: *arXiv, Jun 2024*. [[Paper](https://arxiv.org/abs/2406.08824)]
* **Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics**: *arXiv, Feb 2024*. [[Paper](https://arxiv.org/abs/2402.10340)]
* **Robots Enact Malignant Stereotypes**: *FAccT, Jun 2022*. [[arXiv](https://arxiv.org/abs/2207.11569)] [[DOI](https://doi.org/10.1145/3531146.3533138)] [[Code](https://github.com/ahundt/RobotsEnactMalignantStereotypes)] [[Website](https://sites.google.com/view/robots-enact-stereotypes/home)]
---
## Citation