
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="media/lerobot-logo-thumbnail.png">
<source media="(prefers-color-scheme: light)" srcset="media/lerobot-logo-thumbnail.png">
<img alt="LeRobot, Hugging Face Robotics Library" src="media/lerobot-logo-thumbnail.png" style="max-width: 100%;">
</picture>
<br/>
<br/>
</p>
<div align="center">

[![Tests](https://github.com/huggingface/lerobot/actions/workflows/nightly-tests.yml/badge.svg?branch=main)](https://github.com/huggingface/lerobot/actions/workflows/nightly-tests.yml?query=branch%3Amain)
[![Coverage](https://codecov.io/gh/huggingface/lerobot/branch/main/graph/badge.svg?token=TODO)](https://codecov.io/gh/huggingface/lerobot)
[![Python versions](https://img.shields.io/pypi/pyversions/lerobot)](https://www.python.org/downloads/)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/huggingface/lerobot/blob/main/LICENSE)
[![Status](https://img.shields.io/pypi/status/lerobot)](https://pypi.org/project/lerobot/)
[![Version](https://img.shields.io/pypi/v/lerobot)](https://pypi.org/project/lerobot/)
[![Examples](https://img.shields.io/badge/Examples-green.svg)](https://github.com/huggingface/lerobot/tree/main/examples)
[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-v2.1%20adopted-ff69b4.svg)](https://github.com/huggingface/lerobot/blob/main/CODE_OF_CONDUCT.md)
[![Discord](https://dcbadge.vercel.app/api/server/C5P34WJ68S?style=flat)](https://discord.gg/s3KuuzsPFb)

</div>
<h3 align="center">
<p>State-of-the-art Machine Learning for real-world robotics</p>
</h3>

---

🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier to entry to robotics so that everyone can contribute to and benefit from shared datasets and pretrained models.

🤗 LeRobot contains state-of-the-art approaches that have been shown to transfer to the real world, with a focus on imitation learning and reinforcement learning.

🤗 LeRobot already provides a set of pretrained models, datasets with human-collected demonstrations, and simulated environments so that everyone can get started. In the coming weeks, the plan is to add more and more support for real-world robotics on the most affordable and capable robots available.

🤗 LeRobot hosts pretrained models and datasets on this Hugging Face community page: [huggingface.co/lerobot](https://huggingface.co/lerobot)
#### Examples of pretrained models and environments
<table>
<tr>
<td><img src="http://remicadene.com/assets/gif/aloha_act.gif" width="100%" alt="ACT policy on ALOHA env"/></td>
<td><img src="http://remicadene.com/assets/gif/simxarm_tdmpc.gif" width="100%" alt="TDMPC policy on SimXArm env"/></td>
<td><img src="http://remicadene.com/assets/gif/pusht_diffusion.gif" width="100%" alt="Diffusion policy on PushT env"/></td>
</tr>
<tr>
<td align="center">ACT policy on ALOHA env</td>
<td align="center">TDMPC policy on SimXArm env</td>
<td align="center">Diffusion policy on PushT env</td>
</tr>
</table>

### Acknowledgment
- ACT policy and ALOHA environment are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha/)
- Diffusion policy and PushT environment are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu/)
- TDMPC policy and SimXArm environment are adapted from [FOWM](https://www.yunhaifeng.com/FOWM/)
- Abstractions and utilities for Reinforcement Learning come from [TorchRL](https://github.com/pytorch/rl)

## Installation
Download our source code:
```bash
git clone https://github.com/huggingface/lerobot.git && cd lerobot
```
Create a virtual environment with Python 3.10 and activate it, e.g. with [`miniconda`](https://docs.anaconda.com/free/miniconda/index.html):
```bash
conda create -y -n lerobot python=3.10 && conda activate lerobot
```
Install 🤗 LeRobot:
```bash
pip install .
```
For simulations, 🤗 LeRobot comes with gymnasium environments that can be installed as extras:
- [aloha](https://github.com/huggingface/gym-aloha)
- [xarm](https://github.com/huggingface/gym-xarm)
- [pusht](https://github.com/huggingface/gym-pusht)

For instance, to install 🤗 LeRobot with aloha and pusht, use:
```bash
pip install ".[aloha, pusht]"
```
To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
```bash
wandb login
```
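Then enable logging when launching a training run. A hedged example, assuming the default Hydra configs expose a `wandb.enable` flag (check `lerobot/configs/default.yaml` for the exact option name):
```bash
python lerobot/scripts/train.py wandb.enable=true
```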
## Walkthrough
```
.
├── examples        # contains demonstration examples, start here to learn about LeRobot
├── lerobot
|   ├── configs     # contains hydra yaml files with all options that you can override in the command line
|   |   ├── default.yaml  # selected by default, it loads pusht environment and diffusion policy
|   |   ├── env           # various sim environments and their datasets: aloha.yaml, pusht.yaml, xarm.yaml
|   |   └── policy        # various policies: act.yaml, diffusion.yaml, tdmpc.yaml
|   ├── common      # contains classes and utilities
|   |   ├── datasets      # various datasets of human demonstrations: aloha, pusht, xarm
|   |   ├── envs          # various sim environments: aloha, pusht, xarm
|   |   ├── policies      # various policies: act, diffusion, tdmpc
|   |   └── utils         # various utilities
|   └── scripts     # contains functions to execute via command line
|       ├── eval.py                 # load policy and evaluate it on an environment
|       ├── train.py                # train a policy via imitation learning and/or reinforcement learning
|       ├── push_dataset_to_hub.py  # convert your dataset into LeRobot dataset format and upload it to the Hugging Face hub
|       └── visualize_dataset.py    # load a dataset and render its demonstrations
├── outputs         # contains results of scripts execution: logs, videos, model checkpoints
└── tests           # contains pytest utilities for continuous integration
```
### Visualize datasets
You can easily visualize episodes from a dataset by executing our script from the command line:
```bash
python lerobot/scripts/visualize_dataset.py \
    --repo-id lerobot/pusht \
    --episode-index 0
```
Check out [example 1](./examples/1_load_lerobot_dataset.py) to learn how you can import and use our dataset class and download the data from the Hugging Face hub.
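For a quick feel of the Python API, here is a minimal sketch of loading a dataset (hedged: the import path and attribute access follow example 1, which remains the authoritative reference):
```python
import torch

from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Downloads from the Hugging Face hub on first use, then loads from the local cache.
dataset = LeRobotDataset("lerobot/pusht")
print(len(dataset))  # total number of frames across all episodes
print(dataset[0])    # a dict of tensors: camera images, robot state, action, ...

# LeRobotDataset is a regular PyTorch dataset, so it plugs straight into a DataLoader.
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
batch = next(iter(dataloader))
```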
### Evaluate a pretrained policy
Check out [example 2](./examples/2_evaluate_pretrained_policy.py) to see how you can load a pretrained policy from Hugging Face hub, load up the corresponding environment and model, and run an evaluation.
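In Python, the core of it looks like this sketch (hedged: the policy classes expose `from_pretrained` through the Hugging Face hub mixin, and the observation keys and shapes below are assumptions for PushT; example 2 has the full preprocessing and rollout loop):
```python
import torch

from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy

# Download the pretrained weights from the hub and load them into the policy.
policy = DiffusionPolicy.from_pretrained("lerobot/diffusion_pusht")
policy.eval()
policy.reset()  # clears internal observation/action queues before a new rollout

# A dummy observation batch; in practice, build it from env.reset()/env.step()
# as example 2 does (keys and shapes here are assumptions for PushT).
observation = {
    "observation.image": torch.rand(1, 3, 96, 96),
    "observation.state": torch.rand(1, 2),
}
with torch.inference_mode():
    action = policy.select_action(observation)
```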
Or you can achieve the same result by executing our script from the command line:
```bash
python lerobot/scripts/eval.py \
    -p lerobot/diffusion_pusht \
    eval.n_episodes=10 \
    eval.batch_size=10
```
After training your own policy, you can also re-evaluate the checkpoints with:
```bash
python lerobot/scripts/eval.py \
    -p PATH/TO/TRAIN/OUTPUT/FOLDER
```
See `python lerobot/scripts/eval.py --help` for more instructions.
### Train your own policy
Check out [example 3](./examples/3_train_policy.py) to see how you can start training a model on a dataset, which will be automatically downloaded if needed.
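Condensed, the Python API for training looks roughly like the sketch below (hedged: the `delta_timestamps` values mirror example 3's defaults for the diffusion policy at 10 fps, and the script remains the authoritative reference):
```python
import torch

from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
from lerobot.common.policies.diffusion.configuration_diffusion import DiffusionConfig
from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy

# Ask the dataset for a 2-step observation history and a 16-step action horizon,
# matching the diffusion policy defaults (values assumed from example 3, at 10 fps).
delta_timestamps = {
    "observation.image": [-0.1, 0.0],
    "observation.state": [-0.1, 0.0],
    "action": [-0.1] + [t / 10 for t in range(15)],
}
dataset = LeRobotDataset("lerobot/pusht", delta_timestamps=delta_timestamps)

# Normalization statistics come from the dataset itself.
policy = DiffusionPolicy(DiffusionConfig(), dataset_stats=dataset.stats)
policy.train()

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)

for batch in dataloader:
    loss = policy.forward(batch)["loss"]  # policies return a dict holding the loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```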
In general, you can use our training script to easily train any policy on its environment:
```bash
# TODO(aliberts): not working
python lerobot/scripts/train.py \
    env=aloha \
    task=sim_insertion \
    repo_id=lerobot/aloha_sim_insertion_scripted \
    policy=act \
    hydra.run.dir=outputs/train/aloha_act
```
After training, you may want to revisit model evaluation to change the evaluation settings. During training, every checkpoint is already evaluated, but only on a small number of episodes for efficiency. Check out the [examples](./examples) to see how to evaluate any model checkpoint on more episodes and increase statistical significance.
## Contribute
2024-04-16 16:33:58 +08:00
If you would like to contribute to 🤗 LeRobot, please check out our [contribution guide](https://github.com/huggingface/lerobot/blob/main/CONTRIBUTING.md).
### Add a new dataset
To add a dataset to the hub, begin by logging in with a token that has write access, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```
Then, push your dataset to the hub using the following command:
```bash
python lerobot/scripts/push_dataset_to_hub.py \
    --data-dir data \
    --dataset-id pusht \
    --raw-format pusht_zarr \
    --community-id lerobot \
    --revision v1.3 \
    --dry-run 0 \
    --save-to-disk 0 \
    --save-tests-to-disk 0 \
    --debug 0
```
For detailed explanations of the arguments, consult the help command:
```bash
python lerobot/scripts/push_dataset_to_hub.py --help
```
We currently support the following raw formats:
```
pusht_zarr | umi_zarr | aloha_hdf5 | xarm_pkl
```
For the `revision` parameter, set the version to match `CODEBASE_VERSION` using:
```bash
python -c "from lerobot.common.datasets.lerobot_dataset import CODEBASE_VERSION; print(CODEBASE_VERSION)"
```
If you need to update the unit tests, set `--save-tests-to-disk` to 1 to generate a mock dataset:
```bash
python lerobot/scripts/push_dataset_to_hub.py \
    --data-dir data \
    --dataset-id pusht \
    --raw-format pusht_zarr \
    --community-id lerobot \
    --revision v1.3 \
    --dry-run 0 \
    --save-to-disk 0 \
    --save-tests-to-disk 1 \
    --debug 0
```
The mock dataset will be located in `tests/data/$COMMUNITY_ID/$DATASET_ID/`, which can be used to update the unit tests.
To implement a new raw format, create a file in `lerobot/common/datasets/push_dataset_to_hub/{raw_format}_format.py` and implement the functions: `check_format`, `load_from_raw`, and `to_hf_dataset`. Combine these functions in `from_raw_to_lerobot_format`. You can find examples here: [pusht_zarr](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/pusht_zarr_format.py), [umi_zarr](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/umi_zarr_format.py), [aloha_hdf5](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/aloha_hdf5_format.py), and [xarm_pkl](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/xarm_pkl_format.py). Then, add the new format to [`get_from_raw_to_lerobot_format_fn`](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/push_dataset_to_hub.py#L69) in [`lerobot/scripts/push_dataset_to_hub.py`](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/push_dataset_to_hub.py). Et voilà! You are now ready to use this new format in [`push_dataset_to_hub.py`](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/push_dataset_to_hub.py) and can submit a PR to add it 🤗.
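As a rough, hedged skeleton of such a file (function bodies and exact signatures vary by codebase version; copy them from one of the linked examples rather than from here):
```python
# Hypothetical lerobot/common/datasets/push_dataset_to_hub/my_format_format.py
from pathlib import Path

def check_format(raw_dir: Path) -> None:
    # Raise or assert if raw_dir does not contain the files this raw format expects.
    assert any(raw_dir.iterdir()), f"{raw_dir} is empty"

def load_from_raw(raw_dir: Path, fps: int) -> dict:
    # Read the raw files and return per-frame data (images, states, actions),
    # plus episode boundaries and timestamps derived from `fps`.
    raise NotImplementedError

def to_hf_dataset(data_dict: dict):
    # Pack the per-frame data into a Hugging Face dataset with LeRobot's features.
    raise NotImplementedError

def from_raw_to_lerobot_format(raw_dir: Path, fps: int = 10):
    # Entry point called by push_dataset_to_hub.py: glue the three steps together.
    check_format(raw_dir)
    data_dict = load_from_raw(raw_dir, fps)
    return to_hf_dataset(data_dict)
```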
### Add a pretrained policy
```python
# TODO(rcadene, alexander-soare): rewrite this section
```
Once you have trained a policy, you may upload it to the Hugging Face hub.
Firstly, make sure you have a model repository set up on the hub. The hub ID looks like `HF_USER/REPO_NAME`.

Secondly, assuming you have trained a policy, you need the following files, which should all be in any of the subdirectories of `checkpoints` in your training output folder if you used the LeRobot training script (see the sketch after this list):
- `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
- `model.safetensors`: The `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
- `config.yaml`: This is the consolidated Hydra training configuration containing the policy, environment, and dataset configs. The policy configuration should match `config.json` exactly. The environment config is useful for anyone who wants to evaluate your policy. The dataset config just serves as a paper trail for reproducibility.
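For instance, a checkpoint directory might look like this (the step number is hypothetical):
```
PATH/TO/TRAIN/OUTPUT/FOLDER/checkpoints/005000/
├── config.json        # policy dataclass config
├── config.yaml        # consolidated Hydra training config
└── model.safetensors  # torch.nn.Module parameters
```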
To upload these to the hub, run the following with a desired revision ID:
```bash
huggingface-cli upload $HUB_ID PATH/TO/OUTPUT/DIR --revision $REVISION_ID
```
If you want this to be the default revision, also run the following (don't worry, it won't upload the files again; it will just adjust the file pointers):
```bash
huggingface-cli upload $HUB_ID PATH/TO/OUTPUT/DIR
```
See `eval.py` for an example of how a user may use your policy.
### Improve your code with profiling
Here is an example code snippet that profiles the evaluation of a policy:
```python
import torch
from torch.profiler import ProfilerActivity, profile, record_function

def trace_handler(prof):
    # Write a Chrome trace per profiling cycle (assumes a tmp/ directory exists).
    prof.export_chrome_trace(f"tmp/trace_schedule_{prof.step_num}.json")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=torch.profiler.schedule(
        wait=2,
        warmup=2,
        active=3,
    ),
    on_trace_ready=trace_handler,
) as prof:
    with record_function("eval_policy"):
        for i in range(num_episodes):
            prof.step()
            # insert code to profile, potentially the whole body of the eval_policy function
```
```bash
python lerobot/scripts/eval.py \
    --config outputs/pusht/.hydra/config.yaml \
    pretrained_model_path=outputs/pusht/model.pt \
    eval_episodes=7
```