# LeRobot

## Installation

Create a virtual environment with Python 3.10, e.g. using `conda`:

```
conda create -y -n lerobot python=3.10
conda activate lerobot
```

[Install `poetry`](https://python-poetry.org/docs/#installation) (if you don't have it already):

```
curl -sSL https://install.python-poetry.org | python -
```

Install the dependencies:

```
poetry install
```

If you encounter a disk space error, change your temporary directory to a location with enough free space, e.g.:

```
mkdir -p ~/tmp
export TMPDIR="$HOME/tmp"  # tilde does not expand inside quotes, so use $HOME
```

To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with

```
wandb login
```

## Usage

### Train

```
python lerobot/scripts/train.py \
hydra.job.name=pusht \
env=pusht
```

### Visualize offline buffer

```
python lerobot/scripts/visualize_dataset.py \
hydra.run.dir=tmp/$(date +"%Y_%m_%d") \
env=pusht
```

### Visualize online buffer / Eval

```
python lerobot/scripts/eval.py \
hydra.run.dir=tmp/$(date +"%Y_%m_%d") \
env=pusht
```

|
## TODO

If you are not sure how to contribute or want to know which features we are working on next, see this project page: [LeRobot TODO](https://github.com/users/Cadene/projects/1)

Ask [Remi Cadene](mailto:re.cadene@gmail.com) for access if needed.

## Profile

**Example**

```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity

def trace_handler(prof):
    prof.export_chrome_trace(f"tmp/trace_schedule_{prof.step_num}.json")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=torch.profiler.schedule(
        wait=2,
        warmup=2,
        active=3,
    ),
    on_trace_ready=trace_handler,
) as prof:
    with record_function("eval_policy"):
        for i in range(num_episodes):
            # run one evaluation episode here, then advance the profiler schedule
            prof.step()
```
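`export_chrome_trace` writes JSON in the Chrome trace-event format (a top-level `traceEvents` array). A quick way to sanity-check a trace without opening `chrome://tracing` is to parse it directly; this is a minimal sketch, and `summarize_trace` is an illustrative helper, not part of the repo:

```python
import json

def summarize_trace(trace: dict) -> dict:
    """Count trace events per phase ("ph") in a Chrome trace-event dict."""
    counts = {}
    for event in trace.get("traceEvents", []):
        ph = event.get("ph", "?")
        counts[ph] = counts.get(ph, 0) + 1
    return counts

# Tiny in-memory example; for a real run, load the JSON exported by
# trace_handler, e.g. json.load(open("tmp/trace_schedule_4.json")).
trace = {"traceEvents": [
    {"ph": "X", "name": "aten::add"},
    {"ph": "X", "name": "aten::mul"},
    {"ph": "M", "name": "process_name"},
]}
print(summarize_trace(trace))  # {'X': 2, 'M': 1}
```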

```bash
python lerobot/scripts/eval.py \
pretrained_model_path=/home/rcadene/code/fowm/logs/xarm_lift/all/default/2/models/final.pt \
eval_episodes=7
```

## Contribute

**Style**

```
# install if needed
pre-commit install
# apply style and linter checks before git commit
pre-commit run -a
```

**Adding dependencies (temporary)**

Right now, for the CI to work, whenever a new dependency is added it must also be added to the cpu env, e.g.:

```
# Run in this directory; adds the package to the main env with cuda
poetry add some-package

# Adds the same package to the cpu env
cd .github/poetry/cpu && poetry add some-package
```

**Tests**

Install [git lfs](https://git-lfs.com/) to retrieve test artifacts (if you don't have it already).

On Mac:

```
brew install git-lfs
git lfs install
```

On Ubuntu:

```
sudo apt-get install git-lfs
git lfs install
```

Pull the artifacts if they're not already in [tests/data](tests/data):

```
git lfs pull
```

When adding a new dataset, mock it with:

```
python tests/scripts/mock_dataset.py --in-data-dir data/$DATASET --out-data-dir tests/data/$DATASET
```
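The mocking script's details live in the repo, but the general idea (an assumption, not the actual implementation of `tests/scripts/mock_dataset.py`, which may truncate episodes or downsample instead) is to keep only a small subset of the dataset files so the test artifacts stay lightweight:

```python
import shutil
from pathlib import Path

def mock_dataset(in_data_dir: str, out_data_dir: str, max_files: int = 2) -> list:
    """Copy at most `max_files` files from in_data_dir to out_data_dir.

    Illustrative sketch of the mocking idea only, not repo code.
    """
    out = Path(out_data_dir)
    out.mkdir(parents=True, exist_ok=True)
    copied = []
    for path in sorted(Path(in_data_dir).iterdir())[:max_files]:
        if path.is_file():
            shutil.copy(path, out / path.name)
            copied.append(path.name)
    return copied
```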

Run the tests:

```
DATA_DIR="tests/data" pytest -sx tests
```

**Datasets**

To add a PyTorch RL dataset to the hub, first log in using a token generated from [huggingface settings](https://huggingface.co/settings/tokens) with write access:

```
huggingface-cli login --token $HUGGINGFACE_TOKEN --add-to-git-credential
```

Then you can upload it to the hub with:

```
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli upload $HF_USER/$DATASET data/$DATASET \
--repo-type dataset \
--revision v1.0
```

For instance, for [cadene/pusht](https://huggingface.co/datasets/cadene/pusht), we used:

```
HF_USER=cadene
DATASET=pusht
```

If you want to improve an existing dataset, you can download it locally with:

```
mkdir -p data/$DATASET
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download $HF_USER/$DATASET \
--repo-type dataset \
--local-dir data/$DATASET \
--local-dir-use-symlinks=False \
--revision v1.0
```

Iterate on your code and dataset with:

```
DATA_DIR=data python lerobot/scripts/train.py
```

Then upload a new version:

```
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli upload $HF_USER/$DATASET data/$DATASET \
--repo-type dataset \
--revision v1.1 \
--delete "*"
```

|
And you might want to mock the dataset if you need to update the unit tests as well:

```
python tests/scripts/mock_dataset.py --in-data-dir data/$DATASET --out-data-dir tests/data/$DATASET
```

## Acknowledgment

- Our Diffusion policy and Pusht environment are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu/)
- Our TDMPC policy and Simxarm environment are adapted from [FOWM](https://www.yunhaifeng.com/FOWM/)
- Our ACT policy and ALOHA environment are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha/)