# LeRobot
## Installation
Create a virtual environment with Python 3.10, e.g. using `conda`:
```
conda create -y -n lerobot python=3.10
conda activate lerobot
```
[Install `poetry`](https://python-poetry.org/docs/#installation) (if you don't have it already)
```
curl -sSL https://install.python-poetry.org | python -
```
Install dependencies
```
poetry install
```
If you encounter a disk space error, try changing your temporary directory to a location with enough free space, e.g.
```
mkdir ~/tmp
export TMPDIR="$HOME/tmp"
```
To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
```
wandb login
```
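Once logged in, you can turn on W&B logging for a training run with a Hydra override. The exact key depends on the config files in `lerobot/configs`; the `wandb.enable` flag below is an assumption, so check the default config for the actual name:
```
# hypothetical override; verify the wandb key in lerobot/configs before using
python lerobot/scripts/train.py \
   hydra.job.name=pusht \
   env=pusht \
   wandb.enable=true
```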
## Usage
### Train
```
python lerobot/scripts/train.py \
hydra.job.name=pusht \
env=pusht
```
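Hydra overrides can also set where outputs are written, mirroring the `hydra.run.dir` pattern used in the visualization and eval examples below; the directory name here is only an illustration:
```
python lerobot/scripts/train.py \
   hydra.job.name=pusht \
   hydra.run.dir=outputs/train/pusht_$(date +"%Y_%m_%d") \
   env=pusht
```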
### Visualize offline buffer
```
python lerobot/scripts/visualize_dataset.py \
hydra.run.dir=tmp/$(date +"%Y_%m_%d") \
env=pusht
```
### Visualize online buffer / Eval
```
python lerobot/scripts/eval.py \
hydra.run.dir=tmp/$(date +"%Y_%m_%d") \
env=pusht
```
## TODO
If you are not sure how to contribute or want to know the next features we are working on, take a look at this project page: [LeRobot TODO](https://github.com/users/Cadene/projects/1)
Ask [Remi Cadene](mailto:re.cadene@gmail.com) for access if needed.
## Profile
**Example**
```python
import torch
from torch.profiler import ProfilerActivity, profile, record_function


def trace_handler(prof):
    # export a Chrome trace for each completed profiling cycle (wait + warmup + active)
    prof.export_chrome_trace(f"tmp/trace_schedule_{prof.step_num}.json")


with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=torch.profiler.schedule(
        wait=2,
        warmup=2,
        active=3,
    ),
    on_trace_ready=trace_handler,
) as prof:
    with record_function("eval_policy"):
        for i in range(num_episodes):  # num_episodes comes from the surrounding eval code
            prof.step()
            # insert the code to profile here, e.g. the body of the eval loop
```
```bash
python lerobot/scripts/eval.py \
pretrained_model_path=/home/rcadene/code/fowm/logs/xarm_lift/all/default/2/models/final.pt \
eval_episodes=7
```
## Contribute
**Style**
```
# install if needed
pre-commit install
# apply style and linter checks before git commit
pre-commit run -a
```
**Adding dependencies (temporary)**
Right now, for the CI to work, whenever a new dependency is added it also needs to be added to the CPU environment, e.g.:
```
# Run in this directory; adds the package to the main env with CUDA
poetry add some-package
# Adds the same package to the cpu env
cd .github/poetry/cpu && poetry add some-package
```
**Tests**
Install [git lfs](https://git-lfs.com/) to retrieve test artifacts (if you don't have it already).
On Mac:
```
brew install git-lfs
git lfs install
```
On Ubuntu:
```
sudo apt-get install git-lfs
git lfs install
```
Pull artifacts if they're not in [tests/data](tests/data)
```
git lfs pull
```
When adding a new dataset, mock it with
```
python tests/scripts/mock_dataset.py --in-data-dir data/$DATASET --out-data-dir tests/data/$DATASET
```
Run tests
```
DATA_DIR="tests/data" pytest -sx tests
```
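While iterating, you can also run a subset of the tests by pointing pytest at a single file or filtering by name; the module below is only an example, so check the `tests/` directory for actual file names:
```
DATA_DIR="tests/data" pytest -sx tests/test_envs.py -k pusht
```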
**Datasets**
To add a PyTorch RL dataset to the hub, first log in with a token that has write access, generated from your [Hugging Face settings](https://huggingface.co/settings/tokens):
```
huggingface-cli login --token $HUGGINGFACE_TOKEN --add-to-git-credential
```
Then you can upload it to the hub with:
```
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli upload $HF_USER/$DATASET data/$DATASET \
--repo-type dataset \
--revision v1.0
```
For instance, for [cadene/pusht](https://huggingface.co/datasets/cadene/pusht), we used:
```
HF_USER=cadene
DATASET=pusht
```
If you want to improve an existing dataset, you can download it locally with:
```
mkdir -p data/$DATASET
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download $HF_USER/$DATASET \
--repo-type dataset \
--local-dir data/$DATASET \
--local-dir-use-symlinks=False \
--revision v1.0
```
Iterate on your code and dataset with the command below; `DATA_DIR=data` points the scripts at your local `data/` directory:
```
DATA_DIR=data python lerobot/scripts/train.py
```
Then upload a new version (v2.0 if the changes are significant, otherwise v1.1):
```
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli upload $HF_USER/$DATASET data/$DATASET \
--repo-type dataset \
--revision v1.1 \
--delete "*"
```
If the unit tests need to be updated as well, re-mock the dataset:
```
python tests/scripts/mock_dataset.py --in-data-dir data/$DATASET --out-data-dir tests/data/$DATASET
```
## Acknowledgment
- Our Diffusion policy and Pusht environment are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu/)
- Our TDMPC policy and Simxarm environment are adapted from [FOWM](https://www.yunhaifeng.com/FOWM/)
- Our ACT policy and ALOHA environment are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha/)