# LeRobot

## Installation

Create a virtual environment with Python 3.10, e.g. using `conda`:
```
conda create -y -n lerobot python=3.10
conda activate lerobot
```

[Install `poetry`](https://python-poetry.org/docs/#installation) (if you don't have it already):
```
curl -sSL https://install.python-poetry.org | python -
```

Install dependencies:
```
poetry install
```
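
Once that finishes, a quick sanity check is to import the package from the environment Poetry manages (assuming the project installs under the `lerobot` package name, as the script paths below suggest):

```bash
# Should print "ok" if the package and its dependencies resolved correctly.
poetry run python -c "import lerobot; print('ok')"
```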

If you encounter a disk space error, try changing your temporary directory to a location with enough disk space, e.g.
```
mkdir ~/tmp
export TMPDIR="$HOME/tmp"
```

To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
```
wandb login
```

## Usage

### Train

```
python lerobot/scripts/train.py \
    hydra.job.name=pusht \
    env=pusht
```
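
Any other value in the Hydra config can be overridden from the command line in the same way. As a sketch, here is a run that also pins the output directory (the same `hydra.run.dir` pattern used by the scripts below) and turns on Weights and Biases logging; `wandb.enable` is an assumed config key, so check the project's Hydra configs for the exact name:

```bash
# `wandb.enable` is an assumption; verify the key in the Hydra config before relying on it.
python lerobot/scripts/train.py \
    hydra.run.dir=tmp/$(date +"%Y_%m_%d") \
    hydra.job.name=pusht \
    env=pusht \
    wandb.enable=true
```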

### Visualize offline buffer

```
python lerobot/scripts/visualize_dataset.py \
    hydra.run.dir=tmp/$(date +"%Y_%m_%d") \
    env=pusht
```

### Visualize online buffer / Eval

```
python lerobot/scripts/eval.py \
    hydra.run.dir=tmp/$(date +"%Y_%m_%d") \
    env=pusht
```
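
To evaluate a saved checkpoint rather than a freshly initialized policy, `eval.py` also accepts the `pretrained_model_path` and `eval_episodes` overrides used in the Profile section below, e.g.

```bash
# Substitute the path to one of your own checkpoints.
python lerobot/scripts/eval.py \
    pretrained_model_path=path/to/final.pt \
    eval_episodes=10
```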

## TODO

- [x] priority update doesn't match FOWM or the original paper
- [x] self.step=100000 should be updated at every step to adjust to the planner's horizon
- [ ] prefetch replay buffer to speed up training
- [ ] parallelize env to speed up eval
- [ ] clean checkpointing / loading
- [ ] clean logging
- [ ] clean config
- [ ] clean hyperparameter tuning
- [ ] add pusht
- [ ] add aloha
- [ ] add act
- [ ] add diffusion
- [ ] add aloha 2

## Profile

**Example**
```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity


def trace_handler(prof):
    prof.export_chrome_trace(f"tmp/trace_schedule_{prof.step_num}.json")


with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=torch.profiler.schedule(
        wait=2,
        warmup=2,
        active=3,
    ),
    on_trace_ready=trace_handler,
) as prof:
    with record_function("eval_policy"):
        for i in range(num_episodes):  # num_episodes comes from the surrounding eval code
            # run one evaluation episode here, then advance the profiler schedule
            prof.step()
```

For example, running the evaluation script once the profiler is in place:

```bash
python lerobot/scripts/eval.py \
    pretrained_model_path=/home/rcadene/code/fowm/logs/xarm_lift/all/default/2/models/final.pt \
    eval_episodes=7
```

## Contribute

**Style**
```
# install if needed
pre-commit install
# apply style and linter checks before git commit
pre-commit run -a
```

**Tests**

Install [git lfs](https://git-lfs.com/) to retrieve test artifacts (if you don't have it already).

On Mac:
```
brew install git-lfs
git lfs install
```

On Ubuntu:
```
sudo apt-get install git-lfs
git lfs install
```

Pull artifacts if they're not in [tests/data](tests/data):
```
git lfs pull
```

When adding a new dataset, mock it with:
```
python tests/scripts/mock_dataset.py --in-data-dir data/<dataset_id> --out-data-dir tests/data/<dataset_id>
```
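
For instance, with a dataset directory named `pusht` (a purely illustrative `dataset_id`; use whatever directory name sits under `data/`):

```bash
# `pusht` is only an example dataset_id.
python tests/scripts/mock_dataset.py --in-data-dir data/pusht --out-data-dir tests/data/pusht
```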

Run the tests:
```
DATA_DIR="tests/data" pytest -sx tests
```
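
pytest's usual selection flags work here too; for example, to run only a subset of tests while iterating (the `-k` expression is illustrative, adjust it to the actual test names under `tests/`):

```bash
# Run only the tests whose names match the keyword expression.
DATA_DIR="tests/data" pytest -sx tests -k "dataset"
```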

## Acknowledgements

- Our Diffusion policy and Pusht environment are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu/)
- Our TDMPC policy and Simxarm environment are adapted from [FOWM](https://www.yunhaifeng.com/FOWM/)
- Our ACT policy and ALOHA environment are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha/)