🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning

Installation

Create a virtual environment with Python 3.10, e.g. using conda:

conda create -y -n lerobot python=3.10
conda activate lerobot

Install poetry (if you don't have it already)

curl -sSL https://install.python-poetry.org | python -

Install dependencies

poetry install

If you encounter a disk space error, try changing your temporary directory (TMPDIR) to a location with enough disk space, e.g.

mkdir -p ~/tmp
export TMPDIR="$HOME/tmp"
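As a quick sanity check (a sketch, not part of the repo), Python's tempfile module honors TMPDIR and should now report the new directory:

# Sketch: verify that temporary files will land in the directory exported above.
import tempfile

print(tempfile.gettempdir())  # expected: the ~/tmp directory created above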

Install diffusion_policy #HACK

# from this directory
git clone https://github.com/real-stanford/diffusion_policy
cp -r diffusion_policy/diffusion_policy $(poetry env info -p)/lib/python3.10/site-packages/
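To confirm the copy worked, here is a quick sketch (not part of the repo) that checks diffusion_policy now resolves from the Poetry environment's site-packages:

# Sketch: run inside the environment, e.g. with `poetry run python`.
import diffusion_policy

print(diffusion_policy.__path__)  # should point inside .../site-packages/diffusion_policy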

Usage

Train

python lerobot/scripts/train.py \
hydra.job.name=pusht \
env=pusht

Visualize offline buffer

python lerobot/scripts/visualize_dataset.py \
hydra.run.dir=tmp/$(date +"%Y_%m_%d") \
env=pusht

Visualize online buffer / Eval

python lerobot/scripts/eval.py \
hydra.run.dir=tmp/$(date +"%Y_%m_%d") \
env=pusht

TODO

  • priority update doesn't match FOWM or the original paper
  • self.step=100000 should be updated at every step to adjust to the planner's horizon
  • prefetch replay buffer to speedup training
  • parallelize env to speedup eval
  • clean checkpointing / loading
  • clean logging
  • clean config
  • clean hyperparameter tuning
  • add pusht
  • add aloha
  • add act
  • add diffusion
  • add aloha 2

Profile

Example

import torch
from torch.profiler import profile, record_function, ProfilerActivity

def trace_handler(prof):
    # Export a Chrome trace each time a profiling cycle completes.
    prof.export_chrome_trace(f"tmp/trace_schedule_{prof.step_num}.json")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=torch.profiler.schedule(
        # Skip 2 steps, warm up for 2, then record 3 active steps before exporting.
        wait=2,
        warmup=2,
        active=3,
    ),
    on_trace_ready=trace_handler
) as prof:
    with record_function("eval_policy"):
        for i in range(num_episodes):
            # ... run one evaluation episode here ...
            prof.step()

Then run the script you are profiling as usual, e.g.:

python lerobot/scripts/eval.py \
pretrained_model_path=/home/rcadene/code/fowm/logs/xarm_lift/all/default/2/models/final.pt \
eval_episodes=7
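The exported traces can be opened in chrome://tracing or https://ui.perfetto.dev. As a quick sketch (an assumption, not part of the repo), you can also check a trace programmatically:

# Sketch: load the most recent trace exported by trace_handler above and count its events.
import glob
import json
import os

trace_files = sorted(glob.glob("tmp/trace_schedule_*.json"), key=os.path.getmtime)
with open(trace_files[-1]) as f:
    trace = json.load(f)
print(f"{trace_files[-1]}: {len(trace['traceEvents'])} events")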

Contribute

Style

# install if needed
pre-commit install
# apply style and linter checks before git commit
pre-commit run -a

Tests

Install Git LFS to retrieve test artifacts (if you don't have it already). On Mac:

brew install git-lfs
git lfs install

On Ubuntu:

sudo apt-get install git-lfs
git lfs install

Pull artifacts if they're not in tests/data

git lfs pull
DATA_DIR="tests/data" pytest -sx tests