diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 2aeaa828..dd8f97e2 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -129,26 +129,34 @@ Follow these steps to start contributing:
🚨 **Do not** work on the `main` branch.
-4. Instead of using `pip` directly, we use `poetry` for development purposes to easily track our dependencies.
+4. For development, we use `poetry` instead of just `pip` to easily track our dependencies.
If you don't have it already, follow the [instructions](https://python-poetry.org/docs/#installation) to install it.
- Set up a development environment by running the following command in a conda or a virtual environment you've created for working on this library:
- Install the project with dev dependencies and all environments:
- ```bash
- poetry install --sync --with dev --all-extras
- ```
- This command should be run when pulling code with and updated version of `pyproject.toml` and `poetry.lock` in order to synchronize your virtual environment with the dependencies.
- To selectively install environments (for example aloha and pusht) use:
+ Set up a development environment with conda or miniconda:
```bash
- poetry install --sync --with dev --extras "aloha pusht"
+ conda create -y -n lerobot-dev python=3.10 && conda activate lerobot-dev
```
+ To develop on 🤗 LeRobot, you will at least need to install the `dev` and `test` extras along with the core library:
+ ```bash
+ poetry install --sync --extras "dev test"
+ ```
+
+ You can also install the project with all its dependencies (including environments):
+ ```bash
+ poetry install --sync --all-extras
+ ```
+
+ > **Note:** If you don't install the simulation environments with `--all-extras`, the tests that require them will be skipped when running the pytest suite locally. However, they *will* run in the CI. In general, we advise you to install everything and test locally before pushing.
+
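+ For instance, to run the test suite locally before pushing, an invocation like this should work (`-s` shows test output and `-x` stops at the first failure):
+ ```bash
+ pytest -sx tests
+ ```
+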
+ Whichever command you used to install the project (e.g. `poetry install --sync --all-extras`), run it again whenever you pull code with an updated `pyproject.toml` or `poetry.lock` to synchronize your virtual environment with the new dependencies.
+
The equivalent of `pip install some-package` would just be:
```bash
poetry add some-package
```
- When changes are made to the poetry sections of the `pyproject.toml`, you should run the following command to lock dependencies.
+ When making changes to the poetry sections of the `pyproject.toml`, you should run the following command to lock dependencies.
```bash
poetry lock --no-update
```
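+
+ For instance, after hand-editing a version constraint in `pyproject.toml`, you can re-lock and then resynchronize your environment (a quick sketch reusing the all-extras install from above):
+ ```bash
+ poetry lock --no-update
+ poetry install --sync --all-extras
+ ```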
diff --git a/README.md b/README.md
index a44d0af0..35a2f422 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@
-[](https://github.com/huggingface/lerobot/actions/workflows/test.yml?query=branch%3Amain)
+[](https://github.com/huggingface/lerobot/actions/workflows/nightly-tests.yml?query=branch%3Amain)
[](https://codecov.io/gh/huggingface/lerobot)
[](https://www.python.org/downloads/)
[](https://github.com/huggingface/lerobot/blob/main/LICENSE)
@@ -73,7 +73,7 @@ conda create -y -n lerobot python=3.10 && conda activate lerobot
Install 🤗 LeRobot:
```bash
-python -m pip install .
+pip install .
```
For simulations, 🤗 LeRobot comes with gymnasium environments that can be installed as extras:
@@ -83,7 +83,7 @@ For simulations, 🤗 LeRobot comes with gymnasium environments that can be inst
For instance, to install 🤗 LeRobot with aloha and pusht, use:
```bash
-python -m pip install ".[aloha, pusht]"
+pip install ".[aloha, pusht]"
```
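+
+To quickly check that an extra was picked up, importing its package is usually enough. A minimal sketch, assuming the aloha and pusht extras ship the `gym_aloha` and `gym_pusht` packages:
+```bash
+# assumes the extras install the gym_aloha and gym_pusht packages
+python -c "import gym_aloha, gym_pusht; print('simulation extras OK')"
+```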
To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiments tracking, log in with
diff --git a/examples/1_load_hugging_face_dataset.py b/examples/1_load_hugging_face_dataset.py
deleted file mode 100644
index ca66769c..00000000
--- a/examples/1_load_hugging_face_dataset.py
+++ /dev/null
@@ -1,69 +0,0 @@
-"""
-This script demonstrates the visualization of various robotic datasets from Hugging Face hub.
-It covers the steps from loading the datasets, filtering specific episodes, and converting the frame data to MP4 videos.
-Importantly, the dataset format is agnostic to any deep learning library and doesn't require using `lerobot` functions.
-It is compatible with pytorch, jax, numpy, etc.
-
-As an example, this script saves frames of episode number 5 of the PushT dataset to a mp4 video and saves the result here:
-`outputs/examples/1_visualize_hugging_face_datasets/episode_5.mp4`
-
-This script supports several Hugging Face datasets, among which:
-1. [Pusht](https://huggingface.co/datasets/lerobot/pusht)
-2. [Xarm Lift Medium](https://huggingface.co/datasets/lerobot/xarm_lift_medium)
-3. [Xarm Lift Medium Replay](https://huggingface.co/datasets/lerobot/xarm_lift_medium_replay)
-4. [Xarm Push Medium](https://huggingface.co/datasets/lerobot/xarm_push_medium)
-5. [Xarm Push Medium Replay](https://huggingface.co/datasets/lerobot/xarm_push_medium_replay)
-6. [Aloha Sim Insertion Human](https://huggingface.co/datasets/lerobot/aloha_sim_insertion_human)
-7. [Aloha Sim Insertion Scripted](https://huggingface.co/datasets/lerobot/aloha_sim_insertion_scripted)
-8. [Aloha Sim Transfer Cube Human](https://huggingface.co/datasets/lerobot/aloha_sim_transfer_cube_human)
-9. [Aloha Sim Transfer Cube Scripted](https://huggingface.co/datasets/lerobot/aloha_sim_transfer_cube_scripted)
-
-To try a different Hugging Face dataset, you can replace this line:
-```python
-hf_dataset, fps = load_dataset("lerobot/pusht", split="train"), 10
-```
-by one of these:
-```python
-hf_dataset, fps = load_dataset("lerobot/xarm_lift_medium", split="train"), 15
-hf_dataset, fps = load_dataset("lerobot/xarm_lift_medium_replay", split="train"), 15
-hf_dataset, fps = load_dataset("lerobot/xarm_push_medium", split="train"), 15
-hf_dataset, fps = load_dataset("lerobot/xarm_push_medium_replay", split="train"), 15
-hf_dataset, fps = load_dataset("lerobot/aloha_sim_insertion_human", split="train"), 50
-hf_dataset, fps = load_dataset("lerobot/aloha_sim_insertion_scripted", split="train"), 50
-hf_dataset, fps = load_dataset("lerobot/aloha_sim_transfer_cube_human", split="train"), 50
-hf_dataset, fps = load_dataset("lerobot/aloha_sim_transfer_cube_scripted", split="train"), 50
-```
-"""
-# TODO(rcadene): remove this example file of using hf_dataset
-
-from pathlib import Path
-
-import imageio
-from datasets import load_dataset
-
-# TODO(rcadene): list available datasets on lerobot page using `datasets`
-
-# download/load hugging face dataset in pyarrow format
-hf_dataset, fps = load_dataset("lerobot/pusht", split="train", revision="v1.1"), 10
-
-# display name of dataset and its features
-# TODO(rcadene): update to make the print pretty
-print(f"{hf_dataset=}")
-print(f"{hf_dataset.features=}")
-
-# display useful statistics about frames and episodes, which are sequences of frames from the same video
-print(f"number of frames: {len(hf_dataset)=}")
-print(f"number of episodes: {len(hf_dataset.unique('episode_index'))=}")
-print(
- f"average number of frames per episode: {len(hf_dataset) / len(hf_dataset.unique('episode_index')):.3f}"
-)
-
-# select the frames belonging to episode number 5
-hf_dataset = hf_dataset.filter(lambda frame: frame["episode_index"] == 5)
-
-# load all frames of episode 5 in RAM in PIL format
-frames = hf_dataset["observation.image"]
-
-# save episode frames to a mp4 video
-Path("outputs/examples/1_load_hugging_face_dataset").mkdir(parents=True, exist_ok=True)
-imageio.mimsave("outputs/examples/1_load_hugging_face_dataset/episode_5.mp4", frames, fps=fps)
diff --git a/examples/2_load_lerobot_dataset.py b/examples/1_load_lerobot_dataset.py
similarity index 97%
rename from examples/2_load_lerobot_dataset.py
rename to examples/1_load_lerobot_dataset.py
index 26c78de1..e7b3c216 100644
--- a/examples/2_load_lerobot_dataset.py
+++ b/examples/1_load_lerobot_dataset.py
@@ -58,8 +58,8 @@ frames = [(frame * 255).type(torch.uint8) for frame in frames]
frames = [frame.permute((1, 2, 0)).numpy() for frame in frames]
# and finally save them to a mp4 video
-Path("outputs/examples/2_load_lerobot_dataset").mkdir(parents=True, exist_ok=True)
-imageio.mimsave("outputs/examples/2_load_lerobot_dataset/episode_5.mp4", frames, fps=dataset.fps)
+Path("outputs/examples/1_load_lerobot_dataset").mkdir(parents=True, exist_ok=True)
+imageio.mimsave("outputs/examples/1_load_lerobot_dataset/episode_5.mp4", frames, fps=dataset.fps)
# For many machine learning applications we need to load histories of past observations, or trajectories of future actions. Our datasets can load previous and future frames for each key/modality,
# using timestamp differences with the current loaded frame. For instance:
diff --git a/examples/3_evaluate_pretrained_policy.py b/examples/2_evaluate_pretrained_policy.py
similarity index 100%
rename from examples/3_evaluate_pretrained_policy.py
rename to examples/2_evaluate_pretrained_policy.py
diff --git a/examples/4_train_policy.py b/examples/3_train_policy.py
similarity index 100%
rename from examples/4_train_policy.py
rename to examples/3_train_policy.py
diff --git a/tests/test_examples.py b/tests/test_examples.py
index 888fb2ca..9f86fd03 100644
--- a/tests/test_examples.py
+++ b/tests/test_examples.py
@@ -16,24 +16,18 @@ def _run_script(path):
def test_example_1():
- path = "examples/1_load_hugging_face_dataset.py"
+ path = "examples/1_load_lerobot_dataset.py"
_run_script(path)
- assert Path("outputs/examples/1_load_hugging_face_dataset/episode_5.mp4").exists()
+ assert Path("outputs/examples/1_load_lerobot_dataset/episode_5.mp4").exists()
-def test_example_2():
- path = "examples/2_load_lerobot_dataset.py"
- _run_script(path)
- assert Path("outputs/examples/2_load_lerobot_dataset/episode_5.mp4").exists()
-
-
-def test_examples_4_and_3():
+def test_examples_3_and_2():
"""
Train a model with example 3, check the outputs.
Evaluate the trained model with example 2, check the outputs.
"""
- path = "examples/4_train_policy.py"
+ path = "examples/3_train_policy.py"
with open(path) as file:
file_contents = file.read()
@@ -55,7 +49,7 @@ def test_examples_4_and_3():
for file_name in ["model.pt", "config.yaml"]:
assert Path(f"outputs/train/example_pusht_diffusion/{file_name}").exists()
- path = "examples/3_evaluate_pretrained_policy.py"
+ path = "examples/2_evaluate_pretrained_policy.py"
with open(path) as file:
file_contents = file.read()