diff --git a/.github/workflows/quality.yml b/.github/workflows/quality.yml index 851869a0..c245345f 100644 --- a/.github/workflows/quality.yml +++ b/.github/workflows/quality.yml @@ -50,7 +50,7 @@ jobs: uses: actions/checkout@v3 - name: Install poetry - run: pipx install poetry + run: pipx install "poetry<2.0.0" - name: Poetry check run: poetry check @@ -64,7 +64,7 @@ jobs: uses: actions/checkout@v3 - name: Install poetry - run: pipx install poetry + run: pipx install "poetry<2.0.0" - name: Install poetry-relax run: poetry self add poetry-relax diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 1637151b..58eca320 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -3,7 +3,7 @@ default_language_version: python: python3.10 repos: - repo: https://github.com/pre-commit/pre-commit-hooks - rev: v4.6.0 + rev: v5.0.0 hooks: - id: check-added-large-files - id: debug-statements @@ -14,11 +14,11 @@ repos: - id: end-of-file-fixer - id: trailing-whitespace - repo: https://github.com/asottile/pyupgrade - rev: v3.16.0 + rev: v3.19.0 hooks: - id: pyupgrade - repo: https://github.com/astral-sh/ruff-pre-commit - rev: v0.5.2 + rev: v0.8.2 hooks: - id: ruff args: [--fix] @@ -32,6 +32,6 @@ repos: - "--check" - "--no-update" - repo: https://github.com/gitleaks/gitleaks - rev: v8.18.4 + rev: v8.21.2 hooks: - id: gitleaks diff --git a/README.md b/README.md index 9331bdec..849a14de 100644 --- a/README.md +++ b/README.md @@ -68,7 +68,7 @@ ### Acknowledgment -- Thanks to Tony Zaho, Zipeng Fu and colleagues for open sourcing ACT policy, ALOHA environments and datasets. Ours are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha) and [Mobile ALOHA](https://mobile-aloha.github.io). +- Thanks to Tony Zhao, Zipeng Fu and colleagues for open sourcing ACT policy, ALOHA environments and datasets. Ours are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha) and [Mobile ALOHA](https://mobile-aloha.github.io). - Thanks to Cheng Chi, Zhenjia Xu and colleagues for open sourcing Diffusion policy, Pusht environment and datasets, as well as UMI datasets. Ours are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu) and [UMI Gripper](https://umi-gripper.github.io). - Thanks to Nicklas Hansen, Yunhai Feng and colleagues for open sourcing TDMPC policy, Simxarm environments and datasets. Ours are adapted from [TDMPC](https://github.com/nicklashansen/tdmpc) and [FOWM](https://www.yunhaifeng.com/FOWM). - Thanks to Antonio Loquercio and Ashish Kumar for their early support. diff --git a/benchmarks/video/README.md b/benchmarks/video/README.md index 890c1142..56cd1d1e 100644 --- a/benchmarks/video/README.md +++ b/benchmarks/video/README.md @@ -21,7 +21,7 @@ How to decode videos? ## Variables **Image content & size** -We don't expect the same optimal settings for a dataset of images from a simulation, or from real-world in an appartment, or in a factory, or outdoor, or with lots of moving objects in the scene, etc. Similarly, loading times might not vary linearly with the image size (resolution). +We don't expect the same optimal settings for a dataset of images from a simulation, or from real-world in an apartment, or in a factory, or outdoor, or with lots of moving objects in the scene, etc. Similarly, loading times might not vary linearly with the image size (resolution). For these reasons, we run this benchmark on four representative datasets: - `lerobot/pusht_image`: (96 x 96 pixels) simulation with simple geometric shapes, fixed camera. 
- `aliberts/aloha_mobile_shrimp_image`: (480 x 640 pixels) real-world indoor, moving camera.
@@ -63,7 +63,7 @@
This of course is affected by the `-g` parameter during encoding, which specifies the frequency of the keyframes.

Note that this differs significantly from a typical use case like watching a movie, in which every frame is loaded sequentially from the beginning to the end and it's acceptable to have big values for `-g`.

-Additionally, because some policies might request single timestamps that are a few frames appart, we also have the following scenario:
+Additionally, because some policies might request single timestamps that are a few frames apart, we also have the following scenario:
- `2_frames_4_space`: 2 frames with 4 consecutive frames of spacing in between (e.g. `[t, t + 5 / fps]`).

However, due to how video decoding is implemented with `pyav`, we don't have access to an accurate seek, so in practice this scenario is essentially the same as `6_frames` since all 6 frames between `t` and `t + 5 / fps` will be decoded.
@@ -85,8 +85,8 @@
**Average Structural Similarity Index Measure (higher is better)**
`avg_ssim` evaluates the perceived quality of images by comparing luminance, contrast, and structure. SSIM values range from -1 to 1, where 1 indicates perfect similarity.

-One aspect that can't be measured here with those metrics is the compatibility of the encoding accross platforms, in particular on web browser, for visualization purposes.
-h264, h265 and AV1 are all commonly used codecs and should not be pose an issue. However, the chroma subsampling (`pix_fmt`) format might affect compatibility:
+One aspect that can't be measured here with those metrics is the compatibility of the encoding across platforms, in particular on web browsers, for visualization purposes.
+h264, h265 and AV1 are all commonly used codecs and should not pose an issue. However, the chroma subsampling (`pix_fmt`) format might affect compatibility:
- `yuv420p` is more widely supported across various platforms, including web browsers.
- `yuv444p` offers higher color fidelity but might not be supported as broadly.
@@ -116,7 +116,7 @@
Additional encoding parameters exist that are not included in this benchmark. In particular:
- `-preset` which allows for selecting encoding presets. This represents a collection of options that will provide a certain trade-off between encoding speed and compression ratio. By leaving this parameter unspecified, it is considered to be `medium` for libx264 and libx265 and `8` for libsvtav1.
- `-tune` which allows optimizing the encoding for certain aspects (e.g. film quality, fast decoding, etc.).

-See the documentation mentioned above for more detailled info on these settings and for a more comprehensive list of other parameters.
+See the documentation mentioned above for more detailed info on these settings and for a more comprehensive list of other parameters.

Similarly, on the decoding side, other decoders exist but are not implemented in our current benchmark. To name a few:
- `torchaudio`
diff --git a/examples/10_use_so100.md b/examples/10_use_so100.md
index 70e4ed8b..155bbe51 100644
--- a/examples/10_use_so100.md
+++ b/examples/10_use_so100.md
@@ -1,25 +1,31 @@
-This tutorial explains how to use [SO-100](https://github.com/TheRobotStudio/SO-ARM100) with LeRobot.
+# Using the [SO-100](https://github.com/TheRobotStudio/SO-ARM100) with LeRobot

-## Source the parts
+
+## A. Source the parts

Follow this [README](https://github.com/TheRobotStudio/SO-ARM100).
It contains the bill of materials, with links to source the parts, as well as the instructions to 3D print the parts, and advice if it's your first time printing or if you don't already own a 3D printer.

**Important**: Before assembling, you will first need to configure your motors. To this end, we provide a nice script, so let's first install LeRobot. After configuration, we will also guide you through assembly.

-## Install LeRobot
+## B. Install LeRobot

On your computer:

1. [Install Miniconda](https://docs.anaconda.com/miniconda/#quick-command-line-install):
```bash
mkdir -p ~/miniconda3
+# Linux:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
+# Mac M-series:
+# curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh -o ~/miniconda3/miniconda.sh
+# Mac Intel:
+# curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -o ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
```

-2. Restart shell or `source ~/.bashrc`
+2. Restart your shell or run `source ~/.bashrc` (*Mac*: `source ~/.bash_profile`), or `source ~/.zshrc` if you're using zsh

3. Create and activate a fresh conda environment for lerobot
```bash
@@ -36,23 +42,30 @@
git clone https://github.com/huggingface/lerobot.git ~/lerobot
cd ~/lerobot && pip install -e ".[feetech]"
```

-For Linux only (not Mac), install extra dependencies for recording datasets:
+*For Linux only (not Mac)*: install extra dependencies for recording datasets:
```bash
conda install -y -c conda-forge ffmpeg
pip uninstall -y opencv-python
conda install -y -c conda-forge "opencv>=4.10.0"
```

-## Configure the motors
+## C. Configure the motors

-Follow steps 1 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I) which illustrates the use of our scripts below.
+### 1. Find the USB ports associated with each arm

-**Find USB ports associated to your arms**
-To find the correct ports for each arm, run the utility script twice:
+Designate one bus servo adapter and 6 motors for your leader arm, and similarly the other bus servo adapter and 6 motors for the follower arm.
+
+#### a. Run the script to find ports
+
+Follow Step 1 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I), which illustrates the use of our scripts below.
+
+To find the port for each bus servo adapter, run the utility script:
```bash
python lerobot/scripts/find_motors_bus_port.py
```

+#### b. Example outputs
+
Example output when identifying the leader arm's port (e.g., `/dev/tty.usbmodem575E0031751` on Mac, or possibly `/dev/ttyACM0` on Linux):
```
Finding all available ports for the MotorBus.
@@ -64,7 +77,6 @@
Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0031751
Reconnect the usb cable.
```
-
Example output when identifying the follower arm's port (e.g., `/dev/tty.usbmodem575E0032081`, or possibly `/dev/ttyACM1` on Linux):
```
Finding all available ports for the MotorBus.
@@ -77,13 +89,20 @@
The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0032081
Reconnect the usb cable.
```

-Troubleshooting: On Linux, you might need to give access to the USB ports by running:
+#### c. Troubleshooting
+On Linux, you might need to give access to the USB ports by running:
```bash
sudo chmod 666 /dev/ttyACM0
sudo chmod 666 /dev/ttyACM1
```

-**Configure your motors**
+#### d. Update the YAML file
+
+Now that you have the ports, modify the *port* sections in `so100.yaml`.
+
+### 2. Configure the motors
+
+#### a. Set IDs for all 12 motors
Plug your first motor and run this script to set its ID to 1. It will also set its present position to 2048, so expect your motor to rotate:
```bash
python lerobot/scripts/configure_motor.py \
  --port /dev/tty.usbmodem58760432961 \
  --brand feetech \
  --model sts3215 \
  --baudrate 1000000 \
  --ID 1
```

-Note: These motors are currently limitated. They can take values between 0 and 4096 only, which corresponds to a full turn. They can't turn more than that. 2048 is at the middle of this range, so we can take -2048 steps (180 degrees anticlockwise) and reach the maximum range, or take +2048 steps (180 degrees clockwise) and reach the maximum range. The configuration step also sets the homing offset to 0, so that if you misassembled the arm, you can always update the homing offset to account for a shift up to ± 2048 steps (± 180 degrees).
+*Note: These motors are currently limited. They can take values between 0 and 4096 only, which corresponds to a full turn. They can't turn more than that. 2048 is in the middle of this range, so we can take -2048 steps (180 degrees anticlockwise) and reach the maximum range, or take +2048 steps (180 degrees clockwise) and reach the maximum range. The configuration step also sets the homing offset to 0, so that if you misassembled the arm, you can always update the homing offset to account for a shift of up to ± 2048 steps (± 180 degrees).*

Then unplug your motor and plug the second motor and set its ID to 2.
```bash
python lerobot/scripts/configure_motor.py \
@@ -108,23 +127,25 @@
  --ID 2
```

Redo the process for all your motors until ID 6. Do the same for the 6 motors of the leader arm.

-**Remove the gears of the 6 leader motors**
-Follow step 2 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I). You need to remove the gear for the motors of the leader arm. As a result, you will only use the position encoding of the motor and reduce friction to more easily operate the leader arm.
-**Add motor horn to the motors**
-Follow step 3 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I). For SO-100, you need to align the holes on the motor horn to the motor spline to be approximately 1:30, 4:30, 7:30 and 10:30.
+#### b. Remove the gears of the 6 leader motors
+
+Follow step 2 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=248). You need to remove the gear for the motors of the leader arm. As a result, you will only use the position encoding of the motor and reduce friction to more easily operate the leader arm.
+
+#### c. Add motor horn to all 12 motors
+Follow step 3 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=569). For SO-100, you need to align the holes on the motor horn to the motor spline to be approximately 1:30, 4:30, 7:30 and 10:30.
Try to avoid rotating the motor while doing so to keep position 2048 set during configuration. It is especially tricky for the leader motors, as they are more sensitive without the gears, but it's ok if it's a bit rotated.

-## Assemble the arms
+## D. Assemble the arms

-Follow step 4 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I). The first arm should take a bit more than 1 hour to assemble, but once you get use to it, you can do it under 1 hour for the second arm.
+Follow step 4 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=610).
The first arm should take a bit more than 1 hour to assemble, but once you get used to it, you can do it in under 1 hour for the second arm.

-## Calibrate
+## E. Calibrate

Next, you'll need to calibrate your SO-100 robot to ensure that the leader and follower arms have the same position values when they are in the same physical position. This calibration is essential because it allows a neural network trained on one SO-100 robot to work on another.

-**Manual calibration of follower arm**
-/!\ Contrarily to step 6 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I) which illustrates the auto calibration, we will actually do manual calibration of follower for now.
+#### a. Manual calibration of follower arm
+/!\ Contrary to step 6 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=724), which illustrates auto calibration, we will actually do manual calibration of the follower for now.

You will need to move the follower arm to these positions sequentially:
@@ -139,8 +160,8 @@
python lerobot/scripts/control_robot.py calibrate \
--robot-overrides '~cameras' --arms main_follower
```

-**Manual calibration of leader arm**
-Follow step 6 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I) which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:
+#### b. Manual calibration of leader arm
+Follow step 6 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=724), which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:

| 1. Zero position | 2. Rotated position | 3. Rest position |
|---|---|---|
@@ -153,7 +174,7 @@
python lerobot/scripts/control_robot.py calibrate \
--robot-overrides '~cameras' --arms main_leader
```

-## Teleoperate
+## F. Teleoperate

**Simple teleop**
Then you are ready to teleoperate your robot! Run this simple script (it won't connect to or display the cameras):
```bash
python lerobot/scripts/control_robot.py teleoperate \
@@ -165,14 +186,14 @@
```

-**Teleop with displaying cameras**
+#### a. Teleop with displaying cameras
Follow [this guide to set up your cameras](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#c-add-your-cameras-with-opencvcamera). Then you will be able to display the cameras on your computer while you are teleoperating by running the following code. This is useful to prepare your setup before recording your first dataset.
```bash
python lerobot/scripts/control_robot.py teleoperate \
--robot-path lerobot/configs/robot/so100.yaml
```

-## Record a dataset
+## G. Record a dataset

Once you're familiar with teleoperation, you can record your first dataset with SO-100.
@@ -201,7 +222,7 @@
python lerobot/scripts/control_robot.py record \
--push-to-hub 1
```

-## Visualize a dataset
+## H. Visualize a dataset

If you uploaded your dataset to the hub with `--push-to-hub 1`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy-pasting your repo id given by:
```bash
@@ -214,7 +235,7 @@
python lerobot/scripts/visualize_dataset_html.py \
--repo-id ${HF_USER}/so100_test
```

-## Replay an episode
+## I. Replay an episode

Now try to replay the first episode on your robot:
```bash
python lerobot/scripts/control_robot.py replay \
@@ -225,7 +246,7 @@
--episode 0
```

-## Train a policy
+## J. Train a policy

To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required.
Here is an example command:
```bash
@@ -248,7 +269,7 @@
Let's explain it:

Training should take several hours. You will find checkpoints in `outputs/train/act_so100_test/checkpoints`.

-## Evaluate your policy
+## K. Evaluate your policy

You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
```bash
@@ -268,7 +289,7 @@
As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
1. There is an additional `-p` argument which indicates the path to your policy checkpoint (e.g. `-p outputs/train/eval_so100_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `-p ${HF_USER}/act_so100_test`).
2. The name of the dataset begins with `eval` to reflect that you are running inference (e.g. `--repo-id ${HF_USER}/eval_act_so100_test`).

-## More
+## L. More Information

Follow this [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) for a more in-depth guide to controlling real robots with LeRobot.
diff --git a/examples/6_add_image_transforms.py b/examples/6_add_image_transforms.py
index 82b70f5c..882710e3 100644
--- a/examples/6_add_image_transforms.py
+++ b/examples/6_add_image_transforms.py
@@ -10,10 +10,10 @@
from torchvision.transforms import ToPILImage, v2

from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

-dataset_repo_id = "lerobot/aloha_static_tape"
+dataset_repo_id = "lerobot/aloha_static_screw_driver"

# Create a LeRobotDataset with no transformations
-dataset = LeRobotDataset(dataset_repo_id)
+dataset = LeRobotDataset(dataset_repo_id, episodes=[0])
# This is equivalent to `dataset = LeRobotDataset(dataset_repo_id, image_transforms=None)`

# Get the index of the first observation in the first episode
@@ -28,12 +28,13 @@
transforms = v2.Compose(
    [
        v2.ColorJitter(brightness=(0.5, 1.5)),
        v2.ColorJitter(contrast=(0.5, 1.5)),
+        v2.ColorJitter(hue=(-0.1, 0.1)),
        v2.RandomAdjustSharpness(sharpness_factor=2, p=1),
    ]
)

# Create another LeRobotDataset with the defined transformations
-transformed_dataset = LeRobotDataset(dataset_repo_id, image_transforms=transforms)
+transformed_dataset = LeRobotDataset(dataset_repo_id, episodes=[0], image_transforms=transforms)

# Get a frame from the transformed dataset
transformed_frame = transformed_dataset[first_idx][transformed_dataset.meta.camera_keys[0]]
diff --git a/examples/9_use_aloha.md b/examples/9_use_aloha.md
index 1abf7c49..f531a2c1 100644
--- a/examples/9_use_aloha.md
+++ b/examples/9_use_aloha.md
@@ -56,7 +56,7 @@
python lerobot/scripts/control_robot.py teleoperate \
--robot-overrides max_relative_target=5
```

-By adding `--robot-overrides max_relative_target=5`, we override the default value for `max_relative_target` defined in `lerobot/configs/robot/aloha.yaml`. It is expected to be `5` to limit the magnitude of the movement for more safety, but the teloperation won't be smooth. When you feel confident, you can disable this limit by adding `--robot-overrides max_relative_target=null` to the command line:
+By adding `--robot-overrides max_relative_target=5`, we override the default value for `max_relative_target` defined in `lerobot/configs/robot/aloha.yaml`. It is expected to be `5` to limit the magnitude of the movement for more safety, but the teleoperation won't be smooth.
When you feel confident, you can disable this limit by adding `--robot-overrides max_relative_target=null` to the command line: ```bash python lerobot/scripts/control_robot.py teleoperate \ --robot-path lerobot/configs/robot/aloha.yaml \ diff --git a/lerobot/common/datasets/image_writer.py b/lerobot/common/datasets/image_writer.py index 9564fb59..85dd6830 100644 --- a/lerobot/common/datasets/image_writer.py +++ b/lerobot/common/datasets/image_writer.py @@ -28,7 +28,7 @@ def safe_stop_image_writer(func): try: return func(*args, **kwargs) except Exception as e: - dataset = kwargs.get("dataset", None) + dataset = kwargs.get("dataset") image_writer = getattr(dataset, "image_writer", None) if dataset else None if image_writer is not None: print("Waiting for image writer to terminate...") diff --git a/lerobot/common/datasets/utils.py b/lerobot/common/datasets/utils.py index af5b03cc..123c5960 100644 --- a/lerobot/common/datasets/utils.py +++ b/lerobot/common/datasets/utils.py @@ -17,9 +17,11 @@ import importlib.resources import json import logging import textwrap +from collections.abc import Iterator from itertools import accumulate from pathlib import Path from pprint import pformat +from types import SimpleNamespace from typing import Any import datasets @@ -477,7 +479,6 @@ def create_lerobot_dataset_card( Note: If specified, license must be one of https://huggingface.co/docs/hub/repositories-licenses. """ card_tags = ["LeRobot"] - card_template_path = importlib.resources.path("lerobot.common.datasets", "card_template.md") if tags: card_tags += tags @@ -497,8 +498,65 @@ def create_lerobot_dataset_card( ], ) + card_template = (importlib.resources.files("lerobot.common.datasets") / "card_template.md").read_text() + return DatasetCard.from_template( card_data=card_data, - template_path=str(card_template_path), + template_str=card_template, **kwargs, ) + + +class IterableNamespace(SimpleNamespace): + """ + A namespace object that supports both dictionary-like iteration and dot notation access. + Automatically converts nested dictionaries into IterableNamespaces. + + This class extends SimpleNamespace to provide: + - Dictionary-style iteration over keys + - Access to items via both dot notation (obj.key) and brackets (obj["key"]) + - Dictionary-like methods: items(), keys(), values() + - Recursive conversion of nested dictionaries + + Args: + dictionary: Optional dictionary to initialize the namespace + **kwargs: Additional keyword arguments passed to SimpleNamespace + + Examples: + >>> data = {"name": "Alice", "details": {"age": 25}} + >>> ns = IterableNamespace(data) + >>> ns.name + 'Alice' + >>> ns.details.age + 25 + >>> list(ns.keys()) + ['name', 'details'] + >>> for key, value in ns.items(): + ... 
print(f"{key}: {value}") + name: Alice + details: IterableNamespace(age=25) + """ + + def __init__(self, dictionary: dict[str, Any] = None, **kwargs): + super().__init__(**kwargs) + if dictionary is not None: + for key, value in dictionary.items(): + if isinstance(value, dict): + setattr(self, key, IterableNamespace(value)) + else: + setattr(self, key, value) + + def __iter__(self) -> Iterator[str]: + return iter(vars(self)) + + def __getitem__(self, key: str) -> Any: + return vars(self)[key] + + def items(self): + return vars(self).items() + + def values(self): + return vars(self).values() + + def keys(self): + return vars(self).keys() diff --git a/lerobot/common/datasets/v2/batch_convert_dataset_v1_to_v2.py b/lerobot/common/datasets/v2/batch_convert_dataset_v1_to_v2.py index c8da2fe1..eeeb8fe7 100644 --- a/lerobot/common/datasets/v2/batch_convert_dataset_v1_to_v2.py +++ b/lerobot/common/datasets/v2/batch_convert_dataset_v1_to_v2.py @@ -159,11 +159,11 @@ DATASETS = { **ALOHA_STATIC_INFO, }, "aloha_static_vinh_cup": { - "single_task": "Pick up the platic cup with the right arm, then pop its lid open with the left arm.", + "single_task": "Pick up the plastic cup with the right arm, then pop its lid open with the left arm.", **ALOHA_STATIC_INFO, }, "aloha_static_vinh_cup_left": { - "single_task": "Pick up the platic cup with the left arm, then pop its lid open with the right arm.", + "single_task": "Pick up the plastic cup with the left arm, then pop its lid open with the right arm.", **ALOHA_STATIC_INFO, }, "aloha_static_ziploc_slide": {"single_task": "Slide open the ziploc bag.", **ALOHA_STATIC_INFO}, diff --git a/lerobot/common/robot_devices/control_utils.py b/lerobot/common/robot_devices/control_utils.py index 9b9649dd..8cc0f326 100644 --- a/lerobot/common/robot_devices/control_utils.py +++ b/lerobot/common/robot_devices/control_utils.py @@ -184,7 +184,7 @@ def init_policy(pretrained_policy_name_or_path, policy_overrides): def warmup_record( robot, events, - enable_teloperation, + enable_teleoperation, warmup_time_s, display_cameras, fps, @@ -195,7 +195,7 @@ def warmup_record( display_cameras=display_cameras, events=events, fps=fps, - teleoperate=enable_teloperation, + teleoperate=enable_teleoperation, ) diff --git a/lerobot/scripts/control_robot.py b/lerobot/scripts/control_robot.py index 12eaf146..30f22cb0 100644 --- a/lerobot/scripts/control_robot.py +++ b/lerobot/scripts/control_robot.py @@ -300,7 +300,7 @@ def record( # TODO(rcadene): add an option to enable teleoperation during reset # Skip reset for the last episode to be recorded if not events["stop_recording"] and ( - (dataset.num_episodes < num_episodes - 1) or events["rerecord_episode"] + (recorded_episodes < num_episodes - 1) or events["rerecord_episode"] ): log_say("Reset the environment", play_sounds) reset_environment(robot, events, reset_time_s) diff --git a/lerobot/scripts/control_sim_robot.py b/lerobot/scripts/control_sim_robot.py index 85dfca64..4fffa8c7 100644 --- a/lerobot/scripts/control_sim_robot.py +++ b/lerobot/scripts/control_sim_robot.py @@ -35,7 +35,7 @@ python lerobot/scripts/visualize_dataset.py \ --episode-index 0 ``` -- Replay a sequence of test episodes: +- Replay a sequence of test episodes: ```bash python lerobot/scripts/control_sim_robot.py replay \ --robot-path lerobot/configs/robot/your_robot_config.yaml \ diff --git a/lerobot/scripts/visualize_dataset_html.py b/lerobot/scripts/visualize_dataset_html.py index 2c81fbfc..cc3f3930 100644 --- a/lerobot/scripts/visualize_dataset_html.py +++ 
b/lerobot/scripts/visualize_dataset_html.py
@@ -53,20 +53,29 @@ python lerobot/scripts/visualize_dataset_html.py \
"""

import argparse
+import csv
+import json
import logging
+import re
import shutil
+import tempfile
+from io import StringIO
from pathlib import Path

-import tqdm
-from flask import Flask, redirect, render_template, url_for
+import numpy as np
+import pandas as pd
+import requests
+from flask import Flask, redirect, render_template, request, url_for

+from lerobot import available_datasets
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.common.datasets.utils import IterableNamespace
from lerobot.common.utils.utils import init_logging


def run_server(
-    dataset: LeRobotDataset,
-    episodes: list[int],
+    dataset: LeRobotDataset | IterableNamespace | None,
+    episodes: list[int] | None,
    host: str,
    port: str,
    static_folder: Path,
@@ -76,10 +85,50 @@
    app.config["SEND_FILE_MAX_AGE_DEFAULT"] = 0  # specifying not to cache

    @app.route("/")
-    def index():
-        # home page redirects to the first episode page
-        [dataset_namespace, dataset_name] = dataset.repo_id.split("/")
-        first_episode_id = episodes[0]
+    def homepage(dataset=dataset):
+        if dataset:
+            dataset_namespace, dataset_name = dataset.repo_id.split("/")
+            return redirect(
+                url_for(
+                    "show_episode",
+                    dataset_namespace=dataset_namespace,
+                    dataset_name=dataset_name,
+                    episode_id=0,
+                )
+            )
+
+        dataset_param, episode_param = None, None
+        all_params = request.args
+        if "dataset" in all_params:
+            dataset_param = all_params["dataset"]
+        if "episode" in all_params:
+            episode_param = int(all_params["episode"])
+
+        if dataset_param:
+            dataset_namespace, dataset_name = dataset_param.split("/")
+            return redirect(
+                url_for(
+                    "show_episode",
+                    dataset_namespace=dataset_namespace,
+                    dataset_name=dataset_name,
+                    episode_id=episode_param if episode_param is not None else 0,
+                )
+            )
+
+        featured_datasets = [
+            "lerobot/aloha_static_cups_open",
+            "lerobot/columbia_cairlab_pusht_real",
+            "lerobot/taco_play",
+        ]
+        return render_template(
+            "visualize_dataset_homepage.html",
+            featured_datasets=featured_datasets,
+            lerobot_datasets=available_datasets,
+        )
+
+    @app.route("/<string:dataset_namespace>/<string:dataset_name>")
+    def show_first_episode(dataset_namespace, dataset_name):
+        first_episode_id = 0
        return redirect(
            url_for(
                "show_episode",
@@ -90,30 +139,85 @@
            )
        )

    @app.route("/<string:dataset_namespace>/<string:dataset_name>/episode_<int:episode_id>")
-    def show_episode(dataset_namespace, dataset_name, episode_id):
+    def show_episode(dataset_namespace, dataset_name, episode_id, dataset=dataset, episodes=episodes):
+        repo_id = f"{dataset_namespace}/{dataset_name}"
+        try:
+            if dataset is None:
+                dataset = get_dataset_info(repo_id)
+        except FileNotFoundError:
+            return (
+                "Make sure to convert your LeRobotDataset to v2 & above. See how to convert your dataset at https://github.com/huggingface/lerobot/pull/461",
+                400,
+            )
+        dataset_version = (
+            dataset.meta._version if isinstance(dataset, LeRobotDataset) else dataset.codebase_version
+        )
+        match = re.search(r"v(\d+)\.", dataset_version)
+        if match:
+            major_version = int(match.group(1))
+            if major_version < 2:
+                return "Make sure to convert your LeRobotDataset to v2 & above."
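+            # For illustration (assumed version string format): a dataset whose
+            # codebase_version is "v2.0" yields major_version == 2 and passes this
+            # gate, while a "v1.6" dataset gets the conversion message above.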
+ + episode_data_csv_str, columns = get_episode_data(dataset, episode_id) dataset_info = { - "repo_id": dataset.repo_id, - "num_samples": dataset.num_frames, - "num_episodes": dataset.num_episodes, + "repo_id": f"{dataset_namespace}/{dataset_name}", + "num_samples": dataset.num_frames + if isinstance(dataset, LeRobotDataset) + else dataset.total_frames, + "num_episodes": dataset.num_episodes + if isinstance(dataset, LeRobotDataset) + else dataset.total_episodes, "fps": dataset.fps, } - video_paths = [dataset.meta.get_video_file_path(episode_id, key) for key in dataset.meta.video_keys] - tasks = dataset.meta.episodes[episode_id]["tasks"] - videos_info = [ - {"url": url_for("static", filename=video_path), "filename": video_path.name} - for video_path in video_paths - ] + if isinstance(dataset, LeRobotDataset): + video_paths = [ + dataset.meta.get_video_file_path(episode_id, key) for key in dataset.meta.video_keys + ] + videos_info = [ + {"url": url_for("static", filename=video_path), "filename": video_path.parent.name} + for video_path in video_paths + ] + tasks = dataset.meta.episodes[episode_id]["tasks"] + else: + video_keys = [key for key, ft in dataset.features.items() if ft["dtype"] == "video"] + videos_info = [ + { + "url": f"https://huggingface.co/datasets/{repo_id}/resolve/main/" + + dataset.video_path.format( + episode_chunk=int(episode_id) // dataset.chunks_size, + video_key=video_key, + episode_index=episode_id, + ), + "filename": video_key, + } + for video_key in video_keys + ] + + response = requests.get( + f"https://huggingface.co/datasets/{repo_id}/resolve/main/meta/episodes.jsonl" + ) + response.raise_for_status() + # Split into lines and parse each line as JSON + tasks_jsonl = [json.loads(line) for line in response.text.splitlines() if line.strip()] + + filtered_tasks_jsonl = [row for row in tasks_jsonl if row["episode_index"] == episode_id] + tasks = filtered_tasks_jsonl[0]["tasks"] + videos_info[0]["language_instruction"] = tasks - ep_csv_url = url_for("static", filename=get_ep_csv_fname(episode_id)) + if episodes is None: + episodes = list( + range(dataset.num_episodes if isinstance(dataset, LeRobotDataset) else dataset.total_episodes) + ) + return render_template( "visualize_dataset_template.html", episode_id=episode_id, episodes=episodes, dataset_info=dataset_info, videos_info=videos_info, - ep_csv_url=ep_csv_url, - has_policy=False, + episode_data_csv_str=episode_data_csv_str, + columns=columns, ) app.run(host=host, port=port) @@ -124,46 +228,69 @@ def get_ep_csv_fname(episode_id: int): return ep_csv_fname -def write_episode_data_csv(output_dir, file_name, episode_index, dataset): - """Write a csv file containg timeseries data of an episode (e.g. state and action). +def get_episode_data(dataset: LeRobotDataset | IterableNamespace, episode_index): + """Get a csv str containing timeseries data of an episode (e.g. state and action). 
This file will be loaded by Dygraph javascript to plot data in real time.""" - from_idx = dataset.episode_data_index["from"][episode_index] - to_idx = dataset.episode_data_index["to"][episode_index] + columns = [] - has_state = "observation.state" in dataset.features - has_action = "action" in dataset.features + selected_columns = [col for col, ft in dataset.features.items() if ft["dtype"] == "float32"] + selected_columns.remove("timestamp") # init header of csv with state and action names header = ["timestamp"] - if has_state: - dim_state = dataset.meta.shapes["observation.state"][0] - header += [f"state_{i}" for i in range(dim_state)] - if has_action: - dim_action = dataset.meta.shapes["action"][0] - header += [f"action_{i}" for i in range(dim_action)] - columns = ["timestamp"] - if has_state: - columns += ["observation.state"] - if has_action: - columns += ["action"] + for column_name in selected_columns: + dim_state = ( + dataset.meta.shapes[column_name][0] + if isinstance(dataset, LeRobotDataset) + else dataset.features[column_name].shape[0] + ) + header += [f"{column_name}_{i}" for i in range(dim_state)] - rows = [] - data = dataset.hf_dataset.select_columns(columns) - for i in range(from_idx, to_idx): - row = [data[i]["timestamp"].item()] - if has_state: - row += data[i]["observation.state"].tolist() - if has_action: - row += data[i]["action"].tolist() - rows.append(row) + if "names" in dataset.features[column_name] and dataset.features[column_name]["names"]: + column_names = dataset.features[column_name]["names"] + while not isinstance(column_names, list): + column_names = list(column_names.values())[0] + else: + column_names = [f"motor_{i}" for i in range(dim_state)] + columns.append({"key": column_name, "value": column_names}) - output_dir.mkdir(parents=True, exist_ok=True) - with open(output_dir / file_name, "w") as f: - f.write(",".join(header) + "\n") - for row in rows: - row_str = [str(col) for col in row] - f.write(",".join(row_str) + "\n") + selected_columns.insert(0, "timestamp") + + if isinstance(dataset, LeRobotDataset): + from_idx = dataset.episode_data_index["from"][episode_index] + to_idx = dataset.episode_data_index["to"][episode_index] + data = ( + dataset.hf_dataset.select(range(from_idx, to_idx)) + .select_columns(selected_columns) + .with_format("pandas") + ) + else: + repo_id = dataset.repo_id + + url = f"https://huggingface.co/datasets/{repo_id}/resolve/main/" + dataset.data_path.format( + episode_chunk=int(episode_index) // dataset.chunks_size, episode_index=episode_index + ) + df = pd.read_parquet(url) + data = df[selected_columns] # Select specific columns + + rows = np.hstack( + ( + np.expand_dims(data["timestamp"], axis=1), + *[np.vstack(data[col]) for col in selected_columns[1:]], + ) + ).tolist() + + # Convert data to CSV string + csv_buffer = StringIO() + csv_writer = csv.writer(csv_buffer) + # Write header + csv_writer.writerow(header) + # Write data rows + csv_writer.writerows(rows) + csv_string = csv_buffer.getvalue() + + return csv_string, columns def get_episode_video_paths(dataset: LeRobotDataset, ep_index: int) -> list[str]: @@ -175,9 +302,31 @@ def get_episode_video_paths(dataset: LeRobotDataset, ep_index: int) -> list[str] ] +def get_episode_language_instruction(dataset: LeRobotDataset, ep_index: int) -> list[str]: + # check if the dataset has language instructions + if "language_instruction" not in dataset.features: + return None + + # get first frame index + first_frame_idx = dataset.episode_data_index["from"][ep_index].item() + + 
language_instruction = dataset.hf_dataset[first_frame_idx]["language_instruction"]
+    # TODO (michel-aractingi) hack to get the sentence, some strings in openx are badly stored
+    # with the tf.tensor appearing in the string
+    return language_instruction.removeprefix("tf.Tensor(b'").removesuffix("', shape=(), dtype=string)")
+
+
+def get_dataset_info(repo_id: str) -> IterableNamespace:
+    response = requests.get(f"https://huggingface.co/datasets/{repo_id}/resolve/main/meta/info.json")
+    response.raise_for_status()  # Raises an HTTPError for bad responses
+    dataset_info = response.json()
+    dataset_info["repo_id"] = repo_id
+    return IterableNamespace(dataset_info)
+
+
 def visualize_dataset_html(
-    dataset: LeRobotDataset,
-    episodes: list[int] = None,
+    dataset: LeRobotDataset | None,
+    episodes: list[int] | None = None,
     output_dir: Path | None = None,
     serve: bool = True,
     host: str = "127.0.0.1",
@@ -186,11 +335,11 @@
 ) -> Path | None:
     init_logging()

-    if len(dataset.meta.image_keys) > 0:
-        raise NotImplementedError(f"Image keys ({dataset.meta.image_keys=}) are currently not supported.")
+    template_dir = Path(__file__).resolve().parent.parent / "templates"

     if output_dir is None:
-        output_dir = f"outputs/visualize_dataset_html/{dataset.repo_id}"
+        # Create a temporary directory for the served files (note: tempfile.mkdtemp
+        # does not remove it automatically)
+        output_dir = tempfile.mkdtemp(prefix="lerobot_visualize_dataset_")

     output_dir = Path(output_dir)
     if output_dir.exists():
@@ -201,28 +350,29 @@

     output_dir.mkdir(parents=True, exist_ok=True)

-    # Create a simlink from the dataset video folder containg mp4 files to the output directory
-    # so that the http server can get access to the mp4 files.
     static_dir = output_dir / "static"
     static_dir.mkdir(parents=True, exist_ok=True)
-    ln_videos_dir = static_dir / "videos"
-    if not ln_videos_dir.exists():
-        ln_videos_dir.symlink_to((dataset.root / "videos").resolve())
-    template_dir = Path(__file__).resolve().parent.parent / "templates"
+    if dataset is None:
+        if serve:
+            run_server(
+                dataset=None,
+                episodes=None,
+                host=host,
+                port=port,
+                static_folder=static_dir,
+                template_folder=template_dir,
+            )
+    else:
+        # Create a symlink from the dataset video folder containing mp4 files to the output directory
+        # so that the http server can get access to the mp4 files.
+        if isinstance(dataset, LeRobotDataset):
+            ln_videos_dir = static_dir / "videos"
+            if not ln_videos_dir.exists():
+                ln_videos_dir.symlink_to((dataset.root / "videos").resolve())

-    if episodes is None:
-        episodes = list(range(dataset.num_episodes))
-
-    logging.info("Writing CSV files")
-    for episode_index in tqdm.tqdm(episodes):
-        # write states and actions in a csv (it can be slow for big datasets)
-        ep_csv_fname = get_ep_csv_fname(episode_index)
-        # TODO(rcadene): speedup script by loading directly from dataset, pyarrow, parquet, safetensors?
-        write_episode_data_csv(static_dir, ep_csv_fname, episode_index, dataset)
-
-    if serve:
-        run_server(dataset, episodes, host, port, static_dir, template_dir)
+        if serve:
+            run_server(dataset, episodes, host, port, static_dir, template_dir)


 def main():
@@ -231,7 +381,7 @@
     parser.add_argument(
         "--repo-id",
         type=str,
-        required=True,
+        default=None,
         help="Name of Hugging Face repository containing a LeRobotDataset dataset (e.g. `lerobot/pusht` for https://huggingface.co/datasets/lerobot/pusht).",
     )
     parser.add_argument(
@@ -246,6 +396,12 @@
         default=None,
         help="Root directory for a dataset stored locally (e.g. `--root data`). By default, the dataset will be loaded from the Hugging Face cache folder, or downloaded from the hub if available.",
     )
+    parser.add_argument(
+        "--load-from-hf-hub",
+        type=int,
+        default=0,
+        help="Load videos and parquet files from HF Hub rather than local system.",
+    )
     parser.add_argument(
         "--episodes",
         type=int,
@@ -287,11 +443,19 @@
     args = parser.parse_args()
     kwargs = vars(args)
     repo_id = kwargs.pop("repo_id")
+    load_from_hf_hub = kwargs.pop("load_from_hf_hub")
     root = kwargs.pop("root")
     local_files_only = kwargs.pop("local_files_only")

-    dataset = LeRobotDataset(repo_id, root=root, local_files_only=local_files_only)
-    visualize_dataset_html(dataset, **kwargs)
+    dataset = None
+    if repo_id:
+        dataset = (
+            LeRobotDataset(repo_id, root=root, local_files_only=local_files_only)
+            if not load_from_hf_hub
+            else get_dataset_info(repo_id)
+        )
+
+    visualize_dataset_html(dataset, **vars(args))


 if __name__ == "__main__":
diff --git a/lerobot/templates/visualize_dataset_homepage.html b/lerobot/templates/visualize_dataset_homepage.html
new file mode 100644
index 00000000..adff07be
--- /dev/null
+++ b/lerobot/templates/visualize_dataset_homepage.html
@@ -0,0 +1,68 @@
+<!-- New 68-line homepage template, condensed here to its recoverable structure: -->
+<!-- head: <title>Interactive Video Background Page</title>, page styling and scripts -->
+<!-- body: "LeRobot Dataset Visualizer" heading, a "create & train your own robots" link, -->
+<!-- and a dataset/episode search form (read by the homepage route via request.args) -->
+<!-- "Example Datasets:" section -->
+{% for dataset in featured_datasets %}
+  <!-- linked entry for {{ dataset }} -->
+{% endfor %}
+<!-- collapsible "More example datasets" section -->
+{% for dataset in lerobot_datasets %}
+  <!-- linked entry for {{ dataset }} -->
+{% endfor %}
\ No newline at end of file
diff --git a/lerobot/templates/visualize_dataset_template.html b/lerobot/templates/visualize_dataset_template.html
index 0fa1e713..3c93d2d6 100644
--- a/lerobot/templates/visualize_dataset_template.html
+++ b/lerobot/templates/visualize_dataset_template.html
@@ -31,11 +31,16 @@
<!-- Template diff condensed to its recoverable structure: -->
-    <!-- header block showing {{ dataset_info.repo_id }} -->
+    <!-- revised header block showing {{ dataset_info.repo_id }} -->
         <li>
-            Number of samples/frames: {{ dataset_info.num_frames }}
+            Number of samples/frames: {{ dataset_info.num_samples }}
         </li>
         <li>
             Number of episodes: {{ dataset_info.num_episodes }}
@@ -93,10 +98,35 @@
+    <!-- new "filter videos 🔽" dropdown with a show/hide toggle per camera stream -->
     {% for video_info in videos_info %}
-        <!-- video block titled {{ video_info.filename }} -->
+        <!-- revised video block titled {{ video_info.filename }} -->
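Below is a minimal usage sketch of the two new visualizer entry points introduced by this diff. The function and flag names (`visualize_dataset_html`, `get_dataset_info`, `--load-from-hf-hub`) come from the script changes above; the repo id, host, and port are illustrative placeholders, not values mandated by this PR.

```python
# Sketch, assuming this diff is applied; "lerobot/pusht" and port 9090 are illustrative.
from lerobot.scripts.visualize_dataset_html import get_dataset_info, visualize_dataset_html

SERVE_HOMEPAGE = True

if SERVE_HOMEPAGE:
    # Equivalent to running the script with no --repo-id: serves the new homepage,
    # which lists featured datasets and redirects ?dataset=...&episode=... queries.
    visualize_dataset_html(dataset=None, serve=True, host="127.0.0.1", port=9090)
else:
    # Equivalent to --repo-id lerobot/pusht --load-from-hf-hub 1: metadata is read
    # from meta/info.json on the Hub, and parquet/video files are streamed remotely.
    dataset = get_dataset_info("lerobot/pusht")  # IterableNamespace, no local download
    visualize_dataset_html(dataset, serve=True, host="127.0.0.1", port=9090)
```

Either call blocks while the Flask server runs, so pick one mode per process.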