Merge branch 'main' of https://github.com/huggingface/lerobot into smolvlm-pusht

This commit is contained in:
ivelin 2025-01-22 14:07:03 -06:00
commit 94ab721001
19 changed files with 563 additions and 192 deletions

View File

@@ -50,7 +50,7 @@ jobs:
         uses: actions/checkout@v3
       - name: Install poetry
-        run: pipx install poetry
+        run: pipx install "poetry<2.0.0"
       - name: Poetry check
         run: poetry check
@@ -64,7 +64,7 @@ jobs:
         uses: actions/checkout@v3
       - name: Install poetry
-        run: pipx install poetry
+        run: pipx install "poetry<2.0.0"
      - name: Install poetry-relax
        run: poetry self add poetry-relax

View File

@@ -3,7 +3,7 @@ default_language_version:
   python: python3.10
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.6.0
+    rev: v5.0.0
     hooks:
       - id: check-added-large-files
       - id: debug-statements
@@ -14,11 +14,11 @@ repos:
       - id: end-of-file-fixer
       - id: trailing-whitespace
   - repo: https://github.com/asottile/pyupgrade
-    rev: v3.16.0
+    rev: v3.19.0
     hooks:
       - id: pyupgrade
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.5.2
+    rev: v0.8.2
     hooks:
       - id: ruff
         args: [--fix]
@@ -32,6 +32,6 @@ repos:
       - "--check"
       - "--no-update"
   - repo: https://github.com/gitleaks/gitleaks
-    rev: v8.18.4
+    rev: v8.21.2
     hooks:
       - id: gitleaks

View File

@@ -68,7 +68,7 @@
 ### Acknowledgment

-- Thanks to Tony Zaho, Zipeng Fu and colleagues for open sourcing ACT policy, ALOHA environments and datasets. Ours are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha) and [Mobile ALOHA](https://mobile-aloha.github.io).
+- Thanks to Tony Zhao, Zipeng Fu and colleagues for open sourcing ACT policy, ALOHA environments and datasets. Ours are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha) and [Mobile ALOHA](https://mobile-aloha.github.io).
 - Thanks to Cheng Chi, Zhenjia Xu and colleagues for open sourcing Diffusion policy, Pusht environment and datasets, as well as UMI datasets. Ours are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu) and [UMI Gripper](https://umi-gripper.github.io).
 - Thanks to Nicklas Hansen, Yunhai Feng and colleagues for open sourcing TDMPC policy, Simxarm environments and datasets. Ours are adapted from [TDMPC](https://github.com/nicklashansen/tdmpc) and [FOWM](https://www.yunhaifeng.com/FOWM).
 - Thanks to Antonio Loquercio and Ashish Kumar for their early support.

View File

@@ -21,7 +21,7 @@ How to decode videos?
 ## Variables
 **Image content & size**
-We don't expect the same optimal settings for a dataset of images from a simulation, or from real-world in an appartment, or in a factory, or outdoor, or with lots of moving objects in the scene, etc. Similarly, loading times might not vary linearly with the image size (resolution).
+We don't expect the same optimal settings for a dataset of images from a simulation, or from the real world in an apartment, or in a factory, or outdoors, or with lots of moving objects in the scene, etc. Similarly, loading times might not vary linearly with the image size (resolution).
 For these reasons, we run this benchmark on four representative datasets:
 - `lerobot/pusht_image`: (96 x 96 pixels) simulation with simple geometric shapes, fixed camera.
 - `aliberts/aloha_mobile_shrimp_image`: (480 x 640 pixels) real-world indoor, moving camera.
@@ -63,7 +63,7 @@ This of course is affected by the `-g` parameter during encoding, which specifies
 Note that this differs significantly from a typical use case like watching a movie, in which every frame is loaded sequentially from the beginning to the end and it's acceptable to have big values for `-g`.

-Additionally, because some policies might request single timestamps that are a few frames appart, we also have the following scenario:
+Additionally, because some policies might request single timestamps that are a few frames apart, we also have the following scenario:
 - `2_frames_4_space`: 2 frames with 4 consecutive frames of spacing in between (e.g. `[t, t + 5 / fps]`),

 However, due to how video decoding is implemented with `pyav`, we don't have access to an accurate seek, so in practice this scenario is essentially the same as `6_frames`, since all 6 frames between `t` and `t + 5 / fps` will be decoded.
@@ -85,8 +85,8 @@ However, due to how video decoding is implemented with `pyav`, we don't have access
 **Average Structural Similarity Index Measure (higher is better)**
 `avg_ssim` evaluates the perceived quality of images by comparing luminance, contrast, and structure. SSIM values range from -1 to 1, where 1 indicates perfect similarity.

-One aspect that can't be measured here with those metrics is the compatibility of the encoding accross platforms, in particular on web browser, for visualization purposes.
-h264, h265 and AV1 are all commonly used codecs and should not be pose an issue. However, the chroma subsampling (`pix_fmt`) format might affect compatibility:
+One aspect that can't be measured here with those metrics is the compatibility of the encoding across platforms, in particular in web browsers, for visualization purposes.
+h264, h265 and AV1 are all commonly used codecs and should not pose an issue. However, the chroma subsampling (`pix_fmt`) format might affect compatibility:
 - `yuv420p` is more widely supported across various platforms, including web browsers.
 - `yuv444p` offers higher color fidelity but might not be supported as broadly.
@@ -116,7 +116,7 @@ Additional encoding parameters exist that are not included in this benchmark. In particular:
 - `-preset` which allows for selecting encoding presets. This represents a collection of options that will provide a certain encoding speed to compression ratio. By leaving this parameter unspecified, it is considered to be `medium` for libx264 and libx265 and `8` for libsvtav1.
 - `-tune` which allows optimizing the encoding for certain aspects (e.g. film quality, fast decoding, etc.).

-See the documentation mentioned above for more detailled info on these settings and for a more comprehensive list of other parameters.
+See the documentation mentioned above for more detailed info on these settings and for a more comprehensive list of other parameters.

 Similarly on the decoding side, other decoders exist but are not implemented in our current benchmark. To name a few:
 - `torchaudio`
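
To make the parameters discussed above concrete, here is a minimal encoding sketch (Python, calling the ffmpeg CLI; the input pattern and the values for `-g`, `-crf` and `-pix_fmt` are illustrative choices, not the benchmark's defaults):

```python
import subprocess

# Encode a (hypothetical) image sequence into an mp4 suited to random access:
# a small GOP (-g 2) inserts a keyframe every 2 frames, trading file size for
# fast seeks; yuv420p maximizes playback compatibility (e.g. web browsers).
subprocess.run(
    [
        "ffmpeg",
        "-f", "image2",
        "-framerate", "30",
        "-i", "frames/frame_%06d.png",  # hypothetical input frames
        "-vcodec", "libx264",
        "-pix_fmt", "yuv420p",
        "-g", "2",
        "-crf", "30",  # constant rate factor: quality vs. size trade-off
        "episode_0.mp4",
    ],
    check=True,
)
```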

View File

@@ -1,25 +1,31 @@
-This tutorial explains how to use [SO-100](https://github.com/TheRobotStudio/SO-ARM100) with LeRobot.
+# Using the [SO-100](https://github.com/TheRobotStudio/SO-ARM100) with LeRobot

-## Source the parts
+## A. Source the parts

 Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials, with links to source the parts, as well as instructions to 3D print the parts, and advice if it's your first time printing or if you don't already own a 3D printer.

 **Important**: Before assembling, you will first need to configure your motors. To this end, we provide a nice script, so let's first install LeRobot. After configuration, we will also guide you through assembly.

-## Install LeRobot
+## B. Install LeRobot

 On your computer:

 1. [Install Miniconda](https://docs.anaconda.com/miniconda/#quick-command-line-install):
 ```bash
 mkdir -p ~/miniconda3
+# Linux:
 wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
+# Mac M-series:
+# curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh -o ~/miniconda3/miniconda.sh
+# Mac Intel:
+# curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -o ~/miniconda3/miniconda.sh
 bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
 rm ~/miniconda3/miniconda.sh
 ~/miniconda3/bin/conda init bash
 ```
-2. Restart shell or `source ~/.bashrc`
+2. Restart shell or `source ~/.bashrc` (*Mac*: `source ~/.bash_profile`) or `source ~/.zshrc` if you're using zshell

 3. Create and activate a fresh conda environment for lerobot
 ```bash
@@ -36,23 +42,30 @@ git clone https://github.com/huggingface/lerobot.git ~/lerobot
 cd ~/lerobot && pip install -e ".[feetech]"
 ```

-For Linux only (not Mac), install extra dependencies for recording datasets:
+*For Linux only (not Mac)*: install extra dependencies for recording datasets:
 ```bash
 conda install -y -c conda-forge ffmpeg
 pip uninstall -y opencv-python
 conda install -y -c conda-forge "opencv>=4.10.0"
 ```

-## Configure the motors
+## C. Configure the motors

-Follow steps 1 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I) which illustrates the use of our scripts below.
-
-**Find USB ports associated to your arms**
-
-To find the correct ports for each arm, run the utility script twice:
+### 1. Find the USB ports associated with each arm
+
+Designate one bus servo adapter and 6 motors for your leader arm, and similarly the other bus servo adapter and 6 motors for the follower arm.
+
+#### a. Run the script to find ports
+
+Follow Step 1 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I), which illustrates the use of our scripts below.
+
+To find the port for each bus servo adapter, run the utility script:
 ```bash
 python lerobot/scripts/find_motors_bus_port.py
 ```
+
+#### b. Example outputs
+
 Example output when identifying the leader arm's port (e.g., `/dev/tty.usbmodem575E0031751` on Mac, or possibly `/dev/ttyACM0` on Linux):
 ```
 Finding all available ports for the MotorBus.
@@ -64,7 +77,6 @@ Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
 The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0031751
 Reconnect the usb cable.
 ```
-
 Example output when identifying the follower arm's port (e.g., `/dev/tty.usbmodem575E0032081`, or possibly `/dev/ttyACM1` on Linux):
 ```
 Finding all available ports for the MotorBus.
@@ -77,13 +89,20 @@ The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0032081
 Reconnect the usb cable.
 ```

-Troubleshooting: On Linux, you might need to give access to the USB ports by running:
+#### c. Troubleshooting
+
+On Linux, you might need to give access to the USB ports by running:
 ```bash
 sudo chmod 666 /dev/ttyACM0
 sudo chmod 666 /dev/ttyACM1
 ```

-**Configure your motors**
+#### d. Update YAML file
+
+Now that you have the ports, modify the *port* sections in `so100.yaml`.
+
+### 2. Configure the motors
+
+#### a. Set IDs for all 12 motors
+
 Plug your first motor and run this script to set its ID to 1. It will also set its present position to 2048, so expect your motor to rotate:
 ```bash
 python lerobot/scripts/configure_motor.py \
@@ -94,7 +113,7 @@ python lerobot/scripts/configure_motor.py \
   --ID 1
 ```

-Note: These motors are currently limitated. They can take values between 0 and 4096 only, which corresponds to a full turn. They can't turn more than that. 2048 is at the middle of this range, so we can take -2048 steps (180 degrees anticlockwise) and reach the maximum range, or take +2048 steps (180 degrees clockwise) and reach the maximum range. The configuration step also sets the homing offset to 0, so that if you misassembled the arm, you can always update the homing offset to account for a shift up to ± 2048 steps (± 180 degrees).
+*Note: These motors are currently limited. They can take values between 0 and 4096 only, which corresponds to a full turn. They can't turn more than that. 2048 is in the middle of this range, so we can take -2048 steps (180 degrees anticlockwise) and reach the maximum range, or take +2048 steps (180 degrees clockwise) and reach the maximum range. The configuration step also sets the homing offset to 0, so that if you misassembled the arm, you can always update the homing offset to account for a shift of up to ± 2048 steps (± 180 degrees).*
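
As a quick sanity check on the step-to-degree arithmetic in the note above, here is a small sketch (the helper is ours, not part of the LeRobot scripts):

```python
STEPS_PER_TURN = 4096  # position registers cover exactly one full turn
CENTER = 2048          # mid-range position set by configure_motor.py

def offset_degrees(steps: int) -> float:
    """Convert a step offset from the centered position into degrees."""
    return steps * 360 / STEPS_PER_TURN

assert offset_degrees(+2048) == 180.0   # +2048 steps = 180 degrees clockwise
assert offset_degrees(-2048) == -180.0  # -2048 steps = 180 degrees anticlockwise
```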
 Then unplug your motor and plug the second motor and set its ID to 2.
 ```bash
@@ -108,23 +127,25 @@ python lerobot/scripts/configure_motor.py \
   --ID 2
 ```

 Redo the process for all your motors until ID 6. Do the same for the 6 motors of the leader arm.

-**Remove the gears of the 6 leader motors**
-
-Follow step 2 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I). You need to remove the gear for the motors of the leader arm. As a result, you will only use the position encoding of the motor and reduce friction to more easily operate the leader arm.
-
-**Add motor horn to the motors**
-
-Follow step 3 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I). For SO-100, you need to align the holes on the motor horn to the motor spline to be approximately 1:30, 4:30, 7:30 and 10:30.
+#### b. Remove the gears of the 6 leader motors
+
+Follow step 2 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=248). You need to remove the gear for the motors of the leader arm. As a result, you will only use the position encoding of the motor and reduce friction to more easily operate the leader arm.
+
+#### c. Add motor horn to all 12 motors
+
+Follow step 3 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=569). For SO-100, you need to align the holes on the motor horn to the motor spline to be at approximately 1:30, 4:30, 7:30 and 10:30.

 Try to avoid rotating the motor while doing so, to keep position 2048 set during configuration. It is especially tricky for the leader motors, as they are more sensitive without the gears, but it's ok if it's a bit rotated.

-## Assemble the arms
+## D. Assemble the arms

-Follow step 4 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I). The first arm should take a bit more than 1 hour to assemble, but once you get use to it, you can do it under 1 hour for the second arm.
+Follow step 4 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=610). The first arm should take a bit more than 1 hour to assemble, but once you get used to it, you can do it in under 1 hour for the second arm.

-## Calibrate
+## E. Calibrate

 Next, you'll need to calibrate your SO-100 robot to ensure that the leader and follower arms have the same position values when they are in the same physical position. This calibration is essential because it allows a neural network trained on one SO-100 robot to work on another.

-**Manual calibration of follower arm**
+#### a. Manual calibration of follower arm

-/!\ Contrarily to step 6 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I) which illustrates the auto calibration, we will actually do manual calibration of follower for now.
+/!\ Contrary to step 6 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=724), which illustrates the auto calibration, we will actually do manual calibration of the follower for now.

 You will need to move the follower arm to these positions sequentially:
@@ -139,8 +160,8 @@ python lerobot/scripts/control_robot.py calibrate \
   --robot-overrides '~cameras' --arms main_follower
 ```

-**Manual calibration of leader arm**
+#### b. Manual calibration of leader arm

-Follow step 6 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I) which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:
+Follow step 6 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=724), which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:

 | 1. Zero position | 2. Rotated position | 3. Rest position |
 |---|---|---|
@@ -153,7 +174,7 @@ python lerobot/scripts/control_robot.py calibrate \
   --robot-overrides '~cameras' --arms main_leader
 ```

-## Teleoperate
+## F. Teleoperate

 **Simple teleop**
 Then you are ready to teleoperate your robot! Run this simple script (it won't connect and display the cameras):
@@ -165,14 +186,14 @@ python lerobot/scripts/control_robot.py teleoperate \
 ```

-**Teleop with displaying cameras**
+#### a. Teleop with displaying cameras

 Follow [this guide to setup your cameras](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#c-add-your-cameras-with-opencvcamera). Then you will be able to display the cameras on your computer while you are teleoperating by running the following code. This is useful to prepare your setup before recording your first dataset.
 ```bash
 python lerobot/scripts/control_robot.py teleoperate \
   --robot-path lerobot/configs/robot/so100.yaml
 ```

-## Record a dataset
+## G. Record a dataset

 Once you're familiar with teleoperation, you can record your first dataset with SO-100.
@@ -201,7 +222,7 @@ python lerobot/scripts/control_robot.py record \
   --push-to-hub 1
 ```

-## Visualize a dataset
+## H. Visualize a dataset

 If you uploaded your dataset to the hub with `--push-to-hub 1`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id given by:
 ```bash
@@ -214,7 +235,7 @@ python lerobot/scripts/visualize_dataset_html.py \
   --repo-id ${HF_USER}/so100_test
 ```

-## Replay an episode
+## I. Replay an episode

 Now try to replay the first episode on your robot:
 ```bash
@@ -225,7 +246,7 @@ python lerobot/scripts/control_robot.py replay \
   --episode 0
 ```

-## Train a policy
+## J. Train a policy

 To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
 ```bash
@@ -248,7 +269,7 @@ Let's explain it:
 Training should take several hours. You will find checkpoints in `outputs/train/act_so100_test/checkpoints`.

-## Evaluate your policy
+## K. Evaluate your policy

 You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
 ```bash
@@ -268,7 +289,7 @@ As you can see, it's almost the same command as previously used to record your training dataset.
 1. There is an additional `-p` argument which indicates the path to your policy checkpoint (e.g. `-p outputs/train/eval_so100_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `-p ${HF_USER}/act_so100_test`).
 2. The name of the dataset begins with `eval` to reflect that you are running inference (e.g. `--repo-id ${HF_USER}/eval_act_so100_test`).

-## More
+## L. More Information

 Follow this [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) for a more in-depth tutorial on controlling real robots with LeRobot.

View File

@@ -10,10 +10,10 @@ from torchvision.transforms import ToPILImage, v2
 from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

-dataset_repo_id = "lerobot/aloha_static_tape"
+dataset_repo_id = "lerobot/aloha_static_screw_driver"

 # Create a LeRobotDataset with no transformations
-dataset = LeRobotDataset(dataset_repo_id)
+dataset = LeRobotDataset(dataset_repo_id, episodes=[0])
 # This is equivalent to `dataset = LeRobotDataset(dataset_repo_id, image_transforms=None)`

 # Get the index of the first observation in the first episode
@@ -28,12 +28,13 @@ transforms = v2.Compose(
     [
         v2.ColorJitter(brightness=(0.5, 1.5)),
         v2.ColorJitter(contrast=(0.5, 1.5)),
+        v2.ColorJitter(hue=(-0.1, 0.1)),
         v2.RandomAdjustSharpness(sharpness_factor=2, p=1),
     ]
 )

 # Create another LeRobotDataset with the defined transformations
-transformed_dataset = LeRobotDataset(dataset_repo_id, image_transforms=transforms)
+transformed_dataset = LeRobotDataset(dataset_repo_id, episodes=[0], image_transforms=transforms)

 # Get a frame from the transformed dataset
 transformed_frame = transformed_dataset[first_idx][transformed_dataset.meta.camera_keys[0]]
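
A possible continuation of this example (not part of the diff): save the untouched and transformed frames side by side to eyeball the effect of the transforms. `ToPILImage` is already imported in the script, and `first_idx` is the index computed in the elided portion above.

```python
to_pil = ToPILImage()
frame = dataset[first_idx][dataset.meta.camera_keys[0]]  # untouched frame
to_pil(frame).save("original_frame.png")
to_pil(transformed_frame).save("transformed_frame.png")
```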

View File

@@ -56,7 +56,7 @@ python lerobot/scripts/control_robot.py teleoperate \
   --robot-overrides max_relative_target=5
 ```

-By adding `--robot-overrides max_relative_target=5`, we override the default value for `max_relative_target` defined in `lerobot/configs/robot/aloha.yaml`. It is expected to be `5` to limit the magnitude of the movement for more safety, but the teloperation won't be smooth. When you feel confident, you can disable this limit by adding `--robot-overrides max_relative_target=null` to the command line:
+By adding `--robot-overrides max_relative_target=5`, we override the default value for `max_relative_target` defined in `lerobot/configs/robot/aloha.yaml`. It is expected to be `5` to limit the magnitude of the movement for more safety, but the teleoperation won't be smooth. When you feel confident, you can disable this limit by adding `--robot-overrides max_relative_target=null` to the command line:

 ```bash
 python lerobot/scripts/control_robot.py teleoperate \
   --robot-path lerobot/configs/robot/aloha.yaml \

View File

@@ -28,7 +28,7 @@ def safe_stop_image_writer(func):
         try:
             return func(*args, **kwargs)
         except Exception as e:
-            dataset = kwargs.get("dataset", None)
+            dataset = kwargs.get("dataset")
             image_writer = getattr(dataset, "image_writer", None) if dataset else None
             if image_writer is not None:
                 print("Waiting for image writer to terminate...")

View File

@@ -17,9 +17,11 @@ import importlib.resources
 import json
 import logging
 import textwrap
+from collections.abc import Iterator
 from itertools import accumulate
 from pathlib import Path
 from pprint import pformat
+from types import SimpleNamespace
 from typing import Any

 import datasets
@@ -477,7 +479,6 @@ def create_lerobot_dataset_card(
     Note: If specified, license must be one of https://huggingface.co/docs/hub/repositories-licenses.
     """
     card_tags = ["LeRobot"]
-    card_template_path = importlib.resources.path("lerobot.common.datasets", "card_template.md")

     if tags:
         card_tags += tags
@@ -497,8 +498,65 @@ def create_lerobot_dataset_card(
         ],
     )
+    card_template = (importlib.resources.files("lerobot.common.datasets") / "card_template.md").read_text()
     return DatasetCard.from_template(
         card_data=card_data,
-        template_path=str(card_template_path),
+        template_str=card_template,
         **kwargs,
     )
+
+
+class IterableNamespace(SimpleNamespace):
+    """
+    A namespace object that supports both dictionary-like iteration and dot notation access.
+    Automatically converts nested dictionaries into IterableNamespaces.
+
+    This class extends SimpleNamespace to provide:
+    - Dictionary-style iteration over keys
+    - Access to items via both dot notation (obj.key) and brackets (obj["key"])
+    - Dictionary-like methods: items(), keys(), values()
+    - Recursive conversion of nested dictionaries
+
+    Args:
+        dictionary: Optional dictionary to initialize the namespace
+        **kwargs: Additional keyword arguments passed to SimpleNamespace
+
+    Examples:
+        >>> data = {"name": "Alice", "details": {"age": 25}}
+        >>> ns = IterableNamespace(data)
+        >>> ns.name
+        'Alice'
+        >>> ns.details.age
+        25
+        >>> list(ns.keys())
+        ['name', 'details']
+        >>> for key, value in ns.items():
+        ...     print(f"{key}: {value}")
+        name: Alice
+        details: IterableNamespace(age=25)
+    """
+
+    def __init__(self, dictionary: dict[str, Any] = None, **kwargs):
+        super().__init__(**kwargs)
+        if dictionary is not None:
+            for key, value in dictionary.items():
+                if isinstance(value, dict):
+                    setattr(self, key, IterableNamespace(value))
+                else:
+                    setattr(self, key, value)
+
+    def __iter__(self) -> Iterator[str]:
+        return iter(vars(self))
+
+    def __getitem__(self, key: str) -> Any:
+        return vars(self)[key]
+
+    def items(self):
+        return vars(self).items()
+
+    def values(self):
+        return vars(self).values()
+
+    def keys(self):
+        return vars(self).keys()
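
A small sketch of how `IterableNamespace` behaves with nested metadata of the kind `get_dataset_info()` returns later in this diff (the dict below is a made-up fragment of an `info.json` payload):

```python
info = IterableNamespace(
    {"repo_id": "lerobot/pusht", "fps": 10, "features": {"action": {"dtype": "float32"}}}
)

print(info.fps)                    # 10 (dot access)
print(info["repo_id"])             # lerobot/pusht (bracket access)
print(info.features.action.dtype)  # float32 (nested dicts converted recursively)
print(list(info.keys()))           # ['repo_id', 'fps', 'features']
```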

View File

@@ -159,11 +159,11 @@ DATASETS = {
         **ALOHA_STATIC_INFO,
     },
     "aloha_static_vinh_cup": {
-        "single_task": "Pick up the platic cup with the right arm, then pop its lid open with the left arm.",
+        "single_task": "Pick up the plastic cup with the right arm, then pop its lid open with the left arm.",
         **ALOHA_STATIC_INFO,
     },
     "aloha_static_vinh_cup_left": {
-        "single_task": "Pick up the platic cup with the left arm, then pop its lid open with the right arm.",
+        "single_task": "Pick up the plastic cup with the left arm, then pop its lid open with the right arm.",
         **ALOHA_STATIC_INFO,
     },
     "aloha_static_ziploc_slide": {"single_task": "Slide open the ziploc bag.", **ALOHA_STATIC_INFO},

View File

@@ -184,7 +184,7 @@ def init_policy(pretrained_policy_name_or_path, policy_overrides):
 def warmup_record(
     robot,
     events,
-    enable_teloperation,
+    enable_teleoperation,
     warmup_time_s,
     display_cameras,
     fps,
@@ -195,7 +195,7 @@ def warmup_record(
         display_cameras=display_cameras,
         events=events,
         fps=fps,
-        teleoperate=enable_teloperation,
+        teleoperate=enable_teleoperation,
     )

View File

@@ -300,7 +300,7 @@ def record(
         # TODO(rcadene): add an option to enable teleoperation during reset
         # Skip reset for the last episode to be recorded
         if not events["stop_recording"] and (
-            (dataset.num_episodes < num_episodes - 1) or events["rerecord_episode"]
+            (recorded_episodes < num_episodes - 1) or events["rerecord_episode"]
         ):
             log_say("Reset the environment", play_sounds)
             reset_environment(robot, events, reset_time_s)

View File

@@ -53,20 +53,29 @@ python lerobot/scripts/visualize_dataset_html.py \
 """

 import argparse
+import csv
+import json
 import logging
+import re
 import shutil
+import tempfile
+from io import StringIO
 from pathlib import Path

-import tqdm
-from flask import Flask, redirect, render_template, url_for
+import numpy as np
+import pandas as pd
+import requests
+from flask import Flask, redirect, render_template, request, url_for

+from lerobot import available_datasets
 from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.common.datasets.utils import IterableNamespace
 from lerobot.common.utils.utils import init_logging


 def run_server(
-    dataset: LeRobotDataset,
-    episodes: list[int],
+    dataset: LeRobotDataset | IterableNamespace | None,
+    episodes: list[int] | None,
     host: str,
     port: str,
     static_folder: Path,
@@ -76,10 +85,50 @@ def run_server(
     app.config["SEND_FILE_MAX_AGE_DEFAULT"] = 0  # specifying not to cache

     @app.route("/")
-    def index():
-        # home page redirects to the first episode page
-        [dataset_namespace, dataset_name] = dataset.repo_id.split("/")
-        first_episode_id = episodes[0]
+    def hommepage(dataset=dataset):
+        if dataset:
+            dataset_namespace, dataset_name = dataset.repo_id.split("/")
+            return redirect(
+                url_for(
+                    "show_episode",
+                    dataset_namespace=dataset_namespace,
+                    dataset_name=dataset_name,
+                    episode_id=0,
+                )
+            )
+
+        dataset_param, episode_param = None, None
+        all_params = request.args
+        if "dataset" in all_params:
+            dataset_param = all_params["dataset"]
+        if "episode" in all_params:
+            episode_param = int(all_params["episode"])
+
+        if dataset_param:
+            dataset_namespace, dataset_name = dataset_param.split("/")
+            return redirect(
+                url_for(
+                    "show_episode",
+                    dataset_namespace=dataset_namespace,
+                    dataset_name=dataset_name,
+                    episode_id=episode_param if episode_param is not None else 0,
+                )
+            )
+
+        featured_datasets = [
+            "lerobot/aloha_static_cups_open",
+            "lerobot/columbia_cairlab_pusht_real",
+            "lerobot/taco_play",
+        ]
+        return render_template(
+            "visualize_dataset_homepage.html",
+            featured_datasets=featured_datasets,
+            lerobot_datasets=available_datasets,
+        )
+
+    @app.route("/<string:dataset_namespace>/<string:dataset_name>")
+    def show_first_episode(dataset_namespace, dataset_name):
+        first_episode_id = 0
         return redirect(
             url_for(
                 "show_episode",
@@ -90,30 +139,85 @@ def run_server(
             )
         )

     @app.route("/<string:dataset_namespace>/<string:dataset_name>/episode_<int:episode_id>")
-    def show_episode(dataset_namespace, dataset_name, episode_id):
+    def show_episode(dataset_namespace, dataset_name, episode_id, dataset=dataset, episodes=episodes):
+        repo_id = f"{dataset_namespace}/{dataset_name}"
+        try:
+            if dataset is None:
+                dataset = get_dataset_info(repo_id)
+        except FileNotFoundError:
+            return (
+                "Make sure to convert your LeRobotDataset to v2 & above. See how to convert your dataset at https://github.com/huggingface/lerobot/pull/461",
+                400,
+            )
+        dataset_version = (
+            dataset.meta._version if isinstance(dataset, LeRobotDataset) else dataset.codebase_version
+        )
+        match = re.search(r"v(\d+)\.", dataset_version)
+        if match:
+            major_version = int(match.group(1))
+            if major_version < 2:
+                return "Make sure to convert your LeRobotDataset to v2 & above."
+
+        episode_data_csv_str, columns = get_episode_data(dataset, episode_id)
         dataset_info = {
-            "repo_id": dataset.repo_id,
-            "num_samples": dataset.num_frames,
-            "num_episodes": dataset.num_episodes,
+            "repo_id": f"{dataset_namespace}/{dataset_name}",
+            "num_samples": dataset.num_frames
+            if isinstance(dataset, LeRobotDataset)
+            else dataset.total_frames,
+            "num_episodes": dataset.num_episodes
+            if isinstance(dataset, LeRobotDataset)
+            else dataset.total_episodes,
             "fps": dataset.fps,
         }
-        video_paths = [dataset.meta.get_video_file_path(episode_id, key) for key in dataset.meta.video_keys]
-        tasks = dataset.meta.episodes[episode_id]["tasks"]
-        videos_info = [
-            {"url": url_for("static", filename=video_path), "filename": video_path.name}
-            for video_path in video_paths
-        ]
+        if isinstance(dataset, LeRobotDataset):
+            video_paths = [
+                dataset.meta.get_video_file_path(episode_id, key) for key in dataset.meta.video_keys
+            ]
+            videos_info = [
+                {"url": url_for("static", filename=video_path), "filename": video_path.parent.name}
+                for video_path in video_paths
+            ]
+            tasks = dataset.meta.episodes[episode_id]["tasks"]
+        else:
+            video_keys = [key for key, ft in dataset.features.items() if ft["dtype"] == "video"]
+            videos_info = [
+                {
+                    "url": f"https://huggingface.co/datasets/{repo_id}/resolve/main/"
+                    + dataset.video_path.format(
+                        episode_chunk=int(episode_id) // dataset.chunks_size,
+                        video_key=video_key,
+                        episode_index=episode_id,
+                    ),
+                    "filename": video_key,
+                }
+                for video_key in video_keys
+            ]
+
+            response = requests.get(
+                f"https://huggingface.co/datasets/{repo_id}/resolve/main/meta/episodes.jsonl"
+            )
+            response.raise_for_status()
+            # Split into lines and parse each line as JSON
+            tasks_jsonl = [json.loads(line) for line in response.text.splitlines() if line.strip()]
+
+            filtered_tasks_jsonl = [row for row in tasks_jsonl if row["episode_index"] == episode_id]
+            tasks = filtered_tasks_jsonl[0]["tasks"]

         videos_info[0]["language_instruction"] = tasks

-        ep_csv_url = url_for("static", filename=get_ep_csv_fname(episode_id))
+        if episodes is None:
+            episodes = list(
+                range(dataset.num_episodes if isinstance(dataset, LeRobotDataset) else dataset.total_episodes)
+            )
+
         return render_template(
             "visualize_dataset_template.html",
             episode_id=episode_id,
             episodes=episodes,
             dataset_info=dataset_info,
             videos_info=videos_info,
-            ep_csv_url=ep_csv_url,
-            has_policy=False,
+            episode_data_csv_str=episode_data_csv_str,
+            columns=columns,
         )

     app.run(host=host, port=port)
@@ -124,46 +228,69 @@ def get_ep_csv_fname(episode_id: int):
     return ep_csv_fname


-def write_episode_data_csv(output_dir, file_name, episode_index, dataset):
-    """Write a csv file containg timeseries data of an episode (e.g. state and action).
-    This file will be loaded by Dygraph javascript to plot data in real time."""
-    from_idx = dataset.episode_data_index["from"][episode_index]
-    to_idx = dataset.episode_data_index["to"][episode_index]
-
-    has_state = "observation.state" in dataset.features
-    has_action = "action" in dataset.features
-
-    # init header of csv with state and action names
-    header = ["timestamp"]
-    if has_state:
-        dim_state = dataset.meta.shapes["observation.state"][0]
-        header += [f"state_{i}" for i in range(dim_state)]
-    if has_action:
-        dim_action = dataset.meta.shapes["action"][0]
-        header += [f"action_{i}" for i in range(dim_action)]
-
-    columns = ["timestamp"]
-    if has_state:
-        columns += ["observation.state"]
-    if has_action:
-        columns += ["action"]
-
-    rows = []
-    data = dataset.hf_dataset.select_columns(columns)
-    for i in range(from_idx, to_idx):
-        row = [data[i]["timestamp"].item()]
-        if has_state:
-            row += data[i]["observation.state"].tolist()
-        if has_action:
-            row += data[i]["action"].tolist()
-        rows.append(row)
-
-    output_dir.mkdir(parents=True, exist_ok=True)
-    with open(output_dir / file_name, "w") as f:
-        f.write(",".join(header) + "\n")
-        for row in rows:
-            row_str = [str(col) for col in row]
-            f.write(",".join(row_str) + "\n")
+def get_episode_data(dataset: LeRobotDataset | IterableNamespace, episode_index):
+    """Get a csv str containing timeseries data of an episode (e.g. state and action).
+    This will be loaded by Dygraph javascript to plot data in real time."""
+    columns = []
+
+    selected_columns = [col for col, ft in dataset.features.items() if ft["dtype"] == "float32"]
+    selected_columns.remove("timestamp")
+
+    # init header of csv with state and action names
+    header = ["timestamp"]
+    for column_name in selected_columns:
+        dim_state = (
+            dataset.meta.shapes[column_name][0]
+            if isinstance(dataset, LeRobotDataset)
+            else dataset.features[column_name].shape[0]
+        )
+        header += [f"{column_name}_{i}" for i in range(dim_state)]
+
+        if "names" in dataset.features[column_name] and dataset.features[column_name]["names"]:
+            column_names = dataset.features[column_name]["names"]
+            while not isinstance(column_names, list):
+                column_names = list(column_names.values())[0]
+        else:
+            column_names = [f"motor_{i}" for i in range(dim_state)]
+        columns.append({"key": column_name, "value": column_names})
+
+    selected_columns.insert(0, "timestamp")
+
+    if isinstance(dataset, LeRobotDataset):
+        from_idx = dataset.episode_data_index["from"][episode_index]
+        to_idx = dataset.episode_data_index["to"][episode_index]
+        data = (
+            dataset.hf_dataset.select(range(from_idx, to_idx))
+            .select_columns(selected_columns)
+            .with_format("pandas")
+        )
+    else:
+        repo_id = dataset.repo_id
+
+        url = f"https://huggingface.co/datasets/{repo_id}/resolve/main/" + dataset.data_path.format(
+            episode_chunk=int(episode_index) // dataset.chunks_size, episode_index=episode_index
+        )
+        df = pd.read_parquet(url)
+        data = df[selected_columns]  # Select specific columns
+
+    rows = np.hstack(
+        (
+            np.expand_dims(data["timestamp"], axis=1),
+            *[np.vstack(data[col]) for col in selected_columns[1:]],
+        )
+    ).tolist()
+
+    # Convert data to CSV string
+    csv_buffer = StringIO()
+    csv_writer = csv.writer(csv_buffer)
+    # Write header
+    csv_writer.writerow(header)
+    # Write data rows
+    csv_writer.writerows(rows)
+    csv_string = csv_buffer.getvalue()
+
+    return csv_string, columns


 def get_episode_video_paths(dataset: LeRobotDataset, ep_index: int) -> list[str]:
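
To clarify the new return value: `get_episode_data` builds the CSV in memory and returns per-column labels for the template. An illustrative call (feature names and values below are hypothetical):

```python
csv_str, columns = get_episode_data(dataset, episode_index=0)

# `columns` pairs each float32 feature with its per-dimension labels, e.g.:
# [{"key": "observation.state", "value": ["motor_0", "motor_1"]},
#  {"key": "action", "value": ["motor_0", "motor_1"]}]

# `csv_str` starts with a flat header followed by one row per frame, e.g.:
# timestamp,observation.state_0,observation.state_1,action_0,action_1
print(csv_str.splitlines()[0])
```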
@@ -175,9 +302,31 @@ def get_episode_video_paths(dataset: LeRobotDataset, ep_index: int) -> list[str]:
     ]


+def get_episode_language_instruction(dataset: LeRobotDataset, ep_index: int) -> list[str]:
+    # check if the dataset has language instructions
+    if "language_instruction" not in dataset.features:
+        return None
+
+    # get first frame index
+    first_frame_idx = dataset.episode_data_index["from"][ep_index].item()
+
+    language_instruction = dataset.hf_dataset[first_frame_idx]["language_instruction"]
+    # TODO (michel-aractingi) hack to get the sentence, some strings in openx are badly stored
+    # with the tf.tensor appearing in the string
+    return language_instruction.removeprefix("tf.Tensor(b'").removesuffix("', shape=(), dtype=string)")
+
+
+def get_dataset_info(repo_id: str) -> IterableNamespace:
+    response = requests.get(f"https://huggingface.co/datasets/{repo_id}/resolve/main/meta/info.json")
+    response.raise_for_status()  # Raises an HTTPError for bad responses
+    dataset_info = response.json()
+    dataset_info["repo_id"] = repo_id
+    return IterableNamespace(dataset_info)
+
+
 def visualize_dataset_html(
-    dataset: LeRobotDataset,
-    episodes: list[int] = None,
+    dataset: LeRobotDataset | None,
+    episodes: list[int] | None = None,
     output_dir: Path | None = None,
     serve: bool = True,
     host: str = "127.0.0.1",
@@ -186,11 +335,11 @@ def visualize_dataset_html(
 ) -> Path | None:
     init_logging()

-    if len(dataset.meta.image_keys) > 0:
-        raise NotImplementedError(f"Image keys ({dataset.meta.image_keys=}) are currently not supported.")
+    template_dir = Path(__file__).resolve().parent.parent / "templates"

     if output_dir is None:
-        output_dir = f"outputs/visualize_dataset_html/{dataset.repo_id}"
+        # Create a temporary directory that will be automatically cleaned up
+        output_dir = tempfile.mkdtemp(prefix="lerobot_visualize_dataset_")

     output_dir = Path(output_dir)
     if output_dir.exists():
@@ -201,26 +350,27 @@ def visualize_dataset_html(

     output_dir.mkdir(parents=True, exist_ok=True)

-    # Create a simlink from the dataset video folder containg mp4 files to the output directory
-    # so that the http server can get access to the mp4 files.
     static_dir = output_dir / "static"
     static_dir.mkdir(parents=True, exist_ok=True)
-    ln_videos_dir = static_dir / "videos"
-    if not ln_videos_dir.exists():
-        ln_videos_dir.symlink_to((dataset.root / "videos").resolve())

-    template_dir = Path(__file__).resolve().parent.parent / "templates"
-
-    if episodes is None:
-        episodes = list(range(dataset.num_episodes))
-
-    logging.info("Writing CSV files")
-    for episode_index in tqdm.tqdm(episodes):
-        # write states and actions in a csv (it can be slow for big datasets)
-        ep_csv_fname = get_ep_csv_fname(episode_index)
-        # TODO(rcadene): speedup script by loading directly from dataset, pyarrow, parquet, safetensors?
-        write_episode_data_csv(static_dir, ep_csv_fname, episode_index, dataset)
-
-    if serve:
-        run_server(dataset, episodes, host, port, static_dir, template_dir)
+    if dataset is None:
+        if serve:
+            run_server(
+                dataset=None,
+                episodes=None,
+                host=host,
+                port=port,
+                static_folder=static_dir,
+                template_folder=template_dir,
+            )
+    else:
+        # Create a symlink from the dataset video folder containing mp4 files to the output directory
+        # so that the http server can get access to the mp4 files.
+        if isinstance(dataset, LeRobotDataset):
+            ln_videos_dir = static_dir / "videos"
+            if not ln_videos_dir.exists():
+                ln_videos_dir.symlink_to((dataset.root / "videos").resolve())
+
+        if serve:
+            run_server(dataset, episodes, host, port, static_dir, template_dir)
@@ -231,7 +381,7 @@ def main():
     parser.add_argument(
         "--repo-id",
         type=str,
-        required=True,
+        default=None,
         help="Name of hugging face repository containing a LeRobotDataset dataset (e.g. `lerobot/pusht` for https://huggingface.co/datasets/lerobot/pusht).",
     )
     parser.add_argument(
@@ -246,6 +396,12 @@ def main():
         default=None,
         help="Root directory for a dataset stored locally (e.g. `--root data`). By default, the dataset will be loaded from hugging face cache folder, or downloaded from the hub if available.",
     )
+    parser.add_argument(
+        "--load-from-hf-hub",
+        type=int,
+        default=0,
+        help="Load videos and parquet files from HF Hub rather than local system.",
+    )
     parser.add_argument(
         "--episodes",
         type=int,
@@ -287,11 +443,19 @@ def main():

     args = parser.parse_args()
     kwargs = vars(args)
     repo_id = kwargs.pop("repo_id")
+    load_from_hf_hub = kwargs.pop("load_from_hf_hub")
     root = kwargs.pop("root")
     local_files_only = kwargs.pop("local_files_only")

-    dataset = LeRobotDataset(repo_id, root=root, local_files_only=local_files_only)
-
-    visualize_dataset_html(dataset, **kwargs)
+    dataset = None
+    if repo_id:
+        dataset = (
+            LeRobotDataset(repo_id, root=root, local_files_only=local_files_only)
+            if not load_from_hf_hub
+            else get_dataset_info(repo_id)
+        )
+
+    visualize_dataset_html(dataset, **vars(args))


 if __name__ == "__main__":
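
A sketch of the two modes the revised script supports, called from Python rather than the CLI (the repo id is an example; note that hub mode passes an `IterableNamespace`, even though the annotation says `LeRobotDataset | None`):

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
from lerobot.scripts.visualize_dataset_html import get_dataset_info, visualize_dataset_html

# Local mode: load the full dataset; videos are served via the static symlink.
dataset = LeRobotDataset("lerobot/pusht")
visualize_dataset_html(dataset, serve=True)

# Hub mode (--load-from-hf-hub 1): only meta/info.json is fetched up front;
# videos and parquet files are streamed from huggingface.co on demand.
dataset_info = get_dataset_info("lerobot/pusht")
visualize_dataset_html(dataset_info, serve=True)
```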

View File

@@ -0,0 +1,68 @@ (new file)
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Interactive Video Background Page</title>
<script src="https://cdn.tailwindcss.com"></script>
<script defer src="https://cdn.jsdelivr.net/npm/alpinejs@3.x.x/dist/cdn.min.js"></script>
</head>
<body class="h-screen overflow-hidden font-mono text-white" x-data="{
inputValue: '',
navigateToDataset() {
const trimmedValue = this.inputValue.trim();
if (trimmedValue) {
window.location.href = `/${trimmedValue}`;
}
}
}">
<div class="fixed inset-0 w-full h-full overflow-hidden">
<video class="absolute min-w-full min-h-full w-auto h-auto top-1/2 left-1/2 transform -translate-x-1/2 -translate-y-1/2" autoplay muted loop>
<source src="https://huggingface.co/datasets/cadene/koch_bimanual_folding/resolve/v1.6/videos/observation.images.phone_episode_000037.mp4" type="video/mp4">
Your browser does not support HTML5 video.
</video>
</div>
<div class="fixed inset-0 bg-black bg-opacity-80"></div>
<div class="relative z-10 flex flex-col items-center justify-center h-screen">
<div class="text-center mb-8">
<h1 class="text-4xl font-bold mb-4">LeRobot Dataset Visualizer</h1>
<a href="https://x.com/RemiCadene/status/1825455895561859185" target="_blank" rel="noopener noreferrer" class="underline">create & train your own robots</a>
<p class="text-xl mb-4"></p>
<div class="text-left inline-block">
<h3 class="font-semibold mb-2 mt-4">Example Datasets:</h3>
<ul class="list-disc list-inside">
{% for dataset in featured_datasets %}
<li><a href="/{{ dataset }}" class="text-blue-300 hover:text-blue-100 hover:underline">{{ dataset }}</a></li>
{% endfor %}
</ul>
</div>
</div>
<div class="flex w-full max-w-lg px-4 mb-4">
<input
type="text"
x-model="inputValue"
@keyup.enter="navigateToDataset"
placeholder="enter dataset id (ex: lerobot/droid_100)"
class="flex-grow px-4 py-2 rounded-l bg-white bg-opacity-20 text-white placeholder-gray-300 focus:outline-none focus:ring-2 focus:ring-blue-300"
>
<button
@click="navigateToDataset"
class="px-4 py-2 bg-blue-500 text-white rounded-r hover:bg-blue-600 focus:outline-none focus:ring-2 focus:ring-blue-300"
>
Go
</button>
</div>
<details class="mt-4 max-w-full px-4">
<summary>More example datasets</summary>
<ul class="list-disc list-inside max-h-28 overflow-y-auto break-all">
{% for dataset in lerobot_datasets %}
<li><a href="/{{ dataset }}" class="text-blue-300 hover:text-blue-100 hover:underline">{{ dataset }}</a></li>
{% endfor %}
</ul>
</details>
</div>
</body>
</html>

View File

@@ -31,11 +31,16 @@
     }">
         <!-- Sidebar -->
         <div x-ref="sidebar" class="bg-slate-900 p-5 break-words overflow-y-auto shrink-0 md:shrink md:w-60 md:max-h-screen">
+            <a href="https://github.com/huggingface/lerobot" target="_blank" class="hidden md:block">
+                <img src="https://github.com/huggingface/lerobot/raw/main/media/lerobot-logo-thumbnail.png">
+            </a>
+            <a href="https://huggingface.co/datasets/{{ dataset_info.repo_id }}" target="_blank">
                 <h1 class="mb-4 text-xl font-semibold">{{ dataset_info.repo_id }}</h1>
+            </a>
             <ul>
                 <li>
-                    Number of samples/frames: {{ dataset_info.num_frames }}
+                    Number of samples/frames: {{ dataset_info.num_samples }}
                 </li>
                 <li>
                     Number of episodes: {{ dataset_info.num_episodes }}
@@ -93,10 +98,35 @@
             </div>

             <!-- Videos -->
+            <div class="max-w-32 relative text-sm mb-4 select-none"
+                @click.outside="isVideosDropdownOpen = false">
+                <div
+                    @click="isVideosDropdownOpen = !isVideosDropdownOpen"
+                    class="p-2 border border-slate-500 rounded flex justify-between items-center cursor-pointer"
+                >
+                    <span class="truncate">filter videos</span>
+                    <div class="transition-transform" :class="{ 'rotate-180': isVideosDropdownOpen }">🔽</div>
+                </div>
+
+                <div x-show="isVideosDropdownOpen"
+                    class="absolute mt-1 border border-slate-500 rounded shadow-lg z-10">
+                    <div>
+                        <template x-for="option in videosKeys" :key="option">
+                            <div
+                                @click="videosKeysSelected = videosKeysSelected.includes(option) ? videosKeysSelected.filter(v => v !== option) : [...videosKeysSelected, option]"
+                                class="p-2 cursor-pointer bg-slate-900"
+                                :class="{ 'bg-slate-700': videosKeysSelected.includes(option) }"
+                                x-text="option"
+                            ></div>
+                        </template>
+                    </div>
+                </div>
+            </div>
+
-            <div class="flex flex-wrap gap-1">
+            <div class="flex flex-wrap gap-x-2 gap-y-6">
                 {% for video_info in videos_info %}
-                <div x-show="!videoCodecError" class="max-w-96">
-                    <p class="text-sm text-gray-300 bg-gray-800 px-2 rounded-t-xl truncate">{{ video_info.filename }}</p>
+                <div x-show="!videoCodecError && videosKeysSelected.includes('{{ video_info.filename }}')" class="max-w-96 relative">
+                    <p class="absolute inset-x-0 -top-4 text-sm text-gray-300 bg-gray-800 px-2 rounded-t-xl truncate">{{ video_info.filename }}</p>
                     <video muted loop type="video/mp4" class="object-contain w-full h-full" @canplaythrough="videoCanPlay" @timeupdate="() => {
                         if (video.duration) {
                             const time = video.currentTime;
@@ -182,12 +212,12 @@
             <thead>
                 <tr>
                     <th></th>
-                    <template x-for="(_, colIndex) in Array.from({length: nColumns}, (_, index) => index)">
+                    <template x-for="(_, colIndex) in Array.from({length: columns.length}, (_, index) => index)">
                         <th class="border border-slate-700">
                             <div class="flex gap-x-2 justify-between px-2">
                                 <input type="checkbox" :checked="isColumnChecked(colIndex)"
                                     @change="toggleColumn(colIndex)">
-                                <p x-text="`${columnNames[colIndex]}`"></p>
+                                <p x-text="`${columns[colIndex].key}`"></p>
                             </div>
                         </th>
                     </template>
@@ -197,10 +227,10 @@
                 <template x-for="(row, rowIndex) in rows">
                     <tr class="odd:bg-gray-800 even:bg-gray-900">
                         <td class="border border-slate-700">
-                            <div class="flex gap-x-2 w-24 font-semibold px-1">
+                            <div class="flex gap-x-2 max-w-64 font-semibold px-1 break-all">
                                 <input type="checkbox" :checked="isRowChecked(rowIndex)"
                                     @change="toggleRow(rowIndex)">
-                                <p x-text="`Motor ${rowIndex}`"></p>
+                                <p x-text="`${rowLabels[rowIndex]}`"></p>
                             </div>
                         </td>
                         <template x-for="(cell, colIndex) in row">
@ -222,16 +252,20 @@
</div> </div>
</div> </div>
<script>
const parentOrigin = "https://huggingface.co";
const searchParams = new URLSearchParams();
searchParams.set("dataset", "{{ dataset_info.repo_id }}");
searchParams.set("episode", "{{ episode_id }}");
window.parent.postMessage({ queryString: searchParams.toString() }, parentOrigin);
</script>
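The small script above only builds a dataset/episode query string and posts it to the parent page on https://huggingface.co, so an embedding page can keep its own URL in sync with the viewer. Roughly the same query string built in Python, with illustrative values standing in for the template variables:

    from urllib.parse import urlencode

    # "dataset" and "episode" are the only fields the page sends upward.
    query = urlencode({"dataset": "lerobot/pusht", "episode": 0})  # assumed values
    assert query == "dataset=lerobot%2Fpusht&episode=0"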
<script> <script>
function createAlpineData() { function createAlpineData() {
return { return {
// state // state
dygraph: null, dygraph: null,
currentFrameData: null, currentFrameData: null,
columnNames: ["state", "action", "pred action"],
nColumns: 2,
nStates: 0,
nActions: 0,
checked: [], checked: [],
dygraphTime: 0.0, dygraphTime: 0.0,
dygraphIndex: 0, dygraphIndex: 0,
@ -241,6 +275,11 @@
nVideos: {{ videos_info | length }}, nVideos: {{ videos_info | length }},
nVideoReadyToPlay: 0, nVideoReadyToPlay: 0,
videoCodecError: false, videoCodecError: false,
isVideosDropdownOpen: false,
videosKeys: {{ videos_info | map(attribute='filename') | list | tojson }},
videosKeysSelected: [],
columns: {{ columns | tojson }},
rowLabels: {{ columns | tojson }}.reduce((colA, colB) => colA.value.length > colB.value.length ? colA : colB).value,
// alpine initialization // alpine initialization
init() { init() {
@ -250,11 +289,19 @@
if(!canPlayVideos){ if(!canPlayVideos){
this.videoCodecError = true; this.videoCodecError = true;
} }
this.videosKeysSelected = this.videosKeys.map(opt => opt)
// process CSV data
const csvDataStr = {{ episode_data_csv_str|tojson|safe }};
// Create a Blob with the CSV data
const blob = new Blob([csvDataStr], { type: 'text/csv;charset=utf-8;' });
// Create a URL for the Blob
const csvUrl = URL.createObjectURL(blob);
// process CSV data // process CSV data
this.videos = document.querySelectorAll('video'); this.videos = document.querySelectorAll('video');
this.video = this.videos[0]; this.video = this.videos[0];
this.dygraph = new Dygraph(document.getElementById("graph"), '{{ ep_csv_url }}', { this.dygraph = new Dygraph(document.getElementById("graph"), csvUrl, {
pixelsPerPoint: 0.01, pixelsPerPoint: 0.01,
legend: 'always', legend: 'always',
labelsDiv: document.getElementById('labels'), labelsDiv: document.getElementById('labels'),
@ -275,21 +322,17 @@
this.colors = this.dygraph.getColors(); this.colors = this.dygraph.getColors();
this.checked = Array(this.colors.length).fill(true); this.checked = Array(this.colors.length).fill(true);
const seriesNames = this.dygraph.getLabels().slice(1);
this.nStates = seriesNames.findIndex(item => item.startsWith('action_'));
this.nActions = seriesNames.length - this.nStates;
const colors = []; const colors = [];
const LIGHTNESS = [30, 65, 85]; // state_lightness, action_lightness, pred_action_lightness let lightness = 30; // const LIGHTNESS = [30, 65, 85]; // state_lightness, action_lightness, pred_action_lightness
// colors for "state" lines for(const column of this.columns){
for (let hue = 0; hue < 360; hue += parseInt(360/this.nStates)) { const nValues = column.value.length;
const color = `hsl(${hue}, 100%, ${LIGHTNESS[0]}%)`; for (let hue = 0; hue < 360; hue += parseInt(360/nValues)) {
const color = `hsl(${hue}, 100%, ${lightness}%)`;
colors.push(color); colors.push(color);
} }
// colors for "action" lines lightness += 35;
for (let hue = 0; hue < 360; hue += parseInt(360/this.nActions)) {
const color = `hsl(${hue}, 100%, ${LIGHTNESS[1]}%)`;
colors.push(color);
} }
this.dygraph.updateOptions({ colors }); this.dygraph.updateOptions({ colors });
this.colors = colors; this.colors = colors;
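The rewritten loop above generalizes the old two-group scheme (states, then actions) to any number of columns: within a column, hues step evenly around the color wheel, and each successive column is drawn 35 lightness points lighter. A self-contained sketch under assumed column names and sizes:

    def column_colors(columns: dict[str, int]) -> list[str]:
        """Map {column name: number of series} to HSL colors, one per series."""
        colors, lightness = [], 30
        for n_values in columns.values():
            for hue in range(0, 360, 360 // n_values):  # mirrors parseInt(360/nValues)
                colors.append(f"hsl({hue}, 100%, {lightness}%)")
            lightness += 35
        return colors

    print(column_colors({"observation.state": 2, "action": 2}))
    # ['hsl(0, 100%, 30%)', 'hsl(180, 100%, 30%)',
    #  'hsl(0, 100%, 65%)', 'hsl(180, 100%, 65%)']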
@ -316,17 +359,19 @@
return []; return [];
} }
const rows = []; const rows = [];
const nRows = Math.max(this.nStates, this.nActions); const nRows = Math.max(...this.columns.map(column => column.value.length));
let rowIndex = 0; let rowIndex = 0;
while(rowIndex < nRows){ while(rowIndex < nRows){
const row = []; const row = [];
// number of states may NOT match number of actions. In this case, we null-pad the 2D array to make a fully rectangular 2d array // number of states may NOT match number of actions. In this case, we null-pad the 2D array to make a fully rectangular 2d array
const nullCell = { isNull: true }; const nullCell = { isNull: true };
const stateValueIdx = rowIndex;
const actionValueIdx = stateValueIdx + this.nStates; // because this.currentFrameData = [state0, state1, ..., stateN, action0, action1, ..., actionN]
// row consists of [state value, action value] // row consists of [state value, action value]
row.push(rowIndex < this.nStates ? this.currentFrameData[stateValueIdx] : nullCell); // push "state value" to row let idx = rowIndex;
row.push(rowIndex < this.nActions ? this.currentFrameData[actionValueIdx] : nullCell); // push "action value" to row for(const column of this.columns){
const nColumn = column.value.length;
row.push(rowIndex < nColumn ? this.currentFrameData[idx] : nullCell);
idx += nColumn; // because this.currentFrameData = [state0, state1, ..., stateN, action0, action1, ..., actionN]
}
rowIndex += 1; rowIndex += 1;
rows.push(row); rows.push(row);
} }
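The null-padding above turns the flat per-frame vector into one table row per motor, inserting placeholders wherever a column has fewer values than the longest one. A sketch of the same algorithm (to_rows is a hypothetical name; the precomputed offsets reproduce the running idx in the template):

    def to_rows(frame: list[float], column_sizes: list[int]) -> list[list]:
        """Split a flat [state..., action...] vector into rectangular rows."""
        n_rows = max(column_sizes)
        offsets = [sum(column_sizes[:i]) for i in range(len(column_sizes))]
        rows = []
        for r in range(n_rows):
            rows.append([
                frame[off + r] if r < size else None  # null-pad ragged columns
                for off, size in zip(offsets, column_sizes)
            ])
        return rows

    print(to_rows([0.1, 0.2, 0.3, 9.1, 9.2], [3, 2]))
    # [[0.1, 9.1], [0.2, 9.2], [0.3, None]]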

poetry.lock generated
View File

@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 1.8.4 and should not be changed by hand. # This file is automatically @generated by Poetry 1.8.5 and should not be changed by hand.
[[package]] [[package]]
name = "absl-py" name = "absl-py"
@ -1294,6 +1294,10 @@ files = [
{file = "dora_rs-0.3.6-cp37-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:78656d3ae1282a142a5fed410ec3a6f725fdf8d9f9192ed673e336ea3b083e12"}, {file = "dora_rs-0.3.6-cp37-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:78656d3ae1282a142a5fed410ec3a6f725fdf8d9f9192ed673e336ea3b083e12"},
{file = "dora_rs-0.3.6-cp37-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:681e22c8ecb3b48d11cb9019f8a32d4ae1e353e20d4ce3a0f0eedd0ccbd95e5f"}, {file = "dora_rs-0.3.6-cp37-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:681e22c8ecb3b48d11cb9019f8a32d4ae1e353e20d4ce3a0f0eedd0ccbd95e5f"},
{file = "dora_rs-0.3.6-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4598572bab6f726ec41fabb43bf0f7e3cf8082ea0f6f8f4e57845a6c919f31b3"}, {file = "dora_rs-0.3.6-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4598572bab6f726ec41fabb43bf0f7e3cf8082ea0f6f8f4e57845a6c919f31b3"},
{file = "dora_rs-0.3.6-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:157fc1fed50946646f09df75c6d52198735a5973e53d252199bbb1c65e1594d2"},
{file = "dora_rs-0.3.6-cp37-abi3-manylinux_2_28_armv7l.whl", hash = "sha256:7ae2724c181be10692c24fb8d9ce2a99a9afc57237332c3658e2ea6f4f33c091"},
{file = "dora_rs-0.3.6-cp37-abi3-manylinux_2_28_i686.whl", hash = "sha256:3d324835f292edd81b962f8c0df44f7f47c0a6f8fe6f7d081951aeb1f5ba57d2"},
{file = "dora_rs-0.3.6-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:474c087b5e584293685a7d4837165b2ead96dc74fb435ae50d5fa0ac168a0de0"},
{file = "dora_rs-0.3.6-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:297350f05f5f87a0bf647a1e5b4446728e5f800788c6bb28b462bcd167f1de7f"}, {file = "dora_rs-0.3.6-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:297350f05f5f87a0bf647a1e5b4446728e5f800788c6bb28b462bcd167f1de7f"},
{file = "dora_rs-0.3.6-cp37-abi3-musllinux_1_2_i686.whl", hash = "sha256:b1870a8e30f0ac298d17fd546224348d13a648bcfa0cbc51dba7e5136c1af928"}, {file = "dora_rs-0.3.6-cp37-abi3-musllinux_1_2_i686.whl", hash = "sha256:b1870a8e30f0ac298d17fd546224348d13a648bcfa0cbc51dba7e5136c1af928"},
{file = "dora_rs-0.3.6-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:182a189212d41be0c960fd3299bf6731af2e771f8858cfb1be7ebcc17d60a254"}, {file = "dora_rs-0.3.6-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:182a189212d41be0c960fd3299bf6731af2e771f8858cfb1be7ebcc17d60a254"},
@ -4924,6 +4928,8 @@ files = [
{file = "PyAudio-0.2.14-cp311-cp311-win_amd64.whl", hash = "sha256:bbeb01d36a2f472ae5ee5e1451cacc42112986abe622f735bb870a5db77cf903"}, {file = "PyAudio-0.2.14-cp311-cp311-win_amd64.whl", hash = "sha256:bbeb01d36a2f472ae5ee5e1451cacc42112986abe622f735bb870a5db77cf903"},
{file = "PyAudio-0.2.14-cp312-cp312-win32.whl", hash = "sha256:5fce4bcdd2e0e8c063d835dbe2860dac46437506af509353c7f8114d4bacbd5b"}, {file = "PyAudio-0.2.14-cp312-cp312-win32.whl", hash = "sha256:5fce4bcdd2e0e8c063d835dbe2860dac46437506af509353c7f8114d4bacbd5b"},
{file = "PyAudio-0.2.14-cp312-cp312-win_amd64.whl", hash = "sha256:12f2f1ba04e06ff95d80700a78967897a489c05e093e3bffa05a84ed9c0a7fa3"}, {file = "PyAudio-0.2.14-cp312-cp312-win_amd64.whl", hash = "sha256:12f2f1ba04e06ff95d80700a78967897a489c05e093e3bffa05a84ed9c0a7fa3"},
{file = "PyAudio-0.2.14-cp313-cp313-win32.whl", hash = "sha256:95328285b4dab57ea8c52a4a996cb52be6d629353315be5bfda403d15932a497"},
{file = "PyAudio-0.2.14-cp313-cp313-win_amd64.whl", hash = "sha256:692d8c1446f52ed2662120bcd9ddcb5aa2b71f38bda31e58b19fb4672fffba69"},
{file = "PyAudio-0.2.14-cp38-cp38-win32.whl", hash = "sha256:858caf35b05c26d8fc62f1efa2e8f53d5fa1a01164842bd622f70ddc41f55000"}, {file = "PyAudio-0.2.14-cp38-cp38-win32.whl", hash = "sha256:858caf35b05c26d8fc62f1efa2e8f53d5fa1a01164842bd622f70ddc41f55000"},
{file = "PyAudio-0.2.14-cp38-cp38-win_amd64.whl", hash = "sha256:2dac0d6d675fe7e181ba88f2de88d321059b69abd52e3f4934a8878e03a7a074"}, {file = "PyAudio-0.2.14-cp38-cp38-win_amd64.whl", hash = "sha256:2dac0d6d675fe7e181ba88f2de88d321059b69abd52e3f4934a8878e03a7a074"},
{file = "PyAudio-0.2.14-cp39-cp39-win32.whl", hash = "sha256:f745109634a7c19fa4d6b8b7d6967c3123d988c9ade0cd35d4295ee1acdb53e9"}, {file = "PyAudio-0.2.14-cp39-cp39-win32.whl", hash = "sha256:f745109634a7c19fa4d6b8b7d6967c3123d988c9ade0cd35d4295ee1acdb53e9"},
@ -5890,27 +5896,27 @@ use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
[[package]] [[package]]
name = "rerun-sdk" name = "rerun-sdk"
version = "0.18.2" version = "0.21.0"
description = "The Rerun Logging SDK" description = "The Rerun Logging SDK"
optional = false optional = false
python-versions = "<3.13,>=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "rerun_sdk-0.18.2-cp38-abi3-macosx_10_12_x86_64.whl", hash = "sha256:bc4e73275f428e4e9feb8e85f88db7a9fd18b997b1570de62f949a926978f1b2"}, {file = "rerun_sdk-0.21.0-cp38-abi3-macosx_10_12_x86_64.whl", hash = "sha256:1e454ceea31c70ae9ec1bb26eaa82828661b7657ab4d2261ca0b94006d6a1975"},
{file = "rerun_sdk-0.18.2-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:efbba40a59710ae83607cb0dc140398a35979c2d2acf5190c9def2ac4697f6a8"}, {file = "rerun_sdk-0.21.0-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:84ecb77b0b5bac71b53e849801ff073de89fcd2f1e0ca0da62fb18fcbeceadf0"},
{file = "rerun_sdk-0.18.2-cp38-abi3-manylinux_2_31_aarch64.whl", hash = "sha256:2a5e3b618b6d1bfde09bd5614a898995f3c318cc69d8f6d569924a2cd41536ce"}, {file = "rerun_sdk-0.21.0-cp38-abi3-manylinux_2_31_aarch64.whl", hash = "sha256:919d921165c3238490dbe5bf00a062c68fdd2c54dc14aac6a1914c82edb5d9c8"},
{file = "rerun_sdk-0.18.2-cp38-abi3-manylinux_2_31_x86_64.whl", hash = "sha256:8fdfc4c51ef2e75cb68d39e56f0d7c196eff250cb9a0260c07d5e2d6736e31b0"}, {file = "rerun_sdk-0.21.0-cp38-abi3-manylinux_2_31_x86_64.whl", hash = "sha256:897649aadcab7014b78096f93c84c61c00a227b80adaf0dec279924b5aab53d8"},
{file = "rerun_sdk-0.18.2-cp38-abi3-win_amd64.whl", hash = "sha256:c929ade91d3be301b26671b25e70fb529524ced915523d266641c6fc667a1eb5"}, {file = "rerun_sdk-0.21.0-cp38-abi3-win_amd64.whl", hash = "sha256:2060bdb536a198f0f04789ba5ba771e66587e7851d668b3dfab257a5efa16819"},
] ]
[package.dependencies] [package.dependencies]
attrs = ">=23.1.0" attrs = ">=23.1.0"
numpy = ">=1.23,<2" numpy = ">=1.23"
pillow = ">=8.0.0" pillow = ">=8.0.0"
pyarrow = ">=14.0.2" pyarrow = ">=14.0.2"
typing-extensions = ">=4.5" typing-extensions = ">=4.5"
[package.extras] [package.extras]
notebook = ["rerun-notebook (==0.18.2)"] notebook = ["rerun-notebook (==0.21.0)"]
tests = ["pytest (==7.1.2)"] tests = ["pytest (==7.1.2)"]
[[package]] [[package]]
@ -7569,4 +7575,4 @@ xarm = ["gym-xarm"]
[metadata] [metadata]
lock-version = "2.0" lock-version = "2.0"
python-versions = ">=3.10,<3.13" python-versions = ">=3.10,<3.13"
content-hash = "41344f0eb2d06d9a378abcd10df8205aa3926ff0a08ac5ab1a0b1bcae7440fd8" content-hash = "ee60d9251f6a6253d0c371707a72a500a6053d7925c6898e6663d9320ad11503"

pyproject.toml
View File

@ -57,7 +57,7 @@ pytest-cov = {version = ">=5.0.0", optional = true}
datasets = ">=2.19.0" datasets = ">=2.19.0"
imagecodecs = { version = ">=2024.1.1", optional = true } imagecodecs = { version = ">=2024.1.1", optional = true }
pyav = ">=12.0.5" pyav = ">=12.0.5"
rerun-sdk = ">=0.15.1" rerun-sdk = ">=0.21.0"
deepdiff = ">=7.0.1" deepdiff = ">=7.0.1"
flask = ">=3.0.3" flask = ">=3.0.3"
pandas = {version = ">=2.2.2", optional = true} pandas = {version = ">=2.2.2", optional = true}
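The floor for rerun-sdk moves from 0.15.1 to 0.21.0, matching the 0.21.0 wheel now pinned in poetry.lock. A quick sanity check of the new constraint, assuming the packaging library is available:

    from packaging.specifiers import SpecifierSet

    spec = SpecifierSet(">=0.21.0")
    print("0.21.0" in spec, "0.18.2" in spec)  # True False: the old lock no longer satisfies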

View File

@ -14,17 +14,25 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
from lerobot.scripts.visualize_dataset_html import visualize_dataset_html from huggingface_hub import DatasetCard
from lerobot.common.datasets.utils import create_lerobot_dataset_card
def test_visualize_dataset_html(tmp_path, lerobot_dataset_factory): def test_default_parameters():
root = tmp_path / "dataset" card = create_lerobot_dataset_card()
output_dir = tmp_path / "outputs" assert isinstance(card, DatasetCard)
dataset = lerobot_dataset_factory(root=root) assert card.data.tags == ["LeRobot"]
visualize_dataset_html( assert card.data.task_categories == ["robotics"]
dataset, assert card.data.configs == [
episodes=[0], {
output_dir=output_dir, "config_name": "default",
serve=False, "data_files": "data/*/*.parquet",
) }
assert (output_dir / "static" / "episode_0.csv").exists() ]
def test_with_tags():
tags = ["tag1", "tag2"]
card = create_lerobot_dataset_card(tags=tags)
assert card.data.tags == ["LeRobot", "tag1", "tag2"]
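The new tests pin down the card's default metadata (the LeRobot tag, the robotics task category, the parquet data-files config) and how user tags are merged in. A hedged usage sketch, assuming the returned DatasetCard exposes huggingface_hub's usual RepoCard methods:

    from lerobot.common.datasets.utils import create_lerobot_dataset_card

    card = create_lerobot_dataset_card(tags=["pusht", "simulation"])  # appended after "LeRobot"
    card.save("README.md")  # save/push_to_hub are inherited from RepoCard
    # card.push_to_hub("user/my-dataset")  # hypothetical dataset repo id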