fixed typos and Hugging Face in Readmes

Kashif Rasul 2024-05-04 09:58:15 +02:00 committed by Simon Alibert
parent f3bba0270d
commit b88a9a9e6e
3 changed files with 13 additions and 13 deletions

View File

@@ -29,7 +29,7 @@
 ---
-🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier for entry to robotics so that everyone can contribute and benefit from sharing datasets and pretrained models.
+🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier of entry to robotics so that everyone can contribute and benefit from sharing datasets and pretrained models.
 🤗 LeRobot contains state-of-the-art approaches that have been shown to transfer to the real-world with a focus on imitation learning and reinforcement learning.
@@ -86,7 +86,7 @@ For instance, to install 🤗 LeRobot with aloha and pusht, use:
 pip install ".[aloha, pusht]"
 ```
-To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiments tracking, log in with
+To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
 ```bash
 wandb login
 ```
@@ -130,7 +130,7 @@ hydra.run.dir=outputs/visualize_dataset/example
 ### Evaluate a pretrained policy
-Check out [examples](./examples) to see how you can load a pretrained policy from HuggingFace hub, load up the corresponding environment and model, and run an evaluation.
+Check out [examples](./examples) to see how you can load a pretrained policy from the Hugging Face hub, load up the corresponding environment and model, and run an evaluation.
 Or you can achieve the same result by executing our script from the command line:
 ```bash
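
As a hedged aside to the hunk above: a minimal sketch of fetching a pretrained policy snapshot from the Hugging Face hub with plain `huggingface_hub`. The repo id is illustrative; the actual policy-loading code lives in `./examples` and `lerobot/scripts/eval.py`.

```python
# Sketch only: download a pretrained policy snapshot from the Hugging Face hub.
# The repo id is illustrative; see ./examples for the real loading code.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("lerobot/diffusion_pusht")
# The snapshot holds the artifacts described later in this README
# (config.json, model.safetensors, config.yaml).
print(local_dir)
```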
@@ -155,7 +155,7 @@ See `python lerobot/scripts/eval.py --help` for more instructions.
 Check out [examples](./examples) to see how you can start training a model on a dataset, which will be automatically downloaded if needed.
-In general, you can use our training script to easily train any policy on any environment:
+In general, you can use our training script to easily train any policy in any environment:
 ```bash
 python lerobot/scripts/train.py \
 env=aloha \
@@ -177,7 +177,7 @@ If you would like to contribute to 🤗 LeRobot, please check out our [contribut
 # TODO(rcadene, AdilZouitine): rewrite this section
 ```
-To add a dataset to the hub, first login and use a token generated from [huggingface settings](https://huggingface.co/settings/tokens) with write access:
+To add a dataset to the hub, first login and use a token generated from [Hugging Face settings](https://huggingface.co/settings/tokens) with write access:
 ```bash
 huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
 ```
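
For scripted workflows, a rough Python equivalent of the `huggingface-cli login` call above, assuming the token is exported as `HUGGINGFACE_TOKEN` (the same variable used in the hunk):

```python
# Rough Python equivalent of the CLI login shown above.
# Assumes the write-access token is exported as HUGGINGFACE_TOKEN.
import os
from huggingface_hub import login

login(token=os.environ["HUGGINGFACE_TOKEN"], add_to_git_credential=True)
```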
@@ -249,7 +249,7 @@ Firstly, make sure you have a model repository set up on the hub. The hub ID loo
 Secondly, assuming you have trained a policy, you need the following (which should all be in any of the subdirectories of `checkpoints` in your training output folder, if you've used the LeRobot training script):
 - `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
-- `model.safetensors`: The `torch.nn.Module` parameters saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
+- `model.safetensors`: The `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
 - `config.yaml`: This is the consolidated Hydra training configuration containing the policy, environment, and dataset configs. The policy configuration should match `config.json` exactly. The environment config is useful for anyone who wants to evaluate your policy. The dataset config just serves as a paper trail for reproducibility.
 To upload these to the hub, run the following with a desired revision ID.
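
The exact upload command the README goes on to show sits outside this hunk. As a loose sketch of the same idea using `huggingface_hub` (repo id, revision, and folder path are placeholders, not LeRobot's documented command):

```python
# Loose sketch: push a checkpoint folder containing config.json,
# model.safetensors and config.yaml to a model repo under a chosen revision.
# All names below are placeholders.
from huggingface_hub import HfApi

api = HfApi()
repo_id = "your-name/your_policy"   # hypothetical hub ID
revision = "v1.0"                   # desired revision ID

api.create_branch(repo_id, branch=revision, exist_ok=True)
api.upload_folder(
    folder_path="path/to/checkpoint_dir",  # dir holding the three files listed above
    repo_id=repo_id,
    repo_type="model",
    revision=revision,
)
```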

View File

@@ -37,16 +37,16 @@ How to decode videos?
 ## Variables
 **Image content**
-We don't expect the same optimal settings for a dataset of images from a simulation, or from real-world in an appartment, or in a factory, or outdoor, etc. Hence, we run this bechmark on two datasets: `pusht` (simulation) and `umi` (real-world outdoor).
+We don't expect the same optimal settings for a dataset of images from a simulation, or from real-world in an appartment, or in a factory, or outdoor, etc. Hence, we run this benchmark on two datasets: `pusht` (simulation) and `umi` (real-world outdoor).
 **Requested timestamps**
-In this benchmark, we focus on the loading time of random access, so we are not interested about sequentially loading all frames of a video like in a movie. However, the number of consecutive timestamps requested and their spacing can greatly affect the `load_time_factor`. In fact, it is expected to get faster loading time by decoding a large number of consecutive frames from a video, than to load the same data from individual images. To reflect our robotics use case, we consider a few settings:
+In this benchmark, we focus on the loading time of random access, so we are not interested in sequentially loading all frames of a video like in a movie. However, the number of consecutive timestamps requested and their spacing can greatly affect the `load_time_factor`. In fact, it is expected to get faster loading time by decoding a large number of consecutive frames from a video, than to load the same data from individual images. To reflect our robotics use case, we consider a few settings:
 - `single_frame`: 1 frame,
 - `2_frames`: 2 consecutive frames (e.g. `[t, t + 1 / fps]`),
 - `2_frames_4_space`: 2 consecutive frames with 4 frames of spacing (e.g `[t, t + 4 / fps]`),
 **Data augmentations**
-We might revisit this benchmark and find better settings if we train our policies with various data augmentations to make them more robusts (e.g. robust to color changes, compression, etc.).
+We might revisit this benchmark and find better settings if we train our policies with various data augmentations to make them more robust (e.g. robust to color changes, compression, etc.).
 ## Results
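
A minimal sketch of the requested-timestamp settings listed in the hunk above, expressed as time offsets around a reference time `t` (the frame rate is illustrative):

```python
# Sketch: turn a benchmark setting name into the timestamps to decode,
# following the definitions in the list above. fps is illustrative.
def requested_timestamps(t: float, setting: str, fps: int = 30) -> list[float]:
    if setting == "single_frame":
        return [t]
    if setting == "2_frames":
        return [t, t + 1 / fps]
    if setting == "2_frames_4_space":
        return [t, t + 4 / fps]
    raise ValueError(f"unknown setting: {setting}")

print(requested_timestamps(1.0, "2_frames_4_space"))  # [1.0, 1.1333...]
```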

View File

@@ -15,7 +15,7 @@ from tests.utils import require_env
 def test_available_env_task(env_name: str, task_name: list):
     """
     This test verifies that all environments listed in `lerobot/__init__.py` can
-    be sucessfully imported if they're installed — and that their
+    be successfully imported if they're installed — and that their
     `available_tasks_per_env` are valid.
     """
     package_name = f"gym_{env_name}"
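
The rest of the test body sits outside this hunk. As a generic, hypothetical sketch of the import-and-instantiate pattern the docstring describes (not necessarily the repo's actual code):

```python
# Hypothetical sketch of the pattern described in the docstring above:
# import the environment package if it is installed, then check that each
# advertised task can be instantiated. The env id format is an assumption.
import gymnasium as gym
import pytest


def check_env_tasks(env_name: str, task_names: list[str]) -> None:
    package_name = f"gym_{env_name}"
    pytest.importorskip(package_name)  # imports the package (registering its envs) or skips
    for task in task_names:
        env = gym.make(f"{package_name}/{task}")  # assumes namespaced env ids
        env.close()
```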