fixed typos and Hugging Face in Readmes
parent f3bba0270d
commit b88a9a9e6e

README.md | 18
@@ -29,13 +29,13 @@
 ---

-🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier for entry to robotics so that everyone can contribute and benefit from sharing datasets and pretrained models.
+🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier of entry to robotics so that everyone can contribute and benefit from sharing datasets and pretrained models.

 🤗 LeRobot contains state-of-the-art approaches that have been shown to transfer to the real-world with a focus on imitation learning and reinforcement learning.

 🤗 LeRobot already provides a set of pretrained models, datasets with human collected demonstrations, and simulated environments so that everyone can get started. In the coming weeks, the plan is to add more and more support for real-world robotics on the most affordable and capable robots out there.

-🤗 LeRobot hosts pretrained models and datasets on this HuggingFace community page: [huggingface.co/lerobot](https://huggingface.co/lerobot)
+🤗 LeRobot hosts pretrained models and datasets on this Hugging Face community page: [huggingface.co/lerobot](https://huggingface.co/lerobot)

 #### Examples of pretrained models and environments
@@ -86,7 +86,7 @@ For instance, to install 🤗 LeRobot with aloha and pusht, use:
 pip install ".[aloha, pusht]"
 ```

-To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiments tracking, log in with
+To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
 ```bash
 wandb login
 ```
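Not part of the commit, just context for the reader: once `wandb login` has run, LeRobot drives tracking through its Hydra config rather than manual calls. For orientation only, a minimal generic W&B sketch (project and metric names here are made up):

```python
import wandb

# Hypothetical project/run names, for illustration only; LeRobot wires
# W&B through its training config, not hand-written calls like these.
run = wandb.init(project="lerobot-demo", name="sanity-check")
run.log({"loss": 0.42, "step": 1})  # record one metric point
run.finish()
```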
@@ -118,7 +118,7 @@ wandb login

 ### Visualize datasets

-Check out [examples](./examples) to see how you can import our dataset class, download the data from the HuggingFace hub and use our rendering utilities.
+Check out [examples](./examples) to see how you can import our dataset class, download the data from the Hugging Face hub and use our rendering utilities.

 Or you can achieve the same result by executing our script from the command line:
 ```bash
@@ -130,7 +130,7 @@ hydra.run.dir=outputs/visualize_dataset/example

 ### Evaluate a pretrained policy

-Check out [examples](./examples) to see how you can load a pretrained policy from HuggingFace hub, load up the corresponding environment and model, and run an evaluation.
+Check out [examples](./examples) to see how you can load a pretrained policy from the Hugging Face hub, load up the corresponding environment and model, and run an evaluation.

 Or you can achieve the same result by executing our script from the command line:
 ```bash
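An editor's aside, not in the diff: the pretrained-policy files mentioned above can be fetched straight from the hub with `huggingface_hub`. A minimal sketch, assuming a placeholder repo id (browse https://huggingface.co/lerobot for real ones):

```python
from huggingface_hub import snapshot_download

# "lerobot/some_pretrained_policy" is a hypothetical repo id.
local_dir = snapshot_download(repo_id="lerobot/some_pretrained_policy")
print(local_dir)  # local path holding the downloaded checkpoint files
```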
@@ -155,7 +155,7 @@ See `python lerobot/scripts/eval.py --help` for more instructions.

 Check out [examples](./examples) to see how you can start training a model on a dataset, which will be automatically downloaded if needed.

-In general, you can use our training script to easily train any policy on any environment:
+In general, you can use our training script to easily train any policy in any environment:
 ```bash
 python lerobot/scripts/train.py \
 env=aloha \
@@ -177,7 +177,7 @@ If you would like to contribute to 🤗 LeRobot, please check out our [contribut
 # TODO(rcadene, AdilZouitine): rewrite this section
 ```

-To add a dataset to the hub, first login and use a token generated from [huggingface settings](https://huggingface.co/settings/tokens) with write access:
+To add a dataset to the hub, first login and use a token generated from [Hugging Face settings](https://huggingface.co/settings/tokens) with write access:
 ```bash
 huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
 ```
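Not part of the commit: after logging in, a local dataset folder can also be pushed from Python instead of the CLI. A sketch, assuming hypothetical folder and repo names:

```python
from huggingface_hub import HfApi

api = HfApi()
# folder_path and repo_id are placeholders for illustration.
api.upload_folder(
    folder_path="data/my_dataset",
    repo_id="HF_USER/my_dataset",
    repo_type="dataset",  # upload as a dataset repo, not a model repo
)
```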
@@ -242,14 +242,14 @@ python tests/scripts/mock_dataset.py --in-data-dir data/$DATASET --out-data-dir
 # TODO(rcadene, alexander-soare): rewrite this section
 ```

-Once you have trained a policy you may upload it to the HuggingFace hub.
+Once you have trained a policy you may upload it to the Hugging Face hub.

 Firstly, make sure you have a model repository set up on the hub. The hub ID looks like HF_USER/REPO_NAME.

 Secondly, assuming you have trained a policy, you need the following (which should all be in any of the subdirectories of `checkpoints` in your training output folder, if you've used the LeRobot training script):

 - `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
-- `model.safetensors`: The `torch.nn.Module` parameters saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
+- `model.safetensors`: The `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
 - `config.yaml`: This is the consolidated Hydra training configuration containing the policy, environment, and dataset configs. The policy configuration should match `config.json` exactly. The environment config is useful for anyone who wants to evaluate your policy. The dataset config just serves as a paper trail for reproducibility.

 To upload these to the hub, run the following with a desired revision ID.
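The actual upload command sits past the hunk boundary, so it is not reproduced here. As a hedged illustration only, the three checkpoint files listed above could be pushed with the Python API; every name below is a placeholder:

```python
from huggingface_hub import HfApi

api = HfApi()
# Hypothetical checkpoint directory containing config.json,
# model.safetensors, and config.yaml; hub id and revision are placeholders.
api.upload_folder(
    folder_path="outputs/train/my_run/checkpoints/last",
    repo_id="HF_USER/REPO_NAME",
    revision="v1.0",
)
```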

@@ -37,16 +37,16 @@ How to decode videos?
 ## Variables

 **Image content**
-We don't expect the same optimal settings for a dataset of images from a simulation, or from real-world in an appartment, or in a factory, or outdoor, etc. Hence, we run this bechmark on two datasets: `pusht` (simulation) and `umi` (real-world outdoor).
+We don't expect the same optimal settings for a dataset of images from a simulation, or from real-world in an appartment, or in a factory, or outdoor, etc. Hence, we run this benchmark on two datasets: `pusht` (simulation) and `umi` (real-world outdoor).

 **Requested timestamps**
-In this benchmark, we focus on the loading time of random access, so we are not interested about sequentially loading all frames of a video like in a movie. However, the number of consecutive timestamps requested and their spacing can greatly affect the `load_time_factor`. In fact, it is expected to get faster loading time by decoding a large number of consecutive frames from a video, than to load the same data from individual images. To reflect our robotics use case, we consider a few settings:
+In this benchmark, we focus on the loading time of random access, so we are not interested in sequentially loading all frames of a video like in a movie. However, the number of consecutive timestamps requested and their spacing can greatly affect the `load_time_factor`. In fact, it is expected to get faster loading time by decoding a large number of consecutive frames from a video, than to load the same data from individual images. To reflect our robotics use case, we consider a few settings:
 - `single_frame`: 1 frame,
 - `2_frames`: 2 consecutive frames (e.g. `[t, t + 1 / fps]`),
 - `2_frames_4_space`: 2 consecutive frames with 4 frames of spacing (e.g `[t, t + 4 / fps]`),

 **Data augmentations**
-We might revisit this benchmark and find better settings if we train our policies with various data augmentations to make them more robusts (e.g. robust to color changes, compression, etc.).
+We might revisit this benchmark and find better settings if we train our policies with various data augmentations to make them more robust (e.g. robust to color changes, compression, etc.).

 ## Results
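Not part of the commit, but to make the three settings above concrete, here is a small hypothetical helper expanding each setting name into the timestamps it requests (it mirrors the list; it is not the benchmark's actual code):

```python
def requested_timestamps(t: float, fps: float, setting: str) -> list[float]:
    """Expand a benchmark setting name into the timestamps it loads."""
    if setting == "single_frame":
        return [t]
    if setting == "2_frames":
        return [t, t + 1 / fps]
    if setting == "2_frames_4_space":
        return [t, t + 4 / fps]
    raise ValueError(f"unknown setting: {setting}")

# Example: requested_timestamps(0.5, 30, "2_frames") -> [0.5, 0.5333...]
```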
@@ -15,7 +15,7 @@ from tests.utils import require_env
 def test_available_env_task(env_name: str, task_name: list):
     """
     This test verifies that all environments listed in `lerobot/__init__.py` can
-    be sucessfully imported — if they're installed — and that their
+    be successfully imported — if they're installed — and that their
     `available_tasks_per_env` are valid.
     """
     package_name = f"gym_{env_name}"
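Outside the diff: the "if they're installed" guard in that docstring is usually an import-and-skip pattern. A hypothetical standalone sketch (not LeRobot's actual helper):

```python
import importlib

import pytest


def skip_unless_installed(package_name: str) -> None:
    # Skip the current test, rather than fail it, when an optional
    # environment package (e.g. gym_aloha) is not installed.
    try:
        importlib.import_module(package_name)
    except ModuleNotFoundError:
        pytest.skip(f"{package_name} is not installed")
```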