From 1bb9baf5778198e088a4ced71161eb7861905b6f Mon Sep 17 00:00:00 2001
From: Alexander Soare
Date: Tue, 21 May 2024 15:15:27 +0100
Subject: [PATCH] update instructions for uploading model to the hub

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index ab49602a..33d5e952 100644
--- a/README.md
+++ b/README.md
@@ -228,14 +228,14 @@ If your dataset format is not supported, implement your own in `lerobot/common/d
 
 Once you have trained a policy you may upload it to the Hugging Face hub using a hub id that looks like `${hf_user}/${repo_name}` (e.g. [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht)).
 
-You first need to find the checkpoint located inside your experiment directory (e.g. `outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500`). It should contain:
+You first need to find the checkpoint folder located inside your experiment directory (e.g. `outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500`). Within that there is a `pretrained_model` directory which should contain:
 - `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
 - `model.safetensors`: A set of `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
 - `config.yaml`: A consolidated Hydra training configuration containing the policy, environment, and dataset configs. The policy configuration should match `config.json` exactly. The environment config is useful for anyone who wants to evaluate your policy. The dataset config just serves as a paper trail for reproducibility.
 
 To upload these to the hub, run the following:
 ```bash
-huggingface-cli upload ${hf_user}/${repo_name} path/to/checkpoint/dir
+huggingface-cli upload ${hf_user}/${repo_name} path/to/pretrained_model
 ```
 
 See [eval.py](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/eval.py) for an example of how other people may use your policy.
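
For context, a minimal usage sketch of the updated upload command, with the placeholders filled in. The hub id `my_hf_user/act_aloha_default` is a hypothetical example, not part of the patch; the checkpoint path is the one used as an example in the README text above.

```bash
# Hypothetical example values; substitute your own hub user, repo name, and
# experiment directory. Per the updated instructions, the upload targets the
# pretrained_model subdirectory of the checkpoint, not the checkpoint root.
CKPT=outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500
huggingface-cli upload my_hf_user/act_aloha_default ${CKPT}/pretrained_model
```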