diff --git a/README.md b/README.md
index a867e2e1..41a04997 100644
--- a/README.md
+++ b/README.md
@@ -152,8 +152,9 @@ See `python lerobot/scripts/eval.py --help` for more instructions.
 
 Check out [example 3](./examples/3_train_policy.py) to see how you can start training a model on a dataset, which will be automatically downloaded if needed.
 
-In general, you can use our training script to easily train any policy in any environment:
+In general, you can use our training script to easily train any policy on its environment:
 ```bash
+# TODO(aliberts): not working
 python lerobot/scripts/train.py \
 env=aloha \
 task=sim_insertion \
diff --git a/examples/3_train_policy.py b/examples/3_train_policy.py
index 134271ea..2b6c3c6a 100644
--- a/examples/3_train_policy.py
+++ b/examples/3_train_policy.py
@@ -18,7 +18,7 @@ from lerobot.common.utils.utils import init_hydra_config
 output_directory = Path("outputs/train/example_pusht_diffusion")
 os.makedirs(output_directory, exist_ok=True)
 
-# Number of offline training steps (we'll only do offline training for this example.
+# Number of offline training steps (we'll only do offline training for this example.)
 # Adjust as you prefer. 5000 steps are needed to get something worth evaluating.
 training_steps = 5000
 device = torch.device("cuda")
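
For context on the second hunk: examples/3_train_policy.py runs a fixed number of "offline" gradient steps over a pre-recorded dataset, with no environment rollouts. Below is a minimal, self-contained sketch of that pattern under stated assumptions. The toy dataset, policy network, MSE loss, optimizer settings, and `log_freq` are illustrative stand-ins, not LeRobot's actual `make_dataset()`/policy API; only `output_directory`, `training_steps = 5000`, and the CUDA device come from the hunk above.

```python
# Minimal sketch of an offline training loop: a fixed number of gradient steps
# over an offline dataset, no simulator interaction. Toy data/model are stand-ins.
import os
from pathlib import Path

import torch
from torch.utils.data import DataLoader, TensorDataset

output_directory = Path("outputs/train/example_pusht_diffusion")
os.makedirs(output_directory, exist_ok=True)

# Number of offline training steps (from the example); device falls back to CPU here.
training_steps = 5000
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
log_freq = 250  # arbitrary logging interval, not taken from the diff

# Toy offline dataset: (observation, action) pairs standing in for a real demo dataset.
observations = torch.randn(1024, 8)
actions = torch.randn(1024, 2)
dataloader = DataLoader(TensorDataset(observations, actions), batch_size=64, shuffle=True)

# Toy policy network standing in for the real (e.g. diffusion) policy.
policy = torch.nn.Sequential(
    torch.nn.Linear(8, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
).to(device)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

step = 0
done = False
while not done:
    for obs, act in dataloader:
        obs, act = obs.to(device), act.to(device)
        loss = torch.nn.functional.mse_loss(policy(obs), act)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step % log_freq == 0:
            print(f"step: {step} loss: {loss.item():.3f}")
        step += 1
        if step >= training_steps:
            done = True
            break

# Save the trained weights so they can be evaluated afterwards.
torch.save(policy.state_dict(), output_directory / "model.pt")
```

The loop counts gradient steps rather than epochs, so it breaks out of the dataloader once `training_steps` is reached; the gist is that training consumes a fixed dataset for a set number of updates and never touches a simulator.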