Commit Graph

919 Commits

Michel Aractingi f4f5b26a21 Several fixes to move the actor_server and learner_server code from the maniskill environment to the real robot environment.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Eugene Mironov 434d1e0614 [HIL-SERL port] Add Reward classifier benchmark tracking to choose the best visual encoder (#688) 2025-03-28 17:18:24 +00:00
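
A hedged sketch of what the benchmark tracking described above could look like: one reward classifier is trained per candidate visual encoder and validation accuracy is logged to pick a winner. The encoder names and the `train_classifier`/`evaluate` helpers are illustrative assumptions, not the actual code from #688.

```python
# Hypothetical benchmark loop: compare reward-classifier accuracy across encoders.
import wandb

CANDIDATE_ENCODERS = [  # illustrative choices, not the PR's actual list
    "facebook/convnext-base-224",
    "google/vit-base-patch16-224",
    "microsoft/resnet-50",
]

def benchmark_encoders(train_classifier, evaluate, train_set, val_set):
    """Train one classifier per encoder, track accuracy, return the best encoder."""
    results = {}
    for name in CANDIDATE_ENCODERS:
        run = wandb.init(project="reward-classifier-benchmark", name=name, reinit=True)
        classifier = train_classifier(encoder_name=name, dataset=train_set)
        results[name] = evaluate(classifier, val_set)
        run.log({"val/accuracy": results[name]})
        run.finish()
    return max(results, key=results.get)
```
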
Michel Aractingi 729b4ed697 - Added `lerobot/scripts/server/gym_manipulator.py` that contains all the necessary wrappers to run a gym-style env around the real robot.
- Added `lerobot/scripts/server/find_joint_limits.py` to test the min and max angles of the motion you wish the robot to explore during RL training.
- Added logic in `manipulator.py` to limit the maximum possible joint angles to allow motion within a predefined joint position range. The limits are specified in the YAML config for each robot; check out `so100.yaml`.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
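
As a rough illustration of the pieces this commit describes, here is a minimal gym-style env around a real robot that clamps commanded joint angles to a configured range; the `robot` interface and config fields are placeholders, not the actual `gym_manipulator.py` code.

```python
# Sketch only: a gym-style wrapper around a real robot with joint-limit clamping.
import gymnasium as gym
import numpy as np

class RealRobotEnv(gym.Env):
    def __init__(self, robot, joint_min, joint_max):
        self.robot = robot  # placeholder robot interface
        self.joint_min = np.asarray(joint_min, dtype=np.float32)
        self.joint_max = np.asarray(joint_max, dtype=np.float32)
        self.action_space = gym.spaces.Box(self.joint_min, self.joint_max)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.robot.go_to_rest_pose()          # hypothetical robot call
        return self.robot.read_observation(), {}

    def step(self, action):
        # Clamp targets to the range found with find_joint_limits.py.
        safe_action = np.clip(action, self.joint_min, self.joint_max)
        self.robot.send_joint_targets(safe_action)  # hypothetical robot call
        obs = self.robot.read_observation()
        return obs, 0.0, False, False, {}     # reward is added by a wrapper
```
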
Michel Aractingi 163bcbcad4 fixed bug in crop_dataset_roi.py
added missing buffer.pt in server dir

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 875662f16b Added additional wrappers for the environment: action repeat, keyboard interface, reset wrapper.
Tested the reset mechanism, keyboard interface, and convert wrapper on the robots.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
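
Of the wrappers listed above, action repeat is the simplest to sketch: the same action is applied for a fixed number of steps and the rewards are summed. This is a generic illustration, not the repository's exact wrapper.

```python
# Generic action-repeat wrapper (illustrative).
import gymnasium as gym

class ActionRepeatWrapper(gym.Wrapper):
    def __init__(self, env, num_repeats=3):
        super().__init__(env)
        self.num_repeats = num_repeats

    def step(self, action):
        total_reward = 0.0
        for _ in range(self.num_repeats):
            obs, reward, terminated, truncated, info = self.env.step(action)
            total_reward += reward
            if terminated or truncated:  # stop repeating once the episode ends
                break
        return obs, total_reward, terminated, truncated, info
```
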
Michel Aractingi 87c7eca582 Added crop_dataset_roi.py that allows you to load a LeRobotDataset -> crop its images -> create a new LeRobotDataset with the cropped and resized images.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
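
The core image transform in such a script presumably looks like the sketch below: crop each frame to a region of interest, then resize. Only the image math is shown; the LeRobotDataset read/write plumbing is omitted.

```python
# Crop-and-resize transform (sketch); dataset I/O is omitted.
import torch
from torchvision.transforms.v2 import functional as F

def crop_and_resize(frame: torch.Tensor, roi, out_size=(128, 128)) -> torch.Tensor:
    """Crop a (C, H, W) frame to roi = (top, left, height, width), then resize."""
    top, left, height, width = roi
    cropped = F.crop(frame, top, left, height, width)
    return F.resize(cropped, list(out_size))
```
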
Michel Aractingi 179ee3b1f6 - Added a base gym env class for the real robot environment.
- Added several wrappers around the base gym env robot class, including: time limit, reward classifier, image cropping, and observation preprocessing.
- Added an interactive script, crop_roi.py, where the user selects the ROI in the observation images; it returns the crop values that improve the performance of the policy and the reward classifier.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
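
For the interactive ROI selection, an OpenCV-based sketch is shown below; whether the actual crop_roi.py uses `cv2.selectROI` is an assumption here.

```python
# Possible interactive ROI picker (assumes OpenCV; the real script may differ).
import cv2

def select_roi(image_bgr):
    """Let the user drag a box; return (top, left, height, width)."""
    x, y, w, h = cv2.selectROI("Select ROI", image_bgr, showCrosshair=True)
    cv2.destroyWindow("Select ROI")
    return int(y), int(x), int(h), int(w)
```
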
Michel Aractingi b29401e4e2 - Refactored the observation encoder in `modeling_sac.py`
- Added `torch.compile` to the actor and learner servers.
- Organized imports in `train_sac.py`
- Optimized the parameter push by not sending the frozen pre-trained encoder.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
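
The "don't push the frozen encoder" optimization can be pictured as below: only parameters that require gradients are serialized for the actor. The function name is illustrative.

```python
# Sketch: send only trainable parameters to the actor (frozen encoder excluded).
def trainable_state_dict(policy):
    trainable = {name for name, p in policy.named_parameters() if p.requires_grad}
    return {k: v for k, v in policy.state_dict().items() if k in trainable}
```
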
Yoel faab32fe14 [Port HIL-SERL] Add HF vision encoder option in SAC (#651)
Added support for a custom pretrained vision encoder in the SAC modeling implementation. Great job @ChorntonYoel!
2025-03-28 17:18:24 +00:00
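
A minimal sketch of wrapping a Hugging Face vision model as an observation encoder; the model name, pooling choice, and freezing default are assumptions, not the PR's actual settings.

```python
# Hugging Face vision encoder wrapper (sketch; defaults are assumptions).
import torch
from transformers import AutoImageProcessor, AutoModel

class HFVisionEncoder(torch.nn.Module):
    def __init__(self, model_name="facebook/convnextv2-tiny-1k-224", freeze=True):
        super().__init__()
        self.processor = AutoImageProcessor.from_pretrained(model_name)
        self.model = AutoModel.from_pretrained(model_name)
        if freeze:
            for p in self.model.parameters():
                p.requires_grad = False

    def forward(self, images):
        inputs = self.processor(images=images, return_tensors="pt")
        return self.model(**inputs).pooler_output  # (batch, hidden_dim)
```
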
Michel Aractingi c620b0878f Cleaned `learner_server.py`. Added several block functions to improve readability.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 2023289ce8 Added support for checkpointing the policy. We can save and load the policy state dict, optimizer states, optimization step, and interaction step.
Added functions for converting the replay buffer to and from LeRobotDataset format. When we want to save the replay buffer, we first convert it to LeRobotDataset format and save it locally, and vice versa.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
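
The checkpoint layout described above can be sketched as a single `torch.save` payload; the key names here are assumptions, not the repository's actual schema.

```python
# Checkpointing sketch: weights, optimizer states, and both step counters together.
import torch

def save_checkpoint(path, policy, optimizers, optimization_step, interaction_step):
    torch.save(
        {
            "policy": policy.state_dict(),
            "optimizers": {k: opt.state_dict() for k, opt in optimizers.items()},
            "optimization_step": optimization_step,
            "interaction_step": interaction_step,
        },
        path,
    )

def load_checkpoint(path, policy, optimizers):
    ckpt = torch.load(path)
    policy.load_state_dict(ckpt["policy"])
    for k, opt in optimizers.items():
        opt.load_state_dict(ckpt["optimizers"][k])
    return ckpt["optimization_step"], ckpt["interaction_step"]
```
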
Michel Aractingi 9afd093030 Removed unnecessary time.sleep in the streaming server on the learner side
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi f3c4d6e1ec Added missing config files `env/maniskill_example.yaml` and `policy/sac_maniskill.yaml` that are necessary to run the lerobot implementation of sac with the maniskill baselines.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 18207d995e - Added additional logging information in wandb around the timings of the policy loop and optimization loop.
- Optimized the critic design, which improves the performance of the learner loop by a factor of 2.
- Cleaned the code and fixed style issues.

- Completed the config with an actor_learner_config field that contains the host IP and port elements necessary for the actor-learner servers.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
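
The timing instrumentation could be as simple as the context manager below; the metric key names are made up for illustration.

```python
# Timing-to-wandb helper (sketch).
import time
from contextlib import contextmanager

import wandb

@contextmanager
def log_time(name):
    start = time.perf_counter()
    yield
    wandb.log({f"timing/{name}_s": time.perf_counter() - start})

# e.g. inside the learner loop:
#     with log_time("optimization_step"):
#         loss.backward(); optimizer.step()
```
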
Michel Aractingi a0a81c0c12 FREEDOM, added back the optimization loop code in `learner_server.py`
Ran an experiment with the pushcube env from maniskill. The learning seems to work.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi ef64ba91d9 Added a server directory in `lerobot/scripts` that contains the scripts and protobuf message types needed to split training into two processes, acting and learning. The actor rolls out the policy and collects interaction data, while the learner receives the data, trains the policy, and sends the updated parameters to the actor. The two scripts are run simultaneously.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
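
Conceptually, the two processes look like the sketch below, with in-memory queues standing in for the gRPC/protobuf transport the actual servers use; `update_fn` and the buffer/policy interfaces are placeholders.

```python
# Conceptual actor-learner split (queues stand in for gRPC transport).
from multiprocessing import Queue

def actor_loop(env, policy, transitions: Queue, params: Queue):
    obs, _ = env.reset()
    while True:
        if not params.empty():                       # pull freshest weights
            policy.load_state_dict(params.get())
        action = policy.select_action(obs)
        next_obs, reward, terminated, truncated, _ = env.step(action)
        transitions.put((obs, action, reward, next_obs, terminated))
        obs = env.reset()[0] if (terminated or truncated) else next_obs

def learner_loop(policy, buffer, update_fn, transitions: Queue, params: Queue):
    while True:
        while not transitions.empty():               # drain interaction data
            buffer.add(*transitions.get())
        update_fn(policy, buffer.sample())           # one optimization step
        params.put(policy.state_dict())              # push updated parameters
```
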
AdilZouitine 83dc00683c Stable version of RLPD + DrQ 2025-03-28 17:18:24 +00:00
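
One of the standard DrQ ingredients is random-shift image augmentation (pad, then randomly crop back to size); a generic version is sketched below, with the pad width as an example value.

```python
# DrQ-style random shift augmentation (generic sketch).
import torch
import torch.nn.functional as F

def random_shift(images: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """images: (B, C, H, W). Pad by `pad` pixels, then randomly crop back."""
    b, c, h, w = images.shape
    padded = F.pad(images, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(images)
    for i in range(b):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top : top + h, left : left + w]
    return out
```
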
AdilZouitine 5b92465e38 Add type annotations and restructure SACConfig class fields 2025-03-28 17:18:24 +00:00
Adil Zouitine 4b78ab2789 Change SAC policy implementation with configuration and modeling classes 2025-03-28 17:18:24 +00:00
Adil Zouitine bd8c768f62 SAC works 2025-03-28 17:18:24 +00:00
Adil Zouitine 1e9bafc852 [WIP] correct sac implementation 2025-03-28 17:18:24 +00:00
Adil Zouitine 921ed960fb Add rlpd tricks 2025-03-28 17:18:24 +00:00
Adil Zouitine 67b64e445b SAC works 2025-03-28 17:18:24 +00:00
Adil Zouitine 6c8023e702 remove breakpoint 2025-03-28 17:18:24 +00:00
Adil Zouitine b495b19a6a [WIP] correct sac implementation 2025-03-28 17:18:24 +00:00
Michel Aractingi 6139df553d Extend reward classifier for multiple camera views (#626) 2025-03-28 17:18:24 +00:00
Eugene Mironov b68730474a [Port HIL_SERL] Final fixes for the Reward Classifier (#598) 2025-03-28 17:18:24 +00:00
Michel Aractingi 764925e4a2 added temporary fix for missing task_index key in online environment 2025-03-28 17:18:24 +00:00
Michel Aractingi 7bb142b707 split encoder for critic and actor 2025-03-28 17:18:24 +00:00
Michel Aractingi 2c2ed084cc style fixes 2025-03-28 17:18:24 +00:00
KeWang1017 91fefdecfa Refactor SAC configuration and policy for improved action sampling and stability
- Updated SACConfig to replace standard deviation parameterization with log_std_min and log_std_max for better control over action distributions.
- Modified SACPolicy to streamline action selection and log probability calculations, enhancing stochastic behavior.
- Removed deprecated TanhMultivariateNormalDiag class to simplify the codebase and improve maintainability.

These changes aim to enhance the robustness and performance of the SAC implementation during training and inference.
2025-03-28 17:18:24 +00:00
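
The log_std parameterization with tanh squashing typically follows the standard SAC recipe sketched below; the clamping bounds are illustrative, not the values chosen in this commit.

```python
# Standard tanh-squashed Gaussian sampling with log-prob correction (sketch).
import torch

LOG_STD_MIN, LOG_STD_MAX = -5.0, 2.0  # example bounds

def sample_action(mean, log_std):
    log_std = torch.clamp(log_std, LOG_STD_MIN, LOG_STD_MAX)
    dist = torch.distributions.Normal(mean, log_std.exp())
    u = dist.rsample()                 # reparameterized sample
    action = torch.tanh(u)
    # Change-of-variables correction: log det |d tanh(u)/du| = sum log(1 - tanh(u)^2)
    log_prob = dist.log_prob(u).sum(-1)
    log_prob -= torch.log(1.0 - action.pow(2) + 1e-6).sum(-1)
    return action, log_prob
```
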
KeWang1017 70e3b9248c Refine SAC configuration and policy for enhanced performance
- Updated standard deviation parameterization in SACConfig to 'softplus' with defined min and max values for improved stability.
- Modified action sampling in SACPolicy to use reparameterized sampling, ensuring better gradient flow and log probability calculations.
- Cleaned up log probability calculations in TanhMultivariateNormalDiag for clarity and efficiency.
- Increased evaluation frequency in YAML configuration to 50000 for more efficient training cycles.

These changes aim to enhance the robustness and performance of the SAC implementation during training and inference.
2025-03-28 17:18:24 +00:00
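
The 'softplus' parameterization mentioned above amounts to mapping the network's raw std head through softplus and clipping it to a range; the min/max values below are placeholders.

```python
# Softplus std parameterization (sketch; bounds are placeholders).
import torch
import torch.nn.functional as F

def softplus_std(raw_std, std_min=1e-5, std_max=5.0):
    return torch.clamp(F.softplus(raw_std), std_min, std_max)
```
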
KeWang1017 0ecf40d396 Refactor SACPolicy for improved action sampling and standard deviation handling
- Updated action selection to use distribution sampling and log probabilities for better stochastic behavior.
- Enhanced standard deviation clamping to prevent extreme values, ensuring stability in policy outputs.
- Cleaned up code by removing unnecessary comments and improving readability.

These changes aim to refine the SAC implementation, enhancing its robustness and performance during training and inference.
2025-03-28 17:18:24 +00:00
KeWang1017 a113daa81e trying to get sac running 2025-03-28 17:18:24 +00:00
Michel Aractingi 80b86e9bc3 Added normalization schemes and style checks 2025-03-28 17:18:24 +00:00
Michel Aractingi 9dafad15e6 added optimizer and sac to factory.py 2025-03-28 17:18:24 +00:00
Eugene Mironov d96edbf5ac [HIL-SERL PORT] Fix linter issues (#588) 2025-03-28 17:18:24 +00:00
Eugene Mironov 6340d9d17c [Port Hil-SERL] Add unit tests for the reward classifier & fix imports & check script (#578) 2025-03-28 17:18:24 +00:00
Michel Aractingi 66268fcf85 added comments from kewang 2025-03-28 17:18:24 +00:00
KeWang1017 a5228a0dfe Enhance SAC configuration and policy with new parameters and subsampling logic
- Added `num_subsample_critics`, `critic_target_update_weight`, and `utd_ratio` to SACConfig.
- Implemented target entropy calculation in SACPolicy if not provided.
- Introduced subsampling of critics to prevent overfitting during updates.
- Updated temperature loss calculation to use the new target entropy.
- Added comments for future UTD update implementation.

These changes improve the flexibility and performance of the SAC implementation.
2025-03-28 17:18:24 +00:00
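
The three ensemble tricks named above are commonly implemented as in the sketch below: a REDQ-style random critic subset for the target, the `-dim(A)` target-entropy heuristic, and Polyak-averaged target updates. Names and defaults are illustrative.

```python
# Critic subsampling, target entropy default, and soft target updates (sketch).
import random
import torch

def default_target_entropy(action_dim: int) -> float:
    return -float(action_dim)  # common SAC heuristic when none is provided

def subsample_critics(critics, num_subsample):
    """Random subset of the critic ensemble for the target computation."""
    return random.sample(list(critics), num_subsample)

def soft_update(target, source, tau):
    """Polyak averaging; tau plays the role of critic_target_update_weight."""
    with torch.no_grad():
        for tp, sp in zip(target.parameters(), source.parameters()):
            tp.mul_(1.0 - tau).add_(sp, alpha=tau)
```
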
KeWang dbadaae28b Port SAC WIP (#581)
Co-authored-by: KeWang1017 <ke.wang@helloleap.ai>
2025-03-28 17:18:24 +00:00
Michel Aractingi 44536d1f0c completed losses 2025-03-28 17:18:24 +00:00
Michel Aractingi 69b6de46d7 nit in control_robot.py 2025-03-28 17:18:24 +00:00
Michel Aractingi 399f834788 Update lerobot/scripts/train_hilserl_classifier.py
Co-authored-by: Yoel <yoel.chornton@gmail.com>
2025-03-28 17:18:24 +00:00
Eugene Mironov df57d372d6 Fixup 2025-03-28 17:18:24 +00:00
Michel Aractingi 76234b7d14 Add human intervention mechanism and eval_robot script to evaluate policy on the robot (#541)
Co-authored-by: Yoel <yoel.chornton@gmail.com>
2025-03-28 17:18:24 +00:00
Yoel 58cc445921 Reward classifier and training (#528)
Co-authored-by: Daniel Ritchie <daniel@brainwavecollective.ai>
Co-authored-by: resolver101757 <kelster101757@hotmail.com>
Co-authored-by: Jannik Grothusen <56967823+J4nn1K@users.noreply.github.com>
Co-authored-by: Remi <re.cadene@gmail.com>
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
2025-03-28 17:18:24 +00:00
Steven Palma b568de35ad fix(datasets): cast imgs_dir as Path (#915) 2025-03-28 18:08:12 +01:00
Yongjin Cho ae9c81ac39 fix(docs): correct spelling of 'ffmpeg' in installation instructions (#914) 2025-03-28 17:43:33 +01:00
Steven Palma 78fd1a1e04 chore(docs): update docs (#911) 2025-03-27 09:55:06 +01:00