Commit Graph

892 Commits

Author SHA1 Message Date
AdilZouitine 2c799508d7 Update ManiSkill configuration and replay buffer to support truncation and dataset handling
- Reduced image size in ManiSkill environment configuration from 128 to 64
- Added support for truncation in replay buffer and actor server
- Updated SAC policy configuration to use a specific dataset and modify vision encoder settings
- Improved dataset conversion process with progress tracking and task naming
- Added flexibility for joint action space masking in learner server
2025-03-28 17:18:48 +00:00
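On the truncation support: when an episode ends on a time limit rather than a true termination, the TD target should still bootstrap from the next state. A minimal sketch of that distinction (function and argument names are illustrative, not the repo's API):

```python
import torch

def td_target(reward, done, truncated, next_q, gamma: float = 0.99):
    """Bootstrap target that separates termination from truncation.

    `done` flags the end of an episode for any reason; only a true
    termination (done and not truncated) cuts the bootstrap, so a
    time-limit truncation still uses the next-state value estimate.
    """
    terminal = done & ~truncated
    return reward + gamma * (~terminal).float() * next_q
```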
Michel Aractingi ff223c106d Added a caching function in the learner_server and modeling_sac to limit the number of forward passes through the pretrained encoder when it's frozen.
Added tensordict dependencies
Updated the versions of torch and torchvision

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:48 +00:00
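A minimal sketch of the caching idea, assuming the encoder is fully frozen (the class name and identity-based cache key are illustrative, not the repo's implementation):

```python
import torch
from torch import nn

class CachedFrozenEncoder(nn.Module):
    """Reuses the frozen encoder's output when the same observation batch
    is encoded several times in one optimization step (the actor, critic,
    and temperature losses all consume the same features)."""

    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder.eval()
        for p in self.encoder.parameters():
            p.requires_grad_(False)
        self._key = None
        self._out = None

    @torch.no_grad()
    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        key = (obs.data_ptr(), obs.shape)  # cheap identity check for the batch
        if key != self._key:
            self._out = self.encoder(obs)
            self._key = key
        return self._out
```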
Eugene Mironov d48161da1b [Port HIL-SERL] Adjust Actor-Learner architecture & clean up dependency management for HIL-SERL (#722) 2025-03-28 17:18:48 +00:00
AdilZouitine 150def839c Refactor SAC policy with performance optimizations and multi-camera support
- Introduced Ensemble and CriticHead classes for more efficient critic network handling
- Added support for multiple camera inputs in observation encoder
- Optimized image encoding by batching image processing
- Updated configuration for ManiSkill environment with reduced image size and action scaling
- Compiled critic networks for improved performance
- Simplified normalization and ensemble handling in critic networks
Co-authored-by: michel-aractingi <michel.aractingi@gmail.com>
2025-03-28 17:18:24 +00:00
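A sketch of the Ensemble/CriticHead split (layer sizes are illustrative):

```python
import torch
from torch import nn

class CriticHead(nn.Module):
    """One Q-head mapping a concatenated (obs embedding, action) to a scalar."""

    def __init__(self, in_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class Ensemble(nn.Module):
    """Evaluates N critic heads on the same input and stacks the Q-values."""

    def __init__(self, heads: list[nn.Module]):
        super().__init__()
        self.heads = nn.ModuleList(heads)

    def forward(self, obs_emb: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        x = torch.cat([obs_emb, action], dim=-1)
        return torch.stack([head(x) for head in self.heads], dim=0)  # (N, B, 1)
```

Stacking the outputs makes SAC's min-over-critics a single `q.min(dim=0)`, and the whole module is a natural target for `torch.compile`.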
Michel Aractingi 795063aa1b - Fixed a major issue in the loading of the policy parameters sent by the learner to the actor -- pass only the actor to `update_policy_parameters` and remove `strict=False`
- Fixed a major issue in the normalization of the actions in the `forward` function of the critic -- remove the `torch.no_grad` decorator from the normalization function in `normalize.py`
- Fixed a performance issue, boosting the optimization frequency by setting the storage device to match the learning device.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
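The gist of the loading fix, sketched (the function shape is illustrative): load the learner's weights into the actor alone, with the default `strict=True` so a key mismatch fails loudly instead of being silently skipped.

```python
from torch import nn

def update_policy_parameters(actor: nn.Module, state_dict: dict) -> None:
    # strict=True (the default) raises on missing or unexpected keys;
    # strict=False had been hiding exactly this kind of mismatch.
    actor.load_state_dict(state_dict)
```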
AdilZouitine d9cd85d976 Re-enable parameter push thread in learner server
- Uncomment and start the param_push_thread
- Restore thread joining for param_push_thread
2025-03-28 17:18:24 +00:00
AdilZouitine 279e03b6c8 Improve wandb logging and custom step tracking in logger
- Modify logger to support multiple custom step keys
- Update logging method to handle custom step keys more flexibly
- Enhance logging of optimization step and frequency
Co-authored-by: michel-aractingi <michel.aractingi@gmail.com>
2025-03-28 17:18:24 +00:00
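A sketch of multiple custom step keys with wandb (the metric names are illustrative; assumes a run is already initialized):

```python
import wandb

def configure_custom_steps() -> None:
    """Let different metric families advance along different step axes."""
    wandb.define_metric("optimization_step")
    wandb.define_metric("interaction_step")
    wandb.define_metric("train/*", step_metric="optimization_step")
    wandb.define_metric("rollout/*", step_metric="interaction_step")

def log_with_step(metrics: dict, step_key: str, step: int) -> None:
    """Log metrics together with the custom step they advance on."""
    wandb.log({**metrics, step_key: step})
```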
AdilZouitine b7a0ffc3b8 Add maniskill support.
Co-authored-by: Michel Aractingi <michel.aractingi@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 291358d6a2 Fixed a bug in the action scaling of the intervention actions and offline dataset actions (scale by the inverse delta).
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 2aca830a09 Modified the crop_dataset_roi interface to automatically write the crop parameters to a json file in the meta of the dataset
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 2f34d84298 Optimized the replay buffer's memory usage by storing data on the CPU instead of a GPU device and sending batches to the GPU.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
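A sketch of the storage change, assuming flat tensor observations (shapes are illustrative; pinned memory would make the `non_blocking` copies truly asynchronous):

```python
import torch

class CpuReplayBuffer:
    """Keeps transitions in CPU memory; only the sampled batch is moved to
    the training device, freeing GPU memory for the models."""

    def __init__(self, capacity: int, obs_dim: int, act_dim: int, device: str = "cuda"):
        self.device = device
        self.obs = torch.empty(capacity, obs_dim)  # lives on the CPU
        self.act = torch.empty(capacity, act_dim)
        self.rew = torch.empty(capacity)
        self.done = torch.empty(capacity, dtype=torch.bool)
        self.capacity, self.idx, self.full = capacity, 0, False

    def add(self, obs, act, rew, done) -> None:
        self.obs[self.idx].copy_(obs)
        self.act[self.idx].copy_(act)
        self.rew[self.idx] = rew
        self.done[self.idx] = done
        self.idx = (self.idx + 1) % self.capacity
        self.full = self.full or self.idx == 0

    def sample(self, batch_size: int):
        n = self.capacity if self.full else self.idx
        i = torch.randint(0, n, (batch_size,))
        batch = (self.obs[i], self.act[i], self.rew[i], self.done[i])
        # Move only the sampled batch to the training device.
        return tuple(t.to(self.device, non_blocking=True) for t in batch)
```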
Michel Aractingi 61b0e9539f nit
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 23c6b891a3 Removed commented-out code in the actor server
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 0847b2119b Changed the init_final value to center the starting mean and std of the policy
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 24fb8a7f47 Changed bounds for a new so100 robot
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi eb7e28d9d9 Hardcoded some normalization parameters. TODO: refactor
Added masking of actions at the level of the intervention actions and the offline dataset

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi a0e0a9a9b1 Fix log_alpha in modeling_sac: change it to an nn.Parameter
Added a pretrained vision model to the policy

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
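Why the nn.Parameter change matters, in a sketch (the class is illustrative): a plain tensor is not registered by the module, so it misses `.to(device)`, the state dict, and the optimizer.

```python
import torch
from torch import nn

class Temperature(nn.Module):
    """SAC entropy temperature with a learnable log_alpha."""

    def __init__(self, init_alpha: float = 1.0):
        super().__init__()
        # nn.Parameter registers log_alpha with the module: it moves with
        # .to(device), appears in the state dict, and is seen by the
        # temperature optimizer. A plain tensor does none of that.
        self.log_alpha = nn.Parameter(torch.tensor(float(init_alpha)).log())

    @property
    def alpha(self) -> torch.Tensor:
        return self.log_alpha.exp()

    def loss(self, log_probs: torch.Tensor, target_entropy: float) -> torch.Tensor:
        # Standard SAC temperature objective.
        return -(self.log_alpha * (log_probs + target_entropy).detach()).mean()
```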
Michel Aractingi 57e09828ce Added logging for interventions to monitor the rate of interventions over time
Added an 's' keyboard command to force success in case the reward classifier fails

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 9c14830cd9 Added the possibility to record and replay delta actions during teleoperation rather than absolute actions
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
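The record/replay pair for delta actions, sketched (the scale constant is illustrative); the division is the inverse-delta scaling fixed a few commits above:

```python
import numpy as np

DELTA = 0.05  # illustrative per-step scale, not the repo's actual value

def record_delta_action(q_prev: np.ndarray, q_now: np.ndarray) -> np.ndarray:
    """Turn consecutive absolute joint positions into a scaled delta action."""
    return (q_now - q_prev) / DELTA  # scale by the inverse delta

def replay_delta_action(q_prev: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Recover the absolute joint target from a recorded delta action."""
    return q_prev + DELTA * action
```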
Yoel 4057904238 [PORT-Hilserl] classifier fixes (#695)
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Eugene Mironov 3c58867738 [Port HIL-SERL] Add resnet-10 as default encoder for HIL-SERL (#696)
Co-authored-by: Khalil Meftah <kmeftah.khalil@gmail.com>
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
Co-authored-by: Ke Wang <superwk1017@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi c623824139 - Added a JointMaskingActionSpace wrapper in `gym_manipulator` to select which joints will be controlled. For example, we can disable the gripper actions for some tasks.
- Added NaN detection mechanisms in the actor, learner, and gym_manipulator for the case where we encounter NaNs in the loop.
- Changed `non_blocking` in the `.to(device)` calls to apply only on CUDA, because non-blocking transfers were causing NaNs when running the policy on MPS.
- Added some joint clipping and limits in the env, robot, and policy configs. TODO: clean up this part and keep the limits in a single config file.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
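A sketch of the joint-masking idea as a gymnasium wrapper (the real one lives in `gym_manipulator`; holding masked joints at a fill value is an assumption):

```python
import gymnasium as gym
import numpy as np

class JointMaskingActionSpace(gym.ActionWrapper):
    """Exposes only the unmasked joints to the policy; masked joints
    (e.g. the gripper) are held at a fixed value."""

    def __init__(self, env: gym.Env, mask, fill_value: float = 0.0):
        super().__init__(env)
        self.mask = np.asarray(mask, dtype=bool)
        self.fill_value = fill_value
        box = env.action_space
        self.action_space = gym.spaces.Box(
            low=box.low[self.mask], high=box.high[self.mask], dtype=np.float32
        )

    def action(self, action: np.ndarray) -> np.ndarray:
        """Expand the masked action back to the full joint vector."""
        full = np.full(self.mask.shape, self.fill_value, dtype=np.float32)
        full[self.mask] = action
        return full
```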
Michel Aractingi 3cb43f801c Added sac_real config file in the policym configs dir.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi f4f5b26a21 Several fixes to move the actor_server and learner_server code from the maniskill environment to the real robot environment.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Eugene Mironov 434d1e0614 [HIL-SERL port] Add Reward classifier benchmark tracking to chose best visual encoder (#688) 2025-03-28 17:18:24 +00:00
Michel Aractingi 729b4ed697 - Added `lerobot/scripts/server/gym_manipulator.py` that contains all the necessary wrappers to run a gym-style env around the real robot.
- Added `lerobot/scripts/server/find_joint_limits.py` to determine the min and max joint angles of the motion you wish the robot to explore during RL training.
- Added logic in `manipulator.py` to limit the maximum possible joint angles so that motion stays within a predefined joint position range. The limits are specified in the yaml config for each robot; check out so100.yaml.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
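The idea behind `find_joint_limits.py`, sketched with a hypothetical robot API (`num_joints` and `read_joint_positions` are placeholders, not real lerobot calls):

```python
import numpy as np

def find_joint_limits(robot, num_steps: int = 1000):
    """Teleoperate through the motion you want to allow while tracking the
    min/max joint positions visited; the result seeds the yaml limits."""
    low = np.full(robot.num_joints, np.inf)
    high = np.full(robot.num_joints, -np.inf)
    for _ in range(num_steps):
        q = robot.read_joint_positions()  # hypothetical API, for illustration
        low = np.minimum(low, q)
        high = np.maximum(high, q)
    return low, high
```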
Michel Aractingi 163bcbcad4 Fixed a bug in crop_dataset_roi.py
Added the missing buffer.pt to the server dir

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 875662f16b Added additional wrappers for the environment: action repeat, keyboard interface, and reset wrapper
Tested the reset mechanism, keyboard interface, and convert wrapper on the robots.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
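A sketch of the action-repeat wrapper (summing rewards over the repeats is an assumption):

```python
import gymnasium as gym

class ActionRepeat(gym.Wrapper):
    """Repeats each policy action for a fixed number of environment steps,
    accumulating reward and stopping early if the episode ends."""

    def __init__(self, env: gym.Env, num_repeats: int):
        super().__init__(env)
        self.num_repeats = num_repeats

    def step(self, action):
        total_reward = 0.0
        for _ in range(self.num_repeats):
            obs, reward, terminated, truncated, info = self.env.step(action)
            total_reward += reward
            if terminated or truncated:
                break
        return obs, total_reward, terminated, truncated, info
```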
Michel Aractingi 87c7eca582 Added crop_dataset_roi.py, which lets you load a LeRobotDataset -> crop its images -> create a new LeRobotDataset with the cropped and resized images.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 179ee3b1f6 - Added base gym env class for the real robot environment.
- Added several wrappers around the base gym env robot class.
- These include: time limit, reward classifier, image cropping, and observation preprocessing.
- Added an interactive script, crop_roi.py, where the user selects the ROI in the observation images; it returns the crop values that will improve the policy and reward classifier performance.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
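The crop-then-resize transform at the core of the image wrappers, sketched with torchvision (the output size is illustrative):

```python
import torch
import torchvision.transforms.functional as F

def crop_and_resize(image: torch.Tensor, roi, out_size=(128, 128)) -> torch.Tensor:
    """Crop an image tensor (C, H, W) to an interactively selected ROI,
    then resize so all camera views share one resolution.
    roi = (top, left, height, width)."""
    top, left, height, width = roi
    return F.resized_crop(image, top, left, height, width, list(out_size))
```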
Michel Aractingi b29401e4e2 - Refactored the observation encoder in `modeling_sac.py`
- Added `torch.compile` to the actor and learner servers
- Organized imports in `train_sac.py`
- Optimized the parameter push by not sending the frozen pretrained encoder

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Yoel faab32fe14 [Port HIL-SERL] Add HF vision encoder option in SAC (#651)
Added support for a custom pretrained vision encoder in the modeling_sac implementation. Great job @ChorntonYoel!
2025-03-28 17:18:24 +00:00
Michel Aractingi c620b0878f Cleaned `learner_server.py`. Added several block functions to improve readability.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 2023289ce8 Added support for checkpointing the policy. We can save and load the policy state dict, optimizer states, optimization step, and interaction step.
Added functions for converting the replay buffer to and from LeRobotDataset: to save the replay buffer, we first convert it to the LeRobotDataset format and save it locally, and vice versa when loading.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
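A sketch of the checkpoint bundle (the key names are illustrative):

```python
import torch

def save_training_state(path, policy, optimizers, optimization_step, interaction_step):
    """Bundle everything needed to resume training into a single file."""
    torch.save(
        {
            "policy": policy.state_dict(),
            "optimizers": {name: opt.state_dict() for name, opt in optimizers.items()},
            "optimization_step": optimization_step,
            "interaction_step": interaction_step,
        },
        path,
    )

def load_training_state(path, policy, optimizers):
    """Restore the policy, optimizer states, and both step counters."""
    ckpt = torch.load(path, map_location="cpu")
    policy.load_state_dict(ckpt["policy"])
    for name, opt in optimizers.items():
        opt.load_state_dict(ckpt["optimizers"][name])
    return ckpt["optimization_step"], ckpt["interaction_step"]
```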
Michel Aractingi 9afd093030 Removed unnecessary time.sleep in the streaming server on the learner side
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi f3c4d6e1ec Added missing config files `env/maniskill_example.yaml` and `policy/sac_maniskill.yaml` that are necessary to run the lerobot implementation of sac with the maniskill baselines.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi 18207d995e - Added additional logging information in wandb around the timings of the policy loop and optimization loop.
- Optimized the critic design, improving the performance of the learner loop by a factor of 2.
- Cleaned the code and fixed style issues.
- Completed the config with an actor_learner_config field that contains the host IP and port elements necessary for the actor-learner servers.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi a0a81c0c12 FREEDOM, added back the optimization loop code in `learner_server.py`
Ran an experiment with the pushcube env from maniskill. The learning seems to work.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
Michel Aractingi ef64ba91d9 Added a server directory in `lerobot/scripts` that contains the scripts and protobuf message types for splitting training into two processes, acting and learning. The actor rolls out the policy and collects interaction data while the learner receives the data, trains the policy, and sends the updated parameters to the actor. The two scripts are run simultaneously.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-28 17:18:24 +00:00
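The shape of the actor-learner split, sketched with multiprocessing queues standing in for the gRPC/protobuf streams the servers actually use (the toy policy and objective are placeholders):

```python
import torch
import torch.multiprocessing as mp
from torch import nn

def actor_process(transitions: mp.Queue, params: mp.Queue, steps: int = 100) -> None:
    """Rolls out the policy, streams transitions, and pulls fresh weights."""
    policy = nn.Linear(4, 2)  # stand-in for the real actor network
    for _ in range(steps):
        obs = torch.randn(4)  # stand-in for an environment observation
        with torch.no_grad():
            action = policy(obs)
        transitions.put((obs, action))
        while not params.empty():  # keep only the freshest parameters
            policy.load_state_dict(params.get())

def learner_process(transitions: mp.Queue, params: mp.Queue, steps: int = 100) -> None:
    """Consumes transitions, takes optimization steps, and pushes new weights."""
    policy = nn.Linear(4, 2)
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for step in range(steps):
        obs, action = transitions.get()
        loss = (policy(obs) - action).pow(2).mean()  # placeholder objective
        opt.zero_grad()
        loss.backward()
        opt.step()
        if step % 10 == 0:  # push fresh weights periodically
            params.put({k: v.cpu() for k, v in policy.state_dict().items()})

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    transitions, params = mp.Queue(), mp.Queue()
    workers = [
        mp.Process(target=actor_process, args=(transitions, params)),
        mp.Process(target=learner_process, args=(transitions, params)),
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```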
AdilZouitine 83dc00683c Stable version of rlpd + drq 2025-03-28 17:18:24 +00:00
AdilZouitine 5b92465e38 Add type annotations and restructure SACConfig class fields 2025-03-28 17:18:24 +00:00
Adil Zouitine 4b78ab2789 Change SAC policy implementation with configuration and modeling classes 2025-03-28 17:18:24 +00:00
Adil Zouitine bd8c768f62 SAC works 2025-03-28 17:18:24 +00:00
Adil Zouitine 1e9bafc852 [WIP] correct sac implementation 2025-03-28 17:18:24 +00:00
Adil Zouitine 921ed960fb Add rlpd tricks 2025-03-28 17:18:24 +00:00
Adil Zouitine 67b64e445b SAC works 2025-03-28 17:18:24 +00:00
Adil Zouitine 6c8023e702 remove breakpoint 2025-03-28 17:18:24 +00:00
Adil Zouitine b495b19a6a [WIP] correct sac implementation 2025-03-28 17:18:24 +00:00
Michel Aractingi 6139df553d Extend reward classifier for multiple camera views (#626) 2025-03-28 17:18:24 +00:00
Eugene Mironov b68730474a [Port HIL_SERL] Final fixes for the Reward Classifier (#598) 2025-03-28 17:18:24 +00:00