Commit Graph

784 Commits

Author SHA1 Message Date
Michel Aractingi 7d5a9530f7 fixed bug in crop_dataset_roi.py
added missing buffer.pt in server dir

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-02-05 18:22:50 +00:00
Michel Aractingi e0527b4a6b Added additional wrappers for the environment: action repeat, keyboard interface, reset wrapper
Tested the reset mechanism, keyboard interface, and convert wrapper on the robots.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-02-04 17:41:14 +00:00
Michel Aractingi efb1982eec Added crop_dataset_roi.py, which loads a LeRobotDataset, crops its images, and creates a new LeRobotDataset with the cropped and resized images.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-02-03 17:48:35 +00:00
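The crop-then-resize pipeline that commit describes can be sketched in a few lines. This is an illustrative stand-in, not the actual crop_dataset_roi.py code: frames are nested lists rather than tensors, and the resize is nearest-neighbour.

```python
def crop_frame(frame, roi):
    """Crop a frame (list of pixel rows) to roi = (top, left, height, width)."""
    top, left, height, width = roi
    return [row[left:left + width] for row in frame[top:top + height]]

def resize_frame(frame, out_h, out_w):
    """Nearest-neighbour resize, standing in for the real interpolation."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
            for r in range(out_h)]
```

Applying `crop_frame` then `resize_frame` to every image in the source dataset yields the new, smaller dataset.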
Michel Aractingi 2211209be5 - Added base gym env class for the real robot environment.
- Added several wrappers around the base gym env robot class.
- Including: time limit, reward classifier, crop images, preprocess observations.
- Added an interactive script, crop_roi.py, where the user selects the ROI in the observation images; it returns the crop values that improve policy and reward-classifier performance.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-02-03 15:07:59 +00:00
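Of the wrappers listed above, the time limit is the simplest to illustrate. A minimal sketch in the gym step-API style, assuming a four-tuple `step` return (the real lerobot wrappers differ in detail):

```python
class TimeLimitWrapper:
    """Terminate an episode after max_steps, regardless of the env's own done flag."""

    def __init__(self, env, max_steps):
        self.env = env
        self.max_steps = max_steps
        self._elapsed = 0

    def reset(self):
        self._elapsed = 0
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self._elapsed += 1
        if self._elapsed >= self.max_steps:
            done = True  # force episode end at the step budget
        return obs, reward, done, info
```

The other wrappers (reward classifier, image crop, observation preprocessing) follow the same pattern: hold the inner env and transform `step`/`reset` outputs.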
Michel Aractingi 506821c7df - Refactored the observation encoder in `modeling_sac.py`.
- Added `torch.compile` to the actor and learner servers.
- Organized imports in `train_sac.py`.
- Optimized the parameter push by not sending the frozen pre-trained encoder.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-02-03 15:07:58 +00:00
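Skipping the frozen encoder when pushing parameters amounts to filtering the state dict by key before serializing it. A dependency-free sketch, with an illustrative `"encoder."` prefix rather than lerobot's actual key naming:

```python
def filter_push_params(state_dict, frozen_prefixes=("encoder.",)):
    """Drop frozen pre-trained encoder weights before pushing parameters
    from the learner to the actor, shrinking each transfer."""
    return {name: value for name, value in state_dict.items()
            if not name.startswith(frozen_prefixes)}
```

Since the actor already holds the same frozen encoder, only the trainable weights need to travel over the wire.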
Yoel f1c8bfe01e [Port HIL-SERL] Add HF vision encoder option in SAC (#651)
Added support for a custom pretrained vision encoder in the SAC modeling implementation. Great job @ChorntonYoel!
2025-02-03 15:07:58 +00:00
Michel Aractingi 7c89bd1018 Cleaned `learner_server.py`. Added several block functions to improve readability.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-02-03 15:07:58 +00:00
Michel Aractingi 367dfe51c6 Added support for checkpointing the policy. We can save and load the policy state dict, optimizer state, optimization step, and interaction step.
Added functions for converting the replay buffer to and from LeRobotDataset. When saving the replay buffer, we first convert it to LeRobotDataset format and save it locally; loading does the reverse.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-02-03 15:07:58 +00:00
Michel Aractingi e856ffc91e Removed unnecessary time.sleep in the streaming server on the learner side
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-02-03 15:07:58 +00:00
Michel Aractingi 9aabe212ea Added missing config files `env/maniskill_example.yaml` and `policy/sac_maniskill.yaml`, which are necessary to run the lerobot implementation of SAC against the ManiSkill baselines.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-02-03 15:07:58 +00:00
Michel Aractingi 42618f4bd6 - Added additional logging information in wandb around the timings of the policy loop and optimization loop.
- Optimized the critic design, improving the performance of the learner loop by a factor of 2.
- Cleaned the code and fixed style issues.
- Completed the config with an actor_learner_config field that contains the host IP and port elements necessary for the actor-learner servers.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-02-03 15:07:58 +00:00
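The actor_learner_config field boils down to connection settings for the two servers. A minimal sketch as a dataclass, with illustrative field names and defaults rather than lerobot's actual config keys:

```python
from dataclasses import dataclass

@dataclass
class ActorLearnerConfig:
    """Where the actor process finds the learner's parameter/transition server."""
    learner_host: str = "127.0.0.1"
    learner_port: int = 50051
```

In the real setup these values come from the YAML config so the two processes can run on separate machines.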
Michel Aractingi 36576c958f FREEDOM, added back the optimization-loop code in `learner_server.py`.
Ran an experiment with the PushCube env from ManiSkill; learning seems to work.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-02-03 15:07:58 +00:00
Michel Aractingi 322a78a378 Added a server directory in `lerobot/scripts` that contains the scripts and protobuf message types needed to split training into two processes, acting and learning. The actor rolls out the policy and collects interaction data, while the learner receives the data, trains the policy, and sends the updated parameters back to the actor. The two scripts run simultaneously.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-02-03 15:07:58 +00:00
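The actor/learner split described above can be reduced to two loops sharing a transition stream and a parameter store. A single-process sketch with threads and a queue standing in for the gRPC/protobuf transport; the policy "rollout" and the "gradient step" are deliberate stubs:

```python
import queue
import threading

def run_actor(out_q, shared_params, num_steps):
    """Roll out a stub policy, reading the latest published parameters."""
    for step in range(num_steps):
        action = shared_params["weight"] * step  # stand-in for a policy forward pass
        out_q.put({"step": step, "action": action})
    out_q.put(None)  # sentinel: rollout finished

def run_learner(in_q, shared_params):
    """Consume transitions, take a stand-in 'gradient step' per transition,
    and publish the updated parameters back."""
    seen = 0
    while in_q.get() is not None:
        seen += 1
        shared_params["weight"] += 0.1  # stand-in for an optimizer update
    return seen
```

In the real system the queue is a streaming RPC and the shared dict is a parameter push from learner to actor, but the data flow is the same.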
AdilZouitine d75b44f89f Stable version of rlpd + drq 2025-02-03 15:07:57 +00:00
AdilZouitine 1fb03d4cf2 Add type annotations and restructure SACConfig class fields 2025-02-03 15:07:57 +00:00
Adil Zouitine 7d2970fdfe Change SAC policy implementation with configuration and modeling classes 2025-02-03 15:07:50 +00:00
Adil Zouitine 8105efb338 Add rlpd tricks 2025-02-03 15:06:18 +00:00
Adil Zouitine c1d4bf4b63 SAC works 2025-02-03 15:06:18 +00:00
Adil Zouitine 86df8a433d remove breakpoint 2025-02-03 15:06:18 +00:00
Adil Zouitine 956c547254 [WIP] correct sac implementation 2025-02-03 15:06:18 +00:00
Adil Zouitine be965019bd Add rlpd tricks 2025-02-03 15:06:18 +00:00
Adil Zouitine a0a50de8c9 SAC works 2025-02-03 15:06:18 +00:00
Adil Zouitine c86dace4c2 remove breakpoint 2025-02-03 15:06:18 +00:00
Adil Zouitine 472a7f58ad [WIP] correct sac implementation 2025-02-03 15:06:14 +00:00
Pradeep Kadubandi 068efce3f8 Fix for the issue https://github.com/huggingface/lerobot/issues/638 (#639) 2025-02-03 15:04:03 +00:00
Philip Fung df7310ea40 fixes to SO-100 readme (#600)
Co-authored-by: Philip Fung <no@one>
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2025-02-03 15:04:03 +00:00
Mishig 100f54ee07 [viz] Fixes & updates to html visualizer (#617) 2025-02-03 15:04:03 +00:00
CharlesCNorton c2f7af3339 typo fix: batch_convert_dataset_v1_to_v2.py (#615)
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2025-02-03 15:04:03 +00:00
Ville Kuosmanen a1b5d0faf2 fix(visualise): use correct language description for each episode id (#604)
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2025-02-03 15:04:03 +00:00
CharlesCNorton d6498150bf fix(docs): typos in benchmark readme.md (#614)
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
2025-02-03 15:04:03 +00:00
Simon Alibert 31c34a4a49 Fix Quality workflow (#622) 2025-02-03 15:04:03 +00:00
CharlesCNorton b1cfb6a710 Update README.md (#612) 2025-02-03 15:04:02 +00:00
Eugene Mironov 4a43c83522 Fix broken `create_lerobot_dataset_card` (#590) 2025-02-03 15:04:02 +00:00
Mishig 0a4e9e25d0 [visualizer] for LeRobotDataset V2 (#576) 2025-02-03 15:04:02 +00:00
Michel Aractingi 3bb5ed5e91 Extend reward classifier for multiple camera views (#626) 2025-01-13 13:57:49 +01:00
Eugene Mironov c5bca1cf0f [Port HIL_SERL] Final fixes for the Reward Classifier (#598) 2025-01-06 11:34:00 +01:00
Michel Aractingi 35de91ef2b added temporary fix for missing task_index key in online environment 2024-12-30 13:47:28 +00:00
Michel Aractingi ee306e2f9b split encoder for critic and actor 2024-12-29 23:59:39 +00:00
Michel Aractingi bae3b02928 style fixes 2024-12-29 14:35:21 +00:00
KeWang1017 5b4adc00bb Refactor SAC configuration and policy for improved action sampling and stability
- Updated SACConfig to replace standard deviation parameterization with log_std_min and log_std_max for better control over action distributions.
- Modified SACPolicy to streamline action selection and log probability calculations, enhancing stochastic behavior.
- Removed deprecated TanhMultivariateNormalDiag class to simplify the codebase and improve maintainability.

These changes aim to enhance the robustness and performance of the SAC implementation during training and inference.
2024-12-29 14:27:19 +00:00
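The log_std_min/log_std_max parameterization mentioned above just clamps the network's raw log-std output before exponentiating, so the action distribution can neither collapse nor explode. A scalar sketch with illustrative bounds (the actual lerobot defaults may differ):

```python
import math

def clamped_log_std(raw_log_std, log_std_min=-5.0, log_std_max=2.0):
    """Clamp the raw network output into [log_std_min, log_std_max]."""
    return max(log_std_min, min(log_std_max, raw_log_std))

def std_from_log_std(raw_log_std):
    """Standard deviation used for action sampling, guaranteed in
    [exp(log_std_min), exp(log_std_max)]."""
    return math.exp(clamped_log_std(raw_log_std))
```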
KeWang1017 22fbc9ea4a Refine SAC configuration and policy for enhanced performance
- Updated standard deviation parameterization in SACConfig to 'softplus' with defined min and max values for improved stability.
- Modified action sampling in SACPolicy to use reparameterized sampling, ensuring better gradient flow and log probability calculations.
- Cleaned up log probability calculations in TanhMultivariateNormalDiag for clarity and efficiency.
- Increased evaluation frequency in YAML configuration to 50000 for more efficient training cycles.

These changes aim to enhance the robustness and performance of the SAC implementation during training and inference.
2024-12-29 14:21:49 +00:00
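The log-probability calculation these commits keep refining is the standard tanh change-of-variables: for a squashed action a = tanh(u) with u Gaussian, log pi(a) = log N(u) - log(1 - tanh(u)^2). A scalar sketch (the real code operates on batched tensors and sums over action dimensions):

```python
import math

def gaussian_log_prob(u, mean, std):
    """Log-density of u under N(mean, std^2)."""
    z = (u - mean) / std
    return -0.5 * z * z - math.log(std) - 0.5 * math.log(2.0 * math.pi)

def tanh_squashed_log_prob(u, mean, std, eps=1e-6):
    """Log-prob of the squashed action a = tanh(u): Gaussian log-prob minus
    the change-of-variables term log(1 - tanh(u)^2); eps guards log(0)."""
    a = math.tanh(u)
    return gaussian_log_prob(u, mean, std) - math.log(1.0 - a * a + eps)
```

Using the reparameterized sample u = mean + std * noise for this computation is what keeps gradients flowing through mean and std during the actor update.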
KeWang1017 ca74a13d61 Refactor SACPolicy for improved action sampling and standard deviation handling
- Updated action selection to use distribution sampling and log probabilities for better stochastic behavior.
- Enhanced standard deviation clamping to prevent extreme values, ensuring stability in policy outputs.
- Cleaned up code by removing unnecessary comments and improving readability.

These changes aim to refine the SAC implementation, enhancing its robustness and performance during training and inference.
2024-12-29 14:17:25 +00:00
KeWang1017 18a4598986 trying to get sac running 2024-12-29 14:14:13 +00:00
Michel Aractingi dc54d357ca Added normalization schemes and style checks 2024-12-29 12:51:21 +00:00
Michel Aractingi 08ec971086 added optimizer and sac to factory.py 2024-12-23 14:12:03 +01:00
Eugene Mironov b53d6e0ff2 [HIL-SERL PORT] Fix linter issues (#588) 2024-12-23 10:44:29 +01:00
Eugene Mironov 70b652f791 [Port Hil-SERL] Add unit tests for the reward classifier & fix imports & check script (#578) 2024-12-23 10:43:55 +01:00
Michel Aractingi 7b68bfb73b added comments from kewang 2024-12-17 18:03:46 +01:00
KeWang1017 7e0f20fbf2 Enhance SAC configuration and policy with new parameters and subsampling logic
- Added `num_subsample_critics`, `critic_target_update_weight`, and `utd_ratio` to SACConfig.
- Implemented target entropy calculation in SACPolicy if not provided.
- Introduced subsampling of critics to prevent overfitting during updates.
- Updated temperature loss calculation to use the new target entropy.
- Added comments for future UTD update implementation.

These changes improve the flexibility and performance of the SAC implementation.
2024-12-17 17:58:11 +01:00
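Two of the additions above have compact definitions: the default target entropy when none is provided is the usual SAC heuristic -|A| (negative action dimension), and critic subsampling draws a random subset of the ensemble when forming targets, in the REDQ style. A sketch with illustrative signatures:

```python
import random

def default_target_entropy(action_dim):
    """Common SAC heuristic when no target entropy is given: -|A|."""
    return -float(action_dim)

def subsample_critics(critic_indices, num_subsample, rng=None):
    """Pick a random subset of the critic ensemble for the target value,
    which reduces overestimation/overfitting during updates."""
    rng = rng if rng is not None else random.Random()
    return rng.sample(critic_indices, num_subsample)
```

The temperature loss then drives the policy's entropy toward `default_target_entropy(action_dim)`, while each update computes targets over `subsample_critics(...)` instead of the full ensemble.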
KeWang def42ff487 Port SAC WIP (#581)
Co-authored-by: KeWang1017 <ke.wang@helloleap.ai>
2024-12-17 16:16:59 +01:00