Commit Graph

868 Commits

Author SHA1 Message Date
pre-commit-ci[bot] ce0008850d [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2025-03-24 13:16:39 +00:00
AdilZouitine 313812df16 Update tensor device assignment in ReplayBuffer class
- Changed the device assignment for tensors in the ReplayBuffer class from `device` to `storage_device` for consistency and improved resource management.
2025-03-24 14:13:22 +01:00
pre-commit-ci[bot] 3692f887ca [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2025-03-24 14:13:22 +01:00
AdilZouitine 3cdc15c5d4 Initialize log_alpha with the logarithm of temperature_init in SACPolicy
- Updated the SACPolicy class to set log_alpha using the logarithm of the initial temperature value from the configuration.
2025-03-24 14:13:22 +01:00
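A minimal sketch of the initialization this commit describes; `temperature_init` comes from the commit message, while the surrounding class structure is assumed for illustration:

```python
import math

import torch
from torch import nn


class SACPolicy(nn.Module):
    def __init__(self, temperature_init: float = 1.0):
        super().__init__()
        # Store log(alpha) so that alpha = exp(log_alpha) stays positive
        # during optimization. Initializing with log(temperature_init)
        # makes the starting temperature exactly temperature_init.
        self.log_alpha = nn.Parameter(torch.tensor([math.log(temperature_init)]))

    @property
    def temperature(self) -> float:
        return self.log_alpha.exp().item()
```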
pre-commit-ci[bot] e9fc9c1a37 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2025-03-24 14:13:22 +01:00
AdilZouitine bd6a1f03f6 Remove unused functions and imports from modeling_sac.py
- Deleted the `find_and_copy_params` function and the `Ensemble` class, as they were deemed unnecessary.
- Cleaned up imports by removing `from_modules` from `tensordict` to enhance code clarity.
- Simplified the assertion in the `Policy` class for better readability.
2025-03-24 14:13:21 +01:00
AdilZouitine 6a71bad53a Add intervention rate tracking in act_with_policy function
- Introduced counters for tracking intervention steps and total steps during training.
- Calculated and logged the intervention rate at the end of each episode.
- Reset intervention counters after each episode to ensure accurate tracking.
2025-03-24 14:12:22 +01:00
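A self-contained sketch of the counters described above; the class and method names are illustrative, not the repository's actual API:

```python
class InterventionTracker:
    """Tracks how often a human overrides the policy within an episode."""

    def __init__(self) -> None:
        self.intervention_steps = 0
        self.total_steps = 0

    def update(self, is_intervention: bool) -> None:
        self.total_steps += 1
        if is_intervention:
            self.intervention_steps += 1

    def rate_and_reset(self) -> float:
        # Guard against empty episodes, then reset so the next episode's
        # rate is computed independently.
        rate = self.intervention_steps / max(self.total_steps, 1)
        self.intervention_steps = 0
        self.total_steps = 0
        return rate
```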
AdilZouitine 297b45bcb7 Updated the logging condition to use `log_freq` directly instead of accessing it through `cfg.training.log_freq` for improved readability and speed.
2025-03-24 14:12:22 +01:00
Eugene Mironov 91f8d37938 [PORT HIL-SERL] Optimize training loop, extract config usage (#855)
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 14:12:22 +01:00
AdilZouitine 93803345b9 Enhance training information logging in learner server
- Added tracking for replay buffer size and offline replay buffer size during training steps.
2025-03-24 14:12:21 +01:00
AdilZouitine 2412e746e5 Update configuration files for improved performance and flexibility
- Increased frame rate in `maniskill_example.yaml` from 20 to 400 for enhanced simulation speed.
- Updated `sac_maniskill.yaml` to set `dataset_repo_id` to null and adjusted `grad_clip_norm` from 10.0 to 40.0.
- Changed `storage_device` from "cpu" to "cuda" for better resource utilization.
- Modified `save_freq` from 2000000 to 1000000 to optimize saving intervals.
- Enhanced input normalization parameters for `observation.state` and `observation.image` in SAC policy.
- Adjusted `num_critics` from 10 to 2 and `policy_parameters_push_frequency` from 1 to 4 for improved training dynamics.
- Updated `learner_server.py` to utilize `offline_buffer_capacity` for replay buffer initialization.
- Changed action multiplier in `maniskill_manipulator.py` from 1 to 0.03 for finer control over actions.
2025-03-24 14:12:21 +01:00
pre-commit-ci[bot] bb345238a7 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2025-03-24 14:12:21 +01:00
AdilZouitine 7dd581d817 Refactor SACPolicy and learner server for improved replay buffer management
- Updated SACPolicy to create critic heads using a list comprehension for better readability.
- Simplified the saving and loading of models using `save_model` and `load_model` functions from the safetensors library.
- Introduced `initialize_offline_replay_buffer` function in the learner server to streamline offline dataset handling and replay buffer initialization.
- Enhanced logging for dataset loading processes to improve traceability during training.
2025-03-24 14:12:21 +01:00
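The safetensors calls named in the commit are straightforward; a minimal usage sketch, where the `nn.Linear` stands in for the actual policy:

```python
from torch import nn
from safetensors.torch import load_model, save_model

policy = nn.Linear(4, 2)  # stand-in for the SAC policy

# save_model/load_model serialize a module's state dict to the
# .safetensors format, replacing hand-rolled save/load logic.
save_model(policy, "policy.safetensors")
load_model(policy, "policy.safetensors")
```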
Michel Aractingi 944355c042 Add end effector action space to hil-serl (#861)
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-03-24 14:12:21 +01:00
AdilZouitine f8b7b08d5b Enhance SAC configuration and policy with gradient clipping and temperature management
- Introduced `grad_clip_norm` parameter in SAC configuration for gradient clipping
- Updated SACPolicy to store temperature as an instance variable for consistent usage
- Modified loss calculations in SACPolicy to utilize the instance temperature
- Enhanced MLP and CriticHead to support a customizable final activation function
- Implemented gradient clipping in the learner server during training steps for both actor and critic
- Added tracking for gradient norms in training information
2025-03-24 14:12:21 +01:00
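A sketch of the clipping-plus-tracking pattern this commit describes; the stand-in network and loss are illustrative, and `grad_clip_norm = 40.0` is the value set in the configs above:

```python
import torch
from torch import nn

actor = nn.Linear(8, 2)  # stand-in for the actor network
optimizer = torch.optim.Adam(actor.parameters(), lr=3e-4)
grad_clip_norm = 40.0

loss = actor(torch.randn(16, 8)).pow(2).mean()
optimizer.zero_grad()
loss.backward()
# clip_grad_norm_ rescales gradients in place and returns the pre-clip
# total norm, which can be logged as a training statistic.
grad_norm = nn.utils.clip_grad_norm_(actor.parameters(), grad_clip_norm)
optimizer.step()
training_info = {"actor_grad_norm": grad_norm.item()}
```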
pre-commit-ci[bot] 893f56866e [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2025-03-24 14:12:21 +01:00
AdilZouitine 8c3432e63f Add custom save and load methods for SAC policy
- Implement `_save_pretrained` method to handle TensorDict state saving
- Add `_from_pretrained` class method for loading SAC policy from files
- Create utility function `find_and_copy_params` to handle parameter copying
2025-03-24 14:12:21 +01:00
AdilZouitine b3a071849f Remove torch.no_grad decorator and optimize next action prediction in SAC policy
- Removed the `@torch.no_grad` decorator from the Unnormalize forward method
- Added a TODO comment for optimizing next action prediction in the SAC policy
- Minor formatting adjustment in the NaN assertion for the log standard deviation
Co-authored-by: Yoel Chornton <yoel.chornton@gmail.com>
2025-03-24 14:12:21 +01:00
s1lent4gnt 0ae4e0aa3a [Port HIL-SERL] Balanced sampler function speed up and refactor to align with train.py (#715)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-03-24 14:12:21 +01:00
Eugene Mironov 2bd5b9195e [HIL-SERL] Migrate threading to multiprocessing (#759)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-03-24 14:12:21 +01:00
pre-commit-ci[bot] cea24caaf9 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2025-03-24 14:12:21 +01:00
AdilZouitine 14931349af Add storage device configuration for SAC policy and replay buffer
- Introduce `storage_device` parameter in SAC configuration and training settings
- Update learner server to use configurable storage device for replay buffer
- Reduce online buffer capacity in ManiSkill configuration
- Modify replay buffer initialization to support custom storage device
2025-03-24 13:52:40 +01:00
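A minimal sketch of the storage-device split, assuming a pre-allocated tensor buffer (field names are illustrative): transitions live on `storage_device`, and only sampled batches move to the training device:

```python
import torch


class ReplayBuffer:
    def __init__(self, capacity: int, state_dim: int, action_dim: int,
                 storage_device: str = "cpu", device: str = "cuda"):
        self.device = device
        # Pre-allocate storage on storage_device (e.g. CPU RAM), keeping
        # GPU memory free for the networks.
        self.states = torch.empty(capacity, state_dim, device=storage_device)
        self.actions = torch.empty(capacity, action_dim, device=storage_device)
        self.rewards = torch.empty(capacity, device=storage_device)
        self.size = 0

    def sample(self, batch_size: int) -> dict:
        idx = torch.randint(0, self.size, (batch_size,))
        # Move only the sampled batch to the training device.
        return {
            "state": self.states[idx].to(self.device),
            "action": self.actions[idx].to(self.device),
            "reward": self.rewards[idx].to(self.device),
        }
```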
AdilZouitine 28425ecae4 Add memory optimization option to ReplayBuffer
- Introduce `optimize_memory` parameter to reduce memory usage in replay buffer
- Implement simplified memory optimization by not storing duplicate next_states
- Update learner server and buffer initialization to use memory optimization by default
2025-03-24 13:52:40 +01:00
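The core of the optimization, under the assumption of a sequential buffer: the next state of transition `i` is the state stored at `i + 1`, so next_states need not be stored at all (episode boundaries still need masking with `done`, which is omitted here):

```python
import torch

capacity, state_dim = 1000, 7
states = torch.empty(capacity, state_dim)
size = 500  # number of transitions currently stored

idx = torch.randint(0, size - 1, (256,))
batch_states = states[idx]
# Read next states by index instead of keeping a duplicate copy,
# roughly halving the state storage.
batch_next_states = states[(idx + 1) % capacity]
```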
AdilZouitine c35701dc71 Add storage device parameter to replay buffer initialization
- Specify storage device for replay buffer to optimize memory management
2025-03-24 13:52:40 +01:00
AdilZouitine 2e163fad88 Refactor ReplayBuffer with tensor-based storage and improved sampling efficiency
- Replaced list-based memory storage with pre-allocated tensor storage
- Optimized sampling process with direct tensor indexing
- Added support for DrQ image augmentation during sampling for offline dataset
- Improved dataset conversion with more robust episode handling
- Enhanced buffer initialization and state tracking
- Added comprehensive testing for buffer conversion and sampling
2025-03-24 13:52:40 +01:00
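The DrQ augmentation mentioned above is commonly implemented as a random shift: pad the image, then crop it back at a random offset. A per-image sketch (the repository's version may be vectorized differently):

```python
import torch
import torch.nn.functional as F


def random_shift(images: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """Shift each (C, H, W) image in the batch by up to `pad` pixels."""
    n, _, h, w = images.shape
    padded = F.pad(images, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(images)
    for i in range(n):
        top = int(torch.randint(0, 2 * pad + 1, (1,)))
        left = int(torch.randint(0, 2 * pad + 1, (1,)))
        out[i] = padded[i, :, top : top + h, left : left + w]
    return out
```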
AdilZouitine 9d341269fa Update ManiSkill configuration and replay buffer to support truncation and dataset handling
- Reduced image size in ManiSkill environment configuration from 128 to 64
- Added support for truncation in replay buffer and actor server
- Updated SAC policy configuration to use a specific dataset and modify vision encoder settings
- Improved dataset conversion process with progress tracking and task naming
- Added flexibility for joint action space masking in learner server
2025-03-24 13:52:40 +01:00
Michel Aractingi 28361a1584 Added a caching function in the learner_server and modeling_sac to limit the number of forward passes through the pretrained encoder when it is frozen.
Added tensordict dependencies
Updated the versions of torch and torchvision

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:52:40 +01:00
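A sketch of the caching idea, with hypothetical `actor`/`critic` objects: since a frozen encoder produces identical features for a given batch, they can be computed once under `no_grad` and shared across loss terms:

```python
import torch
from torch import nn


def training_step(encoder: nn.Module, actor, critic, batch: dict):
    # The encoder is frozen, so its features for this batch never change;
    # compute them once and reuse them for every loss term.
    with torch.no_grad():
        features = encoder(batch["observations"])
    critic_loss = critic.loss(features, batch)
    actor_loss = actor.loss(features)
    return actor_loss, critic_loss
```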
Eugene Mironov 304d7136df [Port HIL-SERL] Adjust Actor-Learner architecture & clean up dependency management for HIL-SERL (#722)
2025-03-24 13:51:38 +01:00
AdilZouitine b8e9ee440b Refactor SAC policy with performance optimizations and multi-camera support
- Introduced Ensemble and CriticHead classes for more efficient critic network handling
- Added support for multiple camera inputs in observation encoder
- Optimized image encoding by batching image processing
- Updated configuration for ManiSkill environment with reduced image size and action scaling
- Compiled critic networks for improved performance
- Simplified normalization and ensemble handling in critic networks
Co-authored-by: michel-aractingi <michel.aractingi@gmail.com>
2025-03-24 13:48:46 +01:00
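The batched image encoding can be sketched as follows (the function name and feature layout are assumptions): stack all camera views into one batch, run the encoder once, and split the features back out:

```python
import torch
from torch import nn


def encode_cameras(encoder: nn.Module, camera_images: list[torch.Tensor]) -> torch.Tensor:
    """Encode B images from each of N cameras with a single forward pass."""
    n_cams = len(camera_images)
    batch = torch.cat(camera_images, dim=0)  # (N * B, C, H, W)
    features = encoder(batch)                # one encoder call instead of N
    # Split back per camera and concatenate along the feature dimension.
    return torch.cat(features.chunk(n_cams, dim=0), dim=-1)  # (B, N * D)
```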
Michel Aractingi 8d07016c71 - Fixed a major issue in loading the policy parameters sent by the learner to the actor -- pass only the actor to `update_policy_parameters` and remove `strict=False`
- Fixed a major issue in the normalization of actions in the critic's `forward` function -- removed the `torch.no_grad` decorator from the normalization function in `normalize.py`
- Fixed a performance issue that boosts the optimization frequency by setting the storage device to be the same as the learning device.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:48:46 +01:00
AdilZouitine 37e6480fec Re-enable parameter push thread in learner server
- Uncomment and start the param_push_thread
- Restore thread joining for param_push_thread
2025-03-24 13:48:46 +01:00
AdilZouitine c0f39641b1 Improve wandb logging and custom step tracking in logger
- Modify logger to support multiple custom step keys
- Update logging method to handle custom step keys more flexibly
- Enhance logging of optimization step and frequency
Co-authored-by: michel-aractingi <michel.aractingi@gmail.com>
2025-03-24 13:48:46 +01:00
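One way to realize multiple custom step keys with wandb's public API; this is a sketch of the concept, not necessarily how the repository's logger does it:

```python
import wandb

run = wandb.init(project="hil-serl")  # project name is illustrative

# Declare independent step metrics, then bind metric namespaces to them,
# so the learner and actor can each log against their own counter.
wandb.define_metric("optimization_step")
wandb.define_metric("train/*", step_metric="optimization_step")
wandb.define_metric("interaction_step")
wandb.define_metric("env/*", step_metric="interaction_step")

run.log({"train/critic_loss": 0.5, "optimization_step": 100})
run.log({"env/reward": 1.0, "interaction_step": 2000})
```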
AdilZouitine 14c707772a Add maniskill support.
Co-authored-by: Michel Aractingi <michel.aractingi@gmail.com>
2025-03-24 13:48:46 +01:00
Michel Aractingi e587600e01 Fixed a bug in the action scale of the intervention actions and offline dataset actions (scale by the inverse delta).
Co-authored-by: Adil Zouitine <adizouitinegm@gmail.com>
2025-03-24 13:48:46 +01:00
Michel Aractingi 865316d46d Modified the crop_dataset_roi interface to automatically write the crop parameters to a JSON file in the dataset's meta directory
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:48:46 +01:00
Michel Aractingi 1e4c85bec3 Optimized the replay buffer's memory usage by storing data on the CPU instead of a GPU device and sending sampled batches to the GPU.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:48:22 +01:00
Michel Aractingi e367b54f4f nit
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:48:22 +01:00
Michel Aractingi 346d5793ee Removed commented-out code in the actor server
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:48:21 +01:00
Michel Aractingi caf55994eb Changed the init_final value to center the starting mean and std of the policy
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:48:21 +01:00
Michel Aractingi fcb9d3008b Changed bounds for a new so100 robot
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:48:21 +01:00
Michel Aractingi 3bc4ed3bbe Hardcoded some normalization parameters. TODO: refactor
Added masking of actions at the level of the intervention actions and the offline dataset

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:48:21 +01:00
Michel Aractingi 9016d520fd Fix log_alpha in modeling_sac: change it to an nn.Parameter
Added a pretrained vision model to the policy

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:47:58 +01:00
Michel Aractingi 66ec131844 Added logging for interventions to monitor the rate of interventions over time
Added an 's' keyboard command to force success in case the reward classifier fails

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:47:58 +01:00
Michel Aractingi 8fe2c5eaa2 Added the possibility to record and replay delta actions during teleoperation rather than absolute actions
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:47:58 +01:00
Yoel 7b295e159e [PORT-Hilserl] classifier fixes (#695)
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:46:41 +01:00
Eugene Mironov d8f35e9ce9 [Port HIL-SERL] Add resnet-10 as default encoder for HIL-SERL (#696)
Co-authored-by: Khalil Meftah <kmeftah.khalil@gmail.com>
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
Co-authored-by: Ke Wang <superwk1017@gmail.com>
2025-03-24 13:46:41 +01:00
Michel Aractingi af43a706c3 - Added a JointMaskingActionSpace wrapper in `gym_manipulator` to select which joints will be controlled. For example, the gripper actions can be disabled for some tasks.
- Added NaN detection mechanisms in the actor, learner and gym_manipulator for the cases where NaNs are encountered in the loop.
- Changed `non_blocking` in the `.to(device)` calls to apply only for CUDA, because non-blocking transfers were causing NaNs when running the policy on MPS.
- Added some joint clipping and limits in the env, robot and policy configs. TODO: clean up this part and keep the limits in a single config file.

Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:46:41 +01:00
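The `non_blocking` fix above reduces to a one-line guard; a sketch:

```python
import torch


def to_device(tensor: torch.Tensor, device: torch.device) -> torch.Tensor:
    # non_blocking transfers are only beneficial (and, per the commit,
    # only safe) on CUDA; on MPS they were observed to produce NaNs.
    return tensor.to(device, non_blocking=(device.type == "cuda"))
```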
Michel Aractingi 3a6bb65f57 Added the sac_real config file in the policy configs dir.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:46:41 +01:00
Michel Aractingi 174087eed2 Several fixes to move the actor_server and learner_server code from the maniskill environment to the real robot environment.
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-03-24 13:46:40 +01:00
Eugene Mironov 4891270886 [HIL-SERL port] Add Reward classifier benchmark tracking to choose the best visual encoder (#688)
2025-03-24 13:46:00 +01:00