init opensource commit

Ziwen Zhuang 2023-09-10 09:44:56 +08:00
commit daf0e0aee5
171 changed files with 29001 additions and 0 deletions

4
.gitattributes vendored Normal file
@@ -0,0 +1,4 @@
*.dae filter=lfs diff=lfs merge=lfs -text
*.obj filter=lfs diff=lfs merge=lfs -text
*.obj text !filter !merge !diff
*.dae text !filter !merge !diff

59
.gitignore vendored Normal file
@@ -0,0 +1,59 @@
# These are some examples of commonly ignored file patterns.
# You should customize this list as applicable to your project.
# Learn more about .gitignore:
# https://www.atlassian.com/git/tutorials/saving-changes/gitignore
# Node artifact files
node_modules/
dist/
# Compiled Java class files
*.class
# Compiled Python bytecode
*.py[cod]
# Log files
*.log
# Package files
*.jar
# Maven
target/
dist/
# JetBrains IDE
.idea/
# Unit test reports
TEST*.xml
# Generated by MacOS
.DS_Store
# Generated by Windows
Thumbs.db
# Applications
*.app
*.exe
*.war
# Large media files
*.mp4
*.tiff
*.avi
*.flv
*.mov
*.wmv
# VS Code
.vscode
# logs
logs
runs
# other
*.egg-info
__pycache__

21
LICENSE Normal file
@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2023 Ziwen Zhuang
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

47
README.md Normal file
@@ -0,0 +1,47 @@
# Robot Parkour Learning #
**Project website**: [https://robot-parkour.github.io/](https://robot-parkour.github.io/)
<p align="center">
<img src="images/teaser.jpeg" width="80%"/>
</p>
This codebase contains the implementation for training and visualizing the results of the paper [Robot Parkour Learning](https://openreview.net/forum?id=uo937r5eTE).
To install and run the code, please clone this repository and follow the instructions in [legged_gym/README.md](legged_gym/README.md).
## Repository Structure ##
* `legged_gym`: contains the isaacgym environment and config files.
- `legged_gym/legged_gym/envs/a1/`: contains all the training config files.
- `legged_gym/legged_gym/envs/base/`: contains all the environment implementation.
- `legged_gym/legged_gym/utils/terrain/`: contains the terrain generation code.
* `rsl_rl`: contains the network module and algorithm implementation. You can copy this folder directly to your robot.
- `rsl_rl/rsl_rl/algorithms/`: contains the algorithm implementation.
- `rsl_rl/rsl_rl/modules/`: contains the network module implementation.
## Troubleshooting ##
If you cannot run the distillation part, or all graphics computation goes to GPU 0 despite having multiple GPUs and setting `CUDA_VISIBLE_DEVICES`, please use docker to isolate each GPU.
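As a quick sanity check (illustrative code, not part of this repo), you can verify which device indices the process should see by parsing `CUDA_VISIBLE_DEVICES` the same way the CUDA runtime does:

```python
import os

def visible_devices(env=None):
    """Return GPU indices this process may see, per CUDA_VISIBLE_DEVICES.

    Unset variable -> None (all devices visible).
    Empty string   -> []   (no devices visible).
    """
    env = os.environ if env is None else env
    raw = env.get("CUDA_VISIBLE_DEVICES")
    if raw is None:
        return None
    return [tok.strip() for tok in raw.split(",") if tok.strip()]
```

If `torch.cuda.device_count()` still reports devices outside this list, the renderer is ignoring the variable and docker isolation is the workaround.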
## To Do ##
The code currently supports training and visualization in simulation only.
The code and instructions for the real robots are on the way.
**Before November 2023, the code for the real robots (A1 and Go1) and the checkpoints will be released.**
## Citation ##
If you find this code useful in your research, please consider citing:
```
@inproceedings{
zhuang2023robot,
title={Robot Parkour Learning},
author={Ziwen Zhuang and Zipeng Fu and Jianren Wang and Christopher G Atkeson and S{\"o}ren Schwertfeger and Chelsea Finn and Hang Zhao},
booktitle={7th Annual Conference on Robot Learning},
year={2023},
url={https://openreview.net/forum?id=uo937r5eTE}
}
```

BIN
images/teaser.jpeg Normal file

Binary file not shown. Size: 640 KiB

82
legged_gym/README.md Normal file
@@ -0,0 +1,82 @@
# Robot Parkour Learning (Tutorial) #
This is the tutorial for training the skill policy and distilling the parkour policy.
## Installation ##
1. Create a new Python virtual env or conda environment with Python 3.6, 3.7, or 3.8 (3.8 recommended)
2. Install PyTorch 1.10 with CUDA 11.3:
- `pip install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html`
3. Install Isaac Gym
- Download and install Isaac Gym Preview 4 (earlier versions are untested) from https://developer.nvidia.com/isaac-gym
- `cd isaacgym/python && pip install -e .`
- Try running an example `cd examples && python 1080_balls_of_solitude.py`
- For troubleshooting check docs `isaacgym/docs/index.html`
4. Install rsl_rl (PPO implementation)
- Navigate to the root path of this repository
- `cd rsl_rl && pip install -e .`
5. Install legged_gym
- `cd ../legged_gym && pip install -e .`
## Usage ##
***Always run your script in the root path of this legged_gym folder (which contains a `setup.py` file).***
1. The specialized skill policies are trained using `a1_field_config.py` as task `a1_field`.
Run the command: `python legged_gym/scripts/train.py --headless --task a1_field`
2. The distillation is done using `a1_field_distill_config.py` as task `a1_distill`.
The distillation runs as multiple processes. In general, you need at least 2 processes, each with 1 GPU, that can access a shared folder.
With `python legged_gym/scripts/train.py --headless --task a1_distill` you launch the trainer. The process will prompt you to launch a collector process whose log directory corresponds to the training run.
With `python legged_gym/scripts/collect.py --headless --task a1_distill --load_run {your training run}` you launch the collector. The process will load the training policy and start collecting data. The collected data will be saved in the directory indicated by the trainer. Remove it after you finish the distillation.
### Train a walk policy ###
Launch the training by `python legged_gym/scripts/train.py --headless --task a1_field`. You will find the training log in `logs/a1_field`. The folder name is also the run name.
### Train each separate skill ###
- Launch the script with task `a1_climb`, `a1_leap`, `a1_crawl`, or `a1_tilt`. The training log will also be saved in `logs/a1_field`.
- The default training uses the soft-dynamics constraint. To train with the hard-dynamics constraint, change the `virtual_terrain` value to `False` in the corresponding config file.
- Uncomment the `class sensor` part to enable the proprioception delay.
- For `a1_climb`, please update the `climb["depth"]` field when switching to the hard-dynamics constraint.
- Do remember to update the `load_run` field in the corresponding config file to the log directory of the policy from the previous stage.
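The config changes described above are plain class-attribute overrides on the inherited config classes. A toy sketch of the pattern (dummy classes, not the real `A1FieldCfg`):

```python
class FieldTerrainCfg:
    # stand-in for A1FieldCfg.terrain
    virtual_terrain = True   # soft-dynamics constraint (default)

class ClimbTerrainCfg(FieldTerrainCfg):
    # hard-dynamics constraint: override the attribute in the subclass,
    # as the instructions above describe for the real config files
    virtual_terrain = False

class ClimbRunnerCfg:
    # point load_run at the log directory of the previous training stage
    load_run = "logs/a1_field/{previous_run_name}"
```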
### Distill the parkour policy ###
**You will need at least two GPUs that can render in Isaac Gym, each with at least 24GB of memory (typically an RTX 3090).**
1. Update the `A1FieldDistillCfgPPO.algorithm.teacher_policy.sub_policy_paths` field with the logdir of your own trained skill policy. (in `a1_field_distill_config.py`)
2. Run data collection using `python legged_gym/scripts/collect.py --headless --task a1_distill`. The data will be saved in the `logs/distill_{robot}_dagger` directory.
***You can generate multiple dataset by running this step multiple times.***
3. Update the `A1FieldDistillCfgPPO.runner.pretrain_dataset.data_dir` field with a list of dataset directories. Comment out the `A1FieldDistillCfgPPO.runner.pretrain_dataset.scan_dir` field. (in `a1_field_distill_config.py`)
4. Run `python legged_gym/scripts/train.py --headless --task a1_distill` to start distillation. The distillation log will be saved in `logs/distill_{robot}`.
5. Comment out `A1FieldDistillCfgPPO.runner.pretrain_dataset.data_dir` field and uncomment `A1FieldDistillCfgPPO.runner.pretrain_dataset.scan_dir` field. (in `a1_field_distill_config.py`)
6. Update the `A1FieldDistillCfgPPO.runner.load_run` field with your last distillation log.
7. Run `python legged_gym/scripts/train.py --headless --task a1_distill` to start dagger. The terminal will prompt you to launch a collector process.
To run the collector process, change `RunnerCls` to `DaggerSaver` (as in line 20-21). Run `python legged_gym/scripts/collect.py --headless --task a1_distill --load_run {the training run prompt by the training process}`.
***You can run multiple collector process to speed up the data collection.***
***The collectors can be run on any other machine as long as the `logs/distill_{robot}_dagger` and `logs/distill_{robot}` directory is shared.***
Press Enter in the training process when you see some data is collected (prompted by the collector process). The training process will load the collected data and start training.
### Visualize the policy in simulation ###
```bash
python legged_gym/scripts/play.py --task {task} --load_run {run_name}
```

legged_gym/legged_gym/__init__.py Normal file

@@ -0,0 +1,34 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
import os
LEGGED_GYM_ROOT_DIR = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
LEGGED_GYM_ENVS_DIR = os.path.join(LEGGED_GYM_ROOT_DIR, 'legged_gym', 'envs')
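The two nested `dirname` calls above walk from the package's `__init__.py` up two levels to the repository's `legged_gym` root. A small illustration of the same arithmetic on an example path (the path itself is made up):

```python
import os

# Mirror of the path logic above, applied to an example path;
# realpath() is omitted here since the path is not a real file.
fake_init = "/home/user/parkour/legged_gym/legged_gym/__init__.py"
root = os.path.dirname(os.path.dirname(fake_init))
envs_dir = os.path.join(root, "legged_gym", "envs")
```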

@@ -0,0 +1,22 @@
# import ptvsd
import debugpy
import sys
def break_into_debugger_(port= 6789):
import ptvsd  # legacy helper only; imported here so the module loads without ptvsd installed
ip_address = ('0.0.0.0', port)
print("Process: " + " ".join(sys.argv[:]))
print("Is waiting for attach at address: %s:%d" % ip_address, flush= True)
# Allow other computers to attach to ptvsd at this IP address and port.
ptvsd.enable_attach(address=ip_address)
# Pause the program until a remote debugger is attached
ptvsd.wait_for_attach()
print("Process attached, start running into experiment...", flush= True)
ptvsd.break_into_debugger()
def break_into_debugger(port= 6789):
ip_address = ('0.0.0.0', port)
print("Process: " + " ".join(sys.argv[:]))
print("Is waiting for attach at address: %s:%d" % ip_address, flush= True)
debugpy.listen(ip_address)
debugpy.wait_for_client()
debugpy.breakpoint()

legged_gym/legged_gym/envs/__init__.py Normal file

@@ -0,0 +1,72 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
from legged_gym import LEGGED_GYM_ROOT_DIR, LEGGED_GYM_ENVS_DIR
from legged_gym.envs.a1.a1_config import A1RoughCfg, A1RoughCfgPPO, A1PlaneCfg, A1RoughCfgTPPO
from .base.legged_robot import LeggedRobot
from .base.legged_robot_field import LeggedRobotField
from .base.legged_robot_noisy import LeggedRobotNoisy
from .anymal_c.anymal import Anymal
from .anymal_c.mixed_terrains.anymal_c_rough_config import AnymalCRoughCfg, AnymalCRoughCfgPPO
from .anymal_c.flat.anymal_c_flat_config import AnymalCFlatCfg, AnymalCFlatCfgPPO
from .anymal_b.anymal_b_config import AnymalBRoughCfg, AnymalBRoughCfgPPO
from .cassie.cassie import Cassie
from .cassie.cassie_config import CassieRoughCfg, CassieRoughCfgPPO
from .a1.a1_config import A1RoughCfg, A1RoughCfgPPO
from .a1.a1_field_config import A1FieldCfg, A1FieldCfgPPO
from .a1.a1_field_distill_config import A1FieldDistillCfg, A1FieldDistillCfgPPO
from .go1.go1_field_config import Go1FieldCfg, Go1FieldCfgPPO
from .go1.go1_field_distill_config import Go1FieldDistillCfg, Go1FieldDistillCfgPPO
import os
from legged_gym.utils.task_registry import task_registry
task_registry.register( "anymal_c_rough", Anymal, AnymalCRoughCfg(), AnymalCRoughCfgPPO() )
task_registry.register( "anymal_c_flat", Anymal, AnymalCFlatCfg(), AnymalCFlatCfgPPO() )
task_registry.register( "anymal_b", Anymal, AnymalBRoughCfg(), AnymalBRoughCfgPPO() )
task_registry.register( "a1", LeggedRobot, A1RoughCfg(), A1RoughCfgPPO() )
task_registry.register( "a1_teacher", LeggedRobot, A1PlaneCfg(), A1RoughCfgTPPO() )
task_registry.register( "a1_field", LeggedRobotNoisy, A1FieldCfg(), A1FieldCfgPPO() )
task_registry.register( "a1_distill", LeggedRobotNoisy, A1FieldDistillCfg(), A1FieldDistillCfgPPO() )
task_registry.register( "cassie", Cassie, CassieRoughCfg(), CassieRoughCfgPPO() )
task_registry.register( "go1_field", LeggedRobotNoisy, Go1FieldCfg(), Go1FieldCfgPPO())
task_registry.register( "go1_distill", LeggedRobotNoisy, Go1FieldDistillCfg(), Go1FieldDistillCfgPPO())
## The following tasks are registered for the convenience of open-sourcing
from .a1.a1_climb_config import A1ClimbCfg, A1ClimbCfgPPO
task_registry.register( "a1_climb", LeggedRobotNoisy, A1ClimbCfg(), A1ClimbCfgPPO() )
from .a1.a1_leap_config import A1LeapCfg, A1LeapCfgPPO
task_registry.register( "a1_leap", LeggedRobotNoisy, A1LeapCfg(), A1LeapCfgPPO() )
from .a1.a1_crawl_config import A1CrawlCfg, A1CrawlCfgPPO
task_registry.register( "a1_crawl", LeggedRobotNoisy, A1CrawlCfg(), A1CrawlCfgPPO() )
from .a1.a1_tilt_config import A1TiltCfg, A1TiltCfgPPO
task_registry.register( "a1_tilt", LeggedRobotNoisy, A1TiltCfg(), A1TiltCfgPPO() )
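Each `task_registry.register` call above maps a task name to an environment class plus its env/train configs, which `train.py --task {name}` then looks up by name. A minimal sketch of that pattern (illustrative; the real `task_registry` API beyond `register` is assumed):

```python
class MiniTaskRegistry:
    def __init__(self):
        self._tasks = {}

    def register(self, name, task_class, env_cfg, train_cfg):
        # One entry per task name, exactly like the calls above.
        self._tasks[name] = (task_class, env_cfg, train_cfg)

    def get_cfgs(self, name):
        _, env_cfg, train_cfg = self._tasks[name]
        return env_cfg, train_cfg

registry = MiniTaskRegistry()
registry.register("a1_field", object, {"terrain": "BarrierTrack"}, {"algo": "PPO"})
```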

legged_gym/legged_gym/envs/a1/a1_climb_config.py Normal file

@@ -0,0 +1,111 @@
from os import path as osp
import numpy as np
from legged_gym.envs.a1.a1_field_config import A1FieldCfg, A1FieldCfgPPO
from legged_gym.utils.helpers import merge_dict
class A1ClimbCfg( A1FieldCfg ):
#### uncomment this to train non-virtual terrain
# class sensor( A1FieldCfg.sensor ):
# class proprioception( A1FieldCfg.sensor.proprioception ):
# delay_action_obs = True
# latency_range = [0.04-0.0025, 0.04+0.0075]
#### uncomment the above to train non-virtual terrain
class terrain( A1FieldCfg.terrain ):
max_init_terrain_level = 2
border_size = 5
slope_treshold = 20.
curriculum = True
BarrierTrack_kwargs = merge_dict(A1FieldCfg.terrain.BarrierTrack_kwargs, dict(
options= [
"climb",
],
track_block_length= 1.6,
climb= dict(
height= (0.2, 0.6), # use this to train in virtual terrain
# height= (0.1, 0.5), # use this to train in non-virtual terrain
depth= (0.1, 0.2),
fake_offset= 0.0, # [m] an offset that makes the robot easier to get onto the obstacle
climb_down_prob= 0., # probability of climbing down; use it in non-virtual terrain
),
virtual_terrain= False, # Change this to False for real terrain
no_perlin_threshold= 0.06,
))
TerrainPerlin_kwargs = merge_dict(A1FieldCfg.terrain.TerrainPerlin_kwargs, dict(
zScale= [0.05, 0.1],
))
class commands( A1FieldCfg.commands ):
class ranges( A1FieldCfg.commands.ranges ):
lin_vel_x = [0.8, 1.5]
lin_vel_y = [0.0, 0.0]
ang_vel_yaw = [0., 0.]
class termination( A1FieldCfg.termination ):
# additional factors that determine whether to terminate the episode
termination_terms = [
"roll",
"pitch",
"z_low",
"z_high",
"out_of_track",
]
z_low_kwargs = merge_dict(A1FieldCfg.termination.z_low_kwargs, dict(
threshold= -1.,
))
class domain_rand( A1FieldCfg.domain_rand ):
init_base_pos_range = dict(
x= [0.2, 0.6],
y= [-0.25, 0.25],
)
class rewards( A1FieldCfg.rewards ):
class scales:
tracking_ang_vel = 0.1
world_vel_l2norm = -1.
legs_energy_substeps = -1e-6
alive = 2.
penetrate_depth = -1e-2
penetrate_volume = -1e-2
exceed_dof_pos_limits = -1e-1
exceed_torque_limits_i = -2e-1
class curriculum( A1FieldCfg.curriculum ):
penetrate_volume_threshold_harder = 8000
penetrate_volume_threshold_easier = 12000
penetrate_depth_threshold_harder = 1000
penetrate_depth_threshold_easier = 1600
class A1ClimbCfgPPO( A1FieldCfgPPO ):
class algorithm( A1FieldCfgPPO.algorithm ):
entropy_coef = 0.0
clip_min_std = 0.2
class runner( A1FieldCfgPPO.runner ):
policy_class_name = "ActorCriticRecurrent"
experiment_name = "field_a1"
run_name = "".join(["Skill",
("Multi" if len(A1ClimbCfg.terrain.BarrierTrack_kwargs["options"]) > 1 else (A1ClimbCfg.terrain.BarrierTrack_kwargs["options"][0] if A1ClimbCfg.terrain.BarrierTrack_kwargs["options"] else "PlaneWalking")),
("_pEnergySubsteps{:.0e}".format(A1ClimbCfg.rewards.scales.legs_energy_substeps) if A1ClimbCfg.rewards.scales.legs_energy_substeps != -1e-6 else ""),
("_pDofLimit{:.0e}".format(A1ClimbCfg.rewards.scales.exceed_dof_pos_limits) if getattr(A1ClimbCfg.rewards.scales, "exceed_dof_pos_limits") != 0.0 else ""),
("_climbDownProb{:.1f}".format(A1ClimbCfg.terrain.BarrierTrack_kwargs["climb"]["climb_down_prob"]) if A1ClimbCfg.terrain.BarrierTrack_kwargs["climb"].get("climb_down_prob", 0.) > 0. else ""),
("_climbHeight{:.1f}-{:.1f}".format(*A1ClimbCfg.terrain.BarrierTrack_kwargs["climb"]["height"])),
("_propDelay{:.2f}-{:.2f}".format(
A1ClimbCfg.sensor.proprioception.latency_range[0],
A1ClimbCfg.sensor.proprioception.latency_range[1],
) if A1ClimbCfg.sensor.proprioception.delay_action_obs else ""
),
("_virtual" if A1ClimbCfg.terrain.BarrierTrack_kwargs["virtual_terrain"] else ""),
("_trackBlockLen{:.1f}".format(A1ClimbCfg.terrain.BarrierTrack_kwargs["track_block_length"]) if A1ClimbCfg.terrain.BarrierTrack_kwargs["track_block_length"] != 2. else ""),
])
resume = True
load_run = "{Your trained walking model directory}"
load_run = "{Your virtually trained climb model directory}"
max_iterations = 20000
save_interval = 500
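The skill configs above override only a few keys of the inherited `BarrierTrack_kwargs` via `merge_dict` (from `legged_gym.utils.helpers`). A hedged re-implementation of the assumed recursive-merge semantics (not the library's actual code):

```python
import copy

def merge_dict_sketch(base, override):
    # Return a copy of `base` with `override` merged in; nested dicts
    # are merged key-by-key instead of being replaced wholesale.
    out = copy.deepcopy(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_dict_sketch(out[key], value)
        else:
            out[key] = value
    return out

base = {"options": [], "climb": {"height": (0.2, 0.6), "depth": (0.1, 0.8)}}
merged = merge_dict_sketch(base, {"options": ["climb"], "climb": {"depth": (0.1, 0.2)}})
```

Deep-copying keeps the parent config's kwargs untouched, which matters because each skill config derives from the same `A1FieldCfg.terrain`.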

legged_gym/legged_gym/envs/a1/a1_config.py Normal file

@@ -0,0 +1,123 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO
class A1RoughCfg( LeggedRobotCfg ):
class env( LeggedRobotCfg.env ):
use_lin_vel = True
num_observations = 235 # no measure_heights makes num_obs = 48; with measure_heights makes num_obs 235
class init_state( LeggedRobotCfg.init_state ):
pos = [0.0, 0.0, 0.42] # x,y,z [m]
default_joint_angles = { # = target angles [rad] when action = 0.0
'FR_hip_joint': -0.1 , # [rad]
'FL_hip_joint': 0.1, # [rad]
'RR_hip_joint': -0.1, # [rad]
'RL_hip_joint': 0.1, # [rad]
'FL_thigh_joint': 0.8, # [rad]
'RL_thigh_joint': 1., # [rad]
'FR_thigh_joint': 0.8, # [rad]
'RR_thigh_joint': 1., # [rad]
'FL_calf_joint': -1.5, # [rad]
'RL_calf_joint': -1.5, # [rad]
'FR_calf_joint': -1.5, # [rad]
'RR_calf_joint': -1.5, # [rad]
}
class control( LeggedRobotCfg.control ):
# PD Drive parameters:
control_type = 'P'
stiffness = {'joint': 20.} # [N*m/rad]
damping = {'joint': 0.5} # [N*m*s/rad]
# action scale: target angle = actionScale * action + defaultAngle
action_scale = 0.25
# decimation: Number of control action updates @ sim DT per policy DT
decimation = 4
class asset( LeggedRobotCfg.asset ):
file = '{LEGGED_GYM_ROOT_DIR}/resources/robots/a1/urdf/a1.urdf'
name = "a1"
foot_name = "foot"
penalize_contacts_on = ["thigh", "calf"]
terminate_after_contacts_on = ["base"]
self_collisions = 1 # 1 to disable, 0 to enable...bitwise filter
class rewards( LeggedRobotCfg.rewards ):
soft_dof_pos_limit = 0.9
base_height_target = 0.25
class scales( LeggedRobotCfg.rewards.scales ):
torques = -0.0002
dof_pos_limits = -10.0
class A1RoughCfgPPO( LeggedRobotCfgPPO ):
class algorithm( LeggedRobotCfgPPO.algorithm ):
entropy_coef = 0.01
class runner( LeggedRobotCfgPPO.runner ):
run_name = 'full'
experiment_name = 'rough_a1'
#### To train the model with partial observation ####
class A1PlaneCfg( A1RoughCfg ):
class env( A1RoughCfg.env ):
use_lin_vel = False
num_observations = 48
class control( A1RoughCfg.control ):
stiffness = {'joint': 25.}
class domain_rand( A1RoughCfg.domain_rand ):
randomize_base_mass = True
class terrain( A1RoughCfg.terrain ):
mesh_type = "plane"
measure_heights = True
class A1RoughCfgTPPO( A1RoughCfgPPO ):
class algorithm( A1RoughCfgPPO.algorithm ):
distillation_loss_coef = 50.
teacher_ac_path = "logs/rough_a1/Nov08_07-55-33_full/model_1500.pt"
teacher_policy_class_name = A1RoughCfgPPO.runner.policy_class_name
class teacher_policy( A1RoughCfgPPO.policy ):
num_actor_obs = 235
num_critic_obs = 235
num_actions = 12
class runner( A1RoughCfgPPO.runner ):
algorithm_class_name = "TPPO"
run_name = 'nolinvel_plane_Kp25_aclip1.5_privilegedHeights_distillation50_randomizeMass'
experiment_name = 'teacher_a1'
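The `control` comments above define how a policy action becomes a joint target (target angle = action_scale * action + default_angle), which a P-drive then tracks for `decimation` sim steps per policy step. Sketched in isolation, using the gains from this config:

```python
def joint_target(action, default_angle, action_scale=0.25):
    # target angle = actionScale * action + defaultAngle (see config comment)
    return action_scale * action + default_angle

def pd_torque(target, pos, vel, kp=20.0, kd=0.5):
    # PD law with the config's stiffness (20 N*m/rad) and damping (0.5 N*m*s/rad)
    return kp * (target - pos) - kd * vel
```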

legged_gym/legged_gym/envs/a1/a1_crawl_config.py Normal file

@@ -0,0 +1,95 @@
import numpy as np
from legged_gym.envs.a1.a1_field_config import A1FieldCfg, A1FieldCfgPPO
from legged_gym.utils.helpers import merge_dict
class A1CrawlCfg( A1FieldCfg ):
#### uncomment this to train non-virtual terrain
class sensor( A1FieldCfg.sensor ):
class proprioception( A1FieldCfg.sensor.proprioception ):
delay_action_obs = True
latency_range = [0.04-0.0025, 0.04+0.0075]
#### uncomment the above to train non-virtual terrain
class terrain( A1FieldCfg.terrain ):
max_init_terrain_level = 2
border_size = 5
slope_treshold = 20.
curriculum = True
BarrierTrack_kwargs = merge_dict(A1FieldCfg.terrain.BarrierTrack_kwargs, dict(
options= [
"crawl",
],
track_block_length= 1.6,
crawl= dict(
height= (0.25, 0.5),
depth= (0.1, 0.6), # size along the forward axis
wall_height= 0.6,
no_perlin_at_obstacle= False,
),
virtual_terrain= True, # Change this to False for real terrain
))
TerrainPerlin_kwargs = merge_dict(A1FieldCfg.terrain.TerrainPerlin_kwargs, dict(
zScale= 0.1,
))
class commands( A1FieldCfg.commands ):
class ranges( A1FieldCfg.commands.ranges ):
lin_vel_x = [0.3, 0.8]
lin_vel_y = [0.0, 0.0]
ang_vel_yaw = [0., 0.]
class termination( A1FieldCfg.termination ):
# additional factors that determine whether to terminate the episode
termination_terms = [
"roll",
"pitch",
"z_low",
"z_high",
"out_of_track",
]
class rewards( A1FieldCfg.rewards ):
class scales:
tracking_ang_vel = 0.05
world_vel_l2norm = -1.
legs_energy_substeps = -2e-5
alive = 2.
penetrate_depth = -6e-2 # comment this out if training non-virtual terrain
penetrate_volume = -6e-2 # comment this out if training non-virtual terrain
exceed_dof_pos_limits = -1e-1
exceed_torque_limits_i = -2e-1
class curriculum( A1FieldCfg.curriculum ):
penetrate_volume_threshold_harder = 1500
penetrate_volume_threshold_easier = 10000
penetrate_depth_threshold_harder = 10
penetrate_depth_threshold_easier = 400
class A1CrawlCfgPPO( A1FieldCfgPPO ):
class algorithm( A1FieldCfgPPO.algorithm ):
entropy_coef = 0.0
clip_min_std = 0.2
class runner( A1FieldCfgPPO.runner ):
policy_class_name = "ActorCriticRecurrent"
experiment_name = "field_a1"
run_name = "".join(["Skill",
("Multi" if len(A1CrawlCfg.terrain.BarrierTrack_kwargs["options"]) > 1 else (A1CrawlCfg.terrain.BarrierTrack_kwargs["options"][0] if A1CrawlCfg.terrain.BarrierTrack_kwargs["options"] else "PlaneWalking")),
("_propDelay{:.2f}-{:.2f}".format(
A1CrawlCfg.sensor.proprioception.latency_range[0],
A1CrawlCfg.sensor.proprioception.latency_range[1],
) if A1CrawlCfg.sensor.proprioception.delay_action_obs else ""
),
("_pEnergy" + np.format_float_scientific(A1CrawlCfg.rewards.scales.legs_energy_substeps, precision= 1, exp_digits= 1, trim= "-") if A1CrawlCfg.rewards.scales.legs_energy_substeps != 0. else ""),
("_virtual" if A1CrawlCfg.terrain.BarrierTrack_kwargs["virtual_terrain"] else ""),
])
resume = True
load_run = "{Your trained walking model directory}"
load_run = "{Your virtually trained crawling model directory}"
max_iterations = 20000
save_interval = 500
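The `run_name` strings in these configs encode hyperparameters into the log-folder name, e.g. the energy penalty via `np.format_float_scientific` in `A1CrawlCfgPPO.runner.run_name`. A quick look at what those arguments produce (requires numpy):

```python
import numpy as np

# Same call as in A1CrawlCfgPPO.runner.run_name: 1-digit mantissa precision,
# 1-digit exponent, trailing zeros and decimal point trimmed ("-").
tag = "_pEnergy" + np.format_float_scientific(
    -2e-5, precision=1, exp_digits=1, trim="-")
```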

legged_gym/legged_gym/envs/a1/a1_field_config.py Normal file

@@ -0,0 +1,277 @@
import numpy as np
from legged_gym.envs.a1.a1_config import A1RoughCfg, A1RoughCfgPPO
class A1FieldCfg( A1RoughCfg ):
class env( A1RoughCfg.env ):
num_envs = 4096 # 8192
obs_components = [
"proprioception", # 48
# "height_measurements", # 187
"base_pose",
"robot_config",
"engaging_block",
"sidewall_distance",
]
# privileged_use_lin_vel = True # for the possible of setting "proprioception" in obs and privileged obs different
######## configs for training a walk policy ############
# obs_components = [
# "proprioception", # 48
# # "height_measurements", # 187
# # "forward_depth",
# # "base_pose",
# # "robot_config",
# # "engaging_block",
# # "sidewall_distance",
# ]
# privileged_obs_components = [
# "proprioception",
# # "height_measurements",
# # "forward_depth",
# "robot_config",
# ]
######## End configs for training a walk policy ############
class sensor:
class forward_camera:
resolution = [16, 16]
position = [0.26, 0., 0.03] # position in base_link
rotation = [0., 0., 0.] # ZYX Euler angle in base_link
class proprioception:
delay_action_obs = False
latency_range = [0.0, 0.0]
latency_resample_time = 2.0 # [s]
class terrain( A1RoughCfg.terrain ):
mesh_type = "trimesh" # Don't change
num_rows = 20
num_cols = 50
selected = "BarrierTrack" # "BarrierTrack" or "TerrainPerlin", "TerrainPerlin" can be used for training a walk policy.
max_init_terrain_level = 0
border_size = 5
slope_treshold = 20.
curriculum = False # for walk
horizontal_scale = 0.025 # [m]
# vertical_scale = 1. # [m] does not change the value in the heightfield
pad_unavailable_info = True
BarrierTrack_kwargs = dict(
options= [
# "climb",
# "crawl",
# "tilt",
# "leap",
], # each race track will permute all the options
track_width= 1.6,
track_block_length= 2., # the x-axis distance from the env origin point
wall_thickness= (0.04, 0.2), # [m]
wall_height= -0.05,
climb= dict(
height= (0.2, 0.6),
depth= (0.1, 0.8), # size along the forward axis
fake_offset= 0.0, # [m] an offset that makes the robot easier to get onto the obstacle
climb_down_prob= 0.0,
),
crawl= dict(
height= (0.25, 0.5),
depth= (0.1, 0.6), # size along the forward axis
wall_height= 0.6,
no_perlin_at_obstacle= False,
),
tilt= dict(
width= (0.24, 0.32),
depth= (0.4, 1.), # size along the forward axis
opening_angle= 0.0, # [rad] an opening that makes the robot easier to get into the obstacle
wall_height= 0.5,
),
leap= dict(
length= (0.2, 1.0),
depth= (0.4, 0.8),
height= 0.2,
),
add_perlin_noise= True,
border_perlin_noise= True,
border_height= 0.,
virtual_terrain= False,
draw_virtual_terrain= True,
engaging_next_threshold= 1.2,
curriculum_perlin= False,
no_perlin_threshold= 0.0,
)
TerrainPerlin_kwargs = dict(
zScale= [0.1, 0.15],
# zScale= 0.1, # Use a constant zScale for training a walk policy
frequency= 10,
)
class commands( A1RoughCfg.commands ):
heading_command = False
resampling_time = 10 # [s]
class ranges( A1RoughCfg.commands.ranges ):
lin_vel_x = [-1.0, 1.0]
lin_vel_y = [0.0, 0.0]
ang_vel_yaw = [0., 0.]
######## configs for training a walk policy #########
# lin_vel_y = [-1., 1.]
# ang_vel_yaw = [-1., 1.]
class control( A1RoughCfg.control ):
stiffness = {'joint': 50.}
damping = {'joint': 1.}
action_scale = 0.5
torque_limits = 25 # override the urdf
computer_clip_torque = True
motor_clip_torque = False
class asset( A1RoughCfg.asset ):
penalize_contacts_on = ["base", "thigh"]
terminate_after_contacts_on = ["base", "imu"]
class termination:
# additional factors that determine whether to terminate the episode
termination_terms = [
"roll",
"pitch",
"z_low",
"z_high",
# "out_of_track",
]
roll_kwargs = dict(
threshold= 0.8, # [rad]
tilt_threshold= 1.5,
)
pitch_kwargs = dict(
threshold= 1.6, # [rad] # for leap, climb
climb_threshold= 1.6,
leap_threshold= 1.5,
)
z_low_kwargs = dict(
threshold= 0.15, # [m]
)
z_high_kwargs = dict(
threshold= 1.5, # [m]
)
out_of_track_kwargs = dict(
threshold= 1., # [m]
)
check_obstacle_conditioned_threshold = True
timeout_at_border = False
class domain_rand( A1RoughCfg.domain_rand ):
randomize_com = True
class com_range:
x = [-0.05, 0.15]
y = [-0.1, 0.1]
z = [-0.05, 0.05]
randomize_motor = True
leg_motor_strength_range = [0.9, 1.1]
randomize_base_mass = True
added_mass_range = [1.0, 3.0]
randomize_friction = True
friction_range = [0.0, 1.]
init_base_pos_range = dict(
x= [0.2, 0.6],
y= [-0.25, 0.25],
)
push_robots = False
class rewards( A1RoughCfg.rewards ):
class scales:
tracking_ang_vel = 0.05
world_vel_l2norm = -1.
legs_energy_substeps = -2e-5
legs_energy = -0.
alive = 2.
# penalty for hardware safety
exceed_dof_pos_limits = -1e-1
exceed_torque_limits_i = -2e-1
soft_dof_pos_limit = 0.01
class normalization( A1RoughCfg.normalization ):
class obs_scales( A1RoughCfg.normalization.obs_scales ):
forward_depth = 1.
base_pose = [0., 0., 1., 1., 1., 1.]
engaging_block = 1.
class noise( A1RoughCfg.noise ):
class noise_scales( A1RoughCfg.noise.noise_scales ):
forward_depth = 0.1
base_pose = 1.0
class viewer( A1RoughCfg.viewer ):
pos = [0, 0, 5] # [m]
lookat = [5., 5., 2.] # [m]
draw_volume_sample_points = False
class sim( A1RoughCfg.sim ):
body_measure_points = { # transforms are relative to the body frame
"base": dict(
x= [i for i in np.arange(-0.2, 0.31, 0.03)],
y= [-0.08, -0.04, 0.0, 0.04, 0.08],
z= [i for i in np.arange(-0.061, 0.061, 0.03)],
transform= [0., 0., 0.005, 0., 0., 0.],
),
"thigh": dict(
x= [
-0.16, -0.158, -0.156, -0.154, -0.152,
-0.15, -0.145, -0.14, -0.135, -0.13, -0.125, -0.12, -0.115, -0.11, -0.105, -0.1, -0.095, -0.09, -0.085, -0.08, -0.075, -0.07, -0.065, -0.05,
0.0, 0.05, 0.1,
],
y= [-0.015, -0.01, 0.0, 0.01, 0.015],
z= [-0.03, -0.015, 0.0, 0.015],
transform= [0., 0., -0.1, 0., 1.57079632679, 0.],
),
"calf": dict(
x= [i for i in np.arange(-0.13, 0.111, 0.03)],
y= [-0.015, 0.0, 0.015],
z= [-0.015, 0.0, 0.015],
transform= [0., 0., -0.11, 0., 1.57079632679, 0.],
),
}
class curriculum:
no_moveup_when_fall = False
# chosen heuristically, please refer to `LeggedRobotField._get_terrain_curriculum_move` with fixed body_measure_points
class A1FieldCfgPPO( A1RoughCfgPPO ):
class algorithm( A1RoughCfgPPO.algorithm ):
entropy_coef = 0.01
clip_min_std = 1e-12
class policy( A1RoughCfgPPO.policy ):
rnn_type = 'gru'
mu_activation = "tanh"
class runner( A1RoughCfgPPO.runner ):
policy_class_name = "ActorCriticRecurrent"
experiment_name = "field_a1"
run_name = "".join(["WalkingBase",
("_pEnergySubsteps" + np.format_float_scientific(-A1FieldCfg.rewards.scales.legs_energy_substeps, precision=1, exp_digits=1, trim="-") if A1FieldCfg.rewards.scales.legs_energy_substeps != 0 else ""),
("_propDelay{:.2f}-{:.2f}".format(
A1FieldCfg.sensor.proprioception.latency_range[0],
A1FieldCfg.sensor.proprioception.latency_range[1],
) if A1FieldCfg.sensor.proprioception.delay_action_obs else ""
),
("_aScale{:d}{:d}{:d}".format(
int(A1FieldCfg.control.action_scale[0] * 10),
int(A1FieldCfg.control.action_scale[1] * 10),
int(A1FieldCfg.control.action_scale[2] * 10),
) if isinstance(A1FieldCfg.control.action_scale, (tuple, list)) \
else "_aScale{:.1f}".format(A1FieldCfg.control.action_scale)
),
])
resume = False
max_iterations = 10000
save_interval = 500
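To see what the `run_name` expression at the end of `A1FieldCfgPPO.runner` actually produces, here is a standalone sketch. `legs_energy_substeps` and `action_scale` are taken from this config; the proprioception latency range is a hypothetical value for illustration, since `A1FieldCfg.sensor` is defined outside this excerpt.

```python
import numpy as np

# Standalone mirror of the run_name assembly in A1FieldCfgPPO.runner.
legs_energy_substeps = -2e-5   # from A1FieldCfg.rewards.scales
delay_action_obs = True        # assumed: proprioception delay enabled
latency_range = [0.04, 0.05]   # hypothetical proprioception latency [s]
action_scale = 0.5             # from A1FieldCfg.control

run_name = "".join(["WalkingBase",
    # e.g. 2e-05 is rendered as "2e-5" by format_float_scientific
    ("_pEnergySubsteps" + np.format_float_scientific(
        -legs_energy_substeps, precision=1, exp_digits=1, trim="-")
        if legs_energy_substeps != 0 else ""),
    ("_propDelay{:.2f}-{:.2f}".format(*latency_range)
        if delay_action_obs else ""),
    # a scalar action_scale takes the "{:.1f}" branch
    ("_aScale{:d}{:d}{:d}".format(*[int(s * 10) for s in action_scale])
        if isinstance(action_scale, (tuple, list))
        else "_aScale{:.1f}".format(action_scale)),
])
print(run_name)  # WalkingBase_pEnergySubsteps2e-5_propDelay0.04-0.05_aScale0.5
```

The same pattern (conditionally concatenated tags) is reused by the distill and leap configs below to keep log directories self-describing.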


@ -0,0 +1,325 @@
from collections import OrderedDict
import os
import os.path as osp
from datetime import datetime
import numpy as np
from legged_gym.envs.a1.a1_field_config import A1FieldCfg, A1FieldCfgPPO
from legged_gym.utils.helpers import merge_dict
class A1FieldDistillCfg( A1FieldCfg ):
class env( A1FieldCfg.env ):
num_envs = 256
obs_components = [
"proprioception", # 48
"forward_depth",
]
privileged_obs_components = [
"proprioception", # 48
"base_pose",
"robot_config",
"engaging_block",
"sidewall_distance",
]
use_lin_vel = False
privileged_use_lin_vel = True
# if True, privileged_obs will not have noise added, depending on the implementation
privileged_obs_gets_privilege = False
class init_state( A1FieldCfg.init_state ):
pos = [0.0, 0.0, 0.38] # x,y,z [m]
class sensor( A1FieldCfg.sensor ):
class forward_camera( A1FieldCfg.sensor.forward_camera ):
resolution = [int(240/4), int(424/4)]
position = dict(
mean= [0.27, 0.0075, 0.033],
std= [0.01, 0.0025, 0.0005],
) # position in base_link ##### small randomization
rotation = dict(
lower= [0, 0, 0],
upper= [0, 5 * np.pi / 180, 0],
) # rotation in base_link ##### small randomization
resized_resolution= [48, 64]
output_resolution = [48, 64]
horizontal_fov = [85, 88]
# adding randomized latency
latency_range = [0.2, 0.26] # for [16, 32, 32] -> 128 -> 128 visual model in (240, 424 option)
latency_resample_time = 5.0 # [s]
refresh_duration = 1/10 # [s] for the (240, 424) option, with the onboard script fixed to no more than 20 Hz
# config to simulate stereo RGBD camera
crop_top_bottom = [0, 0]
crop_left_right = [int(60/4), int(46/4)]
depth_range = [0.0, 1.5] # [m]
class proprioception:
delay_action_obs = True
latency_range = [0.04-0.0025, 0.04+0.0075] # [min, max] in seconds
latency_resample_time = 2.0 # [s]
class terrain( A1FieldCfg.terrain ):
num_rows = 2
# num_rows = 80
num_cols = 1
max_init_terrain_level = 1
curriculum = False
selected = "BarrierTrack"
BarrierTrack_kwargs = dict(
options= [
"tilt",
"crawl",
"climb",
"climb",
"leap",
], # each race track will permute all the options
n_obstacles_per_track= 5,
randomize_obstacle_order= True,
track_width= 3.0,
track_block_length= 1.8, # the x-axis distance from the env origin point
wall_thickness= (0.2, 1.), # [m]
wall_height= (-0.5, 0.5), # [m]
climb= dict(
height= (0.2, 0.6),
# height= (0.1, 0.5),
depth= (0.1, 0.2), # size along the forward axis
fake_offset= 0.0, # [m] reports the climb's height as greater than its physical height
climb_down_prob= 0.5,
),
crawl= dict(
# height= (0.28, 0.4),
height= (0.3, 0.5),
depth= (0.1, 0.4), # size along the forward axis
wall_height= (0.1, 0.6),
),
tilt= dict(
width= (0.305, 0.4),
depth= (0.4, 0.6), # size along the forward axis
opening_angle= 0.0, # [rad] an opening that make the robot easier to get into the obstacle
wall_height= 0.5,
),
leap= dict(
length= (0.3, 0.7), # for parkour real-world env
depth= (0.5, 0.6),
height= 0.2,
fake_offset= 0.2,
),
add_perlin_noise= True,
border_perlin_noise= True,
border_height= -0.5,
virtual_terrain= False,
draw_virtual_terrain= True,
engaging_next_threshold= 1.2,
check_skill_combinations= True,
curriculum_perlin= False,
no_perlin_threshold= 0.04, # for parkour real-world env
walk_in_skill_gap= True, # for parkour real-world env
)
TerrainPerlin_kwargs = dict(
zScale= [0.0, 0.05], # for parkour real-world env
frequency= 10,
)
class commands( A1FieldCfg.commands ):
class ranges( A1FieldCfg.commands.ranges ):
# the student policy does not react to command input
lin_vel_x = [0.0, 0.0]
lin_vel_y = [0.0, 0.0]
ang_vel_yaw = [0., 0.]
class control( A1FieldCfg.control ):
stiffness = {'joint': 50.}
damping = {'joint': 1.}
action_scale = 0.5
torque_limits = 25.
computer_clip_torque = True # for climb, walk temporarily
motor_clip_torque = False # for climb, walk temporarily
class termination( A1FieldCfg.termination ):
termination_terms = [
"roll",
"pitch",
"out_of_track",
]
roll_kwargs = merge_dict(A1FieldCfg.termination.roll_kwargs, dict(
threshold= 0.8, # [rad] # for tilt
crawl_threshold= 0.4,
climb_threshold= 0.8,
leap_threshold= 0.6,
tilt_threshold= 1.0, # for tilt (condition on engaging block)
))
pitch_kwargs = merge_dict(A1FieldCfg.termination.pitch_kwargs, dict(
threshold= 1.5,
crawl_threshold= 0.7,
climb_threshold= 1.5,
leap_threshold= 0.7,
tilt_threshold= 0.5,
))
out_of_track_kwargs = dict(
threshold= 1., # [m] NOTE: change according to skill demonstration
)
timeout_at_border = True
timeout_at_finished = True
class domain_rand( A1FieldCfg.domain_rand ):
randomize_com = True
class com_range( A1FieldCfg.domain_rand.com_range ):
x = [0.05, 0.15]
randomize_motor = True
randomize_friction = True
friction_range = [0.0, 0.8]
randomize_base_mass = True
# added_mass_range = [-1.0, 1.0]
push_robots = False
init_dof_pos_ratio_range = [0.9, 1.1]
init_base_vel_range = [-0.0, 0.0]
init_base_pos_range = dict(
x= [0.4, 0.6],
y= [-0.05, 0.05],
)
# not used in the algorithm, only for visualizing
class rewards( A1FieldCfg.rewards ):
class scales:
pass
class noise( A1FieldCfg.noise ):
class noise_scales( A1FieldCfg.noise.noise_scales ):
forward_depth = 0.0
class forward_depth:
stereo_min_distance = 0.12 # when using (240, 424) resolution
stereo_far_distance = 2.
stereo_far_noise_std = 0.08 # noise std for pixels farther than stereo_far_distance
stereo_near_noise_std = 0.02 # noise std for pixels closer than stereo_far_distance
stereo_full_block_artifacts_prob = 0.008 # probability of adding artifacts to pixels closer than stereo_min_distance
stereo_full_block_values = [0.0, 0.25, 0.5, 1., 3.]
stereo_full_block_height_mean_std = [62, 1.5]
stereo_full_block_width_mean_std = [3, 0.01]
stereo_half_block_spark_prob = 0.02
stereo_half_block_value = 3000 # to the maximum value directly
sky_artifacts_prob = 0.0001
sky_artifacts_far_distance = 2. # pixels farther than this distance are treated as sky
sky_artifacts_values = [0.6, 1., 1.2, 1.5, 1.8]
sky_artifacts_height_mean_std = [2, 3.2]
sky_artifacts_width_mean_std = [2, 3.2]
class viewer( A1FieldCfg.viewer ):
pos = [16, 5, 3.2] # [m]
lookat = [16, 8., 2.] # [m]
class sim( A1FieldCfg.sim ):
no_camera = False
class curriculum:
# override base class attributes
pass
distill_target_ = "tanh"
logs_root = osp.join(osp.dirname(osp.dirname(osp.dirname(osp.dirname(osp.abspath(__file__))))), "logs")
class A1FieldDistillCfgPPO( A1FieldCfgPPO ):
class algorithm( A1FieldCfgPPO.algorithm ):
entropy_coef = 0.0
value_loss_coef = 0.0
num_learning_epochs = 1 # 1 for finetuning, 2 for training from scratch
num_mini_batches = 2
teacher_act_prob = "exp"
update_times_scale = 2000
using_ppo = False
distill_target = distill_target_
buffer_dilation_ratio = 1.
learning_rate = 3e-4
optimizer_class_name = "AdamW"
teacher_policy_class_name = "ActorCriticClimbMutex"
teacher_ac_path = None
class teacher_policy (A1FieldCfgPPO.policy ):
# For loading teacher policy. No need to change for training student
num_actor_obs = 81 # should equal to the sum of the obs_segments
num_critic_obs = 81
num_actions = 12
obs_segments = OrderedDict(
proprioception= (48,),
base_pose= (6,),
robot_config= (1 + 3 + 1 + 12,),
engaging_block= (1 + (4 + 1) + 2,),
sidewall_distance= (2,),
)
env_action_scale = A1FieldCfg.control.action_scale
sub_policy_class_name = "ActorCriticRecurrent"
sub_policy_paths = [ # must be in the order of obstacle ID; replace the folder names with your own training logdirs
os.path.join(logs_root, "field_a1/{your walking policy}"),
os.path.join(logs_root, "field_a1/{your tilting policy}"),
os.path.join(logs_root, "field_a1/{your crawling policy}"),
os.path.join(logs_root, "field_a1/{your climbing policy}"),
os.path.join(logs_root, "field_a1/{your leaping policy}"),
]
climb_down_policy_path = os.path.join(logs_root, "field_a1/{your climbing down policy}")
cmd_vel_mapping = {
0: 1.0,
1: 0.5,
2: 0.8,
3: 1.2,
4: 1.5,
}
class policy( A1FieldCfgPPO.policy ):
class visual_kwargs:
channels = [16, 32, 32]
kernel_sizes = [5, 4, 3]
strides = [2, 2, 1]
hidden_sizes = [128]
use_maxpool = True
nonlinearity = "LeakyReLU"
visual_latent_size = 128
init_noise_std = 0.05
runner_class_name = "TwoStageRunner"
class runner( A1FieldCfgPPO.runner ):
policy_class_name = "VisualDeterministicRecurrent"
algorithm_class_name = "TPPO"
experiment_name = "distill_a1"
num_steps_per_env = 48
# configs for training using collected dataset
pretrain_iterations = -1 # negative value for infinite training
class pretrain_dataset:
scan_dir = "".join([
"logs/distill_a1_dagger/", datetime.now().strftime('%b%d_%H-%M-%S'), "_",
"".join(A1FieldDistillCfg.terrain.BarrierTrack_kwargs["options"]),
"_vDelay{:.2f}-{:.2f}".format(
A1FieldDistillCfg.sensor.forward_camera.latency_range[0],
A1FieldDistillCfg.sensor.forward_camera.latency_range[1],
),
"_pDelay{:.2f}-{:.2f}".format(
A1FieldDistillCfg.sensor.proprioception.latency_range[0],
A1FieldDistillCfg.sensor.proprioception.latency_range[1],
),
("_randOrder" if A1FieldDistillCfg.terrain.BarrierTrack_kwargs["randomize_obstacle_order"] else ""),
("_noPerlinRate{:.1f}".format(
(A1FieldDistillCfg.terrain.BarrierTrack_kwargs["no_perlin_threshold"] - A1FieldDistillCfg.terrain.TerrainPerlin_kwargs["zScale"][0]) / \
(A1FieldDistillCfg.terrain.TerrainPerlin_kwargs["zScale"][1] - A1FieldDistillCfg.terrain.TerrainPerlin_kwargs["zScale"][0])
)),
])
dataset_loops = -1 # negative value for infinite dataset loops
random_shuffle_traj_order = True
keep_latest_ratio = 1.0
keep_latest_n_trajs = 2000
starting_frame_range = [0, 100]
resume = False
load_run = None
max_iterations = 80000
save_interval = 2000
run_name = "".join(["distill", #"_opensource",
("_" + ("".join(A1FieldDistillCfg.terrain.BarrierTrack_kwargs["options"]) if len(A1FieldDistillCfg.terrain.BarrierTrack_kwargs["options"]) > 1 else A1FieldDistillCfg.terrain.BarrierTrack_kwargs["options"][0])),
("_vDelay{:.2f}-{:.2f}".format(*A1FieldDistillCfg.sensor.forward_camera.latency_range)),
("_pDelay{:.2f}-{:.2f}".format(*A1FieldDistillCfg.sensor.proprioception.latency_range)),
("_randOrder" if A1FieldDistillCfg.terrain.BarrierTrack_kwargs["randomize_obstacle_order"] else ""),
("_from" + "_".join(load_run.split("_")[:2]) if resume else ""),
])
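The comment on `num_actor_obs = 81` above says it "should equal to the sum of the obs_segments". A quick standalone sketch confirms that the flattened sizes of `teacher_policy.obs_segments` do add up to 81:

```python
import numpy as np
from collections import OrderedDict

# Mirror of A1FieldDistillCfgPPO.teacher_policy.obs_segments above;
# summing the flattened shapes should reproduce num_actor_obs = 81.
obs_segments = OrderedDict(
    proprioception=(48,),
    base_pose=(6,),
    robot_config=(1 + 3 + 1 + 12,),     # 17
    engaging_block=(1 + (4 + 1) + 2,),  # 8
    sidewall_distance=(2,),
)
num_obs = sum(int(np.prod(shape)) for shape in obs_segments.values())
print(num_obs)  # 81
```

This is the same computation `UnitreeA1Real.get_num_obs_from_components` performs at deployment time, so a mismatch here would surface as a shape error when loading the teacher.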


@ -0,0 +1,101 @@
import numpy as np
from legged_gym.envs.a1.a1_field_config import A1FieldCfg, A1FieldCfgPPO
from legged_gym.utils.helpers import merge_dict
class A1LeapCfg( A1FieldCfg ):
#### uncomment this to train non-virtual terrain
# class sensor( A1FieldCfg.sensor ):
# class proprioception( A1FieldCfg.sensor.proprioception ):
# delay_action_obs = True
# latency_range = [0.04-0.0025, 0.04+0.0075]
#### uncomment the above to train non-virtual terrain
class terrain( A1FieldCfg.terrain ):
max_init_terrain_level = 2
border_size = 5
slope_treshold = 20.
curriculum = True
BarrierTrack_kwargs = merge_dict(A1FieldCfg.terrain.BarrierTrack_kwargs, dict(
options= [
"leap",
],
leap= dict(
length= (0.2, 1.0),
depth= (0.4, 0.8),
height= 0.2,
),
virtual_terrain= False, # set to False to train on the real (non-virtual) terrain
no_perlin_threshold= 0.06,
))
TerrainPerlin_kwargs = merge_dict(A1FieldCfg.terrain.TerrainPerlin_kwargs, dict(
zScale= [0.05, 0.1],
))
class commands( A1FieldCfg.commands ):
class ranges( A1FieldCfg.commands.ranges ):
lin_vel_x = [1.0, 1.5]
lin_vel_y = [0.0, 0.0]
ang_vel_yaw = [0., 0.]
class termination( A1FieldCfg.termination ):
# additional factors that determine whether to terminate the episode
termination_terms = [
"roll",
"pitch",
"z_low",
"z_high",
"out_of_track",
]
roll_kwargs = merge_dict(A1FieldCfg.termination.roll_kwargs, dict(
threshold= 0.4,
leap_threshold= 0.4,
))
z_high_kwargs = merge_dict(A1FieldCfg.termination.z_high_kwargs, dict(
threshold= 2.0,
))
class rewards( A1FieldCfg.rewards ):
class scales:
tracking_ang_vel = 0.05
world_vel_l2norm = -1.
legs_energy_substeps = -1e-6
alive = 2.
penetrate_depth = -4e-3
penetrate_volume = -4e-3
exceed_dof_pos_limits = -1e-1
exceed_torque_limits_i = -2e-1
class curriculum( A1FieldCfg.curriculum ):
penetrate_volume_threshold_harder = 9000
penetrate_volume_threshold_easier = 10000
penetrate_depth_threshold_harder = 300
penetrate_depth_threshold_easier = 5000
class A1LeapCfgPPO( A1FieldCfgPPO ):
class algorithm( A1FieldCfgPPO.algorithm ):
entropy_coef = 0.0
clip_min_std = 0.2
class runner( A1FieldCfgPPO.runner ):
policy_class_name = "ActorCriticRecurrent"
experiment_name = "field_a1"
run_name = "".join(["Skill",
("Multi" if len(A1LeapCfg.terrain.BarrierTrack_kwargs["options"]) > 1 else (A1LeapCfg.terrain.BarrierTrack_kwargs["options"][0] if A1LeapCfg.terrain.BarrierTrack_kwargs["options"] else "PlaneWalking")),
("_propDelay{:.2f}-{:.2f}".format(
A1LeapCfg.sensor.proprioception.latency_range[0],
A1LeapCfg.sensor.proprioception.latency_range[1],
) if A1LeapCfg.sensor.proprioception.delay_action_obs else ""
),
("_pEnergySubsteps{:.0e}".format(A1LeapCfg.rewards.scales.legs_energy_substeps) if A1LeapCfg.rewards.scales.legs_energy_substeps != -2e-6 else ""),
("_virtual" if A1LeapCfg.terrain.BarrierTrack_kwargs["virtual_terrain"] else ""),
])
resume = True
# load_run = "{Your trained walking model directory}"
load_run = "{Your virtually trained leap model directory}"
max_iterations = 20000
save_interval = 500
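`merge_dict` from `legged_gym.utils.helpers` is used throughout this file to override only a few keys of an inherited kwargs dict. Its exact implementation is not shown in this excerpt; a minimal re-implementation with the behavior the configs rely on (non-destructive, override wins) might look like:

```python
def merge_dict(base: dict, override: dict) -> dict:
    """ Hypothetical minimal re-implementation: return a new dict where
    keys in `override` replace those in `base` (shallow merge), leaving
    `base` untouched so parent config classes are not mutated. """
    merged = dict(base)
    merged.update(override)
    return merged

# Example mirroring A1LeapCfg.terrain: only `options` and `leap` change,
# everything else (e.g. track_width) is inherited unchanged.
base_kwargs = dict(
    options=["climb"], track_width=3.0,
    leap=dict(length=(0.3, 0.7), depth=(0.5, 0.6), height=0.2),
)
new_kwargs = merge_dict(base_kwargs, dict(
    options=["leap"],
    leap=dict(length=(0.2, 1.0), depth=(0.4, 0.8), height=0.2),
))
print(new_kwargs["options"], new_kwargs["track_width"])
```

Note that whole nested dicts such as `leap` are replaced, which is why the configs above restate every key of an obstacle when overriding it.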


@ -0,0 +1,452 @@
import numpy as np
import torch
import torch.nn.functional as F
from torch.autograd import Variable
import json
import os
import os.path as osp
from collections import OrderedDict
from typing import Tuple
import rospy
from unitree_legged_msgs.msg import LowState
from unitree_legged_msgs.msg import LegsCmd
from unitree_legged_msgs.msg import Float32MultiArrayStamped
from std_msgs.msg import Float32MultiArray
from geometry_msgs.msg import Twist, Pose
from nav_msgs.msg import Odometry
from sensor_msgs.msg import Image
import ros_numpy
@torch.no_grad()
def resize2d(img, size):
return (F.adaptive_avg_pool2d(Variable(img), size)).data
@torch.jit.script
def quat_rotate_inverse(q, v):
shape = q.shape
q_w = q[:, -1]
q_vec = q[:, :3]
a = v * (2.0 * q_w ** 2 - 1.0).unsqueeze(-1)
b = torch.cross(q_vec, v, dim=-1) * q_w.unsqueeze(-1) * 2.0
c = q_vec * \
torch.bmm(q_vec.view(shape[0], 1, 3), v.view(
shape[0], 3, 1)).squeeze(-1) * 2.0
return a - b + c
class UnitreeA1Real:
""" This is the handler that works for ROS 1 on unitree. """
def __init__(self,
robot_namespace= "a112138",
low_state_topic= "/low_state",
legs_cmd_topic= "/legs_cmd",
forward_depth_topic = "/camera/depth/image_rect_raw",
forward_depth_embedding_dims = None,
odom_topic= "/odom/filtered",
lin_vel_deadband= 0.2,
ang_vel_deadband= 0.1,
move_by_wireless_remote= False, # if True, commands come from the wireless remote instead of the move_cmd_subscriber
cfg= dict(),
extra_cfg= dict(),
model_device= torch.device("cpu"),
):
"""
NOTE:
* Must call start_ros() before using this class's get_obs() and send_action()
* Joint order of simulation and of real A1 protocol are different, see dof_names
* We store all joints values in the order of simulation in this class
Args:
forward_depth_embedding_dims: If a real number, the obs will not be built as in a normal env.
The forward-depth segment of the obs will be substituted by the embedding of the forward
depth image received from the ROS topic.
cfg: the same config as a1_config, but as a dict object.
extra_cfg: some other configs that are hard to load from a file.
"""
self.model_device = model_device
self.num_envs = 1
self.robot_namespace = robot_namespace
self.low_state_topic = low_state_topic
self.legs_cmd_topic = legs_cmd_topic
self.forward_depth_topic = forward_depth_topic
self.forward_depth_embedding_dims = forward_depth_embedding_dims
self.odom_topic = odom_topic
self.lin_vel_deadband = lin_vel_deadband
self.ang_vel_deadband = ang_vel_deadband
self.move_by_wireless_remote = move_by_wireless_remote
self.cfg = cfg
self.extra_cfg = dict(
torque_limits= torch.tensor([33.5] * 12, dtype= torch.float32, device= self.model_device, requires_grad= False), # Nm
# torque_limits= torch.tensor([1, 5, 5] * 4, dtype= torch.float32, device= self.model_device, requires_grad= False), # Nm
dof_map= [ # from isaacgym simulation joint order to URDF order
3, 4, 5,
0, 1, 2,
9, 10,11,
6, 7, 8,
], # real_joint_idx = dof_map[sim_joint_idx]
dof_names= [ # NOTE: order matters
"FL_hip_joint",
"FL_thigh_joint",
"FL_calf_joint",
"FR_hip_joint",
"FR_thigh_joint",
"FR_calf_joint",
"RL_hip_joint",
"RL_thigh_joint",
"RL_calf_joint",
"RR_hip_joint",
"RR_thigh_joint",
"RR_calf_joint",
],
# motor strength is multiplied directly to the action.
motor_strength= torch.ones(12, dtype= torch.float32, device= self.model_device, requires_grad= False),
); self.extra_cfg.update(extra_cfg)
if "torque_limits" in self.cfg["control"]:
self.extra_cfg["torque_limits"][:] = self.cfg["control"]["torque_limits"]
self.command_buf = torch.zeros((self.num_envs, 3,), device= self.model_device, dtype= torch.float32) # zeros for initialization
self.actions = torch.zeros((1, 12), device= model_device, dtype= torch.float32)
self.process_configs()
def start_ros(self):
# initialize several buffers so that the system works even without message updates.
# self.low_state_buffer = LowState() # not initialized, let input message update it.
self.base_position_buffer = torch.zeros((self.num_envs, 3), device= self.model_device, requires_grad= False)
self.legs_cmd_publisher = rospy.Publisher(
self.robot_namespace + self.legs_cmd_topic,
LegsCmd,
queue_size= 1,
)
# self.debug_publisher = rospy.Publisher(
# "/DNNmodel_debug",
# Float32MultiArray,
# queue_size= 1,
# )
# NOTE: this launches the subscriber callback function
self.low_state_subscriber = rospy.Subscriber(
self.robot_namespace + self.low_state_topic,
LowState,
self.update_low_state,
queue_size= 1,
)
self.odom_subscriber = rospy.Subscriber(
self.robot_namespace + self.odom_topic,
Odometry,
self.update_base_pose,
queue_size= 1,
)
if not self.move_by_wireless_remote:
self.move_cmd_subscriber = rospy.Subscriber(
"/cmd_vel",
Twist,
self.update_move_cmd,
queue_size= 1,
)
if "forward_depth" in self.all_obs_components:
if not self.forward_depth_embedding_dims:
self.forward_depth_subscriber = rospy.Subscriber(
self.robot_namespace + self.forward_depth_topic,
Image,
self.update_forward_depth,
queue_size= 1,
)
else:
self.forward_depth_subscriber = rospy.Subscriber(
self.robot_namespace + self.forward_depth_topic,
Float32MultiArrayStamped,
self.update_forward_depth_embedding,
queue_size= 1,
)
self.pose_cmd_subscriber = rospy.Subscriber(
"/body_pose",
Pose,
self.dummy_handler,
queue_size= 1,
)
def wait_untill_ros_working(self):
rate = rospy.Rate(100)
while not hasattr(self, "low_state_buffer"):
rate.sleep()
rospy.loginfo("UnitreeA1Real.low_state_buffer acquired, stop waiting.")
def process_configs(self):
self.up_axis_idx = 2 # 2 for z, 1 for y -> adapt gravity accordingly
self.gravity_vec = torch.zeros((self.num_envs, 3), dtype= torch.float32)
self.gravity_vec[:, self.up_axis_idx] = -1
self.obs_scales = self.cfg["normalization"]["obs_scales"]
self.obs_scales["dof_pos"] = torch.tensor(self.obs_scales["dof_pos"], device= self.model_device, dtype= torch.float32)
if not isinstance(self.cfg["control"]["damping"]["joint"], (list, tuple)):
self.cfg["control"]["damping"]["joint"] = [self.cfg["control"]["damping"]["joint"]] * 12
if not isinstance(self.cfg["control"]["stiffness"]["joint"], (list, tuple)):
self.cfg["control"]["stiffness"]["joint"] = [self.cfg["control"]["stiffness"]["joint"]] * 12
self.d_gains = torch.tensor(self.cfg["control"]["damping"]["joint"], device= self.model_device, dtype= torch.float32)
self.p_gains = torch.tensor(self.cfg["control"]["stiffness"]["joint"], device= self.model_device, dtype= torch.float32)
self.default_dof_pos = torch.zeros(12, device= self.model_device, dtype= torch.float32)
for i in range(12):
name = self.extra_cfg["dof_names"][i]
default_joint_angle = self.cfg["init_state"]["default_joint_angles"][name]
self.default_dof_pos[i] = default_joint_angle
self.torque_limits = self.extra_cfg["torque_limits"]
self.commands_scale = torch.tensor([
self.obs_scales["lin_vel"],
self.obs_scales["lin_vel"],
self.obs_scales["lin_vel"],
], device= self.model_device, requires_grad= False)
self.obs_segments = self.get_obs_segment_from_components(self.cfg["env"]["obs_components"])
self.num_obs = self.get_num_obs_from_components(self.cfg["env"]["obs_components"])
components = self.cfg["env"].get("privileged_obs_components", None)
self.privileged_obs_segments = None if components is None else self.get_obs_segment_from_components(components)
self.num_privileged_obs = None if components is None else self.get_num_obs_from_components(components)
self.all_obs_components = self.cfg["env"]["obs_components"] + (self.cfg["env"].get("privileged_obs_components", []) if components is not None else [])
# store config values to attributes to improve speed
self.clip_obs = self.cfg["normalization"]["clip_observations"]
self.control_type = self.cfg["control"]["control_type"]
self.action_scale = self.cfg["control"]["action_scale"]
self.motor_strength = self.extra_cfg["motor_strength"]
self.clip_actions = self.cfg["normalization"]["clip_actions"]
self.dof_map = self.extra_cfg["dof_map"]
# get ROS params for hardware configs
self.joint_limits_high = torch.tensor([
rospy.get_param(self.robot_namespace + "/joint_limits/{}_max".format(s), 0.) \
for s in ["hip", "thigh", "calf"] * 4
])
self.joint_limits_low = torch.tensor([
rospy.get_param(self.robot_namespace + "/joint_limits/{}_min".format(s), 0.) \
for s in ["hip", "thigh", "calf"] * 4
])
if "forward_depth" in self.all_obs_components:
resolution = self.cfg["sensor"]["forward_camera"].get(
"output_resolution",
self.cfg["sensor"]["forward_camera"]["resolution"],
)
if not self.forward_depth_embedding_dims:
self.forward_depth_buf = torch.zeros(
(self.num_envs, *resolution),
device= self.model_device,
dtype= torch.float32,
)
else:
self.forward_depth_embedding_buf = torch.zeros(
(1, self.forward_depth_embedding_dims),
device= self.model_device,
dtype= torch.float32,
)
def _init_height_points(self):
""" Returns points at which the height measurements are sampled (in base frame)
Returns:
[torch.Tensor]: Tensor of shape (num_envs, self.num_height_points, 3)
"""
return None
def _get_heights(self):
""" TODO: get estimated terrain heights around the robot base """
# currently return a zero tensor with valid size
return torch.zeros(self.num_envs, 187, device= self.model_device, requires_grad= False)
def clip_by_torque_limit(self, actions_scaled):
""" Different from simulation, we reverse the process and clip the actions directly,
so that the PD controller runs on the robot rather than in our script.
"""
control_type = self.cfg["control"]["control_type"]
if control_type == "P":
p_limits_low = (-self.torque_limits) + self.d_gains*self.dof_vel
p_limits_high = (self.torque_limits) + self.d_gains*self.dof_vel
actions_low = (p_limits_low/self.p_gains) - self.default_dof_pos + self.dof_pos
actions_high = (p_limits_high/self.p_gains) - self.default_dof_pos + self.dof_pos
else:
raise NotImplementedError
return torch.clip(actions_scaled, actions_low, actions_high)
""" Get obs components and cat to a single obs input """
def _get_proprioception_obs(self):
# base_ang_vel = quat_rotate_inverse(
# torch.tensor(self.low_state_buffer.imu.quaternion).unsqueeze(0),
# torch.tensor(self.low_state_buffer.imu.gyroscope).unsqueeze(0),
# ).to(self.model_device)
# NOTE: Different from isaacgym:
# the angular velocity is already in the base frame, no need to rotate.
base_ang_vel = torch.tensor(self.low_state_buffer.imu.gyroscope, device= self.model_device).unsqueeze(0)
projected_gravity = quat_rotate_inverse(
torch.tensor(self.low_state_buffer.imu.quaternion).unsqueeze(0),
self.gravity_vec,
).to(self.model_device)
self.dof_pos = dof_pos = torch.tensor([
self.low_state_buffer.motorState[self.dof_map[i]].q for i in range(12)
], dtype= torch.float32, device= self.model_device).unsqueeze(0)
self.dof_vel = dof_vel = torch.tensor([
self.low_state_buffer.motorState[self.dof_map[i]].dq for i in range(12)
], dtype= torch.float32, device= self.model_device).unsqueeze(0)
return torch.cat([
torch.zeros((1, 3), device= self.model_device), # no linear velocity
base_ang_vel * self.obs_scales["ang_vel"],
projected_gravity,
self.command_buf * self.commands_scale,
(dof_pos - self.default_dof_pos) * self.obs_scales["dof_pos"],
dof_vel * self.obs_scales["dof_vel"],
self.actions
], dim= -1)
def _get_forward_depth_obs(self):
if not self.forward_depth_embedding_dims:
return self.forward_depth_buf.flatten(start_dim= 1)
else:
return self.forward_depth_embedding_buf.flatten(start_dim= 1)
def compute_observation(self):
""" use the updated low_state_buffer to compute observation vector """
assert hasattr(self, "legs_cmd_publisher"), "start_ros() not called, ROS handlers are not initialized!"
obs_segments = self.obs_segments
obs = []
for k, v in obs_segments.items():
obs.append(
getattr(self, "_get_" + k + "_obs")() * \
self.obs_scales.get(k, 1.)
)
obs = torch.cat(obs, dim= 1)
self.obs_buf = obs
""" The methods below, combined with an outer model, form the step function.
NOTE: the outer user handles the loop frequency.
"""
def send_action(self, actions):
""" The function that send commands to the real robot.
"""
self.actions = torch.clip(actions, -self.clip_actions, self.clip_actions)
actions = self.actions * self.motor_strength
robot_coordinates_action = self.clip_by_torque_limit(actions * self.action_scale) + self.default_dof_pos.unsqueeze(0)
# robot_coordinates_action = self.actions * self.action_scale + self.default_dof_pos.unsqueeze(0)
# debugging and logging
# transfered_action = torch.zeros_like(self.actions[0])
# for i in range(12):
# transfered_action[self.dof_map[i]] = self.actions[0, i] + self.default_dof_pos[i]
# self.debug_publisher.publish(Float32MultiArray(data=
# transfered_action\
# .cpu().numpy().astype(np.float32).tolist()
# ))
# restrict the target action delta in order to avoid robot shutdown (maybe there is another solution)
# robot_coordinates_action = torch.clip(
# robot_coordinates_action,
# self.dof_pos - 0.3,
# self.dof_pos + 0.3,
# )
# wrap the message and publish
self.publish_legs_cmd(robot_coordinates_action)
def publish_legs_cmd(self, robot_coordinates_action, kp= None, kd= None):
""" Publish the joint positions directly to the robot. NOTE: The input joint order should
be the simulation order. The values should be absolute positions rather than offsets relative to dof_pos.
"""
robot_coordinates_action = torch.clip(
robot_coordinates_action.cpu(),
self.joint_limits_low,
self.joint_limits_high,
)
legs_cmd = LegsCmd()
for sim_joint_idx in range(12):
real_joint_idx = self.dof_map[sim_joint_idx]
legs_cmd.cmd[real_joint_idx].mode = 10
legs_cmd.cmd[real_joint_idx].q = robot_coordinates_action[0, sim_joint_idx] if self.control_type == "P" else rospy.get_param(self.robot_namespace + "/PosStopF", (2.146e+9))
legs_cmd.cmd[real_joint_idx].dq = 0.
legs_cmd.cmd[real_joint_idx].tau = 0.
legs_cmd.cmd[real_joint_idx].Kp = self.p_gains[sim_joint_idx] if kp is None else kp
legs_cmd.cmd[real_joint_idx].Kd = self.d_gains[sim_joint_idx] if kd is None else kd
self.legs_cmd_publisher.publish(legs_cmd)
def get_obs(self):
""" The function that refreshes the buffer and return the observation vector.
"""
self.compute_observation()
self.obs_buf = torch.clip(self.obs_buf, -self.clip_obs, self.clip_obs)
return self.obs_buf.to(self.model_device)
""" Copied from legged_robot_field. Please check whether these are consistent. """
def get_obs_segment_from_components(self, components):
segments = OrderedDict()
if "proprioception" in components:
segments["proprioception"] = (48,)
if "height_measurements" in components:
segments["height_measurements"] = (187,)
if "forward_depth" in components:
resolution = self.cfg["sensor"]["forward_camera"].get(
"output_resolution",
self.cfg["sensor"]["forward_camera"]["resolution"],
)
segments["forward_depth"] = (1, *resolution)
# The following components are only for rebuilding the non-actor module.
# DO NOT use these in actor network and check consistency with simulator implementation.
if "base_pose" in components:
segments["base_pose"] = (6,) # xyz + rpy
if "robot_config" in components:
segments["robot_config"] = (1 + 3 + 1 + 12,)
if "engaging_block" in components:
# This could be wrong, please check the implementation of BarrierTrack
segments["engaging_block"] = (1 + (4 + 1) + 2,)
if "sidewall_distance" in components:
segments["sidewall_distance"] = (2,)
return segments
def get_num_obs_from_components(self, components):
obs_segments = self.get_obs_segment_from_components(components)
num_obs = 0
for k, v in obs_segments.items():
num_obs += np.prod(v)
return num_obs
""" ROS callbacks and handlers that update the buffer """
def update_low_state(self, ros_msg):
self.low_state_buffer = ros_msg
if self.move_by_wireless_remote:
self.command_buf[0, 0] = self.low_state_buffer.wirelessRemote.ly
self.command_buf[0, 1] = -self.low_state_buffer.wirelessRemote.lx # right-moving stick is positive
self.command_buf[0, 2] = -self.low_state_buffer.wirelessRemote.rx # right-moving stick is positive
# set the command to zero if it is too small
if np.linalg.norm(self.command_buf[0, :2]) < self.lin_vel_deadband:
self.command_buf[0, :2] = 0.
if np.abs(self.command_buf[0, 2]) < self.ang_vel_deadband:
self.command_buf[0, 2] = 0.
def update_base_pose(self, ros_msg):
""" update robot odometry for position """
self.base_position_buffer[0, 0] = ros_msg.pose.pose.position.x
self.base_position_buffer[0, 1] = ros_msg.pose.pose.position.y
self.base_position_buffer[0, 2] = ros_msg.pose.pose.position.z
def update_move_cmd(self, ros_msg):
self.command_buf[0, 0] = ros_msg.linear.x
self.command_buf[0, 1] = ros_msg.linear.y
self.command_buf[0, 2] = ros_msg.angular.z
def update_forward_depth(self, ros_msg):
# TODO not checked.
self.forward_depth_header = ros_msg.header
buf = ros_numpy.numpify(ros_msg)
self.forward_depth_buf = resize2d(
torch.from_numpy(buf.astype(np.float32)).unsqueeze(0).unsqueeze(0).to(self.model_device),
self.forward_depth_buf.shape[-2:],
)
def update_forward_depth_embedding(self, ros_msg):
self.forward_depth_embedding_stamp = ros_msg.header.stamp
self.forward_depth_embedding_buf[:] = torch.tensor(ros_msg.data).unsqueeze(0) # (1, d)
def dummy_handler(self, ros_msg):
""" To meet the need of teleop-legged-robots requirements """
pass


@ -0,0 +1,377 @@
#!/home/unitree/agility_ziwenz_venv/bin/python
import os
import os.path as osp
import json
import numpy as np
import torch
from collections import OrderedDict
from functools import partial
from typing import Tuple
import rospy
from std_msgs.msg import Float32MultiArray
from sensor_msgs.msg import Image
import ros_numpy
from a1_real import UnitreeA1Real, resize2d
from rsl_rl import modules
from rsl_rl.utils.utils import get_obs_slice
@torch.no_grad()
def handle_forward_depth(ros_msg, model, publisher, output_resolution, device):
""" The callback function to handle the forward depth and send the embedding through ROS topic """
buf = ros_numpy.numpify(ros_msg).astype(np.float32)
forward_depth_buf = resize2d(
torch.from_numpy(buf).unsqueeze(0).unsqueeze(0).to(device),
output_resolution,
)
embedding = model(forward_depth_buf)
ros_data = embedding.reshape(-1).cpu().numpy().astype(np.float32)
publisher.publish(Float32MultiArray(data= ros_data.tolist()))
class StandOnlyModel(torch.nn.Module):
def __init__(self, action_scale, dof_pos_scale, tolerance= 0.1, delta= 0.1):
rospy.loginfo("Using stand only model, please make sure the proprioception is 48 dim.")
rospy.loginfo("Using stand only model, -36 to -24 must be joint position.")
super().__init__()
if isinstance(action_scale, (tuple, list)):
self.register_buffer("action_scale", torch.tensor(action_scale))
else:
self.action_scale = action_scale
if isinstance(dof_pos_scale, (tuple, list)):
self.register_buffer("dof_pos_scale", torch.tensor(dof_pos_scale))
else:
self.dof_pos_scale = dof_pos_scale
self.tolerance = tolerance
self.delta = delta
def forward(self, obs):
joint_positions = obs[..., -36:-24] / self.dof_pos_scale
diff_large_mask = torch.abs(joint_positions) > self.tolerance
target_positions = torch.zeros_like(joint_positions)
target_positions[diff_large_mask] = joint_positions[diff_large_mask] - self.delta * torch.sign(joint_positions[diff_large_mask])
return torch.clip(
target_positions / self.action_scale,
-1.0, 1.0,
)
def reset(self, *args, **kwargs):
pass
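The per-joint rule in `forward` above can be checked without torch. This sketch uses the default `tolerance` and `delta` from the constructor: joints farther than `tolerance` from the default pose step back by `delta`, the rest snap to zero offset:

```python
def stand_target(q, tolerance=0.1, delta=0.1):
    # If the joint is farther than `tolerance` from the default pose, step it
    # back by `delta`; otherwise command the default pose (zero offset).
    if abs(q) > tolerance:
        return q - delta * (1 if q > 0 else -1)
    return 0.0

# First two joints step by delta, the third snaps to the default pose.
print([stand_target(q) for q in (0.5, -0.3, 0.05)])
```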
def load_walk_policy(env, model_dir):
""" Load the walk policy from the model directory """
if model_dir is None:
model = StandOnlyModel(
action_scale= env.action_scale,
dof_pos_scale= env.obs_scales["dof_pos"],
)
policy = torch.jit.script(model)
else:
with open(osp.join(model_dir, "config.json"), "r") as f:
config_dict = json.load(f, object_pairs_hook= OrderedDict)
obs_components = config_dict["env"]["obs_components"]
privileged_obs_components = config_dict["env"].get("privileged_obs_components", obs_components)
model = getattr(modules, config_dict["runner"]["policy_class_name"])(
num_actor_obs= env.get_num_obs_from_components(obs_components),
num_critic_obs= env.get_num_obs_from_components(privileged_obs_components),
num_actions= 12,
**config_dict["policy"],
)
model_names = [i for i in os.listdir(model_dir) if i.startswith("model_")]
model_names.sort(key= lambda x: int(x.split("_")[-1].split(".")[0]))
state_dict = torch.load(osp.join(model_dir, model_names[-1]), map_location= "cpu")
model.load_state_dict(state_dict["model_state_dict"])
model_action_scale = config_dict["control"]["action_scale"]
model_action_scale = torch.tensor(model_action_scale) if isinstance(model_action_scale, (tuple, list)) else torch.tensor(model_action_scale)
if not (torch.is_tensor(model_action_scale) and (model_action_scale == env.action_scale).all()):
action_rescale_ratio = model_action_scale / env.action_scale
print("walk_policy action scaling:", action_rescale_ratio.tolist())
else:
action_rescale_ratio = 1.0
memory_module = model.memory_a
actor_mlp = model.actor
@torch.jit.script
def policy_run(obs):
recurrent_embedding = memory_module(obs)
actions = actor_mlp(recurrent_embedding.squeeze(0))
return actions
if (torch.is_tensor(action_rescale_ratio) and (action_rescale_ratio == 1.).all()) \
or (not torch.is_tensor(action_rescale_ratio) and action_rescale_ratio == 1.):
policy = policy_run
else:
policy = lambda x: policy_run(x) * action_rescale_ratio
return policy, model
def standup_procedure(env, ros_rate, angle_tolerance= 0.05, kp= None, kd= None, device= "cpu"):
rospy.loginfo("Robot standing up, please wait ...")
target_pos = torch.zeros((1, 12), device= device, dtype= torch.float32)
while not rospy.is_shutdown():
dof_pos = [env.low_state_buffer.motorState[env.dof_map[i]].q for i in range(12)]
diff = [env.default_dof_pos[i].item() - dof_pos[i] for i in range(12)]
direction = [1 if i > 0 else -1 for i in diff]
if all([abs(i) < angle_tolerance for i in diff]):
break
print("max joint error (rad):", max([abs(i) for i in diff]), end= "\r")
for i in range(12):
target_pos[0, i] = dof_pos[i] + direction[i] * angle_tolerance if abs(diff[i]) > angle_tolerance else env.default_dof_pos[i]
env.publish_legs_cmd(target_pos,
kp= kp,
kd= kd,
)
ros_rate.sleep()
rospy.loginfo("Robot stood up! press R1 on the remote control to continue ...")
while not rospy.is_shutdown():
if env.low_state_buffer.wirelessRemote.btn.components.R1:
break
if env.low_state_buffer.wirelessRemote.btn.components.L2 or env.low_state_buffer.wirelessRemote.btn.components.R2:
env.publish_legs_cmd(env.default_dof_pos.unsqueeze(0), kp= 20, kd= 0.5)
rospy.signal_shutdown("Controller send stop signal, exiting")
exit(0)
env.publish_legs_cmd(env.default_dof_pos.unsqueeze(0), kp= kp, kd= kd)
ros_rate.sleep()
rospy.loginfo("Robot standing up procedure finished!")
class SkilledA1Real(UnitreeA1Real):
""" Some additional methods to help the execution of skill policy """
def __init__(self, *args,
skill_mode_threhold= 0.1,
skill_vel_range= [0.0, 1.0],
**kwargs,
):
self.skill_mode_threhold = skill_mode_threhold
self.skill_vel_range = skill_vel_range
super().__init__(*args, **kwargs)
def is_skill_mode(self):
if self.move_by_wireless_remote:
return self.low_state_buffer.wirelessRemote.ry > self.skill_mode_threhold
else:
# Not implemented yet
return False
def update_low_state(self, ros_msg):
self.low_state_buffer = ros_msg
if self.move_by_wireless_remote and ros_msg.wirelessRemote.ry > self.skill_mode_threhold:
skill_vel = (self.low_state_buffer.wirelessRemote.ry - self.skill_mode_threhold) / (1.0 - self.skill_mode_threhold)
skill_vel *= self.skill_vel_range[1] - self.skill_vel_range[0]
skill_vel += self.skill_vel_range[0]
self.command_buf[0, 0] = skill_vel
self.command_buf[0, 1] = 0.
self.command_buf[0, 2] = 0.
return
return super().update_low_state(ros_msg)
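The stick-to-velocity mapping in `update_low_state` above rescales the portion of the stick reading beyond the threshold linearly into `skill_vel_range`. A standalone sketch using the default constructor values:

```python
def skill_velocity(ry, threshold=0.1, vel_range=(0.0, 1.0)):
    # Map the remote stick reading ry in (threshold, 1.0] linearly onto vel_range.
    frac = (ry - threshold) / (1.0 - threshold)
    return vel_range[0] + frac * (vel_range[1] - vel_range[0])

print(skill_velocity(0.55))  # midpoint of the usable stick range → ~0.5
```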
def main(args):
log_level = rospy.DEBUG if args.debug else rospy.INFO
rospy.init_node("a1_legged_gym_" + args.mode, anonymous= True, log_level= log_level)
with open(osp.join(args.logdir, "config.json"), "r") as f:
config_dict = json.load(f, object_pairs_hook= OrderedDict)
duration = config_dict["sim"]["dt"] * config_dict["control"]["decimation"] # in sec
# config_dict["control"]["stiffness"]["joint"] -= 2.5 # kp
model_device = torch.device("cpu") if args.mode == "upboard" else torch.device("cuda")
unitree_real_env = SkilledA1Real(
robot_namespace= args.namespace,
cfg= config_dict,
forward_depth_topic= "/visual_embedding" if args.mode == "upboard" else "/camera/depth/image_rect_raw",
forward_depth_embedding_dims= config_dict["policy"]["visual_latent_size"] if args.mode == "upboard" else None,
move_by_wireless_remote= True,
skill_vel_range= config_dict["commands"]["ranges"]["lin_vel_x"],
model_device= model_device,
# extra_cfg= dict(
# motor_strength= torch.tensor([
# 1., 1./0.9, 1./0.9,
# 1., 1./0.9, 1./0.9,
# 1., 1., 1.,
# 1., 1., 1.,
# ], dtype= torch.float32, device= model_device, requires_grad= False),
# ),
)
model = getattr(modules, config_dict["runner"]["policy_class_name"])(
num_actor_obs= unitree_real_env.num_obs,
num_critic_obs= unitree_real_env.num_privileged_obs,
num_actions= 12,
obs_segments= unitree_real_env.obs_segments,
privileged_obs_segments= unitree_real_env.privileged_obs_segments,
**config_dict["policy"],
)
config_dict["terrain"]["measure_heights"] = False
# load the model with the latest checkpoint
model_names = [i for i in os.listdir(args.logdir) if i.startswith("model_")]
model_names.sort(key= lambda x: int(x.split("_")[-1].split(".")[0]))
state_dict = torch.load(osp.join(args.logdir, model_names[-1]), map_location= "cpu")
model.load_state_dict(state_dict["model_state_dict"])
model.to(model_device)
model.eval()
rospy.loginfo("duration: {}, motor Kp: {}, motor Kd: {}".format(
duration,
config_dict["control"]["stiffness"]["joint"],
config_dict["control"]["damping"]["joint"],
))
rospy.loginfo("[Env] torque limit: {:.1f}".format(unitree_real_env.torque_limits.mean().item()))
rospy.loginfo("[Env] action scale: {:.1f}".format(unitree_real_env.action_scale))
rospy.loginfo("[Env] motor strength: {}".format(unitree_real_env.motor_strength))
if args.mode == "jetson":
embeding_publisher = rospy.Publisher(
args.namespace + "/visual_embedding",
Float32MultiArray,
queue_size= 1,
)
# extract and build the torch ScriptFunction
visual_encoder = model.visual_encoder
visual_encoder = torch.jit.script(visual_encoder)
forward_depth_subscriber = rospy.Subscriber(
args.namespace + "/camera/depth/image_rect_raw",
Image,
partial(handle_forward_depth,
model= visual_encoder,
publisher= embeding_publisher,
output_resolution= config_dict["sensor"]["forward_camera"].get(
"output_resolution",
config_dict["sensor"]["forward_camera"]["resolution"],
),
device= model_device,
),
queue_size= 1,
)
rospy.spin()
elif args.mode == "upboard":
# extract and build the torch ScriptFunction
memory_module = model.memory_a
actor_mlp = model.actor
@torch.jit.script
def policy(obs):
recurrent_embedding = memory_module(obs)
actions = actor_mlp(recurrent_embedding.squeeze(0))
return actions
walk_policy, walk_model = load_walk_policy(unitree_real_env, args.walkdir)
using_walk_policy = True # switch between skill policy and walk policy
unitree_real_env.start_ros()
unitree_real_env.wait_untill_ros_working()
rate = rospy.Rate(1 / duration)
with torch.no_grad():
if not args.debug:
standup_procedure(unitree_real_env, rate,
angle_tolerance= 0.1,
kp= 50,
kd= 1.,
device= model_device,
)
while not rospy.is_shutdown():
# inference_start_time = rospy.get_time()
# check remote controller and decide which policy to use
if unitree_real_env.is_skill_mode():
if using_walk_policy:
rospy.loginfo_throttle(0.1, "switch to skill policy")
using_walk_policy = False
model.reset()
else:
if not using_walk_policy:
rospy.loginfo_throttle(0.1, "switch to walk policy")
using_walk_policy = True
walk_model.reset()
if not using_walk_policy:
obs = unitree_real_env.get_obs()
actions = policy(obs)
else:
walk_obs = unitree_real_env._get_proprioception_obs()
actions = walk_policy(walk_obs)
unitree_real_env.send_action(actions)
# unitree_real_env.send_action(torch.zeros((1, 12)))
# inference_duration = rospy.get_time() - inference_start_time
# rospy.loginfo("inference duration: {:.3f}".format(inference_duration))
# rospy.loginfo("visual_latency: %f", rospy.get_time() - unitree_real_env.forward_depth_embedding_stamp.to_sec())
# motor_temperatures = [motor_state.temperature for motor_state in unitree_real_env.low_state_buffer.motorState]
# rospy.loginfo_throttle(10, " ".join(["motor_temperatures:"] + ["{:d},".format(t) for t in motor_temperatures[:12]]))
rate.sleep()
if unitree_real_env.low_state_buffer.wirelessRemote.btn.components.down:
rospy.loginfo_throttle(0.1, "model reset")
model.reset()
walk_model.reset()
if unitree_real_env.low_state_buffer.wirelessRemote.btn.components.L2 or unitree_real_env.low_state_buffer.wirelessRemote.btn.components.R2:
unitree_real_env.publish_legs_cmd(unitree_real_env.default_dof_pos.unsqueeze(0), kp= 20, kd= 0.5)
rospy.signal_shutdown("Controller send stop signal, exiting")
elif args.mode == "full":
# extract and build the torch ScriptFunction
visual_obs_slice = get_obs_slice(unitree_real_env.obs_segments, "forward_depth")
visual_encoder = model.visual_encoder
memory_module = model.memory_a
actor_mlp = model.actor
@torch.jit.script
def policy(observations: torch.Tensor, obs_start: int, obs_stop: int, obs_shape: Tuple[int, int, int]):
visual_latent = visual_encoder(
observations[..., obs_start:obs_stop].reshape(-1, *obs_shape)
).reshape(1, -1)
obs = torch.cat([
observations[..., :obs_start],
visual_latent,
observations[..., obs_stop:],
], dim= -1)
recurrent_embedding = memory_module(obs)
actions = actor_mlp(recurrent_embedding.squeeze(0))
return actions
unitree_real_env.start_ros()
unitree_real_env.wait_untill_ros_working()
rate = rospy.Rate(1 / duration)
with torch.no_grad():
while not rospy.is_shutdown():
# inference_start_time = rospy.get_time()
obs = unitree_real_env.get_obs()
actions = policy(obs,
obs_start= visual_obs_slice[0].start.item(),
obs_stop= visual_obs_slice[0].stop.item(),
obs_shape= visual_obs_slice[1],
)
unitree_real_env.send_action(actions)
# inference_duration = rospy.get_time() - inference_start_time
motor_temperatures = [motor_state.temperature for motor_state in unitree_real_env.low_state_buffer.motorState]
rospy.loginfo_throttle(10, " ".join(["motor_temperatures:"] + ["{:d},".format(t) for t in motor_temperatures[:12]]))
rate.sleep()
if unitree_real_env.low_state_buffer.wirelessRemote.btn.components.L2 or unitree_real_env.low_state_buffer.wirelessRemote.btn.components.R2:
unitree_real_env.publish_legs_cmd(unitree_real_env.default_dof_pos.unsqueeze(0), kp= 20, kd= 0.5)
rospy.signal_shutdown("Controller send stop signal, exiting")
else:
rospy.logfatal("Unknown mode, exiting")
if __name__ == "__main__":
""" The script to run the A1 script in ROS.
It's designed as a main function and not designed to be a scalable code.
"""
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--namespace",
type= str,
default= "/a112138",
)
parser.add_argument("--logdir",
type= str,
help= "The log directory of the trained model",
)
parser.add_argument("--walkdir",
type= str,
help= "The log directory of the walking model, not for the skills.",
default= None,
)
parser.add_argument("--mode",
type= str,
help= "The mode to determine which computer to run on.",
choices= ["jetson", "upboard", "full"],
)
parser.add_argument("--debug",
action= "store_true",
)
args = parser.parse_args()
main(args)


@ -0,0 +1,104 @@
import numpy as np
from legged_gym.envs.a1.a1_field_config import A1FieldCfg, A1FieldCfgPPO
from legged_gym.utils.helpers import merge_dict
class A1TiltCfg( A1FieldCfg ):
#### uncomment this to train non-virtual terrain
# class sensor( A1FieldCfg.sensor ):
# class proprioception( A1FieldCfg.sensor.proprioception ):
# delay_action_obs = True
# latency_range = [0.04-0.0025, 0.04+0.0075]
#### uncomment the above to train non-virtual terrain
class terrain( A1FieldCfg.terrain ):
max_init_terrain_level = 2
border_size = 5
slope_treshold = 20.
curriculum = True
BarrierTrack_kwargs = merge_dict(A1FieldCfg.terrain.BarrierTrack_kwargs, dict(
options= [
"tilt",
],
tilt= dict(
width= (0.24, 0.4),
depth= (0.4, 1.), # size along the forward axis
opening_angle= 0.0, # [rad] an opening that makes it easier for the robot to get into the obstacle
wall_height= 0.5,
),
virtual_terrain= False, # Change this to False for real terrain
no_perlin_threshold= 0.06,
))
TerrainPerlin_kwargs = merge_dict(A1FieldCfg.terrain.TerrainPerlin_kwargs, dict(
zScale= [0.05, 0.1],
))
class commands( A1FieldCfg.commands ):
class ranges( A1FieldCfg.commands.ranges ):
lin_vel_x = [0.3, 0.6]
lin_vel_y = [0.0, 0.0]
ang_vel_yaw = [0., 0.]
class termination( A1FieldCfg.termination ):
# additional factors that determine whether to terminate the episode
termination_terms = [
"roll",
"pitch",
"z_low",
"z_high",
"out_of_track",
]
class domain_rand( A1FieldCfg.domain_rand ):
# push_robots = True # use for virtual training
push_robots = False # use for non-virtual training
class rewards( A1FieldCfg.rewards ):
class scales:
tracking_ang_vel = 0.05
world_vel_l2norm = -1.
legs_energy_substeps = -1e-5
alive = 2.
penetrate_depth = -3e-3
penetrate_volume = -3e-3
exceed_dof_pos_limits = -1e-1
exceed_torque_limits_i = -2e-1
class curriculum( A1FieldCfg.curriculum ):
penetrate_volume_threshold_harder = 4000
penetrate_volume_threshold_easier = 10000
penetrate_depth_threshold_harder = 100
penetrate_depth_threshold_easier = 300
class A1TiltCfgPPO( A1FieldCfgPPO ):
class algorithm( A1FieldCfgPPO.algorithm ):
entropy_coef = 0.0
clip_min_std = 0.2
class runner( A1FieldCfgPPO.runner ):
policy_class_name = "ActorCriticRecurrent"
experiment_name = "field_a1"
run_name = "".join(["Skill",
("Multi" if len(A1TiltCfg.terrain.BarrierTrack_kwargs["options"]) > 1 else (A1TiltCfg.terrain.BarrierTrack_kwargs["options"][0] if A1TiltCfg.terrain.BarrierTrack_kwargs["options"] else "PlaneWalking")),
("_propDelay{:.2f}-{:.2f}".format(
A1TiltCfg.sensor.proprioception.latency_range[0],
A1TiltCfg.sensor.proprioception.latency_range[1],
) if A1TiltCfg.sensor.proprioception.delay_action_obs else ""
),
("_pPenV" + np.format_float_scientific(-A1TiltCfg.rewards.scales.penetrate_volume, trim= "-", exp_digits= 1) if getattr(A1TiltCfg.rewards.scales, "penetrate_volume", 0.) < 0. else ""),
("_pPenD" + np.format_float_scientific(-A1TiltCfg.rewards.scales.penetrate_depth, trim= "-", exp_digits= 1) if getattr(A1TiltCfg.rewards.scales, "penetrate_depth", 0.) < 0. else ""),
("_noPush" if not A1TiltCfg.domain_rand.push_robots else ""),
("_tiltMax{:.2f}".format(A1TiltCfg.terrain.BarrierTrack_kwargs["tilt"]["width"][1])),
("_virtual" if A1TiltCfg.terrain.BarrierTrack_kwargs["virtual_terrain"] else ""),
])
resume = True
load_run = "{Your traind walking model directory}"
load_run = "{Your virtual terrain model directory}"
load_run = "Aug17_11-13-14_WalkingBase_pEnergySubsteps2e-5_aScale0.5"
load_run = "Aug23_22-03-41_Skilltilt_pPenV3e-3_pPenD3e-3_tiltMax0.40_virtual"
max_iterations = 20000
save_interval = 500


@ -0,0 +1,309 @@
import os
import os.path as osp
import numpy as np
import torch
import json
from functools import partial
from collections import OrderedDict
from a1_real import UnitreeA1Real, resize2d
from rsl_rl import modules
import rospy
from unitree_legged_msgs.msg import Float32MultiArrayStamped
from sensor_msgs.msg import Image
import ros_numpy
import pyrealsense2 as rs
def get_encoder_script(logdir):
with open(osp.join(logdir, "config.json"), "r") as f:
config_dict = json.load(f, object_pairs_hook= OrderedDict)
model_device = torch.device("cuda")
unitree_real_env = UnitreeA1Real(
robot_namespace= "DummyUnitreeA1Real",
cfg= config_dict,
forward_depth_topic= "", # this env only computes parameters to build the model
forward_depth_embedding_dims= None,
model_device= model_device,
)
model = getattr(modules, config_dict["runner"]["policy_class_name"])(
num_actor_obs= unitree_real_env.num_obs,
num_critic_obs= unitree_real_env.num_privileged_obs,
num_actions= 12,
obs_segments= unitree_real_env.obs_segments,
privileged_obs_segments= unitree_real_env.privileged_obs_segments,
**config_dict["policy"],
)
model_names = [i for i in os.listdir(logdir) if i.startswith("model_")]
model_names.sort(key= lambda x: int(x.split("_")[-1].split(".")[0]))
state_dict = torch.load(osp.join(logdir, model_names[-1]), map_location= "cpu")
model.load_state_dict(state_dict["model_state_dict"])
model.to(model_device)
model.eval()
visual_encoder = model.visual_encoder
script = torch.jit.script(visual_encoder)
return script, model_device
def get_input_filter(args):
""" This is the filter different from the simulator, but try to close the gap. """
with open(osp.join(args.logdir, "config.json"), "r") as f:
config_dict = json.load(f, object_pairs_hook= OrderedDict)
image_resolution = config_dict["sensor"]["forward_camera"].get(
"output_resolution",
config_dict["sensor"]["forward_camera"]["resolution"],
)
depth_range = config_dict["sensor"]["forward_camera"].get(
"depth_range",
[0.0, 3.0],
)
depth_range = (depth_range[0] * 1000, depth_range[1] * 1000) # [m] -> [mm]
crop_top, crop_bottom, crop_left, crop_right = args.crop_top, args.crop_bottom, args.crop_left, args.crop_right
crop_far = args.crop_far * 1000
def input_filter(depth_image: torch.Tensor,
crop_top: int,
crop_bottom: int,
crop_left: int,
crop_right: int,
crop_far: float,
depth_min: int,
depth_max: int,
output_height: int,
output_width: int,
):
""" depth_image must have shape [1, 1, H, W] """
depth_image = depth_image[:, :,
crop_top: -crop_bottom-1, # note: a crop of 0 still drops one row/column here
crop_left: -crop_right-1,
]
depth_image[depth_image > crop_far] = depth_max
depth_image = torch.clip(
depth_image,
depth_min,
depth_max,
) / (depth_max - depth_min)
depth_image = resize2d(depth_image, (output_height, output_width))
return depth_image
# input_filter = torch.jit.script(input_filter)
return partial(input_filter,
crop_top= crop_top,
crop_bottom= crop_bottom,
crop_left= crop_left,
crop_right= crop_right,
crop_far= crop_far,
depth_min= depth_range[0],
depth_max= depth_range[1],
output_height= image_resolution[0],
output_width= image_resolution[1],
), depth_range
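The clipping and normalization inside `input_filter` can be checked in isolation. Note the original divides by the depth range without subtracting `depth_min` first, which this sketch reproduces; with the default 0 to 3000 mm range the output still lands in [0, 1]:

```python
def normalize_depth(values_mm, depth_min=0.0, depth_max=3000.0, crop_far=3000.0):
    # Readings beyond crop_far snap to depth_max, then each value is clipped to
    # [depth_min, depth_max] and divided by the range (no depth_min subtraction,
    # matching the filter above).
    out = []
    for d in values_mm:
        if d > crop_far:
            d = depth_max
        d = min(max(d, depth_min), depth_max)
        out.append(d / (depth_max - depth_min))
    return out

print(normalize_depth([1500.0, 4000.0]))  # → [0.5, 1.0]
```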
def get_started_pipeline(
height= 480,
width= 640,
fps= 30,
enable_rgb= False,
):
# By default, rgb is not used.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, width, height, rs.format.z16, fps)
if enable_rgb:
config.enable_stream(rs.stream.color, width, height, rs.format.rgb8, fps)
profile = pipeline.start(config)
# build the sensor filter
hole_filling_filter = rs.hole_filling_filter(2)
spatial_filter = rs.spatial_filter()
spatial_filter.set_option(rs.option.filter_magnitude, 5)
spatial_filter.set_option(rs.option.filter_smooth_alpha, 0.75)
spatial_filter.set_option(rs.option.filter_smooth_delta, 1)
spatial_filter.set_option(rs.option.holes_fill, 4)
temporal_filter = rs.temporal_filter()
temporal_filter.set_option(rs.option.filter_smooth_alpha, 0.75)
temporal_filter.set_option(rs.option.filter_smooth_delta, 1)
# decimation_filter = rs.decimation_filter()
# decimation_filter.set_option(rs.option.filter_magnitude, 2)
def filter_func(frame):
frame = hole_filling_filter.process(frame)
frame = spatial_filter.process(frame)
frame = temporal_filter.process(frame)
# frame = decimation_filter.process(frame)
return frame
return pipeline, filter_func
def main(args):
rospy.init_node("a1_legged_gym_jetson", anonymous= True)
input_filter, depth_range = get_input_filter(args)
model_script, model_device = get_encoder_script(args.logdir)
with open(osp.join(args.logdir, "config.json"), "r") as f:
config_dict = json.load(f, object_pairs_hook= OrderedDict)
if config_dict.get("sensor", dict()).get("forward_camera", dict()).get("refresh_duration", None) is not None:
refresh_duration = config_dict["sensor"]["forward_camera"]["refresh_duration"]
ros_rate = rospy.Rate(1.0 / refresh_duration)
rospy.loginfo("Using refresh duration {}s".format(refresh_duration))
else:
ros_rate = rospy.Rate(args.fps)
rs_pipeline, rs_filters = get_started_pipeline(
height= args.height,
width= args.width,
fps= args.fps,
enable_rgb= args.enable_rgb,
)
embedding_publisher = rospy.Publisher(
args.namespace + "/visual_embedding",
Float32MultiArrayStamped,
queue_size= 1,
)
if args.enable_vis:
depth_image_publisher = rospy.Publisher(
args.namespace + "/camera/depth/image_rect_raw",
Image,
queue_size= 1,
)
network_input_publisher = rospy.Publisher(
args.namespace + "/camera/depth/network_input_raw",
Image,
queue_size= 1,
)
if args.enable_rgb:
rgb_image_publisher = rospy.Publisher(
args.namespace + "/camera/color/image_raw",
Image,
queue_size= 1,
)
rospy.loginfo("Depth range is clipped to [{}, {}] and normalized".format(depth_range[0], depth_range[1]))
rospy.loginfo("ROS, model, realsense have been initialized.")
if args.enable_vis:
rospy.loginfo("Visualization enabled, sending depth{} images".format(", rgb" if args.enable_rgb else ""))
try:
embedding_msg = Float32MultiArrayStamped()
embedding_msg.header.frame_id = args.namespace + "/camera_depth_optical_frame"
frame_got = False
while not rospy.is_shutdown():
# Wait for the depth image
frames = rs_pipeline.wait_for_frames()
embedding_msg.header.stamp = rospy.Time.now()
depth_frame = frames.get_depth_frame()
if not depth_frame:
continue
if not frame_got:
frame_got = True
rospy.loginfo("Realsense frame recieved. Sending embeddings...")
if args.enable_rgb:
color_frame = frames.get_color_frame()
# Use this branch to log the time when image is acquired
if args.enable_vis and color_frame is not None:
color_frame = np.asanyarray(color_frame.get_data())
rgb_image_msg = ros_numpy.msgify(Image, color_frame, encoding= "rgb8")
rgb_image_msg.header.stamp = rospy.Time.now()
rgb_image_msg.header.frame_id = args.namespace + "/camera_color_optical_frame"
rgb_image_publisher.publish(rgb_image_msg)
# Process the depth image and publish
depth_frame = rs_filters(depth_frame)
depth_image_ = np.asanyarray(depth_frame.get_data())
depth_image = torch.from_numpy(depth_image_.astype(np.float32)).unsqueeze(0).unsqueeze(0).to(model_device)
depth_image = input_filter(depth_image)
with torch.no_grad():
depth_embedding = model_script(depth_image).reshape(-1).cpu().numpy()
embedding_msg.header.seq += 1
embedding_msg.data = depth_embedding.tolist()
embedding_publisher.publish(embedding_msg)
# Publish the acquired image if needed
if args.enable_vis:
depth_image_msg = ros_numpy.msgify(Image, depth_image_, encoding= "16UC1")
depth_image_msg.header.stamp = rospy.Time.now()
depth_image_msg.header.frame_id = args.namespace + "/camera_depth_optical_frame"
depth_image_publisher.publish(depth_image_msg)
network_input_np = (\
depth_image.detach().cpu().numpy()[0, 0] * (depth_range[1] - depth_range[0]) \
+ depth_range[0]
).astype(np.uint16)
network_input_msg = ros_numpy.msgify(Image, network_input_np, encoding= "16UC1")
network_input_msg.header.stamp = rospy.Time.now()
network_input_msg.header.frame_id = args.namespace + "/camera_depth_optical_frame"
network_input_publisher.publish(network_input_msg)
ros_rate.sleep()
finally:
rs_pipeline.stop()
if __name__ == "__main__":
""" This script is designed to load the model and process the realsense image directly
from realsense SDK without realsense ROS wrapper
"""
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--namespace",
type= str,
default= "/a112138",
)
parser.add_argument("--logdir",
type= str,
help= "The log directory of the trained model",
)
parser.add_argument("--height",
type= int,
default= 240,
help= "The height of the realsense image",
)
parser.add_argument("--width",
type= int,
default= 424,
help= "The width of the realsense image",
)
parser.add_argument("--fps",
type= int,
default= 30,
help= "The fps of the realsense image",
)
parser.add_argument("--crop_left",
type= int,
default= 60,
help= "num of pixel to crop in the original pyrealsense readings."
)
parser.add_argument("--crop_right",
type= int,
default= 46,
help= "num of pixel to crop in the original pyrealsense readings."
)
parser.add_argument("--crop_top",
type= int,
default= 0,
help= "num of pixel to crop in the original pyrealsense readings."
)
parser.add_argument("--crop_bottom",
type= int,
default= 0,
help= "num of pixel to crop in the original pyrealsense readings."
)
parser.add_argument("--crop_far",
type= float,
default= 3.0,
help= "asside from the config far limit, make all depth readings larger than this value to be 3.0 in un-normalized network input."
)
parser.add_argument("--enable_rgb",
action= "store_true",
help= "Whether to enable rgb image",
)
parser.add_argument("--enable_vis",
action= "store_true",
help= "Whether to publish realsense image",
)
args = parser.parse_args()
main(args)


@ -0,0 +1,46 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
from legged_gym.envs import AnymalCRoughCfg, AnymalCRoughCfgPPO
class AnymalBRoughCfg( AnymalCRoughCfg ):
class asset( AnymalCRoughCfg.asset ):
file = '{LEGGED_GYM_ROOT_DIR}/resources/robots/anymal_b/urdf/anymal_b.urdf'
name = "anymal_b"
foot_name = 'FOOT'
class rewards( AnymalCRoughCfg.rewards ):
class scales ( AnymalCRoughCfg.rewards.scales ):
pass
class AnymalBRoughCfgPPO( AnymalCRoughCfgPPO ):
class runner ( AnymalCRoughCfgPPO.runner):
run_name = ''
experiment_name = 'rough_anymal_b'
load_run = -1


@ -0,0 +1,81 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
from time import time
import numpy as np
import os
from isaacgym.torch_utils import *
from isaacgym import gymtorch, gymapi, gymutil
import torch
# from torch.tensor import Tensor
from typing import Tuple, Dict
from legged_gym.envs import LeggedRobot
from legged_gym import LEGGED_GYM_ROOT_DIR
from .mixed_terrains.anymal_c_rough_config import AnymalCRoughCfg
class Anymal(LeggedRobot):
cfg : AnymalCRoughCfg
def __init__(self, cfg, sim_params, physics_engine, sim_device, headless):
super().__init__(cfg, sim_params, physics_engine, sim_device, headless)
# load actuator network
if self.cfg.control.use_actuator_network:
actuator_network_path = self.cfg.control.actuator_net_file.format(LEGGED_GYM_ROOT_DIR=LEGGED_GYM_ROOT_DIR)
self.actuator_network = torch.jit.load(actuator_network_path).to(self.device)
def reset_idx(self, env_ids):
super().reset_idx(env_ids)
        # Additionally zero the actuator network hidden states
self.sea_hidden_state_per_env[:, env_ids] = 0.
self.sea_cell_state_per_env[:, env_ids] = 0.
def _init_buffers(self):
super()._init_buffers()
# Additionally initialize actuator network hidden state tensors
self.sea_input = torch.zeros(self.num_envs*self.num_actions, 1, 2, device=self.device, requires_grad=False)
self.sea_hidden_state = torch.zeros(2, self.num_envs*self.num_actions, 8, device=self.device, requires_grad=False)
self.sea_cell_state = torch.zeros(2, self.num_envs*self.num_actions, 8, device=self.device, requires_grad=False)
self.sea_hidden_state_per_env = self.sea_hidden_state.view(2, self.num_envs, self.num_actions, 8)
self.sea_cell_state_per_env = self.sea_cell_state.view(2, self.num_envs, self.num_actions, 8)
def _compute_torques(self, actions):
# Choose between pd controller and actuator network
if self.cfg.control.use_actuator_network:
with torch.inference_mode():
self.sea_input[:, 0, 0] = (actions * self.cfg.control.action_scale + self.default_dof_pos - self.dof_pos).flatten()
self.sea_input[:, 0, 1] = self.dof_vel.flatten()
torques, (self.sea_hidden_state[:], self.sea_cell_state[:]) = self.actuator_network(self.sea_input, (self.sea_hidden_state, self.sea_cell_state))
return torques
else:
# pd controller
return super()._compute_torques(actions)
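When `use_actuator_network` is disabled, control falls back to the parent PD controller. A minimal single-joint sketch of that PD relation, assuming the HFE gains from `AnymalCRoughCfg.control` (the real implementation is batched over all joints in torch; `pd_torque` is a hypothetical helper, not part of the codebase):

```python
# Scalar sketch of the PD fallback in _compute_torques.
# Per the config comment: target angle = action_scale * action + default angle.
def pd_torque(action, dof_pos, dof_vel,
              default_dof_pos=0.4, action_scale=0.5,
              stiffness=80.0, damping=2.0):
    """Single-joint PD torque: Kp * (target - pos) - Kd * vel."""
    target = action_scale * action + default_dof_pos
    return stiffness * (target - dof_pos) - damping * dof_vel

# target = 0.5 * 0.2 + 0.4 = 0.5, so torque = 80 * (0.5 - 0.4) = 8.0
torque = pd_torque(action=0.2, dof_pos=0.4, dof_vel=0.0)
```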


@@ -0,0 +1,74 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
from legged_gym.envs import AnymalCRoughCfg, AnymalCRoughCfgPPO
class AnymalCFlatCfg( AnymalCRoughCfg ):
class env( AnymalCRoughCfg.env ):
num_observations = 48
class terrain( AnymalCRoughCfg.terrain ):
mesh_type = 'plane'
measure_heights = False
class asset( AnymalCRoughCfg.asset ):
self_collisions = 0 # 1 to disable, 0 to enable...bitwise filter
class rewards( AnymalCRoughCfg.rewards ):
max_contact_force = 350.
class scales ( AnymalCRoughCfg.rewards.scales ):
orientation = -5.0
torques = -0.000025
feet_air_time = 2.
# feet_contact_forces = -0.01
class commands( AnymalCRoughCfg.commands ):
heading_command = False
resampling_time = 4.
class ranges( AnymalCRoughCfg.commands.ranges ):
ang_vel_yaw = [-1.5, 1.5]
class domain_rand( AnymalCRoughCfg.domain_rand ):
        friction_range = [0., 1.5] # on ground planes the friction combination mode is averaging, i.e. total friction = (foot_friction + 1.)/2.
class AnymalCFlatCfgPPO( AnymalCRoughCfgPPO ):
class policy( AnymalCRoughCfgPPO.policy ):
actor_hidden_dims = [128, 64, 32]
critic_hidden_dims = [128, 64, 32]
activation = 'elu' # can be elu, relu, selu, crelu, lrelu, tanh, sigmoid
class algorithm( AnymalCRoughCfgPPO.algorithm):
entropy_coef = 0.01
class runner ( AnymalCRoughCfgPPO.runner):
run_name = ''
experiment_name = 'flat_anymal_c'
load_run = -1
max_iterations = 300
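The flat config above overrides only a handful of fields; everything else falls through to `AnymalCRoughCfg` via ordinary class inheritance on the nested classes. A hypothetical minimal illustration of that pattern (names here are invented for the example):

```python
# A child config redefines only the fields it changes;
# untouched fields are inherited from the parent class.
class RoughTerrainCfg:
    mesh_type = 'trimesh'
    measure_heights = True
    curriculum = True

class FlatTerrainCfg(RoughTerrainCfg):
    mesh_type = 'plane'       # overridden
    measure_heights = False   # overridden
    # curriculum is inherited unchanged
```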


@@ -0,0 +1,94 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO
class AnymalCRoughCfg( LeggedRobotCfg ):
class env( LeggedRobotCfg.env ):
num_envs = 4096
num_actions = 12
class terrain( LeggedRobotCfg.terrain ):
mesh_type = 'trimesh'
class init_state( LeggedRobotCfg.init_state ):
pos = [0.0, 0.0, 0.6] # x,y,z [m]
default_joint_angles = { # = target angles [rad] when action = 0.0
"LF_HAA": 0.0,
"LH_HAA": 0.0,
"RF_HAA": -0.0,
"RH_HAA": -0.0,
"LF_HFE": 0.4,
"LH_HFE": -0.4,
"RF_HFE": 0.4,
"RH_HFE": -0.4,
"LF_KFE": -0.8,
"LH_KFE": 0.8,
"RF_KFE": -0.8,
"RH_KFE": 0.8,
}
class control( LeggedRobotCfg.control ):
# PD Drive parameters:
stiffness = {'HAA': 80., 'HFE': 80., 'KFE': 80.} # [N*m/rad]
damping = {'HAA': 2., 'HFE': 2., 'KFE': 2.} # [N*m*s/rad]
# action scale: target angle = actionScale * action + defaultAngle
action_scale = 0.5
# decimation: Number of control action updates @ sim DT per policy DT
decimation = 4
use_actuator_network = True
actuator_net_file = "{LEGGED_GYM_ROOT_DIR}/resources/actuator_nets/anydrive_v3_lstm.pt"
class asset( LeggedRobotCfg.asset ):
file = "{LEGGED_GYM_ROOT_DIR}/resources/robots/anymal_c/urdf/anymal_c.urdf"
name = "anymal_c"
foot_name = "FOOT"
penalize_contacts_on = ["SHANK", "THIGH"]
terminate_after_contacts_on = ["base"]
self_collisions = 1 # 1 to disable, 0 to enable...bitwise filter
class domain_rand( LeggedRobotCfg.domain_rand):
randomize_base_mass = True
added_mass_range = [-5., 5.]
class rewards( LeggedRobotCfg.rewards ):
base_height_target = 0.5
max_contact_force = 500.
only_positive_rewards = True
class scales( LeggedRobotCfg.rewards.scales ):
pass
class AnymalCRoughCfgPPO( LeggedRobotCfgPPO ):
class runner( LeggedRobotCfgPPO.runner ):
run_name = ''
experiment_name = 'rough_anymal_c'
load_run = -1


@@ -0,0 +1,55 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
import inspect
class BaseConfig:
def __init__(self) -> None:
""" Initializes all member classes recursively. Ignores all namse starting with '__' (buit-in methods)."""
self.init_member_classes(self)
@staticmethod
def init_member_classes(obj):
# iterate over all attributes names
for key in dir(obj):
# disregard builtin attributes
# if key.startswith("__"):
if key=="__class__":
continue
# get the corresponding attribute object
var = getattr(obj, key)
            # check if the attribute is a class
            if inspect.isclass(var):
                # instantiate the class
i_var = var()
# set the attribute to the instance instead of the type
setattr(obj, key, i_var)
# recursively init members of the attribute
BaseConfig.init_member_classes(i_var)
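The recursion above replaces every nested class attribute with an instance of that class, all the way down. A small self-contained demonstration with a hypothetical `DemoCfg` (re-stating `BaseConfig` so the snippet runs on its own):

```python
import inspect

class BaseConfig:
    def __init__(self) -> None:
        self.init_member_classes(self)

    @staticmethod
    def init_member_classes(obj):
        for key in dir(obj):
            if key == "__class__":
                continue
            var = getattr(obj, key)
            if inspect.isclass(var):
                i_var = var()                 # instantiate the nested class
                setattr(obj, key, i_var)      # replace the type with the instance
                BaseConfig.init_member_classes(i_var)  # recurse

class DemoCfg(BaseConfig):      # hypothetical config for illustration
    class env:
        num_envs = 16
        class nested:
            depth = 2

cfg = DemoCfg()
assert not inspect.isclass(cfg.env)   # now an instance, not a class
assert cfg.env.nested.depth == 2      # recursion reaches inner levels
```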


@@ -0,0 +1,144 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
import sys
from isaacgym import gymapi
from isaacgym import gymutil
import numpy as np
import torch
# Base class for RL tasks
class BaseTask():
def __init__(self, cfg, sim_params, physics_engine, sim_device, headless):
self.gym = gymapi.acquire_gym()
self.sim_params = sim_params
self.physics_engine = physics_engine
self.sim_device = sim_device
sim_device_type, self.sim_device_id = gymutil.parse_device_str(self.sim_device)
self.headless = headless
# env device is GPU only if sim is on GPU and use_gpu_pipeline=True, otherwise returned tensors are copied to CPU by physX.
if sim_device_type=='cuda' and sim_params.use_gpu_pipeline:
self.device = self.sim_device
else:
self.device = 'cpu'
# graphics device for rendering, -1 for no rendering
self.graphics_device_id = self.sim_device_id
if self.headless == True and self.cfg.sim.no_camera:
self.graphics_device_id = -1
self.num_envs = cfg.env.num_envs
self.num_obs = cfg.env.num_observations
self.num_privileged_obs = cfg.env.num_privileged_obs
self.num_actions = cfg.env.num_actions
# optimization flags for pytorch JIT
torch._C._jit_set_profiling_mode(False)
torch._C._jit_set_profiling_executor(False)
# allocate buffers
self.obs_buf = torch.zeros(self.num_envs, self.num_obs, device=self.device, dtype=torch.float)
self.rew_buf = torch.zeros(self.num_envs, device=self.device, dtype=torch.float)
self.reset_buf = torch.ones(self.num_envs, device=self.device, dtype=torch.long)
self.episode_length_buf = torch.zeros(self.num_envs, device=self.device, dtype=torch.long)
self.time_out_buf = torch.zeros(self.num_envs, device=self.device, dtype=torch.bool)
if self.num_privileged_obs is not None:
self.privileged_obs_buf = torch.zeros(self.num_envs, self.num_privileged_obs, device=self.device, dtype=torch.float)
else:
self.privileged_obs_buf = None
# self.num_privileged_obs = self.num_obs
self.extras = {}
# create envs, sim and viewer
self.create_sim()
self.gym.prepare_sim(self.sim)
# todo: read from config
self.enable_viewer_sync = True
self.viewer = None
# if running with a viewer, set up keyboard shortcuts and camera
if self.headless == False:
# subscribe to keyboard shortcuts
self.viewer = self.gym.create_viewer(
self.sim, gymapi.CameraProperties())
self.gym.subscribe_viewer_keyboard_event(
self.viewer, gymapi.KEY_ESCAPE, "QUIT")
self.gym.subscribe_viewer_keyboard_event(
self.viewer, gymapi.KEY_V, "toggle_viewer_sync")
def get_observations(self):
return self.obs_buf
def get_privileged_observations(self):
return self.privileged_obs_buf
def reset_idx(self, env_ids):
"""Reset selected robots"""
raise NotImplementedError
def reset(self):
""" Reset all robots"""
self.reset_idx(torch.arange(self.num_envs, device=self.device))
obs, privileged_obs, _, _, _ = self.step(torch.zeros(self.num_envs, self.num_actions, device=self.device, requires_grad=False))
return obs, privileged_obs
def step(self, actions):
raise NotImplementedError
def render(self, sync_frame_time=True):
if self.viewer:
# check for window closed
if self.gym.query_viewer_has_closed(self.viewer):
sys.exit()
# check for keyboard events
for evt in self.gym.query_viewer_action_events(self.viewer):
if evt.action == "QUIT" and evt.value > 0:
sys.exit()
elif evt.action == "toggle_viewer_sync" and evt.value > 0:
self.enable_viewer_sync = not self.enable_viewer_sync
# fetch results
if self.device != 'cpu':
self.gym.fetch_results(self.sim, True)
# step graphics
if self.enable_viewer_sync:
self.gym.step_graphics(self.sim)
self.gym.draw_viewer(self.viewer, self.sim, True)
if sync_frame_time:
self.gym.sync_frame_time(self.sim)
else:
self.gym.poll_viewer_events(self.viewer)

File diff suppressed because it is too large


@@ -0,0 +1,249 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
from .base_config import BaseConfig
class LeggedRobotCfg(BaseConfig):
class env:
num_envs = 4096
num_observations = 235
use_lin_vel = True
        num_privileged_obs = None # if not None a privileged_obs_buf will be returned by step() (critic obs for asymmetric training). None is returned otherwise
num_actions = 12
env_spacing = 3. # not used with heightfields/trimeshes
send_timeouts = True # send time out information to the algorithm
episode_length_s = 20 # episode length in seconds
class terrain:
mesh_type = 'trimesh' # "heightfield" # none, plane, heightfield or trimesh
horizontal_scale = 0.1 # [m]
vertical_scale = 0.005 # [m]
border_size = 25 # [m]
curriculum = True
static_friction = 1.0
dynamic_friction = 1.0
restitution = 0.
# rough terrain only:
measure_heights = True
measured_points_x = [-0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8] # 1mx1.6m rectangle (without center line)
measured_points_y = [-0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5]
selected = False # select a unique terrain type and pass all arguments
terrain_kwargs = None # Dict of arguments for selected terrain
max_init_terrain_level = 5 # starting curriculum state
terrain_length = 8.
terrain_width = 8.
num_rows= 10 # number of terrain rows (levels)
num_cols = 20 # number of terrain cols (types)
# terrain types: [smooth slope, rough slope, stairs up, stairs down, discrete]
terrain_proportions = [0.1, 0.1, 0.35, 0.25, 0.2]
# trimesh only:
slope_treshold = 0.75 # slopes above this threshold will be corrected to vertical surfaces
class commands:
curriculum = False
max_curriculum = 1.
num_commands = 4 # default: lin_vel_x, lin_vel_y, ang_vel_yaw, heading (in heading mode ang_vel_yaw is recomputed from heading error)
        resampling_time = 10. # time before commands are changed [s]
heading_command = True # if true: compute ang vel command from heading error
class ranges:
lin_vel_x = [-1.0, 1.0] # min max [m/s]
lin_vel_y = [-1.0, 1.0] # min max [m/s]
ang_vel_yaw = [-1, 1] # min max [rad/s]
heading = [-3.14, 3.14]
class init_state:
pos = [0.0, 0.0, 1.] # x,y,z [m]
rot = [0.0, 0.0, 0.0, 1.0] # x,y,z,w [quat]
lin_vel = [0.0, 0.0, 0.0] # x,y,z [m/s]
ang_vel = [0.0, 0.0, 0.0] # x,y,z [rad/s]
default_joint_angles = { # target angles when action = 0.0
"joint_a": 0.,
"joint_b": 0.}
class control:
control_type = 'P' # P: position, V: velocity, T: torques
# PD Drive parameters:
stiffness = {'joint_a': 10.0, 'joint_b': 15.} # [N*m/rad]
damping = {'joint_a': 1.0, 'joint_b': 1.5} # [N*m*s/rad]
# action scale: target angle = actionScale * action + defaultAngle
action_scale = 0.5
# decimation: Number of control action updates @ sim DT per policy DT
decimation = 4
class asset:
file = ""
name = "legged_robot" # actor name
foot_name = "None" # name of the feet bodies, used to index body state and contact force tensors
penalize_contacts_on = []
terminate_after_contacts_on = []
disable_gravity = False
collapse_fixed_joints = True # merge bodies connected by fixed joints. Specific fixed joints can be kept by adding " <... dont_collapse="true">
        fix_base_link = False # fix the base of the robot
default_dof_drive_mode = 3 # see GymDofDriveModeFlags (0 is none, 1 is pos tgt, 2 is vel tgt, 3 effort)
self_collisions = 0 # 1 to disable, 0 to enable...bitwise filter
replace_cylinder_with_capsule = True # replace collision cylinders with capsules, leads to faster/more stable simulation
flip_visual_attachments = True # Some .obj meshes must be flipped from y-up to z-up
density = 0.001
angular_damping = 0.
linear_damping = 0.
max_angular_velocity = 1000.
max_linear_velocity = 1000.
armature = 0.
thickness = 0.01
class domain_rand:
randomize_friction = True
friction_range = [0.5, 1.25]
randomize_base_mass = False
added_mass_range = [-1., 1.]
push_robots = True
push_interval_s = 15
max_push_vel_xy = 1.
max_push_vel_ang = 0.
init_dof_pos_ratio_range = [0.5, 1.5]
class rewards:
class scales:
termination = -0.0
tracking_lin_vel = 1.0
tracking_ang_vel = 0.5
lin_vel_z = -2.0
ang_vel_xy = -0.05
orientation = -0.
torques = -0.00001
dof_vel = -0.
dof_acc = -2.5e-7
base_height = -0.
feet_air_time = 1.0
collision = -1.
feet_stumble = -0.0
action_rate = -0.01
stand_still = -0.
only_positive_rewards = True # if true negative total rewards are clipped at zero (avoids early termination problems)
tracking_sigma = 0.25 # tracking reward = exp(-error^2/sigma)
soft_dof_pos_limit = 1. # percentage of urdf limits, values above this limit are penalized
soft_dof_vel_limit = 1.
soft_torque_limit = 1.
base_height_target = 1.
max_contact_force = 100. # forces above this value are penalized
class normalization:
class obs_scales:
lin_vel = 2.0
ang_vel = 0.25
dof_pos = 1.0
dof_vel = 0.05
height_measurements = 5.0
clip_observations = 100.
clip_actions = 100.
class noise:
add_noise = True
noise_level = 1.0 # scales other values
class noise_scales:
dof_pos = 0.01
dof_vel = 1.5
lin_vel = 0.1
ang_vel = 0.2
gravity = 0.05
height_measurements = 0.1
# viewer camera:
class viewer:
ref_env = 0
pos = [10, 0, 6] # [m]
lookat = [11., 5, 3.] # [m]
class sim:
dt = 0.005
substeps = 1
gravity = [0., 0. ,-9.81] # [m/s^2]
up_axis = 1 # 0 is y, 1 is z
no_camera = True
class physx:
num_threads = 10
solver_type = 1 # 0: pgs, 1: tgs
num_position_iterations = 4
num_velocity_iterations = 0
contact_offset = 0.01 # [m]
rest_offset = 0.0 # [m]
bounce_threshold_velocity = 0.5 #0.5 [m/s]
max_depenetration_velocity = 1.0
max_gpu_contact_pairs = 2**23 #2**24 -> needed for 8000 envs and more
default_buffer_size_multiplier = 5
contact_collection = 2 # 0: never, 1: last sub-step, 2: all sub-steps (default=2)
class LeggedRobotCfgPPO(BaseConfig):
seed = 1
runner_class_name = 'OnPolicyRunner'
class policy:
init_noise_std = 1.0
actor_hidden_dims = [512, 256, 128]
critic_hidden_dims = [512, 256, 128]
activation = 'elu' # can be elu, relu, selu, crelu, lrelu, tanh, sigmoid
# only for 'ActorCriticRecurrent':
# rnn_type = 'lstm'
# rnn_hidden_size = 512
# rnn_num_layers = 1
class algorithm:
# training params
value_loss_coef = 1.0
use_clipped_value_loss = True
clip_param = 0.2
entropy_coef = 0.01
num_learning_epochs = 5
num_mini_batches = 4 # mini batch size = num_envs*nsteps / nminibatches
learning_rate = 1.e-3 #5.e-4
schedule = 'adaptive' # could be adaptive, fixed
gamma = 0.99
lam = 0.95
desired_kl = 0.01
max_grad_norm = 1.
clip_min_std = 1e-15
class runner:
policy_class_name = 'ActorCritic'
algorithm_class_name = 'PPO'
num_steps_per_env = 24 # per iteration
max_iterations = 1500 # number of policy updates
# logging
save_interval = 50 # check for potential saves every this many iterations
experiment_name = 'test'
run_name = ''
# load and resume
resume = False
load_run = -1 # -1 = last run
checkpoint = -1 # -1 = last saved model
resume_path = None # updated from load_run and chkpt
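As an aside on the `tracking_sigma` comment in `LeggedRobotCfg.rewards` (tracking reward = exp(-error^2/sigma)): the scalar sketch below shows only the shape of the curve. The actual reward in the task code sums squared errors over the commanded velocity components, so `tracking_reward` here is purely illustrative:

```python
import math

# Scalar sketch of the velocity-tracking reward shape:
# exp(-error^2 / tracking_sigma), with tracking_sigma = 0.25 as in the config.
def tracking_reward(commanded, measured, tracking_sigma=0.25):
    error = commanded - measured
    return math.exp(-error ** 2 / tracking_sigma)

# perfect tracking yields reward 1.0; the reward decays smoothly with error
assert tracking_reward(1.0, 1.0) == 1.0
assert tracking_reward(1.0, 0.5) < tracking_reward(1.0, 0.9)
```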


@@ -0,0 +1,822 @@
from collections import OrderedDict, defaultdict
import itertools
import numpy as np
from isaacgym.torch_utils import torch_rand_float, get_euler_xyz, quat_from_euler_xyz, tf_apply
from isaacgym import gymtorch, gymapi, gymutil
import torch
from legged_gym.envs.base.legged_robot import LeggedRobot
from legged_gym.utils.terrain import get_terrain_cls
from .legged_robot_config import LeggedRobotCfg
class LeggedRobotField(LeggedRobot):
""" NOTE: Most of this class implementation does not depend on the terrain. Check where
`check_BarrierTrack_terrain` is called to remove the dependency of BarrierTrack terrain.
"""
def __init__(self, cfg: LeggedRobotCfg, sim_params, physics_engine, sim_device, headless):
print("Using LeggedRobotField.__init__, num_obs and num_privileged_obs will be computed instead of assigned.")
        cfg.terrain.measure_heights = True # force height measurement so that the parent class implementation provides the full observations.
super().__init__(cfg, sim_params, physics_engine, sim_device, headless)
##### adds-on with sensors #####
def _create_sensors(self, env_handle=None, actor_handle= None):
sensor_handle_dict = super()._create_sensors(
env_handle= env_handle,
actor_handle= actor_handle,
)
all_obs_components = self.all_obs_components
if "forward_depth" in all_obs_components or "forward_color" in all_obs_components:
camera_handle = self._create_onboard_camera(env_handle, actor_handle, "forward_camera")
sensor_handle_dict["forward_camera"] = camera_handle
return sensor_handle_dict
def _create_onboard_camera(self, env_handle, actor_handle, sensor_name):
camera_props = gymapi.CameraProperties()
camera_props.enable_tensors = True
camera_props.height = getattr(self.cfg.sensor, sensor_name).resolution[0]
camera_props.width = getattr(self.cfg.sensor, sensor_name).resolution[1]
if hasattr(getattr(self.cfg.sensor, sensor_name), "horizontal_fov"):
camera_props.horizontal_fov = np.random.uniform(
getattr(self.cfg.sensor, sensor_name).horizontal_fov[0],
getattr(self.cfg.sensor, sensor_name).horizontal_fov[1],
) if isinstance(getattr(self.cfg.sensor, sensor_name).horizontal_fov, (tuple, list)) else getattr(self.cfg.sensor, sensor_name).horizontal_fov
# vertical_fov = horizontal_fov * camera_props.height / camera_props.width
camera_handle = self.gym.create_camera_sensor(env_handle, camera_props)
local_transform = gymapi.Transform()
if isinstance(getattr(self.cfg.sensor, sensor_name).position, dict):
# allow domain randomization across robots.
# sample from "mean" and "std" attributes.
# each must be a list of 3 elements.
cam_x = np.random.normal(
getattr(self.cfg.sensor, sensor_name).position["mean"][0],
getattr(self.cfg.sensor, sensor_name).position["std"][0],
)
cam_y = np.random.normal(
getattr(self.cfg.sensor, sensor_name).position["mean"][1],
getattr(self.cfg.sensor, sensor_name).position["std"][1],
)
cam_z = np.random.normal(
getattr(self.cfg.sensor, sensor_name).position["mean"][2],
getattr(self.cfg.sensor, sensor_name).position["std"][2],
)
local_transform.p = gymapi.Vec3(cam_x, cam_y, cam_z)
else:
local_transform.p = gymapi.Vec3(*getattr(self.cfg.sensor, sensor_name).position)
if isinstance(getattr(self.cfg.sensor, sensor_name).rotation, dict):
# allow domain randomization across robots
# sample from "lower" and "upper" attributes.
# each must be a list of 3 elements (in radian).
cam_roll = np.random.uniform(0, 1) * (
getattr(self.cfg.sensor, sensor_name).rotation["upper"][0] - \
getattr(self.cfg.sensor, sensor_name).rotation["lower"][0]
) + getattr(self.cfg.sensor, sensor_name).rotation["lower"][0]
cam_pitch = np.random.uniform(0, 1) * (
getattr(self.cfg.sensor, sensor_name).rotation["upper"][1] - \
getattr(self.cfg.sensor, sensor_name).rotation["lower"][1]
) + getattr(self.cfg.sensor, sensor_name).rotation["lower"][1]
cam_yaw = np.random.uniform(0, 1) * (
getattr(self.cfg.sensor, sensor_name).rotation["upper"][2] - \
getattr(self.cfg.sensor, sensor_name).rotation["lower"][2]
) + getattr(self.cfg.sensor, sensor_name).rotation["lower"][2]
local_transform.r = gymapi.Quat.from_euler_zyx(cam_yaw, cam_pitch, cam_roll)
else:
local_transform.r = gymapi.Quat.from_euler_zyx(*getattr(self.cfg.sensor, sensor_name).rotation)
self.gym.attach_camera_to_body(
camera_handle,
env_handle,
actor_handle,
local_transform,
gymapi.FOLLOW_TRANSFORM,
)
return camera_handle
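The per-axis rotation sampling above computes `u * (upper - lower) + lower` with `u ~ U(0, 1)`, which is equivalent to drawing `random.uniform(lower, upper)` per axis. A hypothetical standalone sketch of the same sampling (not the code used here, which draws via `np.random.uniform(0, 1)`):

```python
import random

def sample_rotation(lower, upper):
    """Sample roll/pitch/yaw uniformly within per-axis [lower, upper] bounds."""
    return [random.uniform(lo, up) for lo, up in zip(lower, upper)]

roll, pitch, yaw = sample_rotation([-0.1, -0.2, -0.3], [0.1, 0.2, 0.3])
```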
##### Working on simulation steps #####
def pre_physics_step(self, actions):
self.volume_sample_points_refreshed = False
actions_preprocessed = False
if isinstance(self.cfg.normalization.clip_actions, (tuple, list)):
self.cfg.normalization.clip_actions = torch.tensor(
self.cfg.normalization.clip_actions,
device= self.device,
)
if getattr(self.cfg.normalization, "clip_actions_method", None) == "tanh":
clip_actions = self.cfg.normalization.clip_actions
self.actions = (torch.tanh(actions) * clip_actions).to(self.device)
actions_preprocessed = True
if getattr(self.cfg.normalization, "clip_actions_delta", None) is not None:
self.actions = torch.clip(
self.actions,
self.last_actions - self.cfg.normalization.clip_actions_delta,
self.last_actions + self.cfg.normalization.clip_actions_delta,
)
if not actions_preprocessed:
return super().pre_physics_step(actions)
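The `"tanh"` clipping mode above bounds each action smoothly in `(-clip, clip)` instead of hard clamping, so gradients and extreme network outputs saturate gracefully. A scalar sketch, assuming a hypothetical `tanh_clip` helper (the real code applies `torch.tanh` to the whole action tensor):

```python
import math

# Smooth action clipping: tanh squashes to (-1, 1), then scale by the bound.
def tanh_clip(action, clip=1.5):
    return math.tanh(action) * clip

assert abs(tanh_clip(10.0)) < 1.5              # large inputs saturate below the bound
assert abs(tanh_clip(0.1) - 0.1 * 1.5) < 0.01  # near-linear around zero
```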
def post_physics_step(self):
self.gym.refresh_rigid_body_state_tensor(self.sim)
return super().post_physics_step()
def check_termination(self):
return_ = super().check_termination()
if not hasattr(self.cfg, "termination"): return return_
r, p, y = get_euler_xyz(self.base_quat)
r[r > np.pi] -= np.pi * 2 # to range (-pi, pi)
p[p > np.pi] -= np.pi * 2 # to range (-pi, pi)
z = self.root_states[:, 2] - self.env_origins[:, 2]
if getattr(self.cfg.termination, "check_obstacle_conditioned_threshold", False) and self.check_BarrierTrack_terrain():
if hasattr(self, "volume_sample_points"):
self.refresh_volume_sample_points()
stepping_obstacle_info = self.terrain.get_stepping_obstacle_info(self.volume_sample_points.view(-1, 3))
else:
stepping_obstacle_info = self.terrain.get_stepping_obstacle_info(self.root_states[:, :3])
stepping_obstacle_info = stepping_obstacle_info.view(self.num_envs, -1, stepping_obstacle_info.shape[-1])
            # Assuming each robot is in at most one obstacle (or none).
robot_stepping_obstacle_id = torch.max(stepping_obstacle_info[:, :, 0], dim= -1)[0]
if "roll" in self.cfg.termination.termination_terms:
if "robot_stepping_obstacle_id" in locals():
r_term_buff = torch.abs(r[robot_stepping_obstacle_id == 0]) > \
self.cfg.termination.roll_kwargs["threshold"]
self.reset_buf[robot_stepping_obstacle_id == 0] |= r_term_buff
for obstacle_name, obstacle_id in self.terrain.track_options_id_dict.items():
if (obstacle_name + "_threshold") in self.cfg.termination.roll_kwargs:
env_selection_mask = robot_stepping_obstacle_id == obstacle_id
r_term_buff = torch.abs(r[env_selection_mask]) > \
self.cfg.termination.roll_kwargs[obstacle_name + "_threshold"]
self.reset_buf[env_selection_mask] |= r_term_buff
else:
r_term_buff = torch.abs(r) > self.cfg.termination.roll_kwargs["threshold"]
self.reset_buf |= r_term_buff
if "pitch" in self.cfg.termination.termination_terms:
if "robot_stepping_obstacle_id" in locals():
p_term_buff = torch.abs(p[robot_stepping_obstacle_id == 0]) > \
self.cfg.termination.pitch_kwargs["threshold"]
self.reset_buf[robot_stepping_obstacle_id == 0] |= p_term_buff
for obstacle_name, obstacle_id in self.terrain.track_options_id_dict.items():
if (obstacle_name + "_threshold") in self.cfg.termination.pitch_kwargs:
env_selection_mask = robot_stepping_obstacle_id == obstacle_id
p_term_buff = torch.abs(p[env_selection_mask]) > \
self.cfg.termination.pitch_kwargs[obstacle_name + "_threshold"]
self.reset_buf[env_selection_mask] |= p_term_buff
else:
p_term_buff = torch.abs(p) > self.cfg.termination.pitch_kwargs["threshold"]
self.reset_buf |= p_term_buff
if "z_low" in self.cfg.termination.termination_terms:
if "robot_stepping_obstacle_id" in locals():
z_low_term_buff = z[robot_stepping_obstacle_id == 0] < \
self.cfg.termination.z_low_kwargs["threshold"]
self.reset_buf[robot_stepping_obstacle_id == 0] |= z_low_term_buff
for obstacle_name, obstacle_id in self.terrain.track_options_id_dict.items():
if (obstacle_name + "_threshold") in self.cfg.termination.z_low_kwargs:
env_selection_mask = robot_stepping_obstacle_id == obstacle_id
z_low_term_buff = z[env_selection_mask] < \
self.cfg.termination.z_low_kwargs[obstacle_name + "_threshold"]
self.reset_buf[env_selection_mask] |= z_low_term_buff
else:
z_low_term_buff = z < self.cfg.termination.z_low_kwargs["threshold"]
self.reset_buf |= z_low_term_buff
if "z_high" in self.cfg.termination.termination_terms:
z_high_term_buff = z > self.cfg.termination.z_high_kwargs["threshold"]
self.reset_buf |= z_high_term_buff
if "out_of_track" in self.cfg.termination.termination_terms and self.check_BarrierTrack_terrain():
# the robot is considered dead if it moves sideways off the track
side_distance = self.terrain.get_sidewall_distance(self.root_states[:, :3])
side_diff = torch.abs(side_distance[..., 0] - side_distance[..., 1])
out_of_track_buff = side_diff > self.cfg.termination.out_of_track_kwargs["threshold"]
out_of_track_buff |= side_distance[..., 0] <= 0
out_of_track_buff |= side_distance[..., 1] <= 0
self.reset_buf |= out_of_track_buff
if getattr(self.cfg.termination, "timeout_at_border", False) and self.check_BarrierTrack_terrain():
track_idx = self.terrain.get_track_idx(self.root_states[:, :3], clipped= False)
# The robot is going +x direction, so no checking for row_idx <= 0
out_of_border_buff = track_idx[:, 0] >= self.terrain.cfg.num_rows
out_of_border_buff |= track_idx[:, 1] < 0
out_of_border_buff |= track_idx[:, 1] >= self.terrain.cfg.num_cols
self.time_out_buf |= out_of_border_buff
self.reset_buf |= out_of_border_buff
if getattr(self.cfg.termination, "timeout_at_finished", False) and self.check_BarrierTrack_terrain():
x = self.root_states[:, 0] - self.env_origins[:, 0]
finished_buffer = x > (self.terrain.env_block_length * self.terrain.n_blocks_per_track)
self.time_out_buf |= finished_buffer
self.reset_buf |= finished_buffer
return return_
def _fill_extras(self, env_ids):
return_ = super()._fill_extras(env_ids)
self.extras["episode"]["max_pos_x"] = 0.
self.extras["episode"]["min_pos_x"] = 0.
self.extras["episode"]["max_pos_y"] = 0.
self.extras["episode"]["min_pos_y"] = 0.
self.extras["episode"]["n_obstacle_passed"] = 0.
with torch.no_grad():
pos_x = self.root_states[env_ids, 0] - self.env_origins[env_ids, 0]
self.extras["episode"]["pos_x"] = pos_x
if self.check_BarrierTrack_terrain():
self.extras["episode"]["n_obstacle_passed"] = torch.mean(torch.clip(
torch.div(pos_x, self.terrain.env_block_length, rounding_mode= "floor") - 1,
min= 0.0,
)).cpu()
return return_
def _post_physics_step_callback(self):
return_ = super()._post_physics_step_callback()
with torch.no_grad():
pos_x = self.root_states[:, 0] - self.env_origins[:, 0]
pos_y = self.root_states[:, 1] - self.env_origins[:, 1]
self.extras["episode"]["max_pos_x"] = max(self.extras["episode"]["max_pos_x"], torch.max(pos_x).cpu())
self.extras["episode"]["min_pos_x"] = min(self.extras["episode"]["min_pos_x"], torch.min(pos_x).cpu())
self.extras["episode"]["max_pos_y"] = max(self.extras["episode"]["max_pos_y"], torch.max(pos_y).cpu())
self.extras["episode"]["min_pos_y"] = min(self.extras["episode"]["min_pos_y"], torch.min(pos_y).cpu())
if self.check_BarrierTrack_terrain():
self.extras["episode"]["n_obstacle_passed"] = torch.mean(torch.clip(
torch.div(pos_x, self.terrain.env_block_length, rounding_mode= "floor") - 1,
min= 0.0,
)).cpu()
return return_
def _compute_torques(self, actions):
if hasattr(self, "motor_strength"):
actions = self.motor_strength * actions
return super()._compute_torques(actions)
def _get_terrain_curriculum_move(self, env_ids):
if not (self.cfg.terrain.selected == "BarrierTrack" and self.cfg.terrain.BarrierTrack_kwargs["virtual_terrain"] and hasattr(self, "body_sample_indices")):
if getattr(self.cfg.curriculum, "no_moveup_when_fall", False):
move_up, move_down = super()._get_terrain_curriculum_move(env_ids)
move_up = move_up & self.time_out_buf[env_ids]
return move_up, move_down
else:
return super()._get_terrain_curriculum_move(env_ids)
distance = torch.norm(self.root_states[env_ids, :2] - self.env_origins[env_ids, :2], dim=1)
moved = distance > (self.terrain.env_block_length * 1.5) # robots must travel roughly 1.5 block lengths to count as having moved
passed_depths = self.terrain.get_passed_obstacle_depths(
self.terrain_levels[env_ids],
self.terrain_types[env_ids],
self.volume_sample_points[env_ids, :, 0].max(-1)[0], # choose the sample points that goes the furthest
) + 1e-12
p_v_ok = p_d_ok = 1
p_v_too_much = p_d_too_much = 0
# NOTE: only when penetrate_* reward is computed does this function check the penetration
if "penetrate_volume" in self.episode_sums:
p_v = self.episode_sums["penetrate_volume"][env_ids]
p_v_normalized = p_v / passed_depths / self.reward_scales["penetrate_volume"]
p_v_ok = p_v_normalized < self.cfg.curriculum.penetrate_volume_threshold_harder
p_v_too_much = p_v_normalized > self.cfg.curriculum.penetrate_volume_threshold_easier
if "penetrate_depth" in self.episode_sums:
p_d = self.episode_sums["penetrate_depth"][env_ids]
p_d_normalized = p_d / passed_depths / self.reward_scales["penetrate_depth"]
p_d_ok = p_d_normalized < self.cfg.curriculum.penetrate_depth_threshold_harder
p_d_too_much = p_d_normalized > self.cfg.curriculum.penetrate_depth_threshold_easier
# print("p_v:", p_v_normalized, "p_d:", p_d_normalized)
move_up = p_v_ok * p_d_ok * moved
move_down = ((~moved) + p_v_too_much + p_d_too_much).to(bool)
return move_up, move_down
##### Dealing with observations #####
def _init_buffers(self):
# update obs_scales components in case there will be one-by-one scaling
for k in self.all_obs_components:
if isinstance(getattr(self.obs_scales, k, None), (tuple, list)):
setattr(
self.obs_scales,
k,
torch.tensor(getattr(self.obs_scales, k, 1.), dtype= torch.float32, device= self.device)
)
super()._init_buffers()
rigid_body_state = self.gym.acquire_rigid_body_state_tensor(self.sim)
self.all_rigid_body_states = gymtorch.wrap_tensor(rigid_body_state)
# add sensor dict, which will be filled during create sensor
self.sensor_tensor_dict = defaultdict(list)
for env_i, env_handle in enumerate(self.envs):
if "forward_depth" in self.all_obs_components:
self.sensor_tensor_dict["forward_depth"].append(gymtorch.wrap_tensor(
self.gym.get_camera_image_gpu_tensor(
self.sim,
env_handle,
self.sensor_handles[env_i]["forward_camera"],
gymapi.IMAGE_DEPTH,
)))
if "forward_color" in self.all_obs_components:
self.sensor_tensor_dict["forward_color"].append(gymtorch.wrap_tensor(
self.gym.get_camera_image_gpu_tensor(
self.sim,
env_handle,
self.sensor_handles[env_i]["forward_camera"],
gymapi.IMAGE_COLOR,
)))
def _reset_buffers(self, env_ids):
return_ = super()._reset_buffers(env_ids)
if hasattr(self, "velocity_sample_points"): self.velocity_sample_points[env_ids] = 0.
return return_
def _prepare_reward_function(self):
return_ = super()._prepare_reward_function()
# get the body indices within the simulation (for estimating robot state)
# if "penetrate_volume" in self.reward_names or "penetrate_depth" in self.reward_names:
self._init_volume_sample_points()
print("Total number of volume estimation points for each robot is:", self.volume_sample_points.shape[1])
return return_
def _init_volume_sample_points(self):
""" Build sample points for penetration volume estimation
NOTE: self.cfg.sim.body_measure_points must be a dict with
key: body name (or part of the body name) to estimate
value: dict(
x, y, z: sample points to form a meshgrid
transform: [x, y, z, roll, pitch, yaw] for transforming the meshgrid w.r.t body frame
)
"""
# record the order of the bodies to sample from and their relative sample points.
self.body_measure_name_order = [] # order specified
self.body_sample_indices = []
for idx in range(self.num_envs):
rigid_body_names = self.gym.get_actor_rigid_body_names(self.envs[idx], self.actor_handles[idx])
self.body_sample_indices.append([])
for name, measure_name in itertools.product(rigid_body_names, self.cfg.sim.body_measure_points.keys()):
if measure_name in name:
self.body_sample_indices[-1].append(
self.gym.find_actor_rigid_body_index(
self.envs[idx],
self.actor_handles[idx],
name,
gymapi.IndexDomain.DOMAIN_SIM,
))
if idx == 0: # assuming all envs have the same actor configuration
self.body_measure_name_order.append(measure_name) # order specified
self.body_sample_indices = torch.tensor(self.body_sample_indices, device= self.sim_device).flatten() # n_envs * num_bodies
# compute and store the sample points of each body in its body frame.
self.body_volume_points = dict()
for measure_name, points_cfg in self.cfg.sim.body_measure_points.items():
x = torch.tensor(points_cfg["x"], device= self.device, dtype= torch.float32, requires_grad= False)
y = torch.tensor(points_cfg["y"], device= self.device, dtype= torch.float32, requires_grad= False)
z = torch.tensor(points_cfg["z"], device= self.device, dtype= torch.float32, requires_grad= False)
t = torch.tensor(points_cfg["transform"][0:3], device= self.device, dtype= torch.float32, requires_grad= False)
grid_x, grid_y, grid_z = torch.meshgrid(x, y, z, indexing= "ij")
grid_points = torch.stack([
grid_x.flatten(),
grid_y.flatten(),
grid_z.flatten(),
], dim= -1) # n_points, 3
q = quat_from_euler_xyz(
torch.tensor(points_cfg["transform"][3], device= self.sim_device, dtype= torch.float32),
torch.tensor(points_cfg["transform"][4], device= self.sim_device, dtype= torch.float32),
torch.tensor(points_cfg["transform"][5], device= self.sim_device, dtype= torch.float32),
)
self.body_volume_points[measure_name] = tf_apply(
q.expand(grid_points.shape[0], 4),
t.expand(grid_points.shape[0], 3),
grid_points,
)
num_sample_points_per_env = 0
for body_name in self.body_measure_name_order:
for measure_name in self.body_volume_points.keys():
if measure_name in body_name:
num_sample_points_per_env += self.body_volume_points[measure_name].shape[0]
self.volume_sample_points = torch.zeros(
(self.num_envs, num_sample_points_per_env, 3),
device= self.device,
dtype= torch.float32,
)
self.velocity_sample_points = torch.zeros(
(self.num_envs, num_sample_points_per_env, 3),
device= self.device,
dtype= torch.float32,
)
def _get_proprioception_obs(self, privileged= False):
return self.obs_super_impl[:, :48]
def _get_height_measurements_obs(self, privileged= False):
return self.obs_super_impl[:, 48:235]
def _get_forward_depth_obs(self, privileged= False):
return torch.stack(self.sensor_tensor_dict["forward_depth"]).flatten(start_dim= 1)
def _get_base_pose_obs(self, privileged= False):
roll, pitch, yaw = get_euler_xyz(self.root_states[:, 3:7])
roll[roll > np.pi] -= np.pi * 2 # to range (-pi, pi)
pitch[pitch > np.pi] -= np.pi * 2 # to range (-pi, pi)
yaw[yaw > np.pi] -= np.pi * 2 # to range (-pi, pi)
return torch.cat([
self.root_states[:, :3] - self.env_origins,
torch.stack([roll, pitch, yaw], dim= -1),
], dim= -1)
def _get_robot_config_obs(self, privileged= False):
return self.robot_config_buffer
def _get_engaging_block_obs(self, privileged= False):
""" Compute the obstacle info for the robot """
if not self.check_BarrierTrack_terrain():
# This could be wrong, check BarrierTrack implementation to get the exact shape.
return torch.zeros((self.num_envs, (1 + (4 + 1) + 2)), device= self.sim_device)
base_positions = self.root_states[:, 0:3] # (n_envs, 3)
self.refresh_volume_sample_points()
return self.terrain.get_engaging_block_info(
base_positions,
self.volume_sample_points - base_positions.unsqueeze(-2), # (n_envs, n_points, 3)
)
def _get_sidewall_distance_obs(self, privileged= False):
if not self.check_BarrierTrack_terrain():
return torch.zeros((self.num_envs, 2), device= self.sim_device)
base_positions = self.root_states[:, 0:3] # (n_envs, 3)
return self.terrain.get_sidewall_distance(base_positions)
def _get_obs_from_components(self, components: list, privileged= False):
obs_segments = self.get_obs_segment_from_components(components)
obs = []
for k, v in obs_segments.items():
if k == "proprioception":
obs.append(self._get_proprioception_obs(privileged))
elif k == "height_measurements":
obs.append(self._get_height_measurements_obs(privileged))
else:
# get the observation from specific component name
# such as "_get_forward_depth_obs"
obs.append(
getattr(self, "_get_" + k + "_obs")(privileged) * \
getattr(self.obs_scales, k, 1.)
)
obs = torch.cat(obs, dim= 1)
return obs
def compute_observations(self):
for key in self.sensor_handles[0].keys():
if "camera" in key:
# NOTE: Different from the documentation and examples from isaacgym
# gym.fetch_results() must be called before gym.start_access_image_tensors()
# refer to https://forums.developer.nvidia.com/t/camera-example-and-headless-mode/178901/10
self.gym.fetch_results(self.sim, True)
self.gym.step_graphics(self.sim)
self.gym.render_all_camera_sensors(self.sim)
self.gym.start_access_image_tensors(self.sim)
break
add_noise = self.add_noise
self.add_noise = False # temporarily disable noise; it is re-applied below on the re-assembled obs
return_ = super().compute_observations() # currently self.obs_buf is a mess
self.obs_super_impl = self.obs_buf
self.add_noise = add_noise
# actor obs
self.obs_buf = self._get_obs_from_components(
self.cfg.env.obs_components,
privileged= False,
)
if self.add_noise:
self.obs_buf += (2 * torch.rand_like(self.obs_buf) - 1) * self.noise_scale_vec
# critic obs
if self.num_privileged_obs is not None:
self.privileged_obs_buf[:] = self._get_obs_from_components(
self.cfg.env.privileged_obs_components,
privileged= getattr(self.cfg.env, "privileged_obs_gets_privilege", False),
)
# fixing linear velocity in proprioception observation
if "proprioception" in getattr(self.cfg.env, "privileged_obs_components", []) \
and getattr(self.cfg.env, "privileged_use_lin_vel", False):
# NOTE: according to self.get_obs_segment_from_components, "proprioception" observation
# is always the first part of this flattened observation. check super().compute_observations
# and self.cfg.env.use_lin_vel for the reason of this if branch.
self.privileged_obs_buf[:, :3] = self.base_lin_vel * self.obs_scales.lin_vel
for key in self.sensor_handles[0].keys():
if "camera" in key:
self.gym.end_access_image_tensors(self.sim)
break
return return_
def _get_noise_scale_vec(self, cfg):
noise_vec = torch.zeros_like(self.obs_buf[0])
self.add_noise = cfg.noise.add_noise
segment_start_idx = 0
obs_segments = self.get_obs_segment_from_components(cfg.env.obs_components)
# write noise for each corresponding component.
for k, v in obs_segments.items():
segment_length = np.prod(v)
# write sensor scale to provided noise_vec
# for example "_write_forward_depth_noise"
getattr(self, "_write_" + k + "_noise")(noise_vec[segment_start_idx: segment_start_idx + segment_length])
segment_start_idx += segment_length
return noise_vec
def _write_forward_depth_noise(self, noise_vec):
noise_vec[:] = self.cfg.noise.noise_scales.forward_depth * self.cfg.noise.noise_level * self.obs_scales.forward_depth
def _write_base_pose_noise(self, noise_vec):
if not hasattr(self.cfg.noise.noise_scales, "base_pose"):
return
noise_vec[:] = self.cfg.noise.noise_scales.base_pose * self.cfg.noise.noise_level * self.obs_scales.base_pose
def _write_robot_config_noise(self, noise_vec):
if not hasattr(self.cfg.noise.noise_scales, "robot_config"):
return
noise_vec[:] = self.cfg.noise.noise_scales.robot_config * self.cfg.noise.noise_level * self.obs_scales.robot_config
def _write_engaging_block_noise(self, noise_vec):
if not hasattr(self.cfg.noise.noise_scales, "engaging_block"):
return
noise_vec[:] = self.cfg.noise.noise_scales.engaging_block * self.cfg.noise.noise_level * self.obs_scales.engaging_block
def _write_sidewall_distance_noise(self, noise_vec):
if not hasattr(self.cfg.noise.noise_scales, "sidewall_distance"):
return
noise_vec[:] = self.cfg.noise.noise_scales.sidewall_distance * self.cfg.noise.noise_level * self.obs_scales.sidewall_distance
##### adds-on with building the environment #####
def _create_terrain(self):
""" Using cfg.terrain.selected to identify terrain class """
if not isinstance(self.cfg.terrain.selected, str):
return super()._create_terrain()
terrain_cls = self.cfg.terrain.selected
self.terrain = get_terrain_cls(terrain_cls)(self.cfg.terrain, self.num_envs)
self.terrain.add_terrain_to_sim(self.gym, self.sim, self.device)
self.height_samples = torch.tensor(self.terrain.heightsamples).view(self.terrain.tot_rows, self.terrain.tot_cols).to(self.device)
def _create_envs(self):
if self.cfg.domain_rand.randomize_motor:
self.motor_strength = torch_rand_float(
self.cfg.domain_rand.leg_motor_strength_range[0],
self.cfg.domain_rand.leg_motor_strength_range[1],
(self.num_envs, 12),
device=self.device,
)
return super()._create_envs()
def _process_rigid_shape_props(self, props, env_id):
props = super()._process_rigid_shape_props(props, env_id)
if env_id == 0:
all_obs_components = self.all_obs_components
if "robot_config" in all_obs_components:
all_obs_components
self.robot_config_buffer = torch.empty(
self.num_envs, 1 + 3 + 1 + 12,
dtype= torch.float32,
device= self.device,
)
if hasattr(self, "robot_config_buffer"):
self.robot_config_buffer[env_id, 0] = props[0].friction
return props
def _process_dof_props(self, props, env_id):
props = super()._process_dof_props(props, env_id)
if env_id == 0:
if hasattr(self.cfg.control, "torque_limits"):
if not isinstance(self.cfg.control.torque_limits, (tuple, list)):
self.torque_limits = torch.ones(self.num_dof, dtype= torch.float, device= self.device, requires_grad= False)
self.torque_limits *= self.cfg.control.torque_limits
else:
self.torque_limits = torch.tensor(self.cfg.control.torque_limits, dtype= torch.float, device= self.device, requires_grad= False)
return props
def _process_rigid_body_props(self, props, env_id):
props = super()._process_rigid_body_props(props, env_id)
if self.cfg.domain_rand.randomize_com:
rng_com_x = self.cfg.domain_rand.com_range.x
rng_com_y = self.cfg.domain_rand.com_range.y
rng_com_z = self.cfg.domain_rand.com_range.z
rand_com = np.random.uniform(
[rng_com_x[0], rng_com_y[0], rng_com_z[0]],
[rng_com_x[1], rng_com_y[1], rng_com_z[1]],
size=(3,),
)
props[0].com += gymapi.Vec3(*rand_com)
if hasattr(self, "robot_config_buffer"):
self.robot_config_buffer[env_id, 1] = props[0].com.x
self.robot_config_buffer[env_id, 2] = props[0].com.y
self.robot_config_buffer[env_id, 3] = props[0].com.z
self.robot_config_buffer[env_id, 4] = props[0].mass
self.robot_config_buffer[env_id, 5:5+12] = self.motor_strength[env_id] if hasattr(self, "motor_strength") else 1.
return props
def _get_env_origins(self):
super()._get_env_origins()
self.custom_origins = True
def _draw_sensor_vis(self, env_h, sensor_hd):
for sensor_name, sensor_h in sensor_hd.items():
if "camera" in sensor_name:
camera_transform = self.gym.get_camera_transform(self.sim, env_h, sensor_h)
cam_axes = gymutil.AxesGeometry(scale= 0.1)
gymutil.draw_lines(cam_axes, self.gym, self.viewer, env_h, camera_transform)
def _draw_debug_vis(self):
if not "height_measurements" in self.all_obs_components:
measure_heights_tmp = self.terrain.cfg.measure_heights
self.terrain.cfg.measure_heights = False
return_ = super()._draw_debug_vis()
self.terrain.cfg.measure_heights = measure_heights_tmp
else:
return_ = super()._draw_debug_vis()
if self.cfg.terrain.selected == "BarrierTrack":
self.terrain.draw_virtual_terrain(self.viewer)
if hasattr(self, "volume_sample_points") and self.cfg.viewer.draw_volume_sample_points:
self.draw_volume_sample_points()
for env_h, sensor_hd in zip(self.envs, self.sensor_handles):
self._draw_sensor_vis(env_h, sensor_hd)
return return_
##### defines observation segments, which tells the order of the entire flattened obs #####
def get_obs_segment_from_components(self, components):
""" Observation segment is defined as a list of lists/ints defining the tensor shape with
corresponding order.
"""
segments = OrderedDict()
if "proprioception" in components:
segments["proprioception"] = (48,)
if "height_measurements" in components:
segments["height_measurements"] = (187,)
if "forward_depth" in components:
segments["forward_depth"] = (1, *self.cfg.sensor.forward_camera.resolution)
if "base_pose" in components:
segments["base_pose"] = (6,) # xyz + rpy
if "robot_config" in components:
""" Related to robot_config_buffer attribute, Be careful to change. """
# robot shape friction
# CoM (Center of Mass) x, y, z
# base mass (payload)
# motor strength for each joint
segments["robot_config"] = (1 + 3 + 1 + 12,)
if "engaging_block" in components:
if not self.check_BarrierTrack_terrain():
# This could be wrong, please check the implementation of BarrierTrack
segments["engaging_block"] = (1 + (4 + 1) + 2,)
else:
segments["engaging_block"] = get_terrain_cls("BarrierTrack").get_engaging_block_info_shape(self.cfg.terrain)
if "sidewall_distance" in components:
self.check_BarrierTrack_terrain() # asserts BarrierTrack terrain unless cfg.terrain.pad_unavailable_info allows padding
segments["sidewall_distance"] = (2,)
return segments
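# For illustration (hypothetical component list): components = ["proprioception", "base_pose"]
# returns OrderedDict(proprioception= (48,), base_pose= (6,)), i.e. a flattened
# observation of length 48 + 6 = 54 with proprioception first.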
def get_num_obs_from_components(self, components):
obs_segments = self.get_obs_segment_from_components(components)
num_obs = 0
for k, v in obs_segments.items():
num_obs += np.prod(v)
return num_obs
def refresh_volume_sample_points(self):
if self.volume_sample_points_refreshed:
return
sampled_body_pos = self.all_rigid_body_states[self.body_sample_indices, :3].view(self.num_envs, -1, 3)
sampled_body_quat = self.all_rigid_body_states[self.body_sample_indices, 3:7].view(self.num_envs, -1, 4)
sample_points_start_idx = 0
for body_idx, body_measure_name in enumerate(self.body_measure_name_order):
num_volume_points = self.body_volume_points[body_measure_name].shape[0]
point_positions = tf_apply(
sampled_body_quat[:, body_idx].unsqueeze(1).expand(-1, num_volume_points, -1),
sampled_body_pos[:, body_idx].unsqueeze(1).expand(-1, num_volume_points, -1),
self.body_volume_points[body_measure_name].unsqueeze(0).expand(self.num_envs, -1, -1),
) # (num_envs, num_volume_points, 3)
valid_velocity_mask = self.episode_length_buf > 0
self.velocity_sample_points[valid_velocity_mask, sample_points_start_idx: sample_points_start_idx + num_volume_points] = \
(point_positions[valid_velocity_mask] - self.volume_sample_points[valid_velocity_mask, sample_points_start_idx: sample_points_start_idx + num_volume_points]) / self.dt
self.volume_sample_points[:, sample_points_start_idx: sample_points_start_idx + num_volume_points] = point_positions
sample_points_start_idx += num_volume_points
self.volume_sample_points_refreshed = True
def draw_volume_sample_points(self):
sphere_geom = gymutil.WireframeSphereGeometry(0.005, 4, 4, None, color=(1, 0.1, 0))
for env_idx in range(self.num_envs):
for point_idx in range(self.volume_sample_points.shape[1]):
sphere_pose = gymapi.Transform(gymapi.Vec3(*self.volume_sample_points[env_idx, point_idx]), r= None)
gymutil.draw_lines(sphere_geom, self.gym, self.viewer, self.envs[env_idx], sphere_pose)
##### Additional rewards #####
def _reward_lin_vel_l2norm(self):
return torch.norm((self.commands[:, :2] - self.base_lin_vel[:, :2]), dim= 1)
def _reward_world_vel_l2norm(self):
return torch.norm((self.commands[:, :2] - self.root_states[:, 7:9]), dim= 1)
def _reward_legs_energy(self):
return torch.sum(torch.square(self.torques * self.dof_vel), dim=1)
def _reward_legs_energy_substeps(self):
# (n_envs, n_substeps, n_dofs)
# square sum -> (n_envs, n_substeps)
# mean -> (n_envs,)
return torch.mean(torch.sum(torch.square(self.substep_torques * self.substep_dof_vel), dim=-1), dim=-1)
def _reward_legs_energy_abs(self):
return torch.sum(torch.abs(self.torques * self.dof_vel), dim=1)
def _reward_alive(self):
return 1.
def _reward_lin_cmd(self):
""" This reward term does not depend on the policy, depends on the command """
return torch.norm(self.commands[:, :2], dim= 1)
def _reward_lin_vel_x(self):
return self.root_states[:, 7]
def _reward_lin_vel_y_abs(self):
return torch.abs(self.root_states[:, 8])
def _reward_lin_vel_y_square(self):
return torch.square(self.root_states[:, 8])
def _reward_lin_pos_y(self):
return torch.abs((self.root_states[:, :3] - self.env_origins)[:, 1])
def _reward_yaw_abs(self):
""" Aiming for the robot yaw to be zero (pointing to the positive x-axis) """
yaw = get_euler_xyz(self.root_states[:, 3:7])[2]
yaw[yaw > np.pi] -= np.pi * 2 # to range (-pi, pi)
yaw[yaw < -np.pi] += np.pi * 2 # to range (-pi, pi)
return torch.abs(yaw)
def _reward_penetrate_depth(self):
if not self.check_BarrierTrack_terrain(): return torch.zeros_like(self.root_states[:, 0])
self.refresh_volume_sample_points()
penetration_depths = self.terrain.get_penetration_depths(self.volume_sample_points.view(-1, 3)).view(self.num_envs, -1)
penetration_depths *= torch.norm(self.velocity_sample_points, dim= -1) + 1e-3
return torch.sum(penetration_depths, dim= -1)
def _reward_penetrate_volume(self):
if not self.check_BarrierTrack_terrain(): return torch.zeros_like(self.root_states[:, 0])
self.refresh_volume_sample_points()
penetration_mask = self.terrain.get_penetration_mask(self.volume_sample_points.view(-1, 3)).view(self.num_envs, -1)
penetration_mask *= torch.norm(self.velocity_sample_points, dim= -1) + 1e-3
return torch.sum(penetration_mask, dim= -1)
##### Some helper functions that override parent class attributes #####
@property
def all_obs_components(self):
components = set(self.cfg.env.obs_components)
if getattr(self.cfg.env, "privileged_obs_components", None):
components.update(self.cfg.env.privileged_obs_components)
return components
@property
def obs_segments(self):
return self.get_obs_segment_from_components(self.cfg.env.obs_components)
@property
def privileged_obs_segments(self):
components = getattr(
self.cfg.env,
"privileged_obs_components",
None
)
if components is None:
return None
else:
return self.get_obs_segment_from_components(components)
@property
def num_obs(self):
""" get this value from self.cfg.env """
assert "proprioception" in self.cfg.env.obs_components, "missing critical observation component 'proprioception'"
return self.get_num_obs_from_components(self.cfg.env.obs_components)
@num_obs.setter
def num_obs(self, value):
""" avoid setting self.num_obs """
pass
@property
def num_privileged_obs(self):
""" get this value from self.cfg.env """
components = getattr(
self.cfg.env,
"privileged_obs_components",
None
)
if components is None:
return None
else:
return self.get_num_obs_from_components(components)
@num_privileged_obs.setter
def num_privileged_obs(self, value):
""" avoid setting self.num_privileged_obs """
pass
def check_BarrierTrack_terrain(self):
if getattr(self.cfg.terrain, "pad_unavailable_info", False):
return self.cfg.terrain.selected == "BarrierTrack"
assert self.cfg.terrain.selected == "BarrierTrack", "This implementation is only for BarrierTrack terrain"
return True

import random
from isaacgym.torch_utils import torch_rand_float, get_euler_xyz, quat_from_euler_xyz, tf_apply
from isaacgym import gymtorch, gymapi, gymutil
import torch
import torch.nn.functional as F
import torchvision.transforms as T
from legged_gym.envs.base.legged_robot_field import LeggedRobotField
class LeggedRobotNoisy(LeggedRobotField):
""" This class should be independent from the terrain, but depend on the sensors of the parent
class.
"""
def clip_position_action_by_torque_limit(self, actions_scaled):
""" For position control, scaled actions should be in the coordinate of robot default dof pos
"""
if hasattr(self, "proprioception_output"):
dof_vel = self.proprioception_output[:, -24:-12] / self.obs_scales.dof_vel
dof_pos_ = self.proprioception_output[:, -36:-24] / self.obs_scales.dof_pos
else:
dof_vel = self.dof_vel
dof_pos_ = self.dof_pos - self.default_dof_pos
p_limits_low = (-self.torque_limits) + self.d_gains*dof_vel
p_limits_high = (self.torque_limits) + self.d_gains*dof_vel
actions_low = (p_limits_low/self.p_gains) + dof_pos_
actions_high = (p_limits_high/self.p_gains) + dof_pos_
actions_scaled_torque_clipped = torch.clip(actions_scaled, actions_low, actions_high)
return actions_scaled_torque_clipped
def pre_physics_step(self, actions):
self.forward_depth_refreshed = False # in case _get_forward_depth_obs is called multiple times
self.proprioception_refreshed = False
return_ = super().pre_physics_step(actions)
if isinstance(self.cfg.control.action_scale, (tuple, list)):
self.cfg.control.action_scale = torch.tensor(self.cfg.control.action_scale, device= self.sim_device)
if getattr(self.cfg.control, "computer_clip_torque", False):
self.actions_scaled = self.actions * self.cfg.control.action_scale
control_type = self.cfg.control.control_type
if control_type == "P":
self.actions_scaled_torque_clipped = self.clip_position_action_by_torque_limit(self.actions_scaled)
else:
raise NotImplementedError
else:
self.actions_scaled_torque_clipped = self.actions * self.cfg.control.action_scale
return return_
def _compute_torques(self, actions):
""" The input actions will not be used, instead the scaled clipped actions will be used.
Please check the computation logic whenever you change anything.
"""
if not hasattr(self.cfg.control, "motor_clip_torque"):
return super()._compute_torques(actions)
else:
if hasattr(self, "motor_strength"):
actions_scaled_torque_clipped = self.motor_strength * self.actions_scaled_torque_clipped
else:
actions_scaled_torque_clipped = self.actions_scaled_torque_clipped
control_type = self.cfg.control.control_type
if control_type == "P":
torques = self.p_gains * (actions_scaled_torque_clipped + self.default_dof_pos - self.dof_pos) \
- self.d_gains * self.dof_vel
else:
raise NotImplementedError
if self.cfg.control.motor_clip_torque:
torques = torch.clip(
torques,
-self.torque_limits * self.cfg.control.motor_clip_torque,
self.torque_limits * self.cfg.control.motor_clip_torque,
)
return torques
def post_decimation_step(self, dec_i):
return_ = super().post_decimation_step(dec_i)
self.max_torques = torch.maximum(
torch.max(torch.abs(self.torques), dim= -1)[0],
self.max_torques,
)
self.torque_exceed_count_substep[(torch.abs(self.torques) > self.torque_limits).any(dim= -1)] += 1
### count how many times in the episode the robot is out of dof pos limit (summing all dofs)
self.out_of_dof_pos_limit_count_substep += self._reward_dof_pos_limits().int()
### or using a1_const.h value to check whether the robot is out of dof pos limit
# joint_pos_limit_high = torch.tensor([0.802, 4.19, -0.916] * 4, device= self.device) - 0.001
# joint_pos_limit_low = torch.tensor([-0.802, -1.05, -2.7] * 4, device= self.device) + 0.001
# self.out_of_dof_pos_limit_count_substep += (self.dof_pos > joint_pos_limit_high.unsqueeze(0)).sum(-1).int()
# self.out_of_dof_pos_limit_count_substep += (self.dof_pos < joint_pos_limit_low.unsqueeze(0)).sum(-1).int()
return return_
def _fill_extras(self, env_ids):
return_ = super()._fill_extras(env_ids)
self.extras["episode"]["max_torques"] = self.max_torques[env_ids]
self.max_torques[env_ids] = 0.
self.extras["episode"]["torque_exceed_count_substeps_per_envstep"] = self.torque_exceed_count_substep[env_ids] / self.episode_length_buf[env_ids]
self.torque_exceed_count_substep[env_ids] = 0
self.extras["episode"]["torque_exceed_count_envstep"] = self.torque_exceed_count_envstep[env_ids]
self.torque_exceed_count_envstep[env_ids] = 0
self.extras["episode"]["out_of_dof_pos_limit_count_substep"] = self.out_of_dof_pos_limit_count_substep[env_ids] / self.episode_length_buf[env_ids]
self.out_of_dof_pos_limit_count_substep[env_ids] = 0
return return_
def _post_physics_step_callback(self):
super()._post_physics_step_callback()
if hasattr(self, "proprioception_buffer"):
resampling_time = getattr(self.cfg.sensor.proprioception, "latency_resampling_time", self.dt)
resample_env_ids = (self.episode_length_buf % int(resampling_time / self.dt) == 0).nonzero(as_tuple= False).flatten()
if len(resample_env_ids) > 0:
self._resample_proprioception_latency(resample_env_ids)
if hasattr(self, "forward_depth_buffer"):
resampling_time = getattr(self.cfg.sensor.forward_camera, "latency_resampling_time", self.dt)
resample_env_ids = (self.episode_length_buf % int(resampling_time / self.dt) == 0).nonzero(as_tuple= False).flatten()
if len(resample_env_ids) > 0:
self._resample_forward_camera_latency(resample_env_ids)
self.torque_exceed_count_envstep[(torch.abs(self.substep_torques) > self.torque_limits).any(dim= 1).any(dim= 1)] += 1
def _resample_proprioception_latency(self, env_ids):
self.current_proprioception_latency[env_ids] = torch_rand_float(
self.cfg.sensor.proprioception.latency_range[0],
self.cfg.sensor.proprioception.latency_range[1],
(len(env_ids), 1),
device= self.device,
).flatten()
def _resample_forward_camera_latency(self, env_ids):
self.current_forward_camera_latency[env_ids] = torch_rand_float(
self.cfg.sensor.forward_camera.latency_range[0],
self.cfg.sensor.forward_camera.latency_range[1],
(len(env_ids), 1),
device= self.device,
).flatten()
def _init_buffers(self):
return_ = super()._init_buffers()
all_obs_components = self.all_obs_components
if "proprioception" in all_obs_components and hasattr(self.cfg.sensor, "proprioception"):
""" Adding proprioception delay buffer """
self.cfg.sensor.proprioception.buffer_length = int((self.cfg.sensor.proprioception.latency_range[1] + self.dt) / self.dt)
self.proprioception_buffer = torch.zeros(
(
self.cfg.sensor.proprioception.buffer_length,
self.num_envs,
self.get_num_obs_from_components(["proprioception"]),
),
dtype= torch.float32,
device= self.device,
)
self.proprioception_delayed_frames = torch.ones((self.num_envs,), device= self.device, dtype= int) * self.cfg.sensor.proprioception.buffer_length
self.current_proprioception_latency = torch_rand_float(
self.cfg.sensor.proprioception.latency_range[0],
self.cfg.sensor.proprioception.latency_range[1],
(self.num_envs, 1),
device= self.device,
).flatten()
if "forward_depth" in all_obs_components and hasattr(self.cfg.sensor, "forward_camera"):
output_resolution = getattr(self.cfg.sensor.forward_camera, "output_resolution", self.cfg.sensor.forward_camera.resolution)
self.cfg.sensor.forward_camera.buffer_length = int((self.cfg.sensor.forward_camera.latency_range[1] + self.cfg.sensor.forward_camera.refresh_duration) / self.dt)
self.forward_depth_buffer = torch.zeros(
(
self.cfg.sensor.forward_camera.buffer_length,
self.num_envs,
1,
output_resolution[0],
output_resolution[1],
),
dtype= torch.float32,
device= self.device,
)
self.forward_depth_delayed_frames = torch.ones((self.num_envs,), device= self.device, dtype= int) * self.cfg.sensor.forward_camera.buffer_length
self.current_forward_camera_latency = torch_rand_float(
self.cfg.sensor.forward_camera.latency_range[0],
self.cfg.sensor.forward_camera.latency_range[1],
(self.num_envs, 1),
device= self.device,
).flatten()
if hasattr(self.cfg.sensor.forward_camera, "resized_resolution"):
self.forward_depth_resize_transform = T.Resize(
self.cfg.sensor.forward_camera.resized_resolution,
interpolation= T.InterpolationMode.BICUBIC,
)
self.contour_detection_kernel = torch.zeros(
(8, 1, 3, 3),
dtype= torch.float32,
device= self.device,
)
# empirical values, tuned to be more sensitive to vertical edges
self.contour_detection_kernel[0, :, 1, 1] = 0.5
self.contour_detection_kernel[0, :, 0, 0] = -0.5
self.contour_detection_kernel[1, :, 1, 1] = 0.1
self.contour_detection_kernel[1, :, 0, 1] = -0.1
self.contour_detection_kernel[2, :, 1, 1] = 0.5
self.contour_detection_kernel[2, :, 0, 2] = -0.5
self.contour_detection_kernel[3, :, 1, 1] = 1.2
self.contour_detection_kernel[3, :, 1, 0] = -1.2
self.contour_detection_kernel[4, :, 1, 1] = 1.2
self.contour_detection_kernel[4, :, 1, 2] = -1.2
self.contour_detection_kernel[5, :, 1, 1] = 0.5
self.contour_detection_kernel[5, :, 2, 0] = -0.5
self.contour_detection_kernel[6, :, 1, 1] = 0.1
self.contour_detection_kernel[6, :, 2, 1] = -0.1
self.contour_detection_kernel[7, :, 1, 1] = 0.5
self.contour_detection_kernel[7, :, 2, 2] = -0.5
self.max_torques = torch.zeros_like(self.torques[..., 0])
self.torque_exceed_count_substep = torch.zeros_like(self.torques[..., 0], dtype= torch.int32) # The number of substeps that the torque exceeds the limit
self.torque_exceed_count_envstep = torch.zeros_like(self.torques[..., 0], dtype= torch.int32) # The number of envsteps that the torque exceeds the limit
self.out_of_dof_pos_limit_count_substep = torch.zeros_like(self.torques[..., 0], dtype= torch.int32) # The number of substeps that the dof pos exceeds the limit
return return_
def _reset_buffers(self, env_ids):
return_ = super()._reset_buffers(env_ids)
if hasattr(self, "forward_depth_buffer"):
self.forward_depth_buffer[:, env_ids] = 0.
self.forward_depth_delayed_frames[env_ids] = self.cfg.sensor.forward_camera.buffer_length
if hasattr(self, "proprioception_buffer"):
self.proprioception_buffer[:, env_ids] = 0.
self.proprioception_delayed_frames[env_ids] = self.cfg.sensor.proprioception.buffer_length
return return_
def _draw_debug_vis(self):
return_ = super()._draw_debug_vis()
if hasattr(self, "forward_depth_output"):
if self.num_envs == 1:
import matplotlib.pyplot as plt
forward_depth_np = self.forward_depth_output[0, 0].detach().cpu().numpy() # (H, W)
plt.imshow(forward_depth_np, cmap= "gray", vmin= 0, vmax= 1)
plt.pause(0.001)
else:
print("LeggedRobotNoisy: More than one robot, stop showing camera image")
return return_
""" Steps to simulate stereo camera depth image """
def _add_depth_contour(self, depth_images):
mask = F.max_pool2d(
torch.abs(F.conv2d(depth_images, self.contour_detection_kernel, padding= 1)).max(dim= -3, keepdim= True)[0],
kernel_size= self.cfg.noise.forward_depth.contour_detection_kernel_size,
stride= 1,
padding= int(self.cfg.noise.forward_depth.contour_detection_kernel_size / 2),
) > self.cfg.noise.forward_depth.contour_threshold
depth_images[mask] = 0.
return depth_images
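`_add_depth_contour` zeroes depth pixels near strong discontinuities, where real stereo matching tends to fail. A simplified NumPy sketch of the same idea, using plain finite differences instead of the directional kernels (names are illustrative):

```python
import numpy as np

def zero_depth_contours(depth, threshold):
    # finite-difference edge response as a stand-in for the directional conv kernels
    gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1]))
    gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    out = depth.copy()
    out[np.maximum(gx, gy) > threshold] = 0.0  # invalidate pixels on sharp edges
    return out

img = np.array([[1.0, 1.0, 3.0],
                [1.0, 1.0, 3.0]])
out = zero_depth_contours(img, threshold=1.0)  # the 1 -> 3 jump is zeroed
```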
@torch.no_grad()
def form_artifacts(self,
H, W, # image resolution
tops, bottoms, # artifacts positions (in pixel) shape (n_,)
lefts, rights,
):
""" Paste an artifact to the depth image.
NOTE: Using the paradigm of spatial transformer network to build the artifacts of the
entire depth image.
"""
batch_size = tops.shape[0]
tops, bottoms = tops[:, None, None], bottoms[:, None, None]
lefts, rights = lefts[:, None, None], rights[:, None, None]
# build the source patch
source_patch = torch.zeros((batch_size, 1, 25, 25), device= self.device)
source_patch[:, :, 1:24, 1:24] = 1.
# build the grid
grid = torch.zeros((batch_size, H, W, 2), device= self.device)
grid[..., 0] = torch.linspace(-1, 1, W, device= self.device).view(1, 1, W)
grid[..., 1] = torch.linspace(-1, 1, H, device= self.device).view(1, H, 1)
grid[..., 0] = (grid[..., 0] * W + W - rights - lefts) / (rights - lefts)
grid[..., 1] = (grid[..., 1] * H + H - bottoms - tops) / (bottoms - tops)
# sample using the grid and form the artifacts for the entire depth image
artifacts = torch.clip(
F.grid_sample(
source_patch,
grid,
mode= "bilinear",
padding_mode= "zeros",
align_corners= False,
).sum(dim= 0).view(H, W),
0, 1,
)
return artifacts
def _add_depth_artifacts(self, depth_images,
artifacts_prob,
artifacts_height_mean_std,
artifacts_width_mean_std,
):
""" Simulate artifacts from stereo depth camera. In the final artifacts_mask, where there
should be an artifacts, the mask is 1.
"""
N, _, H, W = depth_images.shape
def _clip(x, dim):
return torch.clip(x, 0., (H, W)[dim])
# random patched artifacts
artifacts_mask = torch_rand_float(
0., 1.,
(N, H * W),
device= self.device,
).view(N, H, W) < artifacts_prob
artifacts_mask = artifacts_mask & (depth_images[:, 0] > 0.)
artifacts_coord = torch.nonzero(artifacts_mask).to(torch.float32) # (n_, 3) n_ <= N * H * W
artifacts_size = (
torch.clip(
artifacts_height_mean_std[0] + torch.randn(
(artifacts_coord.shape[0],),
device= self.device,
) * artifacts_height_mean_std[1],
0., H,
),
torch.clip(
artifacts_width_mean_std[0] + torch.randn(
(artifacts_coord.shape[0],),
device= self.device,
) * artifacts_width_mean_std[1],
0., W,
),
) # (n_,), (n_,)
artifacts_top_left = (
_clip(artifacts_coord[:, 1] - artifacts_size[0] / 2, 0),
_clip(artifacts_coord[:, 2] - artifacts_size[1] / 2, 1),
)
artifacts_bottom_right = (
_clip(artifacts_coord[:, 1] + artifacts_size[0] / 2, 0),
_clip(artifacts_coord[:, 2] + artifacts_size[1] / 2, 1),
)
)
for i in range(N):
# NOTE: keep the number of artifact points as small as possible; this loop runs once per env
artifacts_mask = self.form_artifacts(
H, W,
artifacts_top_left[0][artifacts_coord[:, 0] == i],
artifacts_bottom_right[0][artifacts_coord[:, 0] == i],
artifacts_top_left[1][artifacts_coord[:, 0] == i],
artifacts_bottom_right[1][artifacts_coord[:, 0] == i],
)
depth_images[i] *= (1 - artifacts_mask)
return depth_images
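The grid_sample machinery above effectively pastes soft rectangular blobs at sampled pixel locations. The net effect can be sketched with hard rectangles in NumPy (a simplification: the real version produces anti-aliased patches in one batched call):

```python
import numpy as np

def paste_rect_artifacts(depth, tops, bottoms, lefts, rights):
    # zero out one rectangular patch per sampled artifact
    out = depth.copy()
    for t, b, l, r in zip(tops, bottoms, lefts, rights):
        out[int(t):int(b), int(l):int(r)] = 0.0
    return out

img = np.ones((6, 6))
out = paste_rect_artifacts(img, tops=[1], bottoms=[3], lefts=[2], rights=[5])
```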
def _recognize_top_down_too_close(self, too_close_mask):
""" Based on real D435i image pattern, there are two situations when pixels are too close
Whether there is too-close pixels all the way across the image vertically.
"""
# vertical_all_too_close = too_close_mask.all(dim= 2, keepdim= True)
vertical_too_close = too_close_mask.sum(dim= -2, keepdim= True) > (too_close_mask.shape[-2] * 0.6)
return vertical_too_close
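The column test above can be reproduced in a few NumPy lines: a column counts as blocked when more than 60% of its pixels are too close (the threshold used in the code above; the helper name is illustrative):

```python
import numpy as np

def vertical_block_columns(too_close, frac=0.6):
    # a column is "fully blocked" when > frac of its pixels are too close
    return too_close.sum(axis=0, keepdims=True) > too_close.shape[0] * frac

m = np.zeros((5, 3), dtype=bool)
m[:, 0] = True   # whole first column too close -> blocked
m[0, 1] = True   # single too-close pixel -> not blocked
cols = vertical_block_columns(m)
```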
def _add_depth_stereo(self, depth_images):
""" Simulate the noise from the depth limit of the stereo camera. """
N, _, H, W = depth_images.shape
far_mask = depth_images > self.cfg.noise.forward_depth.stereo_far_distance
too_close_mask = depth_images < self.cfg.noise.forward_depth.stereo_min_distance
near_mask = (~far_mask) & (~too_close_mask)
# add noise to the far points
far_noise = torch_rand_float(
0., self.cfg.noise.forward_depth.stereo_far_noise_std,
(N, H * W),
device= self.device,
).view(N, 1, H, W)
far_noise = far_noise * far_mask
depth_images += far_noise
# add noise to the near points
near_noise = torch_rand_float(
0., self.cfg.noise.forward_depth.stereo_near_noise_std,
(N, H * W),
device= self.device,
).view(N, 1, H, W)
near_noise = near_noise * near_mask
depth_images += near_noise
# add artifacts to the too close points
vertical_block_mask = self._recognize_top_down_too_close(too_close_mask)
full_block_mask = vertical_block_mask & too_close_mask
half_block_mask = (~vertical_block_mask) & too_close_mask
# add artifacts where vertical pixels are all too close
for pixel_value in random.sample(
self.cfg.noise.forward_depth.stereo_full_block_values,
len(self.cfg.noise.forward_depth.stereo_full_block_values),
):
artifacts_buffer = torch.ones_like(depth_images)
artifacts_buffer = self._add_depth_artifacts(artifacts_buffer,
self.cfg.noise.forward_depth.stereo_full_block_artifacts_prob,
self.cfg.noise.forward_depth.stereo_full_block_height_mean_std,
self.cfg.noise.forward_depth.stereo_full_block_width_mean_std,
)
depth_images[full_block_mask] = ((1 - artifacts_buffer) * pixel_value)[full_block_mask]
# add artifacts where not all the same vertical pixels are too close
half_block_spark = torch_rand_float(
0., 1.,
(N, H * W),
device= self.device,
).view(N, 1, H, W) < self.cfg.noise.forward_depth.stereo_half_block_spark_prob
depth_images[half_block_mask] = (half_block_spark.to(torch.float32) * self.cfg.noise.forward_depth.stereo_half_block_value)[half_block_mask]
return depth_images
def _recognize_top_down_seeing_sky(self, too_far_mask):
N, _, H, W = too_far_mask.shape
# whether there is too-far pixels with all pixels above it too-far
num_too_far_above = too_far_mask.cumsum(dim= -2)
all_too_far_above_threshold = torch.arange(H, device= self.device).view(1, 1, H, 1)
all_too_far_above = num_too_far_above > all_too_far_above_threshold # (N, 1, H, W) mask
return all_too_far_above
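The cumsum trick above marks a pixel as "sky" only when it and every pixel above it in the same column are too far. A NumPy sketch of the same prefix-count comparison:

```python
import numpy as np

def seeing_sky_mask(too_far):
    # pixel (r, c) is sky iff all of rows 0..r in column c are too far,
    # i.e. the running count of too-far pixels exceeds the row index
    num_above = too_far.cumsum(axis=0)
    rows = np.arange(too_far.shape[0]).reshape(-1, 1)
    return num_above > rows

m = np.array([[1, 0],
              [1, 1],
              [0, 1]], dtype=bool)
sky = seeing_sky_mask(m)  # column 0: sky down to row 1; column 1: never sky
```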
def _add_sky_artifacts(self, depth_images):
""" Incase something like ceiling pattern or stereo failure happens. """
N, _, H, W = depth_images.shape
possible_to_sky_mask = depth_images > self.cfg.noise.forward_depth.sky_artifacts_far_distance
to_sky_mask = self._recognize_top_down_seeing_sky(possible_to_sky_mask)
isinf_mask = depth_images.isinf()
# add artifacts to the regions where they are seemingly pointing to sky
for pixel_value in random.sample(
self.cfg.noise.forward_depth.sky_artifacts_values,
len(self.cfg.noise.forward_depth.sky_artifacts_values),
):
artifacts_buffer = torch.ones_like(depth_images)
artifacts_buffer = self._add_depth_artifacts(artifacts_buffer,
self.cfg.noise.forward_depth.sky_artifacts_prob,
self.cfg.noise.forward_depth.sky_artifacts_height_mean_std,
self.cfg.noise.forward_depth.sky_artifacts_width_mean_std,
)
depth_images[to_sky_mask & (~isinf_mask)] *= artifacts_buffer[to_sky_mask & (~isinf_mask)]
depth_images[to_sky_mask & isinf_mask & (artifacts_buffer < 1)] = 0.
depth_images[to_sky_mask] += ((1 - artifacts_buffer) * pixel_value)[to_sky_mask]
return depth_images
def _crop_depth_images(self, depth_images):
H, W = depth_images.shape[-2:]
return depth_images[...,
self.cfg.sensor.forward_camera.crop_top_bottom[0]: H - self.cfg.sensor.forward_camera.crop_top_bottom[1],
self.cfg.sensor.forward_camera.crop_left_right[0]: W - self.cfg.sensor.forward_camera.crop_left_right[1],
]
def _normalize_depth_images(self, depth_images):
depth_images = torch.clip(
depth_images,
self.cfg.sensor.forward_camera.depth_range[0],
self.cfg.sensor.forward_camera.depth_range[1],
)
# normalize depth image to (0, 1)
depth_images = (depth_images - self.cfg.sensor.forward_camera.depth_range[0]) / (
self.cfg.sensor.forward_camera.depth_range[1] - self.cfg.sensor.forward_camera.depth_range[0]
)
return depth_images
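The normalization above is a straightforward clip-then-rescale into [0, 1]; a standalone NumPy sketch:

```python
import numpy as np

def normalize_depth(depth, d_min, d_max):
    depth = np.clip(depth, d_min, d_max)
    return (depth - d_min) / (d_max - d_min)  # map [d_min, d_max] -> [0, 1]

d = normalize_depth(np.array([0.0, 1.0, 5.0]), d_min=0.3, d_max=3.0)
```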
@torch.no_grad()
def _process_depth_image(self, depth_images):
# depth_images length N list with shape (H, W)
# negate the depth values (Isaac Gym returns negative depth along the camera ray, per its documentation)
depth_images_ = torch.stack(depth_images).unsqueeze(1).contiguous().detach().clone() * -1
if hasattr(self.cfg.noise, "forward_depth"):
if getattr(self.cfg.noise.forward_depth, "countour_threshold", 0.) > 0.:
depth_images_ = self._add_depth_contour(depth_images_)
if getattr(self.cfg.noise.forward_depth, "artifacts_prob", 0.) > 0.:
depth_images_ = self._add_depth_artifacts(depth_images_,
self.cfg.noise.forward_depth.artifacts_prob,
self.cfg.noise.forward_depth.artifacts_height_mean_std,
self.cfg.noise.forward_depth.artifacts_width_mean_std,
)
if getattr(self.cfg.noise.forward_depth, "stereo_min_distance", 0.) > 0.:
depth_images_ = self._add_depth_stereo(depth_images_)
if getattr(self.cfg.noise.forward_depth, "sky_artifacts_prob", 0.) > 0.:
depth_images_ = self._add_sky_artifacts(depth_images_)
# if self.num_envs == 1:
# import matplotlib.pyplot as plt
# plt.cla()
# __depth_images = depth_images_[0, 0].detach().cpu().numpy() # (H, W)
# plt.imshow(__depth_images, cmap= "gray",
# vmin= self.cfg.sensor.forward_camera.depth_range[0],
# vmax= self.cfg.sensor.forward_camera.depth_range[1],
# )
# plt.draw()
# plt.pause(0.001)
depth_images_ = self._normalize_depth_images(depth_images_)
depth_images_ = self._crop_depth_images(depth_images_)
if hasattr(self, "forward_depth_resize_transform"):
depth_images_ = self.forward_depth_resize_transform(depth_images_)
return depth_images_.unsqueeze(0) # (1, N, 1, H, W)
def _get_forward_depth_obs(self, privileged= False):
if not self.forward_depth_refreshed and hasattr(self.cfg.sensor, "forward_camera") and (not privileged):
self.forward_depth_buffer = torch.cat([
self.forward_depth_buffer[1:],
self._process_depth_image(self.sensor_tensor_dict["forward_depth"]),
], dim= 0)
delay_refresh_mask = (self.episode_length_buf % int(self.cfg.sensor.forward_camera.refresh_duration / self.dt)) == 0
# NOTE: if the computed delay exceeds the stored frames, the oldest buffered image is used.
frame_select = (self.current_forward_camera_latency / self.dt).to(int)
self.forward_depth_delayed_frames = torch.where(
delay_refresh_mask,
torch.minimum(
frame_select,
self.forward_depth_delayed_frames + 1,
),
self.forward_depth_delayed_frames + 1,
)
self.forward_depth_delayed_frames = torch.clip(
self.forward_depth_delayed_frames,
0,
self.cfg.sensor.forward_camera.buffer_length,
)
self.forward_depth_output = self.forward_depth_buffer[
-self.forward_depth_delayed_frames,
torch.arange(self.num_envs, device= self.device),
].clone()
self.forward_depth_refreshed = True
if not hasattr(self.cfg.sensor, "forward_camera") or privileged:
return super()._get_forward_depth_obs(privileged).reshape(self.num_envs, -1)
return self.forward_depth_output.flatten(start_dim= 1)
def _get_proprioception_obs(self, privileged= False):
if not self.proprioception_refreshed and hasattr(self.cfg.sensor, "proprioception") and (not privileged):
self.proprioception_buffer = torch.cat([
self.proprioception_buffer[1:],
super()._get_proprioception_obs().unsqueeze(0),
], dim= 0)
# NOTE: if the computed delay exceeds the stored frames, the oldest buffered frame is used. (latency range e.g. [0.04-0.0075, 0.04+0.0025])
self.proprioception_delayed_frames = ((self.current_proprioception_latency / self.dt) + 1).to(int)
self.proprioception_output = self.proprioception_buffer[
-self.proprioception_delayed_frames,
torch.arange(self.num_envs, device= self.device),
].clone()
# By default, the last-action segment of the observation is not delayed.
if getattr(self.cfg.sensor.proprioception, "delay_action_obs", False):
not_delayed_mask = torch.randint(0, 2, size= (self.num_envs,), device= self.device).bool() # randomly pick envs whose action observation stays un-delayed
self.proprioception_output[not_delayed_mask, -12:] = self.proprioception_buffer[-1, not_delayed_mask, -12:]
else:
self.proprioception_output[:, -12:] = self.proprioception_buffer[-1, :, -12:]
self.proprioception_refreshed = True
if not hasattr(self.cfg.sensor, "proprioception") or privileged:
return super()._get_proprioception_obs(privileged)
return self.proprioception_output.flatten(start_dim= 1)
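The latency simulation keeps a rolling buffer of past observations (newest frame last) and reads back a number of frames derived from the sampled latency. A minimal NumPy sketch of that lookup (function name is illustrative):

```python
import numpy as np

def delayed_obs(buffer, latency, dt):
    # buffer: (T, num_envs, obs_dim), newest frame at index -1;
    # read the frame `latency` seconds in the past (plus one frame, as above)
    delayed_frames = int(latency / dt) + 1
    return buffer[-delayed_frames]

buf = np.arange(4, dtype=float).reshape(4, 1, 1)  # frames 0..3, newest = 3
obs = delayed_obs(buf, latency=0.05, dt=0.02)     # ~2 frames of delay
```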
def get_obs_segment_from_components(self, obs_components):
obs_segments = super().get_obs_segment_from_components(obs_components)
if "forward_depth" in obs_components:
obs_segments["forward_depth"] = (1, *getattr(
self.cfg.sensor.forward_camera,
"output_resolution",
self.cfg.sensor.forward_camera.resolution,
))
return obs_segments
def _reward_exceed_torque_limits_i(self):
""" Indicator function """
max_torques = torch.abs(self.substep_torques).max(dim= 1)[0]
exceed_torque_each_dof = max_torques > self.torque_limits
exceed_torque = exceed_torque_each_dof.any(dim= 1)
return exceed_torque.to(torch.float32)
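The indicator reward above fires once per env-step whenever any DoF's worst substep torque exceeds its limit. A NumPy sketch on a toy batch:

```python
import numpy as np

def exceed_torque_indicator(substep_torques, torque_limits):
    # substep_torques: (num_envs, decimation, num_dofs)
    max_torques = np.abs(substep_torques).max(axis=1)   # worst substep per dof
    exceed = (max_torques > torque_limits).any(axis=1)  # any dof over its limit
    return exceed.astype(np.float32)

tau = np.array([[[10.0, 30.0], [5.0, 5.0]],   # env 0: dof 1 peaks at 30
                [[ 5.0,  5.0], [5.0, 5.0]]])  # env 1: always within limits
r = exceed_torque_indicator(tau, np.array([20.0, 20.0]))
```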
def _reward_exceed_torque_limits_square(self):
""" square function for exceeding part """
exceeded_torques = torch.abs(self.substep_torques) - self.torque_limits
exceeded_torques[exceeded_torques < 0.] = 0.
# sum along decimation axis and dof axis
return torch.square(exceeded_torques).sum(dim= 1).sum(dim= 1)
def _reward_exceed_dof_pos_limits(self):
return self.substep_exceed_dof_pos_limits.to(torch.float32).sum(dim= -1).mean(dim= -1)

# ---------------- new file ----------------
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
from time import time
import numpy as np
import os
from isaacgym.torch_utils import *
from isaacgym import gymtorch, gymapi, gymutil
import torch
from typing import Tuple, Dict
from legged_gym.envs import LeggedRobot
class Cassie(LeggedRobot):
def _reward_no_fly(self):
contacts = self.contact_forces[:, self.feet_indices, 2] > 0.1
single_contact = torch.sum(1.*contacts, dim=1)==1
return 1.*single_contact

# ---------------- new file ----------------
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO
class CassieRoughCfg( LeggedRobotCfg ):
class env( LeggedRobotCfg.env):
num_envs = 4096
num_observations = 169
num_actions = 12
class terrain( LeggedRobotCfg.terrain):
measured_points_x = [-0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5] # 1mx1m rectangle (without center line)
measured_points_y = [-0.5, -0.4, -0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3, 0.4, 0.5]
class init_state( LeggedRobotCfg.init_state ):
pos = [0.0, 0.0, 1.] # x,y,z [m]
default_joint_angles = { # = target angles [rad] when action = 0.0
'hip_abduction_left': 0.1,
'hip_rotation_left': 0.,
'hip_flexion_left': 1.,
'thigh_joint_left': -1.8,
'ankle_joint_left': 1.57,
'toe_joint_left': -1.57,
'hip_abduction_right': -0.1,
'hip_rotation_right': 0.,
'hip_flexion_right': 1.,
'thigh_joint_right': -1.8,
'ankle_joint_right': 1.57,
'toe_joint_right': -1.57
}
class control( LeggedRobotCfg.control ):
# PD Drive parameters:
stiffness = { 'hip_abduction': 100.0, 'hip_rotation': 100.0,
'hip_flexion': 200., 'thigh_joint': 200., 'ankle_joint': 200.,
'toe_joint': 40.} # [N*m/rad]
damping = { 'hip_abduction': 3.0, 'hip_rotation': 3.0,
'hip_flexion': 6., 'thigh_joint': 6., 'ankle_joint': 6.,
'toe_joint': 1.} # [N*m*s/rad]
# action scale: target angle = actionScale * action + defaultAngle
action_scale = 0.5
# decimation: Number of control action updates @ sim DT per policy DT
decimation = 4
class asset( LeggedRobotCfg.asset ):
file = '{LEGGED_GYM_ROOT_DIR}/resources/robots/cassie/urdf/cassie.urdf'
name = "cassie"
foot_name = 'toe'
terminate_after_contacts_on = ['pelvis']
flip_visual_attachments = False
self_collisions = 1 # 1 to disable, 0 to enable...bitwise filter
class rewards( LeggedRobotCfg.rewards ):
soft_dof_pos_limit = 0.95
soft_dof_vel_limit = 0.9
soft_torque_limit = 0.9
max_contact_force = 300.
only_positive_rewards = False
class scales( LeggedRobotCfg.rewards.scales ):
termination = -200.
tracking_ang_vel = 1.0
torques = -5.e-6
dof_acc = -2.e-7
lin_vel_z = -0.5
feet_air_time = 5.
dof_pos_limits = -1.
no_fly = 0.25
dof_vel = -0.0
ang_vel_xy = -0.0
feet_contact_forces = -0.
class CassieRoughCfgPPO( LeggedRobotCfgPPO ):
class runner( LeggedRobotCfgPPO.runner ):
run_name = ''
experiment_name = 'rough_cassie'
class algorithm( LeggedRobotCfgPPO.algorithm):
entropy_coef = 0.01
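The configs in this repo all follow the nested-class inheritance pattern shown above: a subclass overrides only the fields it changes and inherits the rest. A self-contained miniature (class names hypothetical):

```python
# miniature of the nested-class config pattern: override only what changes
class BaseCfg:
    class control:
        stiffness = {'joint': 100.0}
        decimation = 4

class MyCfg(BaseCfg):
    class control(BaseCfg.control):
        stiffness = {'joint': 50.0}  # overridden; decimation is inherited

cfg = MyCfg()
```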

# ---------------- new file ----------------
import numpy as np
import os
from os import path as osp
from legged_gym.envs.a1.a1_field_config import A1FieldCfg, A1FieldCfgPPO
class Go1FieldCfg( A1FieldCfg ):
class terrain( A1FieldCfg.terrain ):
num_rows = 20
num_cols = 50
selected = "BarrierTrack"
max_init_terrain_level = 0 # for climb, leap finetune
border_size = 5
slope_treshold = 100.
curriculum = True # for tilt, crawl, climb, leap
# curriculum = False # for walk
horizontal_scale = 0.025 # [m]
pad_unavailable_info = True
BarrierTrack_kwargs = dict(
options= [
# "climb",
# "crawl",
"tilt",
# "leap",
], # each race track will permute all the options
track_width= 1.6, # for climb, crawl, tilt, walk
# track_width= 1.0, # for leap
track_block_length= 2., # the x-axis distance from the env origin point
wall_thickness= (0.04, 0.2), # [m]
wall_height= 0.01, # [m] for climb, crawl, tilt, walk
# wall_height= -0.5, # for leap
climb= dict(
height= (0.2, 0.6),
depth= (0.1, 0.8), # size along the forward axis
fake_offset= 0.0, # [m] an offset that makes it easier for the robot to get onto the obstacle
),
crawl= dict(
height= (0.28, 0.38),
depth= (0.1, 0.5), # size along the forward axis
wall_height= 0.6,
no_perlin_at_obstacle= False,
),
tilt= dict(
width= (0.24, 0.32),
depth= (0.4, 1.), # size along the forward axis
opening_angle= 0.0, # [rad] an opening that makes it easier for the robot to get into the obstacle
wall_height= 0.5,
),
leap= dict(
length= (0.2, 1.0),
depth= (0.4, 0.8),
height= 0.25,
),
add_perlin_noise= True,
border_perlin_noise= True,
border_height= 0., # for climb, crawl, tilt, walk
# border_height= -0.5, # for leap
virtual_terrain= False, # for climb, crawl, leap
# virtual_terrain= True, # for tilt
draw_virtual_terrain= True,
engaging_next_threshold= 1.2,
curriculum_perlin= False,
no_perlin_threshold= 0.0, # for crawl, tilt, walk
# no_perlin_threshold= 0.05, # for leap
# no_perlin_threshold= 0.06, # for climb
)
TerrainPerlin_kwargs = dict(
# zScale= 0.1, # for crawl
zScale= 0.12, # for tilt
# zScale= [0.05, 0.1], # for climb
# zScale= [0.04, 0.1], # for leap
# zScale= [0.1, 0.15], # for walk
frequency= 10,
)
class commands( A1FieldCfg.commands ):
class ranges( A1FieldCfg.commands.ranges ):
# lin_vel_x = [0.0, 1.0] # for walk
# lin_vel_x = [0.8, 1.5] # for climb
# lin_vel_x = [1.0, 1.5] # for leap
lin_vel_x = [0.3, 0.8] # for tilt, crawl
lin_vel_y = [0.0, 0.0]
ang_vel_yaw = [0., 0.]
class control( A1FieldCfg.control ):
stiffness = {'joint': 50.}
damping = {'joint': 1.}
# action_scale = [0.2, 0.4, 0.4] * 4 # for walk
action_scale = 0.5 # for tilt, crawl, climb, leap
# for climb, leap
torque_limits = [20., 20., 25.] * 4
computer_clip_torque = True
motor_clip_torque = False
class asset( A1FieldCfg.asset ):
file = "{LEGGED_GYM_ROOT_DIR}/resources/robots/go1/urdf/go1.urdf"
penalize_contacts_on = ["base", "thigh"]
terminate_after_contacts_on = ["base", "imu"] # for climb, leap, tilt, walk no-virtual
class termination:
# additional factors that determines whether to terminates the episode
termination_terms = [
"roll", # for tilt
"pitch",
"z_low",
"z_high",
"out_of_track", # for leap, walk
]
roll_kwargs = dict(
threshold= 0.8, # [rad] # for tilt
tilt_threshold= 1.5, # for tilt (condition on engaging block)
)
pitch_kwargs = dict(
threshold= 1.6,
climb_threshold= 1.6,
leap_threshold= 1.5,
)
z_low_kwargs = dict(
threshold= 0.08, # [m]
)
z_high_kwargs = dict(
threshold= 1.5, # [m]
)
out_of_track_kwargs = dict(
threshold= 1., # [m]
)
check_obstacle_conditioned_threshold = True
timeout_at_border = True
timeout_at_finished = True
class domain_rand( A1FieldCfg.domain_rand ):
randomize_com = True
class com_range:
x = [-0.05, 0.15]
y = [-0.1, 0.1]
z = [-0.05, 0.05]
randomize_base_mass = True
added_mass_range = [1.0, 3.0]
class rewards( A1FieldCfg.rewards ):
class scales:
tracking_ang_vel = 0.1
world_vel_l2norm = -1.
legs_energy_substeps = -1e-5
exceed_torque_limits_i = -0.1
alive = 2.
lin_pos_y = -0.2
yaw_abs = -0.5
penetrate_depth = -5e-3
penetrate_volume = -5e-3
soft_dof_pos_limit = 0.01
class sim( A1FieldCfg.sim ):
body_measure_points = { # transform are related to body frame
"base": dict(
x= [i for i in np.arange(-0.2, 0.31, 0.03)],
y= [-0.08, -0.04, 0.0, 0.04, 0.08],
z= [i for i in np.arange(-0.061, 0.071, 0.03)],
transform= [0., 0., 0.005, 0., 0., 0.],
),
"thigh": dict(
x= [
-0.16, -0.158, -0.156, -0.154, -0.152,
-0.15, -0.145, -0.14, -0.135, -0.13, -0.125, -0.12, -0.115, -0.11, -0.105, -0.1, -0.095, -0.09, -0.085, -0.08, -0.075, -0.07, -0.065, -0.05,
0.0, 0.05, 0.1,
],
y= [-0.015, -0.01, 0.0, 0.01, 0.015],
z= [-0.03, -0.015, 0.0, 0.015],
transform= [0., 0., -0.1, 0., 1.57079632679, 0.],
),
"calf": dict(
x= [i for i in np.arange(-0.13, 0.111, 0.03)],
y= [-0.015, 0.0, 0.015],
z= [-0.015, 0.0, 0.015],
transform= [0., 0., -0.11, 0., 1.57079632679, 0.],
),
}
class curriculum:
# chosen heuristically, please refer to `LeggedRobotField._get_terrain_curriculum_move` with fixed body_measure_points
# for crawl (not updated)
penetrate_volume_threshold_harder = 1500
penetrate_volume_threshold_easier = 10000
penetrate_depth_threshold_harder = 10
penetrate_depth_threshold_easier = 400
# for tilt
# penetrate_volume_threshold_harder = 2000
# penetrate_volume_threshold_easier = 10000
# penetrate_depth_threshold_harder = 20
# penetrate_depth_threshold_easier = 300
# for climb
# penetrate_volume_threshold_harder = 6000
# penetrate_volume_threshold_easier = 12000
# penetrate_depth_threshold_harder = 600
# penetrate_depth_threshold_easier = 1600
# for leap
# penetrate_volume_threshold_harder = 9000
# penetrate_volume_threshold_easier = 10000
# penetrate_depth_threshold_harder = 300
# penetrate_depth_threshold_easier = 5000
logs_root = osp.join(osp.dirname(osp.dirname(osp.dirname(osp.dirname(osp.abspath(__file__))))), "logs")
class Go1FieldCfgPPO( A1FieldCfgPPO ):
class algorithm( A1FieldCfgPPO.algorithm ):
entropy_coef = 0.01 # for walk
clip_min_std = 1e-12 # for walk
class runner( A1FieldCfgPPO.runner ):
experiment_name = "field_go1"
run_name = "".join(["Skills",
("_all" if len(Go1FieldCfg.terrain.BarrierTrack_kwargs["options"]) > 1 else ("_" + Go1FieldCfg.terrain.BarrierTrack_kwargs["options"][0] if Go1FieldCfg.terrain.BarrierTrack_kwargs["options"] else "PlaneWalking")),
("_pEnergySubsteps" + np.format_float_scientific(-Go1FieldCfg.rewards.scales.legs_energy_substeps, trim= "-", exp_digits= 1) if getattr(Go1FieldCfg.rewards.scales, "legs_energy_substeps", 0.0) != 0.0 else ""),
("_pPenV" + np.format_float_scientific(-Go1FieldCfg.rewards.scales.penetrate_volume, trim= "-", exp_digits= 1) if getattr(Go1FieldCfg.rewards.scales, "penetrate_volume", 0.) < 0. else ""),
("_pPenD" + np.format_float_scientific(-Go1FieldCfg.rewards.scales.penetrate_depth, trim= "-", exp_digits= 1) if getattr(Go1FieldCfg.rewards.scales, "penetrate_depth", 0.) < 0. else ""),
("_rAlive{:d}".format(int(Go1FieldCfg.rewards.scales.alive)) if Go1FieldCfg.rewards.scales.alive != 2 else ""),
("_rAngVel{:.2f}".format(Go1FieldCfg.rewards.scales.tracking_ang_vel) if Go1FieldCfg.rewards.scales.tracking_ang_vel != 0.05 else ""),
("_pTorqueExceedIndicate" + np.format_float_scientific(-Go1FieldCfg.rewards.scales.exceed_torque_limits_i, trim= "-", exp_digits= 1) if getattr(Go1FieldCfg.rewards.scales, "exceed_torque_limits_i", 0.) != 0. else ""),
("_pTorqueExceedSquare" + np.format_float_scientific(-Go1FieldCfg.rewards.scales.exceed_torque_limits_square, trim= "-", exp_digits= 1) if getattr(Go1FieldCfg.rewards.scales, "exceed_torque_limits_square", 0.) != 0. else ""),
("_pYaw{:.2f}".format(-Go1FieldCfg.rewards.scales.yaw_abs) if Go1FieldCfg.rewards.scales.yaw_abs != 0. else ""),
("_noComputerTorqueClip" if not Go1FieldCfg.control.computer_clip_torque else ""),
("_virtualTerrain" if Go1FieldCfg.terrain.BarrierTrack_kwargs["virtual_terrain"] else ""),
("_aScale{:d}{:d}{:d}".format(
int(Go1FieldCfg.control.action_scale[0] * 10),
int(Go1FieldCfg.control.action_scale[1] * 10),
int(Go1FieldCfg.control.action_scale[2] * 10),
) if isinstance(Go1FieldCfg.control.action_scale, (tuple, list)) \
else "_aScale{:.1f}".format(Go1FieldCfg.control.action_scale)
),
("_torqueClip{:.0f}".format(Go1FieldCfg.control.torque_limits) if not isinstance(Go1FieldCfg.control.torque_limits, (tuple, list)) else (
"_tClip{:d}{:d}{:d}".format(
int(Go1FieldCfg.control.torque_limits[0]),
int(Go1FieldCfg.control.torque_limits[1]),
int(Go1FieldCfg.control.torque_limits[2]),
)
)),
])
resume = False
load_run = ""
max_iterations = 10000
save_interval = 500
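The run_name above is assembled from reward scales via `np.format_float_scientific`; a standalone check of two of those fragments, with example values rather than the live config:

```python
import numpy as np

# Example values only; the config concatenates fragments like these.
frag = "_pEnergySubsteps" + np.format_float_scientific(2e-5, trim="-", exp_digits=1)
print(frag)  # _pEnergySubsteps2e-5

scale = (0.5, 0.5, 0.5)  # example action_scale tuple
a_frag = "_aScale{:d}{:d}{:d}".format(*(int(s * 10) for s in scale))
print(a_frag)  # _aScale555
```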

View File

@ -0,0 +1,255 @@
from collections import OrderedDict
import os
import datetime
import os.path as osp
import numpy as np
from legged_gym.envs.go1.go1_field_config import Go1FieldCfg, Go1FieldCfgPPO
from legged_gym.envs.a1.a1_field_distill_config import A1FieldDistillCfg, A1FieldDistillCfgPPO
from legged_gym.utils.helpers import merge_dict
class Go1FieldDistillCfg( Go1FieldCfg ):
class env( Go1FieldCfg.env ):
num_envs = 256
obs_components = [
"proprioception", # 48
"forward_depth",
]
privileged_obs_components = [
"proprioception", # 48
"base_pose",
"robot_config",
"engaging_block",
"sidewall_distance",
]
use_lin_vel = False
privileged_use_lin_vel = True
# if True, privileged_obs will not have noise added (based on the current implementation)
privileged_obs_gets_privilege = False # for climb temporarily
class init_state( Go1FieldCfg.init_state ):
pos = [0.0, 0.0, 0.38] # x,y,z [m]
class sensor( Go1FieldCfg.sensor ):
class forward_camera( Go1FieldCfg.sensor.forward_camera ):
resolution = [int(240/4), int(424/4)]
position = dict(
mean= [0.245, 0.0075, 0.072+0.018],
std= [0.002, 0.002, 0.0005],
) # position in base_link ##### small randomization
rotation = dict(
lower= [0, 0, 0],
upper= [0, 0, 0],
) # rotation in base_link ##### small randomization
resized_resolution= [48, 64]
output_resolution = [48, 64]
horizontal_fov = [85, 87] # measured at around 87 degrees
# for go1, usb2.0, 480x640, d435i camera
latency_range = [0.25, 0.30]
latency_resample_time = 5.0 # [s]
refresh_duration = 1/10 # [s] for the (240, 424) option, with the onboard script capped at no more than 20 Hz
# config to simulate a stereo RGBD camera
crop_top_bottom = [0, 0]
crop_left_right = [int(60/4), int(46/4)]
depth_range = [0.0, 1.5] # [m]
# class proprioception( Go1FieldCfg.sensor.proprioception ): # inherited from A1FieldCfg
class terrain( Go1FieldCfg.terrain ):
num_rows = 2
# num_rows = 80
num_cols = 1
max_init_terrain_level = 1
curriculum = False
selected = "BarrierTrack"
BarrierTrack_kwargs = dict(
options= [
# "tilt",
"crawl",
"climb",
"leap",
], # each race track will permute all the options
one_obstacle_per_track= True,
track_width= 1.6,
track_block_length= 1.8, # the x-axis distance from the env origin point
wall_thickness= (0.04, 0.2), # [m]
wall_height= 0.0, # [m]
climb= dict(
height= (0.40, 0.45),
depth= (0.5, 1.0), # size along the forward axis
fake_offset= 0.05, # [m] makes the climb's reported height greater than its physical height
),
crawl= dict(
height= (0.28, 0.38),
depth= (0.1, 0.2), # size along the forward axis
wall_height= 0.6,
fake_depth= 0.4, # when track block length is 1.8m
),
tilt= dict(
width= (0.34, 0.36),
depth= (0.4, 0.6), # size along the forward axis
opening_angle= 0.0, # [rad] an opening that makes it easier for the robot to get into the obstacle
wall_height= 0.5,
),
leap= dict(
# length= (0.45, 0.7),
length= (0.5, 0.7), # for parkour real-world env
depth= (0.5, 0.6),
height= 0.2,
fake_offset= 0.2,
follow_climb_ratio= 0.5, # when following a climb, do not drop down to the ground suddenly
),
add_perlin_noise= True,
border_perlin_noise= True,
border_height= 0.,
virtual_terrain= False,
draw_virtual_terrain= True,
engaging_next_threshold= 1.2,
check_skill_combinations= True,
curriculum_perlin= False,
no_perlin_threshold= 0.04, # for parkour real-world env
walk_in_skill_gap= True, # for parkour real-world env
)
TerrainPerlin_kwargs = dict(
zScale= [0.0, 0.05], # for parkour real-world env
frequency= 10,
)
class commands( A1FieldDistillCfg.commands ):
pass
class termination( A1FieldDistillCfg.termination ):
pass
class domain_rand( A1FieldDistillCfg.domain_rand ):
randomize_com = True
randomize_motor = True
randomize_friction = True
friction_range = [0.0, 0.8]
randomize_base_mass = True
push_robots = False
init_dof_pos_ratio_range = [0.9, 1.1]
init_base_vel_range = [-0.0, 0.0]
init_base_pos_range = dict(
x= [0.4, 0.6],
y= [-0.05, 0.05],
)
class noise( A1FieldDistillCfg.noise ):
class noise_scales( A1FieldDistillCfg.noise.noise_scales ):
forward_depth = 0.0
class forward_depth:
stereo_min_distance = 0.12 # when using (240, 424) resolution
stereo_far_distance = 2.
stereo_far_noise_std = 0.08 # noise std for pixels farther than stereo_far_distance
stereo_near_noise_std = 0.02 # noise std for pixels closer than stereo_far_distance
stereo_full_block_artifacts_prob = 0.004 # probability of adding block artifacts to pixels closer than stereo_min_distance
stereo_full_block_values = [0.0, 0.25, 0.5, 1., 3.]
stereo_full_block_height_mean_std = [62, 1.5]
stereo_full_block_width_mean_std = [3, 0.01]
stereo_half_block_spark_prob = 0.02
stereo_half_block_value = 3000 # to the maximum value directly
sky_artifacts_prob = 0.0001
sky_artifacts_far_distance = 2. # pixels farther than this distance are treated as sky
sky_artifacts_values = [0.6, 1., 1.2, 1.5, 1.8]
sky_artifacts_height_mean_std = [2, 3.2]
sky_artifacts_width_mean_std = [2, 3.2]
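A hedged sketch of how the near/far stereo noise parameters above could be applied to a simulated depth image (hypothetical helper, not the project's API; the actual noise model is implemented in the environment code):

```python
import numpy as np

# Hypothetical helper: apply distance-dependent gaussian noise to a depth
# image in meters, using the near/far stds from the config above.
def add_stereo_noise(depth, rng, far_distance=2.0, far_std=0.08, near_std=0.02):
    std = np.where(depth > far_distance, far_std, near_std)
    return depth + rng.normal(0.0, 1.0, depth.shape) * std

rng = np.random.default_rng(0)
depth = np.full((48, 64), 1.0)  # a flat 1 m depth image at output resolution
noisy = add_stereo_noise(depth, rng)
```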
class sim( Go1FieldCfg.sim ):
no_camera = False
class curriculum:
# override base class attributes
pass
distill_target_ = "tanh"
logs_root = osp.join(osp.dirname(osp.dirname(osp.dirname(osp.dirname(osp.abspath(__file__))))), "logs")
class Go1FieldDistillCfgPPO( A1FieldDistillCfgPPO ):
class algorithm( A1FieldDistillCfgPPO.algorithm ):
entropy_coef = 0.0
value_loss_coef = 0.0
num_learning_epochs = 1 # 1 for finetuning, 2 for training from scratch
num_mini_batches = 2
teacher_act_prob = "exp"
update_times_scale = 2000
using_ppo = False
distill_target = distill_target_
buffer_dilation_ratio = 1.
learning_rate = 3e-4
optimizer_class_name = "AdamW"
teacher_policy_class_name = "ActorCriticFieldMutex"
teacher_ac_path = None
class teacher_policy (A1FieldDistillCfgPPO.policy ):
# For loading the teacher policy; no need to change when training the student
num_actor_obs = 81 # should equal the sum of the obs_segments
num_critic_obs = 81
num_actions = 12
obs_segments = OrderedDict(
proprioception= (48,),
base_pose= (6,),
robot_config= (1 + 3 + 1 + 12,),
engaging_block= (1 + (4 + 1) + 2,),
sidewall_distance= (2,),
)
env_action_scale = Go1FieldCfg.control.action_scale
sub_policy_class_name = "ActorCriticRecurrent"
sub_policy_paths = [ # must be in the order of obstacle ID
logs_root + "/field_go1/{your go1 walking policy}",
logs_root + "/field_go1/{your go1 tilting policy}",
logs_root + "/field_go1/{your go1 crawling policy}",
logs_root + "/field_go1/{your go1 climbing policy}",
logs_root + "/field_go1/{your go1 leaping policy}",
]
cmd_vel_mapping = {
0: 1.0,
1: 0.5,
2: 0.8,
3: 1.2,
4: 1.5,
}
class policy( A1FieldDistillCfgPPO.policy ):
pass
runner_class_name = "TwoStageRunner"
class runner( A1FieldDistillCfgPPO.runner ):
experiment_name = "distill_go1"
class pretrain_dataset:
# data_dir = [
# "logs/distill_a1_dagger/" + dir_ \
# for dir_ in os.listdir("logs/distill_a1_dagger")
# ]
scan_dir = "".join([
"logs/distill_go1_dagger/{}_".format(datetime.datetime.now().strftime("%b%d")),
"".join(Go1FieldDistillCfg.terrain.BarrierTrack_kwargs["options"]),
"_vDelay{:.2f}-{:.2f}".format(
Go1FieldDistillCfg.sensor.forward_camera.latency_range[0],
Go1FieldDistillCfg.sensor.forward_camera.latency_range[1],
),
])
dataset_loops = -1
random_shuffle_traj_order = True
keep_latest_ratio = 1.0
keep_latest_n_trajs = 4000
starting_frame_range = [0, 100]
resume = False
load_run = ""
max_iterations = 80000
save_interval = 2000
run_name = "".join(["distill_",
"".join(Go1FieldDistillCfg.terrain.BarrierTrack_kwargs["options"]),
"_vDelay{:.2f}-{:.2f}".format(
Go1FieldDistillCfg.sensor.forward_camera.latency_range[0],
Go1FieldDistillCfg.sensor.forward_camera.latency_range[1],
),
])
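The scan_dir and run_name above are built with the same string-join pattern; a standalone illustration with example values (the live config reads latency_range and options from the classes above):

```python
import datetime

# Example values only; this just shows the directory-name pattern.
latency_range = [0.25, 0.30]
options = ["crawl", "climb", "leap"]
scan_dir = "".join([
    "logs/distill_go1_dagger/{}_".format(datetime.datetime.now().strftime("%b%d")),
    "".join(options),
    "_vDelay{:.2f}-{:.2f}".format(latency_range[0], latency_range[1]),
])
print(scan_dir)  # e.g. logs/distill_go1_dagger/Sep10_crawlclimbleap_vDelay0.25-0.30
```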

View File

@ -0,0 +1,35 @@
""" The script to clear the dataset, but maintain the metadata.json file """
import os
import os.path as osp
import shutil
import argparse
def main(args):
for data_dir in args.data_dirs:
if osp.isfile(osp.join(data_dir, "metadata.json")):
shutil.move(
osp.join(data_dir, "metadata.json"),
osp.join(osp.dirname(data_dir), data_dir.split("/")[-1] + ".json")
)
print(f"Moved metadata.json to {osp.join(osp.dirname(data_dir), data_dir.split('/')[-1] + '.json')}")
# removing the directory
if osp.isdir(data_dir):
print("Removing directory: ", data_dir)
if not args.only_files:
shutil.rmtree(data_dir)
else:
for file_name in os.listdir(data_dir):
os.remove(osp.join(data_dir, file_name))
print(f"Finished clearing {data_dir}")
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--data_dirs",
type= str,
nargs= "+",
)
parser.add_argument("--only_files",
action= "store_true",
)
args = parser.parse_args()
main(args)
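A self-contained illustration of the metadata-preserving cleanup performed by the script above, re-implemented against a throwaway directory:

```python
import json
import os
import os.path as osp
import shutil
import tempfile

# Build a fake dataset directory containing only a metadata.json file.
root = tempfile.mkdtemp()
data_dir = osp.join(root, "run_000")
os.makedirs(data_dir)
with open(osp.join(data_dir, "metadata.json"), "w") as f:
    json.dump({"num_trajectories": 0}, f)

# Move metadata.json next to the directory, named after it, then remove the dir.
shutil.move(
    osp.join(data_dir, "metadata.json"),
    osp.join(osp.dirname(data_dir), data_dir.split("/")[-1] + ".json"),
)
shutil.rmtree(data_dir)
print(osp.isfile(osp.join(root, "run_000.json")))  # True
```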

View File

@ -0,0 +1,156 @@
""" The script to collect demonstrations for the legged robot """
import isaacgym
from collections import OrderedDict
import torch
from datetime import datetime
import numpy as np
import os
import json
import os.path as osp
from legged_gym.envs import *
from legged_gym.utils import get_args, task_registry
from legged_gym.utils.helpers import update_cfg_from_args, class_to_dict, update_class_from_dict
from legged_gym.debugger import break_into_debugger
from rsl_rl.modules import build_actor_critic
from rsl_rl.runners.dagger_saver import DemonstrationSaver, DaggerSaver
def main(args):
RunnerCls = DaggerSaver
# RunnerCls = DemonstrationSaver
success_traj_only = False
teacher_act_prob = 0.1
env_cfg, train_cfg = task_registry.get_cfgs(name=args.task)
if RunnerCls == DaggerSaver:
with open(os.path.join("logs", train_cfg.runner.experiment_name, args.load_run, "config.json"), "r") as f:
d = json.load(f, object_pairs_hook= OrderedDict)
update_class_from_dict(env_cfg, d, strict= True)
update_class_from_dict(train_cfg, d, strict= True)
# Some custom settings
env_cfg.terrain.BarrierTrack_kwargs["options"] = [
"tilt", # "tilt",
"crawl", # "crawl",
"climb", # "climb",
"climb", # "climb",
"leap", # "leap",
]
####### customized option to increase data distribution #######
action_sample_std = 0.0
############# some predefined options #############
if len(env_cfg.terrain.BarrierTrack_kwargs["options"]) == 1:
env_cfg.terrain.BarrierTrack_kwargs["n_obstacles_per_track"] = 1
env_cfg.terrain.num_rows = 20; env_cfg.terrain.num_cols = 30
else: # for parkour env
# >>> option 1
env_cfg.domain_rand.added_mass_range = [1.5, 3.0]
env_cfg.terrain.BarrierTrack_kwargs["climb_down_prob"] = 0.45
env_cfg.terrain.num_rows = 60; env_cfg.terrain.num_cols = 1
# >>> option 2
# env_cfg.terrain.BarrierTrack_kwargs["climb"]["depth"] = (0.1, 0.3)
# env_cfg.terrain.BarrierTrack_kwargs["climb"]["climb_down_prob"] = 0.55
# env_cfg.terrain.num_rows = 60; env_cfg.terrain.num_cols = 1
# >>> option 3
# env_cfg.terrain.BarrierTrack_kwargs["n_obstacles_per_track"] = 1
# env_cfg.terrain.BarrierTrack_kwargs["track_width"] = 1.6
# env_cfg.terrain.BarrierTrack_kwargs["wall_thickness"] = (0.0, 0.3)
# env_cfg.terrain.BarrierTrack_kwargs["track_block_length"] = 3.0
# env_cfg.terrain.BarrierTrack_kwargs["climb"]["depth"] = (0.5, 1.8)
# env_cfg.terrain.num_rows = 20; env_cfg.terrain.num_cols = 24
# action_sample_std = 0.1
pass
if (env_cfg.terrain.BarrierTrack_kwargs["options"][0] == "leap") and all(i == env_cfg.terrain.BarrierTrack_kwargs["options"][0] for i in env_cfg.terrain.BarrierTrack_kwargs["options"]):
######### For leap, because the platform is usually higher than the ground.
env_cfg.terrain.num_rows = 80
env_cfg.terrain.num_cols = 1
env_cfg.terrain.BarrierTrack_kwargs["track_width"] = 1.6
env_cfg.terrain.BarrierTrack_kwargs["wall_thickness"] = (0.01, 0.5)
env_cfg.terrain.BarrierTrack_kwargs["wall_height"] = (-0.4, 0.2) # randomize in case the terrain has side walls
env_cfg.terrain.BarrierTrack_kwargs["border_height"] = -0.4
# Done custom settings
env, _ = task_registry.make_env(name=args.task, args=args, env_cfg=env_cfg)
_, train_cfg = update_cfg_from_args(None, train_cfg, args)
config = class_to_dict(train_cfg)
config.update(class_to_dict(env_cfg))
# create teacher policy
policy = build_actor_critic(
env,
train_cfg.algorithm.teacher_policy_class_name,
config["algorithm"]["teacher_policy"],
).to(env.device)
# load the teacher policy if available
if train_cfg.algorithm.teacher_ac_path is not None:
state_dict = torch.load(train_cfg.algorithm.teacher_ac_path, map_location= "cpu")
teacher_actor_critic_state_dict = state_dict["model_state_dict"]
policy.load_state_dict(teacher_actor_critic_state_dict)
# build runner
track_header = "".join(env_cfg.terrain.BarrierTrack_kwargs["options"])
if env_cfg.commands.ranges.lin_vel_x[1] > 0.0:
cmd_vel = "_cmd{:.1f}-{:.1f}".format(env_cfg.commands.ranges.lin_vel_x[0], env_cfg.commands.ranges.lin_vel_x[1])
elif env_cfg.commands.ranges.lin_vel_x[1] == 0. and len(env_cfg.terrain.BarrierTrack_kwargs["options"]) == 1 \
or (env_cfg.terrain.BarrierTrack_kwargs["options"][0] == env_cfg.terrain.BarrierTrack_kwargs["options"][1]):
obstacle_id = env.terrain.track_options_id_dict[env_cfg.terrain.BarrierTrack_kwargs["options"][0]]
try:
overrided_vel = train_cfg.algorithm.teacher_policy.cmd_vel_mapping[obstacle_id]
except KeyError: # the mapping may have been serialized with string keys
overrided_vel = train_cfg.algorithm.teacher_policy.cmd_vel_mapping[str(obstacle_id)]
cmd_vel = "_cmdOverride{:.1f}".format(overrided_vel)
else:
cmd_vel = "_cmdMutex"
datadir = osp.join(osp.dirname(osp.dirname(osp.dirname(osp.abspath(__file__)))), "logs")
runner_kwargs = dict(
env= env,
policy= policy,
save_dir= osp.join(
train_cfg.runner.pretrain_dataset.scan_dir if RunnerCls == DaggerSaver else osp.join(datadir, "distill_{}_dagger".format(args.task.split("_")[0])),
datetime.now().strftime('%b%d_%H-%M-%S') + "_" + "".join([
track_header,
cmd_vel,
"_lowBorder" if env_cfg.terrain.BarrierTrack_kwargs["border_height"] < 0 else "",
"_addMassMin{:.1f}".format(env_cfg.domain_rand.added_mass_range[0]) if env_cfg.domain_rand.added_mass_range[0] > 1. else "",
"_comMean{:.2f}".format((env_cfg.domain_rand.com_range.x[0] + env_cfg.domain_rand.com_range.x[1])/2),
"_1cols" if env_cfg.terrain.num_cols == 1 else "",
"_randOrder" if env_cfg.terrain.BarrierTrack_kwargs.get("randomize_obstacle_order", False) else "",
("_noPerlinRate{:.1f}".format(
(env_cfg.terrain.BarrierTrack_kwargs["no_perlin_threshold"] - env_cfg.terrain.TerrainPerlin_kwargs["zScale"][0]) / \
(env_cfg.terrain.TerrainPerlin_kwargs["zScale"][1] - env_cfg.terrain.TerrainPerlin_kwargs["zScale"][0])
)),
("_fric{:.1f}-{:.1f}".format(*env_cfg.domain_rand.friction_range)),
"_successOnly" if success_traj_only else "",
"_aStd{:.2f}".format(action_sample_std) if (action_sample_std > 0. and RunnerCls == DaggerSaver) else "",
] + ([] if RunnerCls == DemonstrationSaver else ["_" + "_".join(args.load_run.split("_")[:2])])
),
),
rollout_storage_length= 256,
min_timesteps= 1e6,
min_episodes= 2e4 if RunnerCls == DaggerSaver else 2e-3,
use_critic_obs= True,
success_traj_only= success_traj_only,
obs_disassemble_mapping= dict(
forward_depth= "normalized_image",
),
)
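The `_noPerlinRate` tag computed above reduces to a simple fraction; with the threshold and zScale values used in the distill config (0.04 and [0.0, 0.05]):

```python
# Values from the distill config above: no_perlin_threshold and zScale.
no_perlin_threshold = 0.04
zScale = [0.0, 0.05]
rate = (no_perlin_threshold - zScale[0]) / (zScale[1] - zScale[0])
tag = "_noPerlinRate{:.1f}".format(rate)
print(tag)  # _noPerlinRate0.8
```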
if RunnerCls == DaggerSaver:
# kwargs for dagger saver
runner_kwargs.update(dict(
training_policy_logdir= osp.join(
"logs",
train_cfg.runner.experiment_name,
args.load_run,
),
teacher_act_prob= teacher_act_prob,
update_times_scale= config["algorithm"]["update_times_scale"],
action_sample_std= action_sample_std,
))
runner = RunnerCls(**runner_kwargs)
runner.collect_and_save(config= config)
if __name__ == "__main__":
args = get_args()
main(args)

View File

@ -0,0 +1,340 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
from legged_gym import LEGGED_GYM_ROOT_DIR
from collections import OrderedDict
import os
import json
import time
import numpy as np
np.float = np.float32 # compatibility patch: the np.float alias was removed in newer numpy versions
import isaacgym
from isaacgym import gymtorch, gymapi
from isaacgym.torch_utils import *
from legged_gym.envs import *
from legged_gym.utils import get_args, export_policy_as_jit, task_registry, Logger
from legged_gym.utils.helpers import update_class_from_dict
from legged_gym.utils.observation import get_obs_slice
from legged_gym.debugger import break_into_debugger
import torch
def create_recording_camera(gym, env_handle,
resolution= (1920, 1080),
h_fov= 86,
actor_to_attach= None,
transform= None, # related to actor_to_attach
):
camera_props = gymapi.CameraProperties()
camera_props.enable_tensors = True
camera_props.width = resolution[0]
camera_props.height = resolution[1]
camera_props.horizontal_fov = h_fov
camera_handle = gym.create_camera_sensor(env_handle, camera_props)
if actor_to_attach is not None:
gym.attach_camera_to_body(
camera_handle,
env_handle,
actor_to_attach,
transform,
gymapi.FOLLOW_POSITION,
)
elif transform is not None:
gym.set_camera_transform(
camera_handle,
env_handle,
transform,
)
return camera_handle
@torch.no_grad()
def play(args):
env_cfg, train_cfg = task_registry.get_cfgs(name=args.task)
with open(os.path.join("logs", train_cfg.runner.experiment_name, args.load_run, "config.json"), "r") as f:
d = json.load(f, object_pairs_hook= OrderedDict)
update_class_from_dict(env_cfg, d, strict= True)
update_class_from_dict(train_cfg, d, strict= True)
# override some parameters for testing
if env_cfg.terrain.selected == "BarrierTrack":
env_cfg.env.num_envs = min(env_cfg.env.num_envs, 1)
env_cfg.env.episode_length_s = 20
env_cfg.terrain.max_init_terrain_level = 0
env_cfg.terrain.num_rows = 1
env_cfg.terrain.num_cols = 1
else:
env_cfg.env.num_envs = min(env_cfg.env.num_envs, 1)
env_cfg.env.episode_length_s = 60
env_cfg.terrain.terrain_length = 8
env_cfg.terrain.terrain_width = 8
env_cfg.terrain.max_init_terrain_level = 0
env_cfg.terrain.num_rows = 1
env_cfg.terrain.num_cols = 1
env_cfg.terrain.curriculum = False
env_cfg.terrain.BarrierTrack_kwargs["options"] = [
# "crawl",
"climb",
# "leap",
# "tilt",
]
if "one_obstacle_per_track" in env_cfg.terrain.BarrierTrack_kwargs.keys():
env_cfg.terrain.BarrierTrack_kwargs.pop("one_obstacle_per_track")
env_cfg.terrain.BarrierTrack_kwargs["n_obstacles_per_track"] = 2
env_cfg.commands.ranges.lin_vel_x = [1.2, 1.2]
if "distill" in args.task:
env_cfg.commands.ranges.lin_vel_x = [0.0, 0.0]
env_cfg.commands.ranges.lin_vel_y = [-0., 0.]
env_cfg.commands.ranges.ang_vel_yaw = [-0., 0.]
env_cfg.domain_rand.push_robots = False
env_cfg.domain_rand.init_base_pos_range = dict(
x= [0.6, 0.6],
y= [-0.05, 0.05],
)
env_cfg.termination.termination_terms = []
env_cfg.termination.timeout_at_border = False
env_cfg.termination.timeout_at_finished = False
env_cfg.viewer.debug_viz = False # in a1_distill, setting this to True will constantly show the egocentric depth view
env_cfg.viewer.draw_volume_sample_points = False
train_cfg.runner.resume = True
train_cfg.runner_class_name = "OnPolicyRunner"
if "distill" in args.task: # to save the memory
train_cfg.algorithm.teacher_policy.sub_policy_paths = []
train_cfg.algorithm.teacher_policy_class_name = "ActorCritic"
train_cfg.algorithm.teacher_policy = dict(
num_actor_obs= 48,
num_critic_obs= 48,
num_actions= 12,
)
######### Some hacks to run ActorCriticMutex policy ##########
if False: # for a1
train_cfg.runner.policy_class_name = "ActorCriticClimbMutex"
train_cfg.policy.sub_policy_class_name = "ActorCriticRecurrent"
logs_root = "logs"
train_cfg.policy.sub_policy_paths = [ # must be in the order of obstacle ID
logs_root + "/field_a1_oracle/Jun03_00-01-38_SkillsPlaneWalking_pEnergySubsteps1e-5_rAlive2_pTorqueExceedIndicate1e+1_noCurriculum_propDelay0.04-0.05_noPerlinRate-2.0_nSubsteps4_envFreq50.0_aScale244",
logs_root + "/field_a1_oracle/Aug08_05-22-52_Skills_tilt_pEnergySubsteps1e-5_rAlive1_pPenV5e-3_pPenD5e-3_pPosY0.50_pYaw0.50_rTilt7e-1_pTorqueExceedIndicate1e-1_virtualTerrain_propDelay0.04-0.05_push/",
logs_root + "/field_a1_oracle/May21_05-25-19_Skills_crawl_pEnergy2e-5_rAlive1_pPenV6e-2_pPenD6e-2_pPosY0.2_kp50_noContactTerminate_aScale0.5/",
logs_root + "/field_a1_oracle/Jun03_00-33-08_Skills_climb_pEnergySubsteps2e-6_rAlive2_pTorqueExceedIndicate2e-1_propDelay0.04-0.05_noPerlinRate0.2_nSubsteps4_envFreq50.0_aScale0.5",
logs_root + "/field_a1_oracle/Jun04_01-03-59_Skills_leap_pEnergySubsteps2e-6_rAlive2_pPenV4e-3_pPenD4e-3_pPosY0.20_pYaw0.20_pTorqueExceedSquare1e-3_leapH0.2_propDelay0.04-0.05_noPerlinRate0.2_aScale0.5",
]
train_cfg.policy.climb_down_policy_path = logs_root + "/field_a1_oracle/Aug30_16-12-14_Skills_climb_climbDownProb0.5_propDelay0.04-0.05"
train_cfg.runner.resume = False
env_cfg.env.use_lin_vel = True
train_cfg.policy.cmd_vel_mapping = {
0: 1.0,
1: 0.5,
2: 0.8,
3: 1.2,
4: 1.5,
}
if args.task == "a1_distill":
env_cfg.env.obs_components = env_cfg.env.privileged_obs_components
env_cfg.env.privileged_obs_gets_privilege = False
# prepare environment
env, _ = task_registry.make_env(name=args.task, args=args, env_cfg=env_cfg)
env.reset()
print("terrain_levels:", env.terrain_levels.float().mean(), env.terrain_levels.float().max(), env.terrain_levels.float().min())
obs = env.get_observations()
critic_obs = env.get_privileged_observations()
# register debugging options to manually trigger disruption
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_P, "push_robot")
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_L, "press_robot")
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_J, "action_jitter")
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_Q, "exit")
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_R, "agent_full_reset")
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_U, "full_reset")
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_C, "resample_commands")
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_W, "forward")
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_S, "backward")
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_A, "leftward")
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_D, "rightward")
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_F, "leftturn")
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_G, "rightturn")
env.gym.subscribe_viewer_keyboard_event(env.viewer, isaacgym.gymapi.KEY_X, "stop")
# load policy
ppo_runner, train_cfg = task_registry.make_alg_runner(
env=env,
name=args.task,
args=args,
train_cfg=train_cfg,
save_cfg= False,
)
agent_model = ppo_runner.alg.actor_critic
policy = ppo_runner.get_inference_policy(device=env.device)
### get obs_slice to read the obs
# obs_slice = get_obs_slice(env.obs_segments, "engaging_block")
# export policy as a jit module (used to run it from C++)
if EXPORT_POLICY:
path = os.path.join(LEGGED_GYM_ROOT_DIR, 'logs', train_cfg.runner.experiment_name, 'exported', 'policies')
export_policy_as_jit(ppo_runner.alg.actor_critic, path)
print('Exported policy as jit script to: ', path)
if RECORD_FRAMES:
transform = gymapi.Transform()
transform.p = gymapi.Vec3(*env_cfg.viewer.pos)
transform.r = gymapi.Quat.from_euler_zyx(0., 0., -np.pi/2)
recording_camera = create_recording_camera(
env.gym,
env.envs[0],
transform= transform,
)
logger = Logger(env.dt)
robot_index = 0 # which robot is used for logging
joint_index = 4 # which joint is used for logging
stop_state_log = 512 # number of steps before plotting states
stop_rew_log = env.max_episode_length + 1 # number of steps before printing average episode rewards
camera_position = np.array(env_cfg.viewer.pos, dtype=np.float64)
camera_vel = np.array([0.6, 0., 0.])
camera_direction = np.array(env_cfg.viewer.lookat) - np.array(env_cfg.viewer.pos)
camera_follow_id = 0 # only effective when CAMERA_FOLLOW
img_idx = 0
if hasattr(env, "motor_strength"):
print("motor_strength:", env.motor_strength[robot_index].cpu().numpy().tolist())
print("torque_limits:", env.torque_limits)
start_time = time.time_ns()
for i in range(10*int(env.max_episode_length)):
if "obs_slice" in locals().keys():
obs_component = obs[:, obs_slice[0]].reshape(-1, *obs_slice[1])
print(obs_component[robot_index])
actions = policy(obs.detach())
teacher_actions = actions
obs, critic_obs, rews, dones, infos = env.step(actions.detach())
if RECORD_FRAMES:
filename = os.path.join(
os.path.abspath("logs/images/"),
f"{img_idx:04d}.png",
)
env.gym.write_viewer_image_to_file(env.viewer, filename)
img_idx += 1
if MOVE_CAMERA:
if CAMERA_FOLLOW:
camera_position[:] = env.root_states[camera_follow_id, :3].cpu().numpy() - camera_direction
else:
camera_position += camera_vel * env.dt
env.set_camera(camera_position, camera_position + camera_direction)
for ui_event in env.gym.query_viewer_action_events(env.viewer):
if ui_event.action == "push_robot" and ui_event.value > 0:
# manually trigger a push on the robot
env._push_robots()
if ui_event.action == "press_robot" and ui_event.value > 0:
env.root_states[:, 9] = torch_rand_float(-env.cfg.domain_rand.max_push_vel_xy, 0, (env.num_envs, 1), device=env.device).squeeze(1)
env.gym.set_actor_root_state_tensor(env.sim, gymtorch.unwrap_tensor(env.all_root_states))
if ui_event.action == "action_jitter" and ui_event.value > 0:
# simulate taking a wrong action
obs, critic_obs, rews, dones, infos = env.step(torch.tanh(torch.randn_like(actions)))
if ui_event.action == "exit" and ui_event.value > 0:
print("exit")
exit(0)
if ui_event.action == "agent_full_reset" and ui_event.value > 0:
print("agent_full_reset")
agent_model.reset()
if ui_event.action == "full_reset" and ui_event.value > 0:
print("full_reset")
agent_model.reset()
obs, _ = env.reset()
if ui_event.action == "resample_commands" and ui_event.value > 0:
print("resample_commands")
env._resample_commands(torch.arange(env.num_envs, device= env.device))
if ui_event.action == "stop" and ui_event.value > 0:
env.commands[:, :] = 0
if ui_event.action == "forward" and ui_event.value > 0:
env.commands[:, 0] = env_cfg.commands.ranges.lin_vel_x[1]
if ui_event.action == "backward" and ui_event.value > 0:
env.commands[:, 0] = env_cfg.commands.ranges.lin_vel_x[0]
if ui_event.action == "leftward" and ui_event.value > 0:
env.commands[:, 1] = env_cfg.commands.ranges.lin_vel_y[1]
if ui_event.action == "rightward" and ui_event.value > 0:
env.commands[:, 1] = env_cfg.commands.ranges.lin_vel_y[0]
if ui_event.action == "leftturn" and ui_event.value > 0:
env.commands[:, 2] = env_cfg.commands.ranges.ang_vel_yaw[1]
if ui_event.action == "rightturn" and ui_event.value > 0:
env.commands[:, 2] = env_cfg.commands.ranges.ang_vel_yaw[0]
if i < stop_state_log:
if torch.is_tensor(env.cfg.control.action_scale):
action_scale = env.cfg.control.action_scale.detach().cpu().numpy()[joint_index]
else:
action_scale = env.cfg.control.action_scale
base_roll = get_euler_xyz(env.base_quat)[0][robot_index].item()
base_pitch = get_euler_xyz(env.base_quat)[1][robot_index].item()
if base_pitch > torch.pi: base_pitch -= torch.pi * 2
logger.log_states(
{
'dof_pos_target': actions[robot_index, joint_index].item() * action_scale,
'dof_pos': (env.dof_pos - env.default_dof_pos)[robot_index, joint_index].item(),
'dof_vel': env.substep_dof_vel[robot_index, 0, joint_index].max().item(),
'dof_torque': env.substep_torques[robot_index, 0, joint_index].max().item(),
'command_x': env.commands[robot_index, 0].item(),
'command_y': env.commands[robot_index, 1].item(),
'command_yaw': env.commands[robot_index, 2].item(),
'base_vel_x': env.base_lin_vel[robot_index, 0].item(),
'base_vel_y': env.base_lin_vel[robot_index, 1].item(),
'base_vel_z': env.base_lin_vel[robot_index, 2].item(),
'base_pitch': base_pitch,
'contact_forces_z': env.contact_forces[robot_index, env.feet_indices, 2].cpu().numpy(),
'max_torques': torch.abs(env.substep_torques).max().item(),
"student_action": actions[robot_index, 2].item(),
"teacher_action": teacher_actions[robot_index, 2].item(),
"reward": rews[robot_index].item(),
}
)
elif i==stop_state_log:
logger.plot_states()
env._get_terrain_curriculum_move(torch.tensor([0], device= env.device))
if 0 < i < stop_rew_log:
if infos["episode"]:
num_episodes = torch.sum(env.reset_buf).item()
if num_episodes>0:
logger.log_rewards(infos["episode"], num_episodes)
elif i==stop_rew_log:
logger.print_rewards()
if dones.any():
agent_model.reset(dones)
print("env dones{} because of timeout".format("" if env.time_out_buf[dones].any() else " not"))
print(infos)
if i % 100 == 0:
print("frame_rate:" , 100/(time.time_ns() - start_time) * 1e9,
"command_x:", env.commands[robot_index, 0],
)
start_time = time.time_ns()
if __name__ == '__main__':
EXPORT_POLICY = False
RECORD_FRAMES = False
MOVE_CAMERA = True
CAMERA_FOLLOW = True
args = get_args()
play(args)
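`get_euler_xyz` returns angles in [0, 2*pi), so the logging loop above remaps pitch into (-pi, pi] before recording it; a standalone version of that remap:

```python
import math

# Remap an angle from [0, 2*pi) into (-pi, pi], as done for base_pitch above.
def wrap_to_pi(angle):
    if angle > math.pi:
        angle -= 2 * math.pi
    return angle

print(wrap_to_pi(3 * math.pi / 2))  # ~ -pi/2
```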

View File

@ -0,0 +1,49 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
import numpy as np
import os
from datetime import datetime
import isaacgym
from legged_gym.envs import *
from legged_gym.utils import get_args, task_registry
import torch
from legged_gym.debugger import break_into_debugger
def train(args):
env, env_cfg = task_registry.make_env(name=args.task, args=args)
ppo_runner, train_cfg = task_registry.make_alg_runner(env=env, name=args.task, args=args, env_cfg=env_cfg)
ppo_runner.learn(num_learning_iterations=train_cfg.runner.max_iterations, init_at_random_ep_len=True)
if __name__ == '__main__':
args = get_args()
train(args)

@@ -0,0 +1,56 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
import numpy as np
import os
from datetime import datetime
import isaacgym
from legged_gym.envs import *
from legged_gym.utils import get_args, export_policy_as_jit, task_registry, Logger
import torch
def test_env(args):
env_cfg, train_cfg = task_registry.get_cfgs(name=args.task)
# override some parameters for testing
env_cfg.env.num_envs = min(env_cfg.env.num_envs, 10)
# prepare environment
env, _ = task_registry.make_env(name=args.task, args=args, env_cfg=env_cfg)
for i in range(int(10*env.max_episode_length)):
actions = 0.*torch.ones(env.num_envs, env.num_actions, device=env.device)
obs, _, rew, done, info = env.step(actions)
print("Done")
if __name__ == '__main__':
args = get_args()
test_env(args)

@@ -0,0 +1,35 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
from .helpers import class_to_dict, get_load_path, get_args, export_policy_as_jit, set_seed, update_class_from_dict
from .task_registry import task_registry
from .logger import Logger
from .math import *
from .terrain.terrain import Terrain

@@ -0,0 +1,218 @@
import sys
import time
import os
import shlex
import pydoc
import inspect
import collections
color2num = dict(
gray=30,
red=31,
green=32,
yellow=33,
blue=34,
magenta=35,
cyan=36,
white=37,
crimson=38
)
def colorize(string, color, bold=False, highlight=False):
attr = []
num = color2num[color]
if highlight:
num += 10
attr.append(str(num))
if bold:
attr.append('1')
return '\x1b[%sm%s\x1b[0m' % (';'.join(attr), string)
def mkdir_p(path):
os.makedirs(path, exist_ok=True)
def log(s): # , send_telegram=False):
print(s)
sys.stdout.flush()
class SimpleMessage:
def __init__(self, msg, logger=log):
self.msg = msg
self.logger = logger
def __enter__(self):
print(self.msg)
self.tstart = time.time()
def __exit__(self, etype, *args):
maybe_exc = "" if etype is None else " (with exception)"
self.logger("done%s in %.3f seconds" %
(maybe_exc, time.time() - self.tstart))
MESSAGE_DEPTH = 0
class Message:
def __init__(self, msg):
self.msg = msg
def __enter__(self):
global MESSAGE_DEPTH # pylint: disable=W0603
print(colorize('\t' * MESSAGE_DEPTH + '=: ' + self.msg, 'magenta'))
self.tstart = time.time()
MESSAGE_DEPTH += 1
def __exit__(self, etype, *args):
global MESSAGE_DEPTH # pylint: disable=W0603
MESSAGE_DEPTH -= 1
maybe_exc = "" if etype is None else " (with exception)"
print(colorize('\t' * MESSAGE_DEPTH + "done%s in %.3f seconds" % (maybe_exc, time.time() - self.tstart), 'magenta'))
def prefix_log(prefix, logger=log):
return lambda s: logger(prefix + s)
def tee_log(file_name):
f = open(file_name, 'w+')
def logger(s):
log(s)
f.write(s)
f.write('\n')
f.flush()
return logger
def collect_args():
splitted = shlex.split(' '.join(sys.argv[1:]))
return {arg_name[2:]: arg_val
for arg_name, arg_val in zip(splitted[::2], splitted[1::2])}
def type_hint(arg_name, arg_type):
def wrap(f):
meta = getattr(f, '__tweak_type_hint_meta__', None)
if meta is None:
f.__tweak_type_hint_meta__ = meta = {}
meta[arg_name] = arg_type
return f
return wrap
def tweak(fun_or_val, identifier=None):
    if callable(fun_or_val):  # collections.Callable was removed in Python 3.10; use the builtin check
return tweakfun(fun_or_val, identifier)
return tweakval(fun_or_val, identifier)
def tweakval(val, identifier):
if not identifier:
raise ValueError('Must provide an identifier for tweakval to work')
args = collect_args()
for k, v in args.items():
stripped = k.replace('-', '_')
if stripped == identifier:
log('replacing %s in %s with %s' % (stripped, str(val), str(v)))
return type(val)(v)
return val
def tweakfun(fun, alt=None):
"""Make the arguments (or the function itself) tweakable from command line.
See tests/test_misc_console.py for examples.
NOTE: this only works for the initial launched process, since other processes
will get different argv. What this means is that tweak() calls wrapped in a function
to be invoked in a child process might not behave properly.
"""
cls = getattr(fun, 'im_class', None)
method_name = fun.__name__
if alt:
cmd_prefix = alt
elif cls:
        cmd_prefix = cls.__name__ + '.' + method_name  # cls is a class object, not a string
else:
cmd_prefix = method_name
cmd_prefix = cmd_prefix.lower()
args = collect_args()
if cmd_prefix in args:
fun = pydoc.locate(args[cmd_prefix])
    if type(fun) == type:
        # inspect.getargspec was removed in Python 3.11; getfullargspec is the drop-in replacement
        argspec = inspect.getfullargspec(fun.__init__)
    else:
        argspec = inspect.getfullargspec(fun)
# TODO handle list arguments
defaults = dict(
list(zip(argspec.args[-len(argspec.defaults or []):], argspec.defaults or [])))
replaced_kwargs = {}
cmd_prefix += '-'
if type(fun) == type:
meta = getattr(fun.__init__, '__tweak_type_hint_meta__', {})
else:
meta = getattr(fun, '__tweak_type_hint_meta__', {})
for k, v in args.items():
if k.startswith(cmd_prefix):
stripped = k[len(cmd_prefix):].replace('-', '_')
if stripped in meta:
log('replacing %s in %s with %s' % (stripped, str(fun), str(v)))
replaced_kwargs[stripped] = meta[stripped](v)
elif stripped not in argspec.args:
raise ValueError(
'%s is not an explicit parameter of %s' % (stripped, str(fun)))
elif stripped not in defaults:
raise ValueError(
'%s does not have a default value in method %s' % (stripped, str(fun)))
elif defaults[stripped] is None:
raise ValueError(
'Cannot infer type of %s in method %s from None value' % (stripped, str(fun)))
else:
log('replacing %s in %s with %s' % (stripped, str(fun), str(v)))
# TODO more proper conversions
replaced_kwargs[stripped] = type(defaults[stripped])(v)
def tweaked(*args, **kwargs):
all_kw = dict(list(zip(argspec[0], args)) +
list(kwargs.items()) + list(replaced_kwargs.items()))
return fun(**all_kw)
return tweaked
def query_yes_no(question, default="yes"):
    """Ask a yes/no question via input() and return the answer.
"question" is a string that is presented to the user.
"default" is the presumed answer if the user just hits <Enter>.
It must be "yes" (the default), "no" or None (meaning
an answer is required of the user).
The "answer" return value is True for "yes" or False for "no".
"""
valid = {"yes": True, "y": True, "ye": True,
"no": False, "n": False}
if default is None:
prompt = " [y/n] "
elif default == "yes":
prompt = " [Y/n] "
elif default == "no":
prompt = " [y/N] "
else:
raise ValueError("invalid default answer: '%s'" % default)
while True:
sys.stdout.write(question + prompt)
choice = input().lower()
if default is not None and choice == '':
return valid[default]
elif choice in valid:
return valid[choice]
else:
sys.stdout.write("Please respond with 'yes' or 'no' "
"(or 'y' or 'n').\n")
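The ANSI escape logic in `colorize` above can be exercised standalone; this sketch reproduces the helper so it runs without the rest of the module:

```python
# Standalone reproduction of the colorize() helper above, for illustration only.
color2num = dict(gray=30, red=31, green=32, yellow=33, blue=34,
                 magenta=35, cyan=36, white=37, crimson=38)

def colorize(string, color, bold=False, highlight=False):
    # Build the ANSI SGR attribute list: base foreground color,
    # +10 turns it into a background (highlight) color, and "1" adds bold.
    attr = []
    num = color2num[color]
    if highlight:
        num += 10
    attr.append(str(num))
    if bold:
        attr.append('1')
    return '\x1b[%sm%s\x1b[0m' % (';'.join(attr), string)

print(repr(colorize("done", "green", bold=True)))  # '\x1b[32;1mdone\x1b[0m'
```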

@@ -0,0 +1,241 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
import os
import copy
import torch
import numpy as np
import random
from isaacgym import gymapi
from isaacgym import gymutil
from legged_gym import LEGGED_GYM_ROOT_DIR, LEGGED_GYM_ENVS_DIR
def is_primitive_type(obj):
return not hasattr(obj, '__dict__')
def class_to_dict(obj) -> dict:
if not hasattr(obj,"__dict__") or isinstance(obj, dict):
return obj
result = {}
for key in dir(obj):
if key.startswith("_"):
continue
element = []
val = getattr(obj, key)
if isinstance(val, list):
for item in val:
element.append(class_to_dict(item))
else:
element = class_to_dict(val)
result[key] = element
return result
def update_class_from_dict(obj, dict_, strict=False):
    """ If strict, attributes of obj that are not in dict_ will be removed """
    if strict:  # only prune attributes when strict mode is requested
        attr_names = [n for n in obj.__dict__.keys() if not (n.startswith("__") and n.endswith("__"))]
        for attr_name in attr_names:
            if attr_name not in dict_:
                delattr(obj, attr_name)
for key, val in dict_.items():
attr = getattr(obj, key, None)
if attr is None or is_primitive_type(attr):
if isinstance(val, dict):
setattr(obj, key, copy.deepcopy(val))
update_class_from_dict(getattr(obj, key), val)
else:
setattr(obj, key, val)
else:
update_class_from_dict(attr, val)
return
def set_seed(seed):
if seed == -1:
seed = np.random.randint(0, 10000)
print("Setting seed: {}".format(seed))
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
def parse_sim_params(args, cfg):
# code from Isaac Gym Preview 2
# initialize sim params
sim_params = gymapi.SimParams()
# set some values from args
if args.physics_engine == gymapi.SIM_FLEX:
if args.device != "cpu":
print("WARNING: Using Flex with GPU instead of PHYSX!")
elif args.physics_engine == gymapi.SIM_PHYSX:
sim_params.physx.use_gpu = args.use_gpu
sim_params.physx.num_subscenes = args.subscenes
sim_params.use_gpu_pipeline = args.use_gpu_pipeline
# if sim options are provided in cfg, parse them and update/override above:
if "sim" in cfg:
gymutil.parse_sim_config(cfg["sim"], sim_params)
# Override num_threads if passed on the command line
if args.physics_engine == gymapi.SIM_PHYSX and args.num_threads > 0:
sim_params.physx.num_threads = args.num_threads
return sim_params
def get_load_path(root, load_run=-1, checkpoint=-1):
if load_run==-1:
try:
runs = os.listdir(root)
#TODO sort by date to handle change of month
runs.sort()
if 'exported' in runs: runs.remove('exported')
last_run = os.path.join(root, runs[-1])
        except (FileNotFoundError, IndexError):  # missing or empty log directory
raise ValueError("No runs in this directory: " + root)
load_run = last_run
elif os.path.isabs(load_run):
print("Loading load_run as absolute path:", load_run)
else:
load_run = os.path.join(root, load_run)
if checkpoint==-1:
models = [file for file in os.listdir(load_run) if 'model' in file]
models.sort(key=lambda m: '{0:0>15}'.format(m))
model = models[-1]
else:
model = "model_{}.pt".format(checkpoint)
load_path = os.path.join(load_run, model)
return load_path
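The zero-padded sort key in `get_load_path` above is what makes `model_9.pt` order before `model_100.pt`; a minimal standalone sketch (with made-up checkpoint names):

```python
checkpoints = ["model_100.pt", "model_9.pt", "model_20.pt"]

# Plain lexicographic sort would put "model_100.pt" before "model_20.pt".
# Left-padding every name with '0' to a fixed width of 15 makes shorter
# (numerically smaller) names compare first, yielding numeric order.
checkpoints.sort(key=lambda m: '{0:0>15}'.format(m))

print(checkpoints)  # ['model_9.pt', 'model_20.pt', 'model_100.pt']
```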
def update_cfg_from_args(env_cfg, cfg_train, args):
# seed
if env_cfg is not None:
# num envs
if args.num_envs is not None:
env_cfg.env.num_envs = args.num_envs
if cfg_train is not None:
if args.seed is not None:
cfg_train.seed = args.seed
# alg runner parameters
if args.max_iterations is not None:
cfg_train.runner.max_iterations = args.max_iterations
if args.resume:
cfg_train.runner.resume = args.resume
if args.experiment_name is not None:
cfg_train.runner.experiment_name = args.experiment_name
if args.run_name is not None:
cfg_train.runner.run_name = args.run_name
if args.load_run is not None:
cfg_train.runner.load_run = args.load_run
if args.checkpoint is not None:
cfg_train.runner.checkpoint = args.checkpoint
return env_cfg, cfg_train
def get_args():
custom_parameters = [
        {"name": "--task", "type": str, "default": "anymal_c_flat", "help": "Name of the task to run. Overrides config file if provided."},
{"name": "--resume", "action": "store_true", "default": False, "help": "Resume training from a checkpoint"},
{"name": "--experiment_name", "type": str, "help": "Name of the experiment to run or load. Overrides config file if provided."},
{"name": "--run_name", "type": str, "help": "Name of the run. Overrides config file if provided."},
{"name": "--load_run", "type": str, "help": "Name of the run to load when resume=True. If -1: will load the last run. Overrides config file if provided."},
{"name": "--checkpoint", "type": int, "help": "Saved model checkpoint number. If -1: will load the last checkpoint. Overrides config file if provided."},
{"name": "--headless", "action": "store_true", "default": False, "help": "Force display off at all times"},
{"name": "--horovod", "action": "store_true", "default": False, "help": "Use horovod for multi-gpu training"},
{"name": "--rl_device", "type": str, "default": "cuda:0", "help": 'Device used by the RL algorithm, (cpu, gpu, cuda:0, cuda:1 etc..)'},
{"name": "--num_envs", "type": int, "help": "Number of environments to create. Overrides config file if provided."},
{"name": "--seed", "type": int, "help": "Random seed. Overrides config file if provided."},
{"name": "--max_iterations", "type": int, "help": "Maximum number of training iterations. Overrides config file if provided."},
]
# parse arguments
args = gymutil.parse_arguments(
description="RL Policy",
custom_parameters=custom_parameters)
    # name alignment
args.sim_device_id = args.compute_device_id
args.sim_device = args.sim_device_type
if args.sim_device=='cuda':
args.sim_device += f":{args.sim_device_id}"
return args
def export_policy_as_jit(actor_critic, path):
if hasattr(actor_critic, 'memory_a'):
# assumes LSTM: TODO add GRU
exporter = PolicyExporterLSTM(actor_critic)
exporter.export(path)
else:
os.makedirs(path, exist_ok=True)
path = os.path.join(path, 'policy_1.pt')
model = copy.deepcopy(actor_critic.actor).to('cpu')
traced_script_module = torch.jit.script(model)
traced_script_module.save(path)
class PolicyExporterLSTM(torch.nn.Module):
def __init__(self, actor_critic):
super().__init__()
self.actor = copy.deepcopy(actor_critic.actor)
self.is_recurrent = actor_critic.is_recurrent
self.memory = copy.deepcopy(actor_critic.memory_a.rnn)
self.memory.cpu()
        self.register_buffer('hidden_state', torch.zeros(self.memory.num_layers, 1, self.memory.hidden_size))
        self.register_buffer('cell_state', torch.zeros(self.memory.num_layers, 1, self.memory.hidden_size))
def forward(self, x):
out, (h, c) = self.memory(x.unsqueeze(0), (self.hidden_state, self.cell_state))
self.hidden_state[:] = h
self.cell_state[:] = c
return self.actor(out.squeeze(0))
@torch.jit.export
def reset_memory(self):
self.hidden_state[:] = 0.
self.cell_state[:] = 0.
def export(self, path):
os.makedirs(path, exist_ok=True)
path = os.path.join(path, 'policy_lstm_1.pt')
self.to('cpu')
traced_script_module = torch.jit.script(self)
traced_script_module.save(path)
def merge_dict(this: dict, other: dict):
    """ Merge two dicts. If a key exists in both, the value from `other` takes priority.
    NOTE: equivalent to the dict-union operator (this | other) available in Python >= 3.9.
    """
output = this.copy()
output.update(other)
return output
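A quick usage sketch of `merge_dict` above, showing that values from `other` win on key collisions (the config keys here are illustrative, not from the repo):

```python
def merge_dict(this: dict, other: dict):
    # Copy `this`, then let `other` overwrite any colliding keys.
    output = this.copy()
    output.update(other)
    return output

defaults = {"lr": 1e-3, "gamma": 0.99}
overrides = {"lr": 3e-4}
merged = merge_dict(defaults, overrides)
print(merged)  # {'lr': 0.0003, 'gamma': 0.99}
```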

@@ -0,0 +1,154 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
import matplotlib.pyplot as plt
import numpy as np
from collections import defaultdict
from multiprocessing import Process, Value
class Logger:
def __init__(self, dt):
self.state_log = defaultdict(list)
self.rew_log = defaultdict(list)
self.dt = dt
self.num_episodes = 0
self.plot_process = None
def log_state(self, key, value):
self.state_log[key].append(value)
    def log_states(self, state_dict):  # avoid shadowing the builtin `dict`
        for key, value in state_dict.items():
            self.log_state(key, value)
    def log_rewards(self, rew_dict, num_episodes):  # avoid shadowing the builtin `dict`
        for key, value in rew_dict.items():
            if 'rew' in key:
                self.rew_log[key].append(value.item() * num_episodes)
        self.num_episodes += num_episodes
def reset(self):
self.state_log.clear()
self.rew_log.clear()
def plot_states(self):
self.plot_process = Process(target=self._plot)
self.plot_process.start()
def _plot(self):
nb_rows = 4
nb_cols = 3
fig, axs = plt.subplots(nb_rows, nb_cols)
for key, value in self.state_log.items():
time = np.linspace(0, len(value)*self.dt, len(value))
break
log= self.state_log
# plot joint targets and measured positions
a = axs[1, 0]
if log["dof_pos"]: a.plot(time, log["dof_pos"], label='measured')
if log["dof_pos_target"]: a.plot(time, log["dof_pos_target"], label='target')
a.set(xlabel='time [s]', ylabel='Position [rad]', title='DOF Position')
a.legend()
# plot joint velocity
a = axs[1, 1]
if log["dof_vel"]: a.plot(time, log["dof_vel"], label='measured')
if log["dof_vel_target"]: a.plot(time, log["dof_vel_target"], label='target')
a.set(xlabel='time [s]', ylabel='Velocity [rad/s]', title='Joint Velocity')
a.legend()
# plot base vel x
a = axs[0, 0]
if log["base_vel_x"]: a.plot(time, log["base_vel_x"], label='measured')
if log["command_x"]: a.plot(time, log["command_x"], label='commanded')
a.set(xlabel='time [s]', ylabel='base lin vel [m/s]', title='Base velocity x')
a.legend()
# plot base vel y
a = axs[0, 1]
if log["base_vel_y"]: a.plot(time, log["base_vel_y"], label='measured')
if log["command_y"]: a.plot(time, log["command_y"], label='commanded')
a.set(xlabel='time [s]', ylabel='base lin vel [m/s]', title='Base velocity y')
a.legend()
# plot base pitch
a = axs[0, 2]
if log["base_pitch"]: a.plot(time, log["base_pitch"], label='measured')
# if log["command_yaw"]: a.plot(time, log["command_yaw"], label='commanded')
a.set(xlabel='time [s]', ylabel='base ang [rad]', title='Base pitch')
a.legend()
# plot base vel z
a = axs[1, 2]
if log["base_vel_z"]: a.plot(time, log["base_vel_z"], label='measured')
a.set(xlabel='time [s]', ylabel='base lin vel [m/s]', title='Base velocity z')
a.legend()
# plot contact forces
a = axs[2, 0]
if log["contact_forces_z"]:
forces = np.array(log["contact_forces_z"])
for i in range(forces.shape[1]):
a.plot(time, forces[:, i], label=f'force {i}')
a.set(xlabel='time [s]', ylabel='Forces z [N]', title='Vertical Contact forces')
a.legend()
# plot torque/vel curves
a = axs[2, 1]
if log["dof_vel"]!=[] and log["dof_torque"]!=[]: a.plot(log["dof_vel"], log["dof_torque"], 'x', label='measured')
a.set(xlabel='Joint vel [rad/s]', ylabel='Joint Torque [Nm]', title='Torque/velocity curves')
a.legend()
# plot torques
a = axs[2, 2]
if log["dof_torque"]!=[]: a.plot(time, log["dof_torque"], label='measured')
a.set(xlabel='time [s]', ylabel='Joint Torque [Nm]', title='Torque')
a.legend()
# plot rewards
a = axs[3, 0]
if log["max_torques"]: a.plot(time, log["max_torques"], label='max_torques')
a.set(xlabel='time [s]', ylabel='max_torques [Nm]', title='max_torques')
a.legend()
# plot customed data
a = axs[3, 1]
if log["student_action"]:
a.plot(time, log["student_action"], label='s')
a.plot(time, log["teacher_action"], label='t')
a.legend()
a.set(xlabel='time [s]', ylabel='tanh', title='student/teacher action')
a = axs[3, 2]
if log["teacher_action"]:
a.plot(time, [i-j for i, j in zip(log["student_action"], log["teacher_action"])], label='action difference')
a.set(xlabel='time [s]', ylabel='tanh', title='action difference')
a.legend()
plt.show()
def print_rewards(self):
print("Average rewards per second:")
for key, values in self.rew_log.items():
mean = np.sum(np.array(values)) / self.num_episodes
print(f" - {key}: {mean}")
print(f"Total number of episodes: {self.num_episodes}")
def __del__(self):
if self.plot_process is not None:
self.plot_process.kill()

@@ -0,0 +1,56 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
import torch
from torch import Tensor
import numpy as np
from isaacgym.torch_utils import quat_apply, normalize
from typing import Tuple
# @ torch.jit.script
def quat_apply_yaw(quat, vec):
quat_yaw = quat.clone().view(-1, 4)
quat_yaw[:, :2] = 0.
quat_yaw = normalize(quat_yaw)
return quat_apply(quat_yaw, vec)
# @ torch.jit.script
def wrap_to_pi(angles):
angles %= 2*np.pi
angles -= 2*np.pi * (angles > np.pi)
return angles
# @ torch.jit.script
def torch_rand_sqrt_float(lower, upper, shape, device):
# type: (float, float, Tuple[int, int], str) -> Tensor
r = 2*torch.rand(*shape, device=device) - 1
r = torch.where(r<0., -torch.sqrt(-r), torch.sqrt(r))
r = (r + 1.) / 2.
return (upper - lower) * r + lower
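The wrapping logic in `wrap_to_pi` above can be checked standalone; this sketch swaps the torch tensors for NumPy arrays (same arithmetic, no Isaac Gym needed) so it runs anywhere:

```python
import numpy as np

def wrap_to_pi(angles):
    # Map any angle into (-pi, pi]: first reduce to [0, 2*pi),
    # then shift the (pi, 2*pi) portion down by one full turn.
    angles = angles % (2 * np.pi)
    angles = angles - 2 * np.pi * (angles > np.pi)
    return angles

a = np.array([0.5, 3 * np.pi / 2, -np.pi / 2])
print(wrap_to_pi(a))  # 0.5 stays put; both 3*pi/2 and -pi/2 map to -pi/2
```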

@@ -0,0 +1,17 @@
from collections import OrderedDict
import numpy as np
def get_obs_slice(segments: OrderedDict, component_name: str):
""" Get the slice from segments and name. Return the slice and component shape """
obs_start = obs_end = 0
component_shape = None
for k, v in segments.items():
obs_start = obs_end
obs_end = obs_start + np.prod(v)
if k == component_name:
component_shape = v # tuple
break
    assert component_shape is not None, "No component ({}) found in the given segments {}".format(component_name, list(segments.keys()))
return slice(obs_start, obs_end), component_shape
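For example, with the observation layout below, `get_obs_slice` resolves a named component to its flat index range (the segment names and shapes here are made up for illustration; the function is reproduced so the sketch is self-contained):

```python
from collections import OrderedDict
import numpy as np

def get_obs_slice(segments, component_name):
    # Walk the ordered segments, accumulating the flat offset until the
    # requested component is found; return its slice and original shape.
    obs_start = obs_end = 0
    component_shape = None
    for k, v in segments.items():
        obs_start = obs_end
        obs_end = obs_start + int(np.prod(v))
        if k == component_name:
            component_shape = v
            break
    assert component_shape is not None
    return slice(obs_start, obs_end), component_shape

segments = OrderedDict([("base_ang_vel", (3,)), ("dof_pos", (12,)), ("heights", (11, 17))])
sl, shape = get_obs_slice(segments, "heights")
print(sl, shape)  # offsets 3 and 12 precede it, so the slice starts at 15
```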

@@ -0,0 +1,179 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
import os
from datetime import datetime
from typing import Tuple
import torch
import numpy as np
import json
import shutil
from rsl_rl.env import VecEnv
from rsl_rl.runners import build_runner, OnPolicyRunner
from legged_gym import LEGGED_GYM_ROOT_DIR, LEGGED_GYM_ENVS_DIR
from .helpers import get_args, update_cfg_from_args, class_to_dict, get_load_path, set_seed, parse_sim_params
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO
class TaskRegistry():
def __init__(self):
self.task_classes = {}
self.env_cfgs = {}
self.train_cfgs = {}
def register(self, name: str, task_class: VecEnv, env_cfg: LeggedRobotCfg, train_cfg: LeggedRobotCfgPPO):
self.task_classes[name] = task_class
self.env_cfgs[name] = env_cfg
self.train_cfgs[name] = train_cfg
def get_task_class(self, name: str) -> VecEnv:
return self.task_classes[name]
def get_cfgs(self, name) -> Tuple[LeggedRobotCfg, LeggedRobotCfgPPO]:
train_cfg = self.train_cfgs[name]
env_cfg = self.env_cfgs[name]
# copy seed
env_cfg.seed = train_cfg.seed
return env_cfg, train_cfg
def make_env(self, name, args=None, env_cfg=None) -> Tuple[VecEnv, LeggedRobotCfg]:
""" Creates an environment either from a registered namme or from the provided config file.
Args:
name (string): Name of a registered env.
args (Args, optional): Isaac Gym comand line arguments. If None get_args() will be called. Defaults to None.
env_cfg (Dict, optional): Environment config file used to override the registered config. Defaults to None.
Raises:
ValueError: Error if no registered env corresponds to 'name'
Returns:
isaacgym.VecTaskPython: The created environment
Dict: the corresponding config file
"""
# if no args passed get command line arguments
if args is None:
args = get_args()
# check if there is a registered env with that name
if name in self.task_classes:
task_class = self.get_task_class(name)
else:
raise ValueError(f"Task with name: {name} was not registered")
if env_cfg is None:
# load config files
env_cfg, _ = self.get_cfgs(name)
# override cfg from args (if specified)
env_cfg, _ = update_cfg_from_args(env_cfg, None, args)
set_seed(env_cfg.seed)
# parse sim params (convert to dict first)
sim_params = {"sim": class_to_dict(env_cfg.sim)}
sim_params = parse_sim_params(args, sim_params)
env = task_class( cfg=env_cfg,
sim_params=sim_params,
physics_engine=args.physics_engine,
sim_device=args.sim_device,
headless=args.headless)
return env, env_cfg
def make_alg_runner(self, env, name=None, args=None, train_cfg=None, log_root="default", env_cfg=None, save_cfg= True) -> Tuple[OnPolicyRunner, LeggedRobotCfgPPO]:
""" Creates the training algorithm either from a registered namme or from the provided config file.
Args:
env (isaacgym.VecTaskPython): The environment to train (TODO: remove from within the algorithm)
name (string, optional): Name of a registered env. If None, the config file will be used instead. Defaults to None.
            args (Args, optional): Isaac Gym command line arguments. If None, get_args() will be called. Defaults to None.
train_cfg (Dict, optional): Training config file. If None 'name' will be used to get the config file. Defaults to None.
log_root (str, optional): Logging directory for Tensorboard. Set to 'None' to avoid logging (at test time for example).
Logs will be saved in <log_root>/<date_time>_<run_name>. Defaults to "default"=<path_to_LEGGED_GYM>/logs/<experiment_name>.
            env_cfg (Dict, optional): Environment config; if provided, it is merged into the saved config.json for logging. Defaults to None.
Raises:
            ValueError: Error if neither 'name' nor 'train_cfg' is provided
            Warning: If both 'name' and 'train_cfg' are provided, 'name' is ignored
Returns:
PPO: The created algorithm
Dict: the corresponding config file
"""
# if no args passed get command line arguments
if args is None:
args = get_args()
# if config files are passed use them, otherwise load from the name
if train_cfg is None:
if name is None:
raise ValueError("Either 'name' or 'train_cfg' must be not None")
# load config files
_, train_cfg = self.get_cfgs(name)
else:
if name is not None:
print(f"'train_cfg' provided -> Ignoring 'name={name}'")
# override cfg from args (if specified)
_, train_cfg = update_cfg_from_args(None, train_cfg, args)
if log_root=="default":
log_root = os.path.join(LEGGED_GYM_ROOT_DIR, 'logs', train_cfg.runner.experiment_name)
log_dir = os.path.join(log_root, datetime.now().strftime('%b%d_%H-%M-%S') + '_' + train_cfg.runner.run_name)
elif log_root is None:
log_dir = None
else:
log_dir = os.path.join(log_root, datetime.now().strftime('%b%d_%H-%M-%S') + '_' + train_cfg.runner.run_name)
log_cfg_dict = dict()
train_cfg_dict = class_to_dict(train_cfg)
log_cfg_dict.update(train_cfg_dict)
        if env_cfg is not None:
env_cfg_dict = class_to_dict(env_cfg)
log_cfg_dict.update(env_cfg_dict)
runner = build_runner(
getattr(train_cfg, "runner_class_name", "OnPolicyRunner"),
env,
train_cfg_dict,
log_dir,
device=args.rl_device,
)
        if save_cfg and log_dir is not None:
            os.makedirs(log_dir, exist_ok=True)
            with open(os.path.join(log_dir, "config.json"), "w") as f:
                json.dump(log_cfg_dict, f, indent=4)
resume = train_cfg.runner.resume
if resume:
# load previously trained model
resume_path = get_load_path(log_root, load_run=train_cfg.runner.load_run, checkpoint=train_cfg.runner.checkpoint)
print(f"Loading model from: {resume_path}")
            if save_cfg and log_dir is not None:
shutil.copyfile(resume_path, os.path.join(log_dir, os.path.basename(resume_path)))
runner.load(resume_path)
return runner, train_cfg
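The log-directory naming above, `<log_root>/<date_time>_<run_name>`, can be sketched with fixed values (the `rough_a1` experiment name and `test` run name here are hypothetical stand-ins for `train_cfg.runner` fields):

```python
import os
from datetime import datetime

# Hypothetical stand-ins for train_cfg.runner.experiment_name / run_name.
log_root = os.path.join("logs", "rough_a1")
run_name = "test"

# Same strftime pattern as make_alg_runner above.
stamp = datetime(2023, 9, 10, 9, 44, 56).strftime('%b%d_%H-%M-%S')
log_dir = os.path.join(log_root, stamp + '_' + run_name)
```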
# make global task registry
task_registry = TaskRegistry()
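In isolation, the registry pattern (including the seed copy in `get_cfgs`) can be sketched with dummy stand-in classes rather than the real legged_gym task and config types:

```python
# Dummy stand-ins for the real task/config classes.
class DummyEnvCfg:
    seed = None

class DummyTrainCfg:
    seed = 42

class DummyTask:
    pass

class MiniRegistry:
    """Minimal sketch of TaskRegistry: three parallel dicts keyed by task name."""
    def __init__(self):
        self.task_classes, self.env_cfgs, self.train_cfgs = {}, {}, {}

    def register(self, name, task_class, env_cfg, train_cfg):
        self.task_classes[name] = task_class
        self.env_cfgs[name] = env_cfg
        self.train_cfgs[name] = train_cfg

    def get_cfgs(self, name):
        env_cfg, train_cfg = self.env_cfgs[name], self.train_cfgs[name]
        env_cfg.seed = train_cfg.seed  # copy seed, as in TaskRegistry.get_cfgs
        return env_cfg, train_cfg

registry = MiniRegistry()
registry.register("a1_demo", DummyTask, DummyEnvCfg(), DummyTrainCfg())
env_cfg, train_cfg = registry.get_cfgs("a1_demo")
```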


@ -0,0 +1,13 @@
import importlib
terrain_registry = dict(
Terrain= "legged_gym.utils.terrain.terrain:Terrain",
BarrierTrack= "legged_gym.utils.terrain.barrier_track:BarrierTrack",
TerrainPerlin= "legged_gym.utils.terrain.perlin:TerrainPerlin",
)
def get_terrain_cls(terrain_cls):
entry_point = terrain_registry[terrain_cls]
module, class_name = entry_point.rsplit(":", 1)
module = importlib.import_module(module)
return getattr(module, class_name)
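The `"module.path:ClassName"` entry-point convention used by `get_terrain_cls` can be exercised with any importable class; here a stdlib class stands in for a terrain class:

```python
import importlib

def resolve_entry_point(entry_point):
    # Same split-and-import logic as get_terrain_cls above.
    module_path, class_name = entry_point.rsplit(":", 1)
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# A stdlib class standing in for e.g. "legged_gym.utils.terrain.terrain:Terrain".
cls = resolve_entry_point("collections:OrderedDict")
```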

File diff suppressed because it is too large


@ -0,0 +1,113 @@
import numpy as np
from numpy.random import choice
from scipy import interpolate
from isaacgym import terrain_utils, gymapi
import matplotlib.pyplot as plt
class TerrainPerlin:
def __init__(self, cfg, num_envs):
self.cfg = cfg
self.env_length = cfg.terrain_length
self.env_width = cfg.terrain_width
        self.xSize = cfg.terrain_length * cfg.num_rows  # = cfg.horizontal_scale * self.tot_rows
        self.ySize = cfg.terrain_width * cfg.num_cols  # = cfg.horizontal_scale * self.tot_cols
        self.tot_rows = int(self.xSize / cfg.horizontal_scale)
        self.tot_cols = int(self.ySize / cfg.horizontal_scale)
assert(self.xSize == cfg.horizontal_scale * self.tot_rows and self.ySize == cfg.horizontal_scale * self.tot_cols)
self.heightsamples_float = self.generate_fractal_noise_2d(self.xSize, self.ySize, self.tot_rows, self.tot_cols, **cfg.TerrainPerlin_kwargs)
# self.heightsamples_float[self.tot_cols//2 - 100:, :] += 100000
# self.heightsamples_float[self.tot_cols//2 - 40: self.tot_cols//2 + 40, :] = np.mean(self.heightsamples_float)
self.heightsamples = (self.heightsamples_float * (1 / cfg.vertical_scale)).astype(np.int16)
print("Terrain heightsamples shape: ", self.heightsamples.shape)
print("Terrain heightsamples stat: ", *(np.array([np.min(self.heightsamples), np.max(self.heightsamples), np.mean(self.heightsamples), np.std(self.heightsamples), np.median(self.heightsamples)]) * cfg.vertical_scale))
# self.heightsamples = np.zeros((800, 800)).astype(np.int16)
self.vertices, self.triangles = terrain_utils.convert_heightfield_to_trimesh( self.heightsamples,
cfg.horizontal_scale,
cfg.vertical_scale,
cfg.slope_treshold)
@staticmethod
def generate_perlin_noise_2d(shape, res):
def f(t):
return 6*t**5 - 15*t**4 + 10*t**3
delta = (res[0] / shape[0], res[1] / shape[1])
d = (shape[0] // res[0], shape[1] // res[1])
grid = np.mgrid[0:res[0]:delta[0],0:res[1]:delta[1]].transpose(1, 2, 0) % 1
# Gradients
angles = 2*np.pi*np.random.rand(res[0]+1, res[1]+1)
gradients = np.dstack((np.cos(angles), np.sin(angles)))
g00 = gradients[0:-1,0:-1].repeat(d[0], 0).repeat(d[1], 1)
g10 = gradients[1:,0:-1].repeat(d[0], 0).repeat(d[1], 1)
g01 = gradients[0:-1,1:].repeat(d[0], 0).repeat(d[1], 1)
g11 = gradients[1:,1:].repeat(d[0], 0).repeat(d[1], 1)
# Ramps
n00 = np.sum(grid * g00, 2)
n10 = np.sum(np.dstack((grid[:,:,0]-1, grid[:,:,1])) * g10, 2)
n01 = np.sum(np.dstack((grid[:,:,0], grid[:,:,1]-1)) * g01, 2)
n11 = np.sum(np.dstack((grid[:,:,0]-1, grid[:,:,1]-1)) * g11, 2)
# Interpolation
t = f(grid)
n0 = n00*(1-t[:,:,0]) + t[:,:,0]*n10
n1 = n01*(1-t[:,:,0]) + t[:,:,0]*n11
return np.sqrt(2)*((1-t[:,:,1])*n0 + t[:,:,1]*n1) * 0.5 + 0.5
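The quintic `f(t) = 6t^5 - 15t^4 + 10t^3` used above is the standard Perlin fade curve; a quick check of the boundary properties that make the blend seam-free:

```python
# Perlin fade curve: maps [0, 1] -> [0, 1] with zero slope at both ends,
# so neighbouring lattice cells blend without visible creases.
def fade(t):
    return 6 * t**5 - 15 * t**4 + 10 * t**3

def fade_deriv(t):
    return 30 * t**4 - 60 * t**3 + 30 * t**2

endpoints = (fade(0.0), fade(0.5), fade(1.0))
slopes = (fade_deriv(0.0), fade_deriv(1.0))
```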
@staticmethod
def generate_fractal_noise_2d(xSize=20, ySize=20, xSamples=1600, ySamples=1600, \
frequency=10, fractalOctaves=2, fractalLacunarity = 2.0, fractalGain=0.25, zScale = 0.23):
xScale = int(frequency * xSize)
yScale = int(frequency * ySize)
amplitude = 1
shape = (xSamples, ySamples)
noise = np.zeros(shape)
for _ in range(fractalOctaves):
noise += amplitude * TerrainPerlin.generate_perlin_noise_2d((xSamples, ySamples), (xScale, yScale)) * zScale
amplitude *= fractalGain
xScale, yScale = int(fractalLacunarity * xScale), int(fractalLacunarity * yScale)
return noise
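Each octave above multiplies the amplitude by `fractalGain` and the lattice scale by `fractalLacunarity`; with the defaults this gives the following progression:

```python
# (amplitude, lattice-scale) per octave with the default fractalGain=0.25
# and fractalLacunarity=2.0 from generate_fractal_noise_2d above.
def octave_params(base_scale, octaves, gain=0.25, lacunarity=2.0):
    amplitude, scale, out = 1.0, base_scale, []
    for _ in range(octaves):
        out.append((amplitude, scale))
        amplitude *= gain
        scale = int(lacunarity * scale)
    return out

# base_scale = frequency * xSize = 10 * 20 with the defaults above.
params = octave_params(base_scale=200, octaves=2)
```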
def add_trimesh_to_sim(self, trimesh, trimesh_origin):
tm_params = gymapi.TriangleMeshParams()
tm_params.nb_vertices = trimesh[0].shape[0]
tm_params.nb_triangles = trimesh[1].shape[0]
tm_params.transform.p.x = trimesh_origin[0]
tm_params.transform.p.y = trimesh_origin[1]
tm_params.transform.p.z = 0.
tm_params.static_friction = self.cfg.static_friction
tm_params.dynamic_friction = self.cfg.dynamic_friction
tm_params.restitution = self.cfg.restitution
self.gym.add_triangle_mesh(
self.sim,
trimesh[0].flatten(order= "C"),
trimesh[1].flatten(order= "C"),
tm_params,
)
def add_terrain_to_sim(self, gym, sim, device= "cpu"):
""" deploy the terrain mesh to the simulator
"""
self.gym = gym
self.sim = sim
self.device = device
self.add_trimesh_to_sim(
(self.vertices, self.triangles),
np.zeros(3,),
)
self.env_origins = np.zeros((self.cfg.num_rows, self.cfg.num_cols, 3))
for row_idx in range(self.cfg.num_rows):
for col_idx in range(self.cfg.num_cols):
origin_x = (row_idx + 0.5) * self.env_length
origin_y = (col_idx + 0.5) * self.env_width
self.env_origins[row_idx, col_idx] = [
origin_x,
origin_y,
self.heightsamples[
int(origin_x / self.cfg.horizontal_scale),
int(origin_y / self.cfg.horizontal_scale),
] * self.cfg.vertical_scale,
]
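The origin-height lookup above divides world coordinates by `horizontal_scale` to get integer heightfield indices, then rescales the stored int16 by `vertical_scale`; a toy version (the scales and bump location here are made-up illustration values):

```python
import numpy as np

# Toy env-origin height lookup mirroring add_terrain_to_sim above.
horizontal_scale, vertical_scale = 0.25, 0.005
heightsamples = np.zeros((100, 100), dtype=np.int16)
heightsamples[20, 20] = 200  # 200 * 0.005 = 1.0 m bump

origin_x, origin_y = 5.0, 5.0  # meters
ix, iy = int(origin_x / horizontal_scale), int(origin_y / horizontal_scale)
origin_z = heightsamples[ix, iy] * vertical_scale
```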


@ -0,0 +1,187 @@
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright (c) 2021 ETH Zurich, Nikita Rudin
import numpy as np
from numpy.random import choice
from scipy import interpolate
from isaacgym import terrain_utils
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg
class Terrain:
def __init__(self, cfg: LeggedRobotCfg.terrain, num_robots) -> None:
self.cfg = cfg
self.num_robots = num_robots
self.type = cfg.mesh_type
if self.type in ["none", 'plane']:
return
self.env_length = cfg.terrain_length
self.env_width = cfg.terrain_width
self.proportions = [np.sum(cfg.terrain_proportions[:i+1]) for i in range(len(cfg.terrain_proportions))]
self.cfg.num_sub_terrains = cfg.num_rows * cfg.num_cols
self.env_origins = np.zeros((cfg.num_rows, cfg.num_cols, 3))
self.width_per_env_pixels = int(self.env_width / cfg.horizontal_scale)
self.length_per_env_pixels = int(self.env_length / cfg.horizontal_scale)
self.border = int(cfg.border_size/self.cfg.horizontal_scale)
self.tot_cols = int(cfg.num_cols * self.width_per_env_pixels) + 2 * self.border
self.tot_rows = int(cfg.num_rows * self.length_per_env_pixels) + 2 * self.border
self.height_field_raw = np.zeros((self.tot_rows , self.tot_cols), dtype=np.int16)
if cfg.curriculum:
self.curiculum()
elif cfg.selected:
self.selected_terrain()
else:
self.randomized_terrain()
self.heightsamples = self.height_field_raw
if self.type=="trimesh":
self.vertices, self.triangles = terrain_utils.convert_heightfield_to_trimesh( self.height_field_raw,
self.cfg.horizontal_scale,
self.cfg.vertical_scale,
self.cfg.slope_treshold)
def randomized_terrain(self):
for k in range(self.cfg.num_sub_terrains):
# Env coordinates in the world
(i, j) = np.unravel_index(k, (self.cfg.num_rows, self.cfg.num_cols))
choice = np.random.uniform(0, 1)
difficulty = np.random.choice([0.5, 0.75, 0.9])
terrain = self.make_terrain(choice, difficulty)
self.add_terrain_to_map(terrain, i, j)
def curiculum(self):
for j in range(self.cfg.num_cols):
for i in range(self.cfg.num_rows):
difficulty = i / self.cfg.num_rows
choice = j / self.cfg.num_cols + 0.001
terrain = self.make_terrain(choice, difficulty)
self.add_terrain_to_map(terrain, i, j)
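The loop above spreads difficulty along rows and terrain choice along columns; the resulting (difficulty, choice) grid for a hypothetical 4x2 layout:

```python
# (difficulty, choice) pairs generated by the curriculum loop above
# for a made-up 4-row x 2-column terrain grid.
num_rows, num_cols = 4, 2
grid = [
    (i / num_rows, j / num_cols + 0.001)
    for j in range(num_cols)
    for i in range(num_rows)
]
```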
def selected_terrain(self):
terrain_type = self.cfg.terrain_kwargs.pop('type')
for k in range(self.cfg.num_sub_terrains):
# Env coordinates in the world
(i, j) = np.unravel_index(k, (self.cfg.num_rows, self.cfg.num_cols))
terrain = terrain_utils.SubTerrain("terrain",
width=self.width_per_env_pixels,
length=self.width_per_env_pixels,
                              vertical_scale=self.cfg.vertical_scale,
                              horizontal_scale=self.cfg.horizontal_scale)
            eval(terrain_type)(terrain, **self.cfg.terrain_kwargs)
self.add_terrain_to_map(terrain, i, j)
def make_terrain(self, choice, difficulty):
terrain = terrain_utils.SubTerrain( "terrain",
width=self.width_per_env_pixels,
length=self.width_per_env_pixels,
vertical_scale=self.cfg.vertical_scale,
horizontal_scale=self.cfg.horizontal_scale)
slope = difficulty * 0.4
step_height = 0.05 + 0.18 * difficulty
discrete_obstacles_height = 0.05 + difficulty * 0.2
stepping_stones_size = 1.5 * (1.05 - difficulty)
stone_distance = 0.05 if difficulty==0 else 0.1
gap_size = 1. * difficulty
pit_depth = 1. * difficulty
if choice < self.proportions[0]:
if choice < self.proportions[0]/ 2:
slope *= -1
terrain_utils.pyramid_sloped_terrain(terrain, slope=slope, platform_size=3.)
elif choice < self.proportions[1]:
terrain_utils.pyramid_sloped_terrain(terrain, slope=slope, platform_size=3.)
terrain_utils.random_uniform_terrain(terrain, min_height=-0.05, max_height=0.05, step=0.005, downsampled_scale=0.2)
elif choice < self.proportions[3]:
if choice<self.proportions[2]:
step_height *= -1
terrain_utils.pyramid_stairs_terrain(terrain, step_width=0.31, step_height=step_height, platform_size=3.)
elif choice < self.proportions[4]:
num_rectangles = 20
rectangle_min_size = 1.
rectangle_max_size = 2.
terrain_utils.discrete_obstacles_terrain(terrain, discrete_obstacles_height, rectangle_min_size, rectangle_max_size, num_rectangles, platform_size=3.)
elif choice < self.proportions[5]:
terrain_utils.stepping_stones_terrain(terrain, stone_size=stepping_stones_size, stone_distance=stone_distance, max_height=0., platform_size=4.)
elif choice < self.proportions[6]:
gap_terrain(terrain, gap_size=gap_size, platform_size=3.)
else:
pit_terrain(terrain, depth=pit_depth, platform_size=4.)
return terrain
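Ignoring the sub-branches, the cascade of `choice < self.proportions[k]` checks above is a cumulative-distribution lookup, which `bisect` expresses directly (the proportions list here is a made-up example, not a config from this repo):

```python
import bisect

# Hypothetical terrain_proportions; self.proportions above is their cumsum.
terrain_proportions = [0.1, 0.1, 0.35, 0.25, 0.2]
proportions = []
total = 0.0
for p in terrain_proportions:
    total += p
    proportions.append(total)

def terrain_index(choice):
    # First k with choice < proportions[k], as in the if/elif chain above.
    return bisect.bisect_right(proportions, choice)

picks = [terrain_index(c) for c in (0.05, 0.3, 0.99)]
```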
def add_terrain_to_map(self, terrain, row, col):
i = row
j = col
# map coordinate system
start_x = self.border + i * self.length_per_env_pixels
end_x = self.border + (i + 1) * self.length_per_env_pixels
start_y = self.border + j * self.width_per_env_pixels
end_y = self.border + (j + 1) * self.width_per_env_pixels
self.height_field_raw[start_x: end_x, start_y:end_y] = terrain.height_field_raw
env_origin_x = (i + 0.5) * self.env_length
env_origin_y = (j + 0.5) * self.env_width
x1 = int((self.env_length/2. - 1) / terrain.horizontal_scale)
x2 = int((self.env_length/2. + 1) / terrain.horizontal_scale)
y1 = int((self.env_width/2. - 1) / terrain.horizontal_scale)
y2 = int((self.env_width/2. + 1) / terrain.horizontal_scale)
env_origin_z = np.max(terrain.height_field_raw[x1:x2, y1:y2])*terrain.vertical_scale
self.env_origins[i, j] = [env_origin_x, env_origin_y, env_origin_z]
def gap_terrain(terrain, gap_size, platform_size=1.):
gap_size = int(gap_size / terrain.horizontal_scale)
platform_size = int(platform_size / terrain.horizontal_scale)
center_x = terrain.length // 2
center_y = terrain.width // 2
x1 = (terrain.length - platform_size) // 2
x2 = x1 + gap_size
y1 = (terrain.width - platform_size) // 2
y2 = y1 + gap_size
terrain.height_field_raw[center_x-x2 : center_x + x2, center_y-y2 : center_y + y2] = -1000
terrain.height_field_raw[center_x-x1 : center_x + x1, center_y-y1 : center_y + y1] = 0
def pit_terrain(terrain, depth, platform_size=1.):
depth = int(depth / terrain.vertical_scale)
platform_size = int(platform_size / terrain.horizontal_scale / 2)
x1 = terrain.length // 2 - platform_size
x2 = terrain.length // 2 + platform_size
y1 = terrain.width // 2 - platform_size
y2 = terrain.width // 2 + platform_size
terrain.height_field_raw[x1:x2, y1:y2] = -depth
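`pit_terrain` converts metric depth and platform size into grid units before carving; a self-contained toy version, with scales chosen here purely for round numbers:

```python
import numpy as np

# Toy pit mirroring pit_terrain above: depth in meters -> int16 height
# units via vertical_scale, platform size -> cells via horizontal_scale.
vertical_scale, horizontal_scale = 0.005, 0.25
height_field = np.zeros((40, 40), dtype=np.int16)

depth = int(1.0 / vertical_scale)                # 1 m -> 200 units
half_platform = int(2.0 / horizontal_scale / 2)  # 2 m -> 4 cells half-width
x1 = 40 // 2 - half_platform
x2 = 40 // 2 + half_platform
height_field[x1:x2, x1:x2] = -depth
```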


@ -0,0 +1,61 @@
""" This file defines a mesh as a tuple of (vertices, triangles)
All operations are based on numpy ndarray
- vertices: np ndarray of shape (n, 3) np.float32
- triangles: np ndarray of shape (n_, 3) np.uint32
"""
import numpy as np
def box_trimesh(
size, # float [3] for x, y, z axis length (in meter) under box frame
center_position, # float [3] position (in meter) in world frame
rpy= np.zeros(3), # euler angle (in rad) not implemented yet.
):
if not (rpy == 0).all():
raise NotImplementedError("Only axis-aligned box triangle mesh is implemented")
vertices = np.empty((8, 3), dtype= np.float32)
vertices[:] = center_position
vertices[[0, 4, 2, 6], 0] -= size[0] / 2
vertices[[1, 5, 3, 7], 0] += size[0] / 2
vertices[[0, 1, 2, 3], 1] -= size[1] / 2
vertices[[4, 5, 6, 7], 1] += size[1] / 2
vertices[[2, 3, 6, 7], 2] -= size[2] / 2
vertices[[0, 1, 4, 5], 2] += size[2] / 2
triangles = -np.ones((12, 3), dtype= np.uint32)
triangles[0] = [0, 2, 1] #
triangles[1] = [1, 2, 3]
triangles[2] = [0, 4, 2] #
triangles[3] = [2, 4, 6]
triangles[4] = [4, 5, 6] #
triangles[5] = [5, 7, 6]
triangles[6] = [1, 3, 5] #
triangles[7] = [3, 7, 5]
triangles[8] = [0, 1, 4] #
triangles[9] = [1, 5, 4]
triangles[10]= [2, 6, 3] #
triangles[11]= [3, 6, 7]
return vertices, triangles
def combine_trimeshes(*trimeshes):
    if len(trimeshes) > 2:
        return combine_trimeshes(
            trimeshes[0],
            combine_trimeshes(*trimeshes[1:]),
        )
# only two trimesh to combine
trimesh_0, trimesh_1 = trimeshes
if trimesh_0[1].shape[0] < trimesh_1[1].shape[0]:
trimesh_0, trimesh_1 = trimesh_1, trimesh_0
trimesh_1 = (trimesh_1[0], trimesh_1[1] + trimesh_0[0].shape[0])
vertices = np.concatenate((trimesh_0[0], trimesh_1[0]), axis= 0)
triangles = np.concatenate((trimesh_0[1], trimesh_1[1]), axis= 0)
return vertices, triangles
def move_trimesh(trimesh, move: np.ndarray):
""" inplace operation """
trimesh[0] += move
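The key step in `combine_trimeshes` above is offsetting the second mesh's triangle indices by the first mesh's vertex count; a minimal sketch (the size-based swap is skipped here):

```python
import numpy as np

# Two tiny meshes: a quad (4 vertices, 2 triangles) and one triangle.
tri_a = (np.zeros((4, 3), dtype=np.float32),
         np.array([[0, 1, 2], [0, 2, 3]], dtype=np.uint32))
tri_b = (np.ones((3, 3), dtype=np.float32),
         np.array([[0, 1, 2]], dtype=np.uint32))

# Offset tri_b's indices by tri_a's vertex count, then concatenate,
# as combine_trimeshes does above.
offset = tri_a[0].shape[0]
vertices = np.concatenate((tri_a[0], tri_b[0]), axis=0)
triangles = np.concatenate((tri_a[1], tri_b[1] + offset), axis=0)
```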

Binary file not shown.


@ -0,0 +1,373 @@
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.

File diff suppressed because one or more lines are too long




@ -0,0 +1,579 @@
<?xml version="1.0" ?>
<robot name="a1_description" xmlns:xacro="http://www.ros.org/wiki/xacro">
<material name="black">
<color rgba="0.0 0.0 0.0 1.0"/>
</material>
<material name="blue">
<color rgba="0.0 0.0 0.8 1.0"/>
</material>
<material name="green">
<color rgba="0.0 0.8 0.0 1.0"/>
</material>
<material name="grey">
<color rgba="0.2 0.2 0.2 1.0"/>
</material>
<material name="silver">
<color rgba="0.913725490196 0.913725490196 0.847058823529 1.0"/>
</material>
<material name="orange">
<!-- <color rgba="1.0 0.423529411765 0.0392156862745 1.0"/> -->
<color rgba="0.12 0.15 0.2 1.0"/>
</material>
<material name="brown">
<color rgba="0.870588235294 0.811764705882 0.764705882353 1.0"/>
</material>
<material name="red">
<color rgba="0.8 0.0 0.0 1.0"/>
</material>
<material name="white">
<color rgba="1.0 1.0 1.0 1.0"/>
</material>
<link name="base">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<box size="0.001 0.001 0.001"/>
</geometry>
</visual>
</link>
<joint name="floating_base" type="fixed">
<origin rpy="0 0 0" xyz="0 0 0"/>
<parent link="base"/>
<child link="trunk"/>
</joint>
<link name="trunk">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/trunk.dae" scale="1 1 1"/>
</geometry>
<material name="orange"/>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<box size="0.267 0.194 0.114"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.012731 0.002186 0.000515"/>
<mass value="4.713"/>
<inertia ixx="0.01683993" ixy="8.3902e-05" ixz="0.000597679" iyy="0.056579028" iyz="2.5134e-05" izz="0.064713601"/>
</inertial>
</link>
<joint name="imu_joint" type="fixed">
<parent link="trunk"/>
<child link="imu_link"/>
<origin rpy="0 0 0" xyz="0 0 0"/>
</joint>
<link name="imu_link">
<inertial>
<mass value="0.001"/>
<origin rpy="0 0 0" xyz="0 0 0"/>
<inertia ixx="0.0001" ixy="0" ixz="0" iyy="0.0001" iyz="0" izz="0.0001"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<box size="0.001 0.001 0.001"/>
</geometry>
<material name="red"/>
</visual>
<!-- The collision box is for the sensor module in the front -->
<collision>
<origin rpy="0 0 0" xyz="0.25 0 0"/>
<geometry>
<box size=".06 .15 .1"/>
</geometry>
</collision>
</link>
<joint name="FR_hip_joint" type="revolute">
<origin rpy="0 0 0" xyz="0.183 -0.047 0"/>
<parent link="trunk"/>
<child link="FR_hip"/>
<axis xyz="1 0 0"/>
<dynamics damping="0" friction="0"/>
<limit effort="33.5" lower="-0.802851455917" upper="0.802851455917" velocity="52.4"/>
</joint>
<link name="FR_hip">
<visual>
<origin rpy="3.14159265359 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/hip.dae" scale="1 1 1"/>
</geometry>
<material name="orange"/>
</visual>
<!-- <collision>
<origin rpy="1.57079632679 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.04" radius="0.046"/>
</geometry>
</collision> -->
<inertial>
<origin rpy="0 0 0" xyz="-0.003311 -0.000635 3.1e-05"/>
<mass value="0.696"/>
<inertia ixx="0.000469246" ixy="9.409e-06" ixz="-3.42e-07" iyy="0.00080749" iyz="4.66e-07" izz="0.000552929"/>
</inertial>
</link>
<joint name="FR_hip_fixed" type="fixed">
<origin rpy="0 0 0" xyz="0 -0.081 0"/>
<parent link="FR_hip"/>
<child link="FR_thigh_shoulder"/>
</joint>
<!-- this link is only for collision -->
<link name="FR_thigh_shoulder">
<!-- <collision>
<origin rpy="1.57079632679 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.032" radius="0.041"/>
</geometry>
</collision> -->
</link>
<joint name="FR_thigh_joint" type="revolute">
<origin rpy="0 0 0" xyz="0 -0.08505 0"/>
<parent link="FR_hip"/>
<child link="FR_thigh"/>
<axis xyz="0 1 0"/>
<dynamics damping="0" friction="0"/>
<limit effort="33.5" lower="-1.0471975512" upper="4.18879020479" velocity="28.6"/>
</joint>
<link name="FR_thigh">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/thigh_mirror.dae" scale="1 1 1"/>
</geometry>
<material name="orange"/>
</visual>
<collision>
<origin rpy="0 1.57079632679 0" xyz="0 0 -0.1"/>
<geometry>
<box size="0.2 0.0245 0.034"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-0.003237 0.022327 -0.027326"/>
<mass value="1.013"/>
<inertia ixx="0.005529065" ixy="-4.825e-06" ixz="0.000343869" iyy="0.005139339" iyz="-2.2448e-05" izz="0.001367788"/>
</inertial>
</link>
<joint name="FR_calf_joint" type="revolute">
<origin rpy="0 0 0" xyz="0 0 -0.2"/>
<parent link="FR_thigh"/>
<child link="FR_calf"/>
<axis xyz="0 1 0"/>
<dynamics damping="0" friction="0"/>
<limit effort="33.5" lower="-2.69653369433" upper="-0.916297857297" velocity="28.6"/>
</joint>
<link name="FR_calf">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/calf.dae" scale="1 1 1"/>
</geometry>
<material name="orange"/>
</visual>
<collision>
<origin rpy="0 1.57079632679 0" xyz="0 0 -0.1"/>
<geometry>
<box size="0.2 0.016 0.016"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.006435 0.0 -0.107388"/>
<mass value="0.166"/>
<inertia ixx="0.002997972" ixy="0.0" ixz="-0.000141163" iyy="0.003014022" iyz="0.0" izz="3.2426e-05"/>
</inertial>
</link>
<joint name="FR_foot_fixed" type="fixed" dont_collapse="true">
<origin rpy="0 0 0" xyz="0 0 -0.2"/>
<parent link="FR_calf"/>
<child link="FR_foot"/>
</joint>
<link name="FR_foot">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<sphere radius="0.01"/>
</geometry>
<material name="orange"/>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<sphere radius="0.02"/>
</geometry>
</collision>
<inertial>
<mass value="0.06"/>
<inertia ixx="9.6e-06" ixy="0.0" ixz="0.0" iyy="9.6e-06" iyz="0.0" izz="9.6e-06"/>
</inertial>
</link>
<joint name="FL_hip_joint" type="revolute">
<origin rpy="0 0 0" xyz="0.183 0.047 0"/>
<parent link="trunk"/>
<child link="FL_hip"/>
<axis xyz="1 0 0"/>
<dynamics damping="0" friction="0"/>
<limit effort="33.5" lower="-0.802851455917" upper="0.802851455917" velocity="52.4"/>
</joint>
<link name="FL_hip">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/hip.dae" scale="1 1 1"/>
</geometry>
<material name="orange"/>
</visual>
<!-- <collision>
<origin rpy="1.57079632679 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.04" radius="0.046"/>
</geometry>
</collision> -->
<inertial>
<origin rpy="0 0 0" xyz="-0.003311 0.000635 3.1e-05"/>
<mass value="0.696"/>
<inertia ixx="0.000469246" ixy="-9.409e-06" ixz="-3.42e-07" iyy="0.00080749" iyz="-4.66e-07" izz="0.000552929"/>
</inertial>
</link>
<joint name="FL_hip_fixed" type="fixed">
<origin rpy="0 0 0" xyz="0 0.081 0"/>
<parent link="FL_hip"/>
<child link="FL_thigh_shoulder"/>
</joint>
<!-- this link is only for collision -->
<link name="FL_thigh_shoulder">
<!-- <collision>
<origin rpy="1.57079632679 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.032" radius="0.041"/>
</geometry>
</collision> -->
</link>
<joint name="FL_thigh_joint" type="revolute">
<origin rpy="0 0 0" xyz="0 0.08505 0"/>
<parent link="FL_hip"/>
<child link="FL_thigh"/>
<axis xyz="0 1 0"/>
<dynamics damping="0" friction="0"/>
<limit effort="33.5" lower="-1.0471975512" upper="4.18879020479" velocity="28.6"/>
</joint>
<link name="FL_thigh">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/thigh.dae" scale="1 1 1"/>
</geometry>
<material name="orange"/>
</visual>
<collision>
<origin rpy="0 1.57079632679 0" xyz="0 0 -0.1"/>
<geometry>
<box size="0.2 0.0245 0.034"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-0.003237 -0.022327 -0.027326"/>
<mass value="1.013"/>
<inertia ixx="0.005529065" ixy="4.825e-06" ixz="0.000343869" iyy="0.005139339" iyz="2.2448e-05" izz="0.001367788"/>
</inertial>
</link>
<joint name="FL_calf_joint" type="revolute">
<origin rpy="0 0 0" xyz="0 0 -0.2"/>
<parent link="FL_thigh"/>
<child link="FL_calf"/>
<axis xyz="0 1 0"/>
<dynamics damping="0" friction="0"/>
<limit effort="33.5" lower="-2.69653369433" upper="-0.916297857297" velocity="28.6"/>
</joint>
<link name="FL_calf">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/calf.dae" scale="1 1 1"/>
</geometry>
<material name="orange"/>
</visual>
<collision>
<origin rpy="0 1.57079632679 0" xyz="0 0 -0.1"/>
<geometry>
<box size="0.2 0.016 0.016"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.006435 0.0 -0.107388"/>
<mass value="0.166"/>
<inertia ixx="0.002997972" ixy="0.0" ixz="-0.000141163" iyy="0.003014022" iyz="0.0" izz="3.2426e-05"/>
</inertial>
</link>
<joint name="FL_foot_fixed" type="fixed" dont_collapse="true">
<origin rpy="0 0 0" xyz="0 0 -0.2"/>
<parent link="FL_calf"/>
<child link="FL_foot"/>
</joint>
<link name="FL_foot">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<sphere radius="0.01"/>
</geometry>
<material name="orange"/>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<sphere radius="0.02"/>
</geometry>
</collision>
<inertial>
<mass value="0.06"/>
<inertia ixx="9.6e-06" ixy="0.0" ixz="0.0" iyy="9.6e-06" iyz="0.0" izz="9.6e-06"/>
</inertial>
</link>
<joint name="RR_hip_joint" type="revolute">
<origin rpy="0 0 0" xyz="-0.183 -0.047 0"/>
<parent link="trunk"/>
<child link="RR_hip"/>
<axis xyz="1 0 0"/>
<dynamics damping="0" friction="0"/>
<limit effort="33.5" lower="-0.802851455917" upper="0.802851455917" velocity="52.4"/>
</joint>
<link name="RR_hip">
<visual>
<origin rpy="3.14159265359 3.14159265359 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/hip.dae" scale="1 1 1"/>
</geometry>
<material name="orange"/>
</visual>
<!-- <collision>
<origin rpy="1.57079632679 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.04" radius="0.046"/>
</geometry>
</collision> -->
<inertial>
<origin rpy="0 0 0" xyz="0.003311 -0.000635 3.1e-05"/>
<mass value="0.696"/>
<inertia ixx="0.000469246" ixy="-9.409e-06" ixz="3.42e-07" iyy="0.00080749" iyz="4.66e-07" izz="0.000552929"/>
</inertial>
</link>
<joint name="RR_hip_fixed" type="fixed">
<origin rpy="0 0 0" xyz="0 -0.081 0"/>
<parent link="RR_hip"/>
<child link="RR_thigh_shoulder"/>
</joint>
<!-- this link is only for collision -->
<link name="RR_thigh_shoulder">
<!-- <collision>
<origin rpy="1.57079632679 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.032" radius="0.041"/>
</geometry>
</collision> -->
</link>
<joint name="RR_thigh_joint" type="revolute">
<origin rpy="0 0 0" xyz="0 -0.08505 0"/>
<parent link="RR_hip"/>
<child link="RR_thigh"/>
<axis xyz="0 1 0"/>
<dynamics damping="0" friction="0"/>
<limit effort="33.5" lower="-1.0471975512" upper="4.18879020479" velocity="28.6"/>
</joint>
<link name="RR_thigh">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/thigh_mirror.dae" scale="1 1 1"/>
</geometry>
<material name="orange"/>
</visual>
<collision>
<origin rpy="0 1.57079632679 0" xyz="0 0 -0.1"/>
<geometry>
<box size="0.2 0.0245 0.034"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-0.003237 0.022327 -0.027326"/>
<mass value="1.013"/>
<inertia ixx="0.005529065" ixy="-4.825e-06" ixz="0.000343869" iyy="0.005139339" iyz="-2.2448e-05" izz="0.001367788"/>
</inertial>
</link>
<joint name="RR_calf_joint" type="revolute">
<origin rpy="0 0 0" xyz="0 0 -0.2"/>
<parent link="RR_thigh"/>
<child link="RR_calf"/>
<axis xyz="0 1 0"/>
<dynamics damping="0" friction="0"/>
<limit effort="33.5" lower="-2.69653369433" upper="-0.916297857297" velocity="28.6"/>
</joint>
<link name="RR_calf">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/calf.dae" scale="1 1 1"/>
</geometry>
<material name="orange"/>
</visual>
<collision>
<origin rpy="0 1.57079632679 0" xyz="0 0 -0.1"/>
<geometry>
<box size="0.2 0.016 0.016"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.006435 0.0 -0.107388"/>
<mass value="0.166"/>
<inertia ixx="0.002997972" ixy="0.0" ixz="-0.000141163" iyy="0.003014022" iyz="0.0" izz="3.2426e-05"/>
</inertial>
</link>
<joint name="RR_foot_fixed" type="fixed" dont_collapse="true">
<origin rpy="0 0 0" xyz="0 0 -0.2"/>
<parent link="RR_calf"/>
<child link="RR_foot"/>
</joint>
<link name="RR_foot">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<sphere radius="0.01"/>
</geometry>
<material name="orange"/>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<sphere radius="0.02"/>
</geometry>
</collision>
<inertial>
<mass value="0.06"/>
<inertia ixx="9.6e-06" ixy="0.0" ixz="0.0" iyy="9.6e-06" iyz="0.0" izz="9.6e-06"/>
</inertial>
</link>
<joint name="RL_hip_joint" type="revolute">
<origin rpy="0 0 0" xyz="-0.183 0.047 0"/>
<parent link="trunk"/>
<child link="RL_hip"/>
<axis xyz="1 0 0"/>
<dynamics damping="0" friction="0"/>
<limit effort="33.5" lower="-0.802851455917" upper="0.802851455917" velocity="52.4"/>
</joint>
<link name="RL_hip">
<visual>
<origin rpy="0 3.14159265359 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/hip.dae" scale="1 1 1"/>
</geometry>
<material name="orange"/>
</visual>
<!-- <collision>
<origin rpy="1.57079632679 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.04" radius="0.046"/>
</geometry>
</collision> -->
<inertial>
<origin rpy="0 0 0" xyz="0.003311 0.000635 3.1e-05"/>
<mass value="0.696"/>
<inertia ixx="0.000469246" ixy="9.409e-06" ixz="3.42e-07" iyy="0.00080749" iyz="-4.66e-07" izz="0.000552929"/>
</inertial>
</link>
<joint name="RL_hip_fixed" type="fixed">
<origin rpy="0 0 0" xyz="0 0.081 0"/>
<parent link="RL_hip"/>
<child link="RL_thigh_shoulder"/>
</joint>
<!-- this link is only for collision -->
<link name="RL_thigh_shoulder">
<!-- <collision>
<origin rpy="1.57079632679 0 0" xyz="0 0 0"/>
<geometry>
<cylinder length="0.032" radius="0.041"/>
</geometry>
</collision> -->
</link>
<joint name="RL_thigh_joint" type="revolute">
<origin rpy="0 0 0" xyz="0 0.08505 0"/>
<parent link="RL_hip"/>
<child link="RL_thigh"/>
<axis xyz="0 1 0"/>
<dynamics damping="0" friction="0"/>
<limit effort="33.5" lower="-1.0471975512" upper="4.18879020479" velocity="28.6"/>
</joint>
<link name="RL_thigh">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/thigh.dae" scale="1 1 1"/>
</geometry>
<material name="orange"/>
</visual>
<collision>
<origin rpy="0 1.57079632679 0" xyz="0 0 -0.1"/>
<geometry>
<box size="0.2 0.0245 0.034"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-0.003237 -0.022327 -0.027326"/>
<mass value="1.013"/>
<inertia ixx="0.005529065" ixy="4.825e-06" ixz="0.000343869" iyy="0.005139339" iyz="2.2448e-05" izz="0.001367788"/>
</inertial>
</link>
<joint name="RL_calf_joint" type="revolute">
<origin rpy="0 0 0" xyz="0 0 -0.2"/>
<parent link="RL_thigh"/>
<child link="RL_calf"/>
<axis xyz="0 1 0"/>
<dynamics damping="0" friction="0"/>
<limit effort="33.5" lower="-2.69653369433" upper="-0.916297857297" velocity="28.6"/>
</joint>
<link name="RL_calf">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/calf.dae" scale="1 1 1"/>
</geometry>
<material name="orange"/>
</visual>
<collision>
<origin rpy="0 1.57079632679 0" xyz="0 0 -0.1"/>
<geometry>
<box size="0.2 0.016 0.016"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.006435 0.0 -0.107388"/>
<mass value="0.166"/>
<inertia ixx="0.002997972" ixy="0.0" ixz="-0.000141163" iyy="0.003014022" iyz="0.0" izz="3.2426e-05"/>
</inertial>
</link>
<joint name="RL_foot_fixed" type="fixed" dont_collapse="true">
<origin rpy="0 0 0" xyz="0 0 -0.2"/>
<parent link="RL_calf"/>
<child link="RL_foot"/>
</joint>
<link name="RL_foot">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<sphere radius="0.01"/>
</geometry>
<material name="orange"/>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<sphere radius="0.02"/>
</geometry>
</collision>
<inertial>
<mass value="0.06"/>
<inertia ixx="9.6e-06" ixy="0.0" ixz="0.0" iyy="9.6e-06" iyz="0.0" izz="9.6e-06"/>
</inertial>
</link>
</robot>
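The URDF above is plain XML, so joint metadata such as the revolute-joint limits (`lower`, `upper`, `effort`) can be pulled out with nothing but the Python standard library. This is a minimal sketch, not part of the original commit: the inline snippet only mirrors the structure of the A1 file, and in practice you would pass the real `.urdf` path to `ElementTree.parse()` instead.

```python
import xml.etree.ElementTree as ET

# Inline stand-in for the a1 URDF; real code would load the actual file.
URDF_SNIPPET = """<?xml version="1.0"?>
<robot name="a1_description">
  <joint name="FR_hip_joint" type="revolute">
    <limit effort="33.5" lower="-0.802851455917" upper="0.802851455917" velocity="52.4"/>
  </joint>
  <joint name="FR_foot_fixed" type="fixed"/>
</robot>
"""

def joint_limits(urdf_text):
    """Return {joint_name: (lower, upper, effort)} for every revolute joint."""
    root = ET.fromstring(urdf_text)
    limits = {}
    for joint in root.iter("joint"):
        if joint.get("type") != "revolute":
            continue  # fixed joints (e.g. the foot attachments) carry no motor limits
        lim = joint.find("limit")
        limits[joint.get("name")] = (
            float(lim.get("lower")),
            float(lim.get("upper")),
            float(lim.get("effort")),
        )
    return limits

print(joint_limits(URDF_SNIPPET))
```

Feeding the full A1 file through the same function would yield one entry per actuated joint (12 in total for the quadruped), which is a common first step when mirroring these limits into a simulator's action-space configuration.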


@ -0,0 +1,25 @@
Copyright 2019 ANYbotics, https://www.anybotics.com
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
3. The name of ANYbotics and ANYmal may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.




@ -0,0 +1,531 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- =================================================================================== -->
<!-- Copyright 2019 ANYbotics, https://www.anybotics.com -->
<!-- =================================================================================== -->
<!-- This file contains the description of the ANYmal B robot. -->
<robot name="anymal"
xmlns:xacro="http://www.ros.org/wiki/xacro">
<!-- Material for the visual primitives -->
<material name="anymal_material">
<color rgba="0.7 0.7 0.7 1.0"/>
</material>
<!-- Base link -->
<link name="base">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/anymal_base.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<collision>
<!-- Main Body -->
<origin rpy="0 0 0" xyz="0 0 0.08"/>
<geometry>
<box size="0.531 0.27 0.24"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-0.001960558279 -0.001413217745 0.050207125344"/>
<mass value="16.793507758"/>
<inertia ixx="0.217391101503" ixy="-0.00132873239126" ixz="-0.00228200226173" iyy="0.639432546734" iyz="-0.00138078263145" izz="0.62414077654"/>
</inertial>
</link>
<link name="LF_HIP">
<visual>
<origin rpy="0 0 0.0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/anymal_hip_l.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<inertial>
<origin rpy="0 0 0" xyz="0.064516258147 -0.003787101702 -0.000152184388"/>
<mass value="1.42462064"/>
<inertia ixx="0.00243023349564" ixy="-1.53023971e-05" ixz="-2.1819095354e-05" iyy="0.00230257239103" iyz="2.6473021273e-05" izz="0.0019806759227"/>
</inertial>
</link>
<!-- Hip joint -->
<joint name="LF_HAA" type="revolute">
<parent link="base"/>
<child link="LF_HIP"/>
<origin xyz="0.277 0.116 0.0"/>
<axis xyz="1 0 0"/>
<limit command_effort="80" current="10" effort="80" gear_velocity="10" lower="-9.42" upper="9.42" velocity="15"/>
<dynamics damping="0.0" friction="0.0"/>
</joint>
<link name="LF_THIGH">
<visual>
<origin rpy="0 0 0.0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/anymal_thigh_l.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<collision>
<!-- KFE actuator -->
<origin rpy="1.57079632679 0 0" xyz="0.0 0.069 -0.25"/>
<geometry>
<cylinder length="0.12" radius="0.06"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-0.003897968082 0.054226618537 -0.214583373795"/>
<mass value="1.634976467"/>
<inertia ixx="0.0120367944369" ixy="6.762065206e-05" ixz="0.000287806340448" iyy="0.0120643637939" iyz="-0.00140610131218" izz="0.00249422574881"/>
</inertial>
</link>
<!-- Thigh joint -->
<joint name="LF_HFE" type="revolute">
<parent link="LF_HIP"/>
<child link="LF_THIGH"/>
<origin xyz="0.0635 0.041 0.0"/>
<axis xyz="0 1 0"/>
<limit command_effort="80" effort="80" lower="-9.42" upper="9.42" velocity="20"/>
<dynamics damping="0.0" friction="0.0"/>
</joint>
<link name="LF_SHANK">
<visual>
<origin rpy="0 0 0.0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/anymal_shank_l.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<inertial>
<origin rpy="0 0 0" xyz="0.030816858139 -0.004617229294 0.000893125713"/>
<mass value="0.207204302"/>
<inertia ixx="0.0002104880248" ixy="-5.6750980345e-05" ixz="1.0127699391e-05" iyy="0.000676270210023" iyz="-8.22869024e-07" izz="0.000545032674924"/>
</inertial>
</link>
<!-- Shank joint -->
<joint name="LF_KFE" type="revolute">
<parent link="LF_THIGH"/>
<child link="LF_SHANK"/>
<origin xyz="0.0 0.109 -0.25"/>
<axis xyz="0 1 0"/>
<limit command_effort="80" effort="80" lower="-9.42" upper="9.42" velocity="20"/>
<dynamics damping="0.0" friction="0.0"/>
</joint>
<!-- Shank to Adapter joint -->
<joint name="LF_SHANK_TO_ADAPTER" type="fixed">
<parent link="LF_SHANK"/>
<child link="LF_ADAPTER"/>
<origin rpy="0.0 0.0 0.0" xyz="0.1 -0.02 0.0"/>
</joint>
<!-- Adapter link -->
<link name="LF_ADAPTER">
<visual>
<origin rpy="0 0 0.0" xyz="0 0 0.032"/>
<geometry>
<mesh filename="../meshes/anymal_foot.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 -0.160625"/>
<geometry>
<cylinder length="0.25" radius="0.015"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-8.66e-10 -1.472e-09 -0.244345749188"/>
<mass value="0.140170767"/>
<inertia ixx="0.00159938741862" ixy="-9.32e-13" ixz="1.039e-11" iyy="0.00159938741932" iyz="1.7563e-11" izz="5.4423177329e-05"/>
</inertial>
</link>
<!-- Adapter to Foot joint -->
<joint name="LF_ADAPTER_TO_FOOT" type="fixed" dont_collapse="true">
<parent link="LF_ADAPTER"/>
<child link="LF_FOOT"/>
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 -0.32125"/>
</joint>
<!-- Foot link -->
<link name="LF_FOOT">
<collision>
<origin xyz="0 0 0.02325"/>
<geometry>
<sphere radius="0.031"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0.02325"/>
<mass value="0.05"/>
<inertia ixx="0.001" ixy="-2.63048e-07" ixz="6.815581e-06" iyy="0.001" iyz="-6.815583e-06" izz="8.319196e-06"/>
</inertial>
</link>
<link name="RF_HIP">
<visual>
<origin rpy="0 0 0.0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/anymal_hip_r.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<inertial>
<origin rpy="0 0 0" xyz="0.064516258147 0.003787101702 -0.000152184388"/>
<mass value="1.42462064"/>
<inertia ixx="0.00243023349564" ixy="1.53023971e-05" ixz="-2.1819095354e-05" iyy="0.00230257239103" iyz="-2.6473021273e-05" izz="0.0019806759227"/>
</inertial>
</link>
<!-- Hip joint -->
<joint name="RF_HAA" type="revolute">
<parent link="base"/>
<child link="RF_HIP"/>
<origin xyz="0.277 -0.116 0.0"/>
<axis xyz="1 0 0"/>
<limit command_effort="80" effort="80" lower="-9.42" upper="9.42" velocity="20"/>
<dynamics damping="0.0" friction="0.0"/>
</joint>
<link name="RF_THIGH">
<visual>
<origin rpy="0 0 0.0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/anymal_thigh_r.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<collision>
<!-- KFE actuator -->
<origin rpy="1.57079632679 0 0" xyz="0.0 -0.069 -0.25"/>
<geometry>
<cylinder length="0.12" radius="0.06"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-0.003897968082 -0.054226618537 -0.214583373795"/>
<mass value="1.634976467"/>
<inertia ixx="0.0120367944369" ixy="-6.762065206e-05" ixz="0.000287806340448" iyy="0.0120643637939" iyz="0.00140610131218" izz="0.00249422574881"/>
</inertial>
</link>
<!-- Thigh joint -->
<joint name="RF_HFE" type="revolute">
<parent link="RF_HIP"/>
<child link="RF_THIGH"/>
<origin xyz="0.0635 -0.041 0.0"/>
<axis xyz="0 1 0"/>
<limit command_effort="80" effort="80" lower="-9.42" upper="9.42" velocity="20"/>
<dynamics damping="0.0" friction="0.0"/>
</joint>
<link name="RF_SHANK">
<visual>
<origin rpy="0 0 0.0" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/anymal_shank_r.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<inertial>
<origin rpy="0 0 0" xyz="0.030816858139 0.004617229294 0.000893125713"/>
<mass value="0.207204302"/>
<inertia ixx="0.0002104880248" ixy="5.6750980345e-05" ixz="1.0127699391e-05" iyy="0.000676270210023" iyz="8.22869024e-07" izz="0.000545032674924"/>
</inertial>
</link>
<!-- Shank joint -->
<joint name="RF_KFE" type="revolute">
<parent link="RF_THIGH"/>
<child link="RF_SHANK"/>
<origin xyz="0.0 -0.109 -0.25"/>
<axis xyz="0 1 0"/>
<limit command_effort="80" effort="80" lower="-9.42" upper="9.42" velocity="20"/>
<dynamics damping="0.0" friction="0.0"/>
</joint>
<!-- Shank to Adapter joint -->
<joint name="RF_SHANK_TO_ADAPTER" type="fixed">
<parent link="RF_SHANK"/>
<child link="RF_ADAPTER"/>
<origin rpy="-0.0 0.0 -0.0" xyz="0.1 0.02 0.0"/>
</joint>
<!-- Adapter link -->
<link name="RF_ADAPTER">
<visual>
<origin rpy="0 0 0.0" xyz="0 0 0.032"/>
<geometry>
<mesh filename="../meshes/anymal_foot.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 -0.160625"/>
<geometry>
<cylinder length="0.25" radius="0.015"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-8.66e-10 -1.472e-09 -0.244345749188"/>
<mass value="0.140170767"/>
<inertia ixx="0.00159938741862" ixy="-9.32e-13" ixz="1.039e-11" iyy="0.00159938741932" iyz="1.7563e-11" izz="5.4423177329e-05"/>
</inertial>
</link>
<!-- Adapter to Foot joint -->
<joint name="RF_ADAPTER_TO_FOOT" type="fixed" dont_collapse="true">
<parent link="RF_ADAPTER"/>
<child link="RF_FOOT"/>
<origin rpy="-0.0 0.0 -0.0" xyz="0.0 -0.0 -0.32125"/>
</joint>
<!-- Foot link -->
<link name="RF_FOOT">
<collision>
<origin xyz="0 0 0.02325"/>
<geometry>
<sphere radius="0.031"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0.02325"/>
<mass value="0.05"/>
<inertia ixx="0.001" ixy="-2.63048e-07" ixz="6.815581e-06" iyy="0.001" iyz="-6.815583e-06" izz="8.319196e-06"/>
</inertial>
</link>
<link name="LH_HIP">
<visual>
<origin rpy="0 0 -3.14159265359" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/anymal_hip_r.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<inertial>
<origin rpy="0 0 0" xyz="-0.064516258147 -0.003787101702 -0.000152184388"/>
<mass value="1.42462064"/>
<inertia ixx="0.00243023349564" ixy="1.53023971e-05" ixz="2.1819095354e-05" iyy="0.00230257239103" iyz="2.6473021273e-05" izz="0.0019806759227"/>
</inertial>
</link>
<!-- Hip joint -->
<joint name="LH_HAA" type="revolute">
<parent link="base"/>
<child link="LH_HIP"/>
<origin xyz="-0.277 0.116 0.0"/>
<axis xyz="1 0 0"/>
<limit command_effort="80" current="10" effort="80" gear_velocity="10" lower="-9.42" upper="9.42" velocity="15"/>
<dynamics damping="0.0" friction="0.0"/>
</joint>
<link name="LH_THIGH">
<visual>
<origin rpy="0 0 -3.14159265359" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/anymal_thigh_r.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<collision>
<!-- KFE actuator -->
<origin rpy="1.57079632679 0 0" xyz="0.0 0.069 -0.25"/>
<geometry>
<cylinder length="0.12" radius="0.06"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.003897968082 0.054226618537 -0.214583373795"/>
<mass value="1.634976467"/>
<inertia ixx="0.0120367944369" ixy="-6.762065206e-05" ixz="-0.000287806340448" iyy="0.0120643637939" iyz="-0.00140610131218" izz="0.00249422574881"/>
</inertial>
</link>
<!-- Thigh joint -->
<joint name="LH_HFE" type="revolute">
<parent link="LH_HIP"/>
<child link="LH_THIGH"/>
<origin xyz="-0.0635 0.041 0.0"/>
<axis xyz="0 1 0"/>
<limit command_effort="80" effort="80" lower="-9.42" upper="9.42" velocity="20"/>
<dynamics damping="0.0" friction="0.0"/>
</joint>
<link name="LH_SHANK">
<visual>
<origin rpy="0 0 -3.14159265359" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/anymal_shank_r.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<inertial>
<origin rpy="0 0 0" xyz="-0.030816858139 -0.004617229294 0.000893125713"/>
<mass value="0.207204302"/>
<inertia ixx="0.0002104880248" ixy="5.6750980345e-05" ixz="-1.0127699391e-05" iyy="0.000676270210023" iyz="-8.22869024e-07" izz="0.000545032674924"/>
</inertial>
</link>
<!-- Shank joint -->
<joint name="LH_KFE" type="revolute">
<parent link="LH_THIGH"/>
<child link="LH_SHANK"/>
<origin xyz="-0.0 0.109 -0.25"/>
<axis xyz="0 1 0"/>
<limit command_effort="80" effort="80" lower="-9.42" upper="9.42" velocity="20"/>
<dynamics damping="0.0" friction="0.0"/>
</joint>
<!-- Shank to Adapter joint -->
<joint name="LH_SHANK_TO_ADAPTER" type="fixed">
<parent link="LH_SHANK"/>
<child link="LH_ADAPTER"/>
<origin rpy="0.0 -0.0 -0.0" xyz="-0.1 -0.02 0.0"/>
</joint>
<!-- Adapter link -->
<link name="LH_ADAPTER">
<visual>
<origin rpy="0 0 -3.14159265359" xyz="0 0 0.032"/>
<geometry>
<mesh filename="../meshes/anymal_foot.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 -0.160625"/>
<geometry>
<cylinder length="0.25" radius="0.015"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-8.66e-10 -1.472e-09 -0.244345749188"/>
<mass value="0.140170767"/>
<inertia ixx="0.00159938741862" ixy="-9.32e-13" ixz="1.039e-11" iyy="0.00159938741932" iyz="1.7563e-11" izz="5.4423177329e-05"/>
</inertial>
</link>
<!-- Adapter to Foot joint -->
<joint name="LH_ADAPTER_TO_FOOT" type="fixed" dont_collapse="true">
<parent link="LH_ADAPTER"/>
<child link="LH_FOOT"/>
<origin rpy="0.0 -0.0 -0.0" xyz="-0.0 0.0 -0.32125"/>
</joint>
<!-- Foot link -->
<link name="LH_FOOT">
<collision>
<origin xyz="0 0 0.02325"/>
<geometry>
<sphere radius="0.031"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0.02325"/>
<mass value="0.05"/>
<inertia ixx="0.001" ixy="-2.63048e-07" ixz="6.815581e-06" iyy="0.001" iyz="-6.815583e-06" izz="8.319196e-06"/>
</inertial>
</link>
<link name="RH_HIP">
<visual>
<origin rpy="0 0 -3.14159265359" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/anymal_hip_l.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<inertial>
<origin rpy="0 0 0" xyz="-0.064516258147 0.003787101702 -0.000152184388"/>
<mass value="1.42462064"/>
<inertia ixx="0.00243023349564" ixy="-1.53023971e-05" ixz="2.1819095354e-05" iyy="0.00230257239103" iyz="-2.6473021273e-05" izz="0.0019806759227"/>
</inertial>
</link>
<!-- Hip joint -->
<joint name="RH_HAA" type="revolute">
<parent link="base"/>
<child link="RH_HIP"/>
<origin xyz="-0.277 -0.116 0.0"/>
<axis xyz="1 0 0"/>
<limit command_effort="80" current="10" effort="80" gear_velocity="10" lower="-9.42" upper="9.42" velocity="15"/>
<dynamics damping="0.0" friction="0.0"/>
</joint>
<link name="RH_THIGH">
<visual>
<origin rpy="0 0 -3.14159265359" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/anymal_thigh_l.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<collision>
<!-- KFE actuator -->
<origin rpy="1.57079632679 0 0" xyz="0.0 -0.069 -0.25"/>
<geometry>
<cylinder length="0.12" radius="0.06"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.003897968082 -0.054226618537 -0.214583373795"/>
<mass value="1.634976467"/>
<inertia ixx="0.0120367944369" ixy="6.762065206e-05" ixz="-0.000287806340448" iyy="0.0120643637939" iyz="0.00140610131218" izz="0.00249422574881"/>
</inertial>
</link>
<!-- Thigh joint -->
<joint name="RH_HFE" type="revolute">
<parent link="RH_HIP"/>
<child link="RH_THIGH"/>
<origin xyz="-0.0635 -0.041 0.0"/>
<axis xyz="0 1 0"/>
<limit command_effort="80" effort="80" lower="-9.42" upper="9.42" velocity="20"/>
<dynamics damping="0.0" friction="0.0"/>
</joint>
<link name="RH_SHANK">
<visual>
<origin rpy="0 0 -3.14159265359" xyz="0 0 0"/>
<geometry>
<mesh filename="../meshes/anymal_shank_l.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<inertial>
<origin rpy="0 0 0" xyz="-0.030816858139 0.004617229294 0.000893125713"/>
<mass value="0.207204302"/>
<inertia ixx="0.0002104880248" ixy="-5.6750980345e-05" ixz="-1.0127699391e-05" iyy="0.000676270210023" iyz="8.22869024e-07" izz="0.000545032674924"/>
</inertial>
</link>
<!-- Shank joint -->
<joint name="RH_KFE" type="revolute">
<parent link="RH_THIGH"/>
<child link="RH_SHANK"/>
<origin xyz="-0.0 -0.109 -0.25"/>
<axis xyz="0 1 0"/>
<limit command_effort="80" effort="80" lower="-9.42" upper="9.42" velocity="20"/>
<dynamics damping="0.0" friction="0.0"/>
</joint>
<!-- Shank to Adapter joint -->
<joint name="RH_SHANK_TO_ADAPTER" type="fixed">
<parent link="RH_SHANK"/>
<child link="RH_ADAPTER"/>
<origin rpy="-0.0 -0.0 0.0" xyz="-0.1 0.02 0.0"/>
</joint>
<!-- Adapter link -->
<link name="RH_ADAPTER">
<visual>
<origin rpy="0 0 -3.14159265359" xyz="0 0 0.032"/>
<geometry>
<mesh filename="../meshes/anymal_foot.dae" scale="0.001 0.001 0.001"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 -0.160625"/>
<geometry>
<cylinder length="0.25" radius="0.015"/>
</geometry>
<!-- <material name="anymal_material"/> -->
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-8.66e-10 -1.472e-09 -0.244345749188"/>
<mass value="0.140170767"/>
<inertia ixx="0.00159938741862" ixy="-9.32e-13" ixz="1.039e-11" iyy="0.00159938741932" iyz="1.7563e-11" izz="5.4423177329e-05"/>
</inertial>
</link>
<!-- Adapter to Foot joint -->
<joint name="RH_ADAPTER_TO_FOOT" type="fixed" dont_collapse="true">
<parent link="RH_ADAPTER"/>
<child link="RH_FOOT"/>
<origin rpy="-0.0 -0.0 0.0" xyz="-0.0 -0.0 -0.32125"/>
</joint>
<!-- Foot link -->
<link name="RH_FOOT">
<collision>
<origin xyz="0 0 0.02325"/>
<geometry>
<sphere radius="0.031"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0.02325"/>
<mass value="0.05"/>
<inertia ixx="0.001" ixy="-2.63048e-07" ixz="6.815581e-06" iyy="0.001" iyz="-6.815583e-06" izz="8.319196e-06"/>
</inertial>
</link>
</robot>

Copyright 2020, ANYbotics AG.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

<?xml version="1.0" encoding="utf-8"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<asset>
<contributor>
<author>Blender User</author>
<authoring_tool>Blender 2.81.16 commit date:2019-11-20, commit time:14:27, hash:26bd5ebd42e3</authoring_tool>
</contributor>
<created>2020-01-07T14:47:32</created>
<modified>2020-01-07T14:47:32</modified>
<unit name="meter" meter="1"/>
<up_axis>Z_UP</up_axis>
</asset>
<library_effects>
<effect id="battery-effect">
<profile_COMMON>
<newparam sid="battery-surface">
<surface type="2D">
<init_from>battery</init_from>
</surface>
</newparam>
<newparam sid="battery-sampler">
<sampler2D>
<source>battery-surface</source>
</sampler2D>
</newparam>
<technique sid="common">
<lambert>
<emission>
<color sid="emission">0 0 0 1</color>
</emission>
<diffuse>
<texture texture="battery-sampler" texcoord="UVMap"/>
</diffuse>
<index_of_refraction>
<float sid="ior">1.45</float>
</index_of_refraction>
</lambert>
</technique>
</profile_COMMON>
</effect>
</library_effects>
<library_images>
<image id="battery" name="battery">
<init_from>battery.jpg</init_from>
</image>
</library_images>
<library_materials>
<material id="battery-material" name="battery">
<instance_effect url="#battery-effect"/>
</material>
</library_materials>
<library_geometries>
<geometry id="Cube-mesh" name="Cube">
<mesh>
<source id="Cube-mesh-positions">
<float_array id="Cube-mesh-positions-array" count="24">-0.225881 -0.06249898 -0.03684729 -0.225881 -0.06249898 0.03725165 -0.225881 0.06249898 -0.03684729 -0.225881 0.06249898 0.03725165 0.232213 -0.06249898 -0.03684729 0.232213 -0.06249898 0.03725165 0.232213 0.06249898 -0.03684729 0.232213 0.06249898 0.03725165</float_array>
<technique_common>
<accessor source="#Cube-mesh-positions-array" count="8" stride="3">
<param name="X" type="float"/>
<param name="Y" type="float"/>
<param name="Z" type="float"/>
</accessor>
</technique_common>
</source>
<source id="Cube-mesh-normals">
<float_array id="Cube-mesh-normals-array" count="18">-1 0 0 0 1 0 1 0 0 0 -1 0 0 0 -1 0 0 1</float_array>
<technique_common>
<accessor source="#Cube-mesh-normals-array" count="6" stride="3">
<param name="X" type="float"/>
<param name="Y" type="float"/>
<param name="Z" type="float"/>
</accessor>
</technique_common>
</source>
<source id="Cube-mesh-map">
<float_array id="Cube-mesh-map-array" count="72">0.8431081 0.2728654 1 0 1 0.2728654 0.6862162 1 0.8431079 0 0.8431081 1 0.8431081 0.5457308 1 0.2728654 1 0.5457308 0.6862159 0 0.5293241 1 0.5293239 0 0.264662 0 0.5293239 0.9999999 0.2646622 0.9999999 0.264662 0.9999999 0 0 0.2646618 0 0.8431081 0.2728654 0.8431081 0 1 0 0.6862162 1 0.6862161 0 0.8431079 0 0.8431081 0.5457308 0.8431081 0.2728654 1 0.2728654 0.6862159 0 0.6862161 1 0.5293241 1 0.264662 0 0.5293238 0 0.5293239 0.9999999 0.264662 0.9999999 1.73529e-7 0.9999999 0 0</float_array>
<technique_common>
<accessor source="#Cube-mesh-map-array" count="36" stride="2">
<param name="S" type="float"/>
<param name="T" type="float"/>
</accessor>
</technique_common>
</source>
<vertices id="Cube-mesh-vertices">
<input semantic="POSITION" source="#Cube-mesh-positions"/>
</vertices>
<triangles material="battery-material" count="12">
<input semantic="VERTEX" source="#Cube-mesh-vertices" offset="0"/>
<input semantic="NORMAL" source="#Cube-mesh-normals" offset="1"/>
<input semantic="TEXCOORD" source="#Cube-mesh-map" offset="2" set="0"/>
<p>1 0 0 2 0 1 0 0 2 3 1 3 6 1 4 2 1 5 7 2 6 4 2 7 6 2 8 5 3 9 0 3 10 4 3 11 6 4 12 0 4 13 2 4 14 3 5 15 5 5 16 7 5 17 1 0 18 3 0 19 2 0 20 3 1 21 7 1 22 6 1 23 7 2 24 5 2 25 4 2 26 5 3 27 1 3 28 0 3 29 6 4 30 4 4 31 0 4 32 3 5 33 1 5 34 5 5 35</p>
</triangles>
</mesh>
</geometry>
</library_geometries>
<library_visual_scenes>
<visual_scene id="Scene" name="Scene">
<node id="AM3_Battery_LP" name="AM3 Battery LP" type="NODE">
<matrix sid="transform">1 0 0 0 0 1 0 0 0 0 1 -0.03329167 0 0 0 1</matrix>
<instance_geometry url="#Cube-mesh" name="AM3 Battery LP">
<bind_material>
<technique_common>
<instance_material symbol="battery-material" target="#battery-material">
<bind_vertex_input semantic="UVMap" input_semantic="TEXCOORD" input_set="0"/>
</instance_material>
</technique_common>
</bind_material>
</instance_geometry>
</node>
</visual_scene>
</library_visual_scenes>
<scene>
<instance_visual_scene url="#Scene"/>
</scene>
</COLLADA>

<?xml version="1.0" encoding="utf-8"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<asset>
<contributor>
<author>Blender User</author>
<authoring_tool>Blender 2.82.7 commit date:2020-02-12, commit time:16:20, hash:77d23b0bd76f</authoring_tool>
</contributor>
<created>2020-02-20T10:55:16</created>
<modified>2020-02-20T10:55:16</modified>
<unit name="meter" meter="1"/>
<up_axis>Z_UP</up_axis>
</asset>
<library_effects>
<effect id="depth_camera-effect">
<profile_COMMON>
<newparam sid="depth_camera-surface">
<surface type="2D">
<init_from>depth_camera</init_from>
</surface>
</newparam>
<newparam sid="depth_camera-sampler">
<sampler2D>
<source>depth_camera-surface</source>
</sampler2D>
</newparam>
<technique sid="common">
<lambert>
<emission>
<color sid="emission">0 0 0 1</color>
</emission>
<diffuse>
<texture texture="depth_camera-sampler" texcoord="UVMap"/>
</diffuse>
<index_of_refraction>
<float sid="ior">1.45</float>
</index_of_refraction>
</lambert>
</technique>
</profile_COMMON>
</effect>
</library_effects>
<library_images>
<image id="depth_camera" name="depth_camera">
<init_from>depth_camera.jpg</init_from>
</image>
</library_images>
<library_materials>
<material id="depth_camera-material" name="depth_camera">
<instance_effect url="#depth_camera-effect"/>
</material>
</library_materials>
<library_geometries>
<geometry id="Cube-mesh" name="Cube">
<mesh>
<source id="Cube-mesh-positions">
<float_array id="Cube-mesh-positions-array" count="24">0 -0.04699999 -0.01968461 0 -0.04699999 0.01951932 0 0.04699999 -0.01968461 0 0.04699999 0.01951932 0.03039997 -0.04699999 -0.01968461 0.03039997 -0.04699999 0.01951932 0.03039997 0.04699999 -0.01968461 0.03039997 0.04699999 0.01951932</float_array>
<technique_common>
<accessor source="#Cube-mesh-positions-array" count="8" stride="3">
<param name="X" type="float"/>
<param name="Y" type="float"/>
<param name="Z" type="float"/>
</accessor>
</technique_common>
</source>
<source id="Cube-mesh-normals">
<float_array id="Cube-mesh-normals-array" count="18">-1 0 0 1 0 0 0 0 1 0 -1 0 0 1 0 0 0 -1</float_array>
<technique_common>
<accessor source="#Cube-mesh-normals-array" count="6" stride="3">
<param name="X" type="float"/>
<param name="Y" type="float"/>
<param name="Z" type="float"/>
</accessor>
</technique_common>
</source>
<source id="Cube-mesh-map">
<float_array id="Cube-mesh-map-array" count="72">0.2816218 0.7056847 0.5632433 0 0.5632433 0.7056847 1.60564e-7 0.7056847 0.2816215 0 0.2816217 0.7056847 0.7816217 0.7056846 1 0 1 0.7056846 0.5632433 0.7056847 0.344865 1 0.344865 0.7056847 0.7816216 0.7056847 0.5632434 1 0.5632433 0.7056847 0.7816217 0 0.5632433 0.7056846 0.5632434 0 0.2816218 0.7056847 0.2816217 0 0.5632433 0 1.60564e-7 0.7056847 0 0 0.2816215 0 0.7816217 0.7056846 0.7816217 0 1 0 0.5632433 0.7056847 0.5632433 1 0.344865 1 0.7816216 0.7056847 0.7816218 1 0.5632434 1 0.7816217 0 0.7816216 0.7056846 0.5632433 0.7056846</float_array>
<technique_common>
<accessor source="#Cube-mesh-map-array" count="36" stride="2">
<param name="S" type="float"/>
<param name="T" type="float"/>
</accessor>
</technique_common>
</source>
<vertices id="Cube-mesh-vertices">
<input semantic="POSITION" source="#Cube-mesh-positions"/>
</vertices>
<triangles material="depth_camera-material" count="12">
<input semantic="VERTEX" source="#Cube-mesh-vertices" offset="0"/>
<input semantic="NORMAL" source="#Cube-mesh-normals" offset="1"/>
<input semantic="TEXCOORD" source="#Cube-mesh-map" offset="2" set="0"/>
<p>1 0 0 2 0 1 0 0 2 7 1 3 4 1 4 6 1 5 3 2 6 5 2 7 7 2 8 1 3 9 4 3 10 5 3 11 7 4 12 2 4 13 3 4 14 6 5 15 0 5 16 2 5 17 1 0 18 3 0 19 2 0 20 7 1 21 5 1 22 4 1 23 3 2 24 1 2 25 5 2 26 1 3 27 0 3 28 4 3 29 7 4 30 6 4 31 2 4 32 6 5 33 4 5 34 0 5 35</p>
</triangles>
</mesh>
</geometry>
</library_geometries>
<library_visual_scenes>
<visual_scene id="Scene" name="Scene">
<node id="Realsense_LP" name="Realsense LP" type="NODE">
<matrix sid="transform">1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1</matrix>
<instance_geometry url="#Cube-mesh" name="Realsense LP">
<bind_material>
<technique_common>
<instance_material symbol="depth_camera-material" target="#depth_camera-material">
<bind_vertex_input semantic="UVMap" input_semantic="TEXCOORD" input_set="0"/>
</instance_material>
</technique_common>
</bind_material>
</instance_geometry>
</node>
</visual_scene>
</library_visual_scenes>
<scene>
<instance_visual_scene url="#Scene"/>
</scene>
</COLLADA>

<?xml version="1.0" encoding="utf-8"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<asset>
<contributor>
<author>Blender User</author>
<authoring_tool>Blender 2.81.16 commit date:2019-11-20, commit time:14:27, hash:26bd5ebd42e3</authoring_tool>
</contributor>
<created>2020-01-13T19:28:23</created>
<modified>2020-01-13T19:28:23</modified>
<unit name="meter" meter="1"/>
<up_axis>Z_UP</up_axis>
</asset>
<library_effects>
<effect id="hatch-effect">
<profile_COMMON>
<newparam sid="hatch-surface">
<surface type="2D">
<init_from>hatch</init_from>
</surface>
</newparam>
<newparam sid="hatch-sampler">
<sampler2D>
<source>hatch-surface</source>
</sampler2D>
</newparam>
<technique sid="common">
<lambert>
<emission>
<color sid="emission">0 0 0 1</color>
</emission>
<diffuse>
<texture texture="hatch-sampler" texcoord="UVMap"/>
</diffuse>
<index_of_refraction>
<float sid="ior">1.45</float>
</index_of_refraction>
</lambert>
</technique>
</profile_COMMON>
</effect>
</library_effects>
<library_images>
<image id="hatch" name="hatch">
<init_from>hatch.jpg</init_from>
</image>
</library_images>
<library_materials>
<material id="hatch-material" name="hatch">
<instance_effect url="#hatch-effect"/>
</material>
</library_materials>
<library_geometries>
<geometry id="AM3_Application_Hatch_Detailed_HD_002-mesh" name="AM3 Application Hatch Detailed HD.002">
<mesh>
<source id="AM3_Application_Hatch_Detailed_HD_002-mesh-positions">
<float_array id="AM3_Application_Hatch_Detailed_HD_002-mesh-positions-array" count="96">-76.90127 -64.71254 0 -76.90127 -64.71254 5.500071 -76.90127 64.71254 0 -76.90127 64.71254 5.500071 76.90133 -64.71254 0 76.90133 -64.71254 5.500071 76.90133 64.71254 0 76.90133 64.71254 5.500071 -80.46286 11.54165 0 -91.00833 3.847216 0 -91.00833 -3.847217 0 -80.46286 -11.54165 0 -80.46286 -11.54165 5.500071 -91.00833 -3.847217 5.500071 -91.00833 3.847217 5.500071 -80.46286 11.54165 5.500071 80.46292 -11.54165 0 91.00839 -3.847217 0 91.00839 3.847217 0 80.46292 11.54165 0 80.46292 11.54165 5.500071 91.00839 3.847216 5.500071 91.00839 -3.847217 5.500071 80.46292 -11.54165 5.500071 -80.46286 38.12709 5.500071 80.46292 38.12709 0 -80.46286 38.12709 0 80.46292 38.12709 5.500071 -80.46286 -38.12709 0 80.46292 -38.12709 5.500071 -80.46286 -38.12709 5.500071 80.46292 -38.12709 0</float_array>
<technique_common>
<accessor source="#AM3_Application_Hatch_Detailed_HD_002-mesh-positions-array" count="32" stride="3">
<param name="X" type="float"/>
<param name="Y" type="float"/>
<param name="Z" type="float"/>
</accessor>
</technique_common>
</source>
<source id="AM3_Application_Hatch_Detailed_HD_002-mesh-normals">
<float_array id="AM3_Application_Hatch_Detailed_HD_002-mesh-normals-array" count="63">-0.9911455 0.1327809 0 0 1 0 0.9911455 -0.1327809 0 0 -1 0 0 0 1 0 0 1 1 0 0 0.5894234 0.8078244 0 0.5894234 -0.8078244 0 -1 0 0 -0.5894234 -0.8078244 0 -0.5894234 0.8078243 -2.12521e-7 0.9911454 0.1327813 0 -0.9911454 -0.1327813 0 -0.9911454 0.1327813 0 0.9911454 -0.1327813 0 0 0 1 0.5894235 0.8078243 2.12521e-7 -0.5894234 0.8078244 0 0.9911455 0.1327809 0 -0.9911455 -0.1327809 0</float_array>
<technique_common>
<accessor source="#AM3_Application_Hatch_Detailed_HD_002-mesh-normals-array" count="21" stride="3">
<param name="X" type="float"/>
<param name="Y" type="float"/>
<param name="Z" type="float"/>
</accessor>
</technique_common>
</source>
<source id="AM3_Application_Hatch_Detailed_HD_002-mesh-map">
<float_array id="AM3_Application_Hatch_Detailed_HD_002-mesh-map-array" count="276">0.8547119 0.7845032 0.8910338 0.6826047 0.8910338 0.7845032 0.8547118 0.6826048 0.8910338 0 0.8910338 0.6826047 0.963678 0.5650011 0.927356 0.7110616 0.927356 0.5650011 0.891034 0.7845032 0.927356 0.1018984 0.927356 0.7845032 0.1755681 0.9420632 0 0.07750421 0.1755678 0.05793678 0.679144 0.9420632 0.5035759 0.05793678 0.6791438 0.05793678 0.503576 0.9420632 0.4527625 0 0.5035759 0.05793678 0.4527627 1 0.4019491 0 0.4527625 0 0.4019494 1 0.3511358 0.05793678 0.4019491 0 0.963678 0.1460605 0.927356 0.2921209 0.927356 0.1460605 0.963678 0.2921209 0.927356 0.3343942 0.927356 0.2921209 0.963678 0.3343942 0.927356 0.3766674 0.927356 0.3343942 0.963678 0.3766674 0.927356 0.4189407 0.927356 0.3766674 0.927356 0.7971531 0.963678 0.7110617 0.963678 0.7971532 0.963678 0.07171952 1 0 1 0.07171952 0.8547119 0.9672312 0.8910339 0.9423143 0.8910339 0.9672312 0.8547119 0.9423143 0.8910339 0.8705947 0.8910339 0.9423143 0.963678 0 0.927356 0.1460605 0.9273561 0 0.8547118 0.9224958 0.6791438 0.05793678 0.8547117 0.07750409 0.8547119 0.8705947 0.8910338 0.7845032 0.8910339 0.8705947 0.891034 0.1018984 0.927356 0 0.927356 0.1018984 0.351136 0.9420632 0.1755678 0.05793678 0.3511358 0.05793678 0.963678 0.4189407 0.927356 0.5650011 0.927356 0.4189407 0.8547119 0.7845032 0.8547118 0.6826048 0.8910338 0.6826047 0.8547118 0.6826048 0.8547118 0 0.8910338 0 0.963678 0.5650011 0.963678 0.7110616 0.927356 0.7110616 0.891034 0.7845032 0.891034 0.1018984 0.927356 0.1018984 0.1755681 0.9420632 0 0.9224959 0 0.07750421 0.679144 0.9420632 0.503576 0.9420632 0.5035759 0.05793678 0.503576 0.9420632 0.4527627 1 0.4527625 0 0.4527627 1 0.4019494 1 0.4019491 0 0.4019494 1 0.351136 0.9420632 0.3511358 0.05793678 0.963678 0.1460605 0.963678 0.2921209 0.927356 0.2921209 0.963678 0.2921209 0.963678 0.3343942 0.927356 0.3343942 0.963678 0.3343942 0.963678 0.3766674 0.927356 0.3766674 0.963678 0.3766674 0.963678 0.4189407 
0.927356 0.4189407 0.927356 0.7971531 0.927356 0.7110616 0.963678 0.7110617 0.963678 0.07171952 0.963678 0 1 0 0.8547119 0.9672312 0.8547119 0.9423143 0.8910339 0.9423143 0.8547119 0.9423143 0.8547119 0.8705947 0.8910339 0.8705947 0.963678 0 0.963678 0.1460605 0.927356 0.1460605 0.8547118 0.9224958 0.679144 0.9420632 0.6791438 0.05793678 0.8547119 0.8705947 0.8547119 0.7845032 0.8910338 0.7845032 0.891034 0.1018984 0.891034 0 0.927356 0 0.351136 0.9420632 0.1755681 0.9420632 0.1755678 0.05793678 0.963678 0.4189407 0.963678 0.5650011 0.927356 0.5650011</float_array>
<technique_common>
<accessor source="#AM3_Application_Hatch_Detailed_HD_002-mesh-map-array" count="138" stride="2">
<param name="S" type="float"/>
<param name="T" type="float"/>
</accessor>
</technique_common>
</source>
<vertices id="AM3_Application_Hatch_Detailed_HD_002-mesh-vertices">
<input semantic="POSITION" source="#AM3_Application_Hatch_Detailed_HD_002-mesh-positions"/>
</vertices>
<triangles material="hatch-material" count="46">
<input semantic="VERTEX" source="#AM3_Application_Hatch_Detailed_HD_002-mesh-vertices" offset="0"/>
<input semantic="NORMAL" source="#AM3_Application_Hatch_Detailed_HD_002-mesh-normals" offset="1"/>
<input semantic="TEXCOORD" source="#AM3_Application_Hatch_Detailed_HD_002-mesh-map" offset="2" set="0"/>
<p>24 0 0 2 0 1 26 0 2 3 1 3 6 1 4 2 1 5 29 2 6 4 2 7 31 2 8 5 3 9 0 3 10 4 3 11 30 4 12 5 4 13 29 4 14 24 4 15 20 4 16 27 4 17 15 4 18 21 4 19 20 4 20 14 5 21 22 5 22 21 5 23 13 4 24 23 4 25 22 4 26 27 6 27 19 6 28 25 6 29 20 7 30 18 7 31 19 7 32 21 6 33 17 6 34 18 6 35 22 8 36 16 8 37 17 8 38 30 9 39 11 9 40 28 9 41 12 10 42 10 10 43 11 10 44 13 9 45 9 9 46 10 9 47 14 11 48 8 11 49 9 11 50 7 12 51 25 12 52 6 12 53 3 4 54 27 4 55 7 4 56 15 9 57 26 9 58 8 9 59 1 13 60 28 13 61 0 13 62 12 4 63 29 4 64 23 4 65 23 6 66 31 6 67 16 6 68 24 14 69 3 14 70 2 14 71 3 1 72 7 1 73 6 1 74 29 15 75 5 15 76 4 15 77 5 3 78 1 3 79 0 3 80 30 4 81 1 4 82 5 4 83 24 4 84 15 4 85 20 4 86 15 16 87 14 16 88 21 16 89 14 4 90 13 4 91 22 4 92 13 4 93 12 4 94 23 4 95 27 6 96 20 6 97 19 6 98 20 17 99 21 17 100 18 17 101 21 6 102 22 6 103 17 6 104 22 8 105 23 8 106 16 8 107 30 9 108 12 9 109 11 9 110 12 10 111 13 10 112 10 10 113 13 9 114 14 9 115 9 9 116 14 18 117 15 18 118 8 18 119 7 19 120 27 19 121 25 19 122 3 4 123 24 4 124 27 4 125 15 9 126 24 9 127 26 9 128 1 20 129 30 20 130 28 20 131 12 4 132 30 4 133 29 4 134 23 6 135 29 6 136 31 6 137</p>
</triangles>
</mesh>
</geometry>
</library_geometries>
<library_visual_scenes>
<visual_scene id="Scene" name="Scene">
<node id="AM3_Application_Hatch_LP" name="AM3 Application Hatch LP" type="NODE">
<matrix sid="transform">0.001006675 0 0 0 0 9.99987e-4 0 0 0 0 9.99987e-4 0 0 0 0 1</matrix>
<instance_geometry url="#AM3_Application_Hatch_Detailed_HD_002-mesh" name="AM3 Application Hatch LP">
<bind_material>
<technique_common>
<instance_material symbol="hatch-material" target="#hatch-material">
<bind_vertex_input semantic="UVMap" input_semantic="TEXCOORD" input_set="0"/>
</instance_material>
</technique_common>
</bind_material>
</instance_geometry>
</node>
</visual_scene>
</library_visual_scenes>
<scene>
<instance_visual_scene url="#Scene"/>
</scene>
</COLLADA>
