Finally, update and upgrade:

```bash
sudo apt update
sudo apt upgrade
```
### Installing CUDA
To use the onboard GPU, we need to install the CUDA toolkit for Jetson, available [here](https://developer.nvidia.com/cuda-11-8-0-download-archive?target_os=Linux&target_arch=aarch64-jetson&Compilation=Native&Distribution=Ubuntu&target_version=20.04&target_type=deb_local). Specifically:
```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda-tegra-repo-ubuntu2004-11-8-local_11.8.0-1_arm64.deb
sudo dpkg -i cuda-tegra-repo-ubuntu2004-11-8-local_11.8.0-1_arm64.deb
sudo cp /var/cuda-tegra-repo-ubuntu2004-11-8-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-11-8
```
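The apt packages install the toolkit under `/usr/local/cuda-11.8` but do not put it on the shell's search paths. A minimal environment sketch, assuming the default install prefix (afterwards, `nvcc --version` should report release 11.8):

```shell
# Put the CUDA 11.8 compiler and libraries on the search paths
export PATH=/usr/local/cuda-11.8/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64:${LD_LIBRARY_PATH}
# Persist the settings for future shells
echo 'export PATH=/usr/local/cuda-11.8/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
```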
We also need to install the following packages for successful CUDA kernel compilation in our projects:
```bash
sudo apt install -y libcudnn8-dev libcusolver-dev-11-8 libcublas-dev-11-8 libcublas-11-8 libcusparse-11-8 libcusparse-dev-11-8
```
To install PyTorch, we need to know which JetPack revision is installed. You can check it with the following command:
```bash
dpkg-query --show nvidia-l4t-core
```
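The second field of that output is the L4T release, which maps to a JetPack generation (R32.x corresponds to JetPack 4, R35.x to JetPack 5). A sketch of extracting it; the sample line below is illustrative, not taken from a real board:

```shell
# Illustrative dpkg-query output: package name, then the L4T version string
sample='nvidia-l4t-core 35.3.1-20230319081403'
# Keep only the major.minor.patch release, dropping the build timestamp
l4t=$(echo "$sample" | awk '{print $2}' | cut -d- -f1)
echo "$l4t"   # → 35.3.1
```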
### Installing Conda
Simply run the following commands to install Conda (via the Miniforge installer):
```bash
sudo chown $USER /opt
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh -O ~/miniconda.sh
/bin/bash ~/miniconda.sh -b -p /opt/conda
rm ~/miniconda.sh
sudo ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc
echo "conda activate base" >> ~/.bashrc
```
Now create a virtual environment for the deployment of the RL policies available in Go2Py:
```bash
conda create --name rl-deploy python=3.8.10
conda activate rl-deploy
```
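Python is pinned to 3.8 because the prebuilt PyTorch wheel installed below is tagged `cp38-cp38` and only installs into a CPython 3.8 interpreter. A quick sanity check, run inside the activated environment:

```python
import sys

# The NVIDIA PyTorch wheel is built for CPython 3.8 ("cp38-cp38" in its
# filename), so the active interpreter must report 3.8 here
print(f"{sys.version_info.major}.{sys.version_info.minor}")
```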
Finally, install the deep learning libraries needed for the deployments:
#### DL Frameworks
Download and install the appropriate version as described [here](https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048):
```bash
cd ~
wget https://developer.download.nvidia.cn/compute/redist/jp/v512/pytorch/torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl -O torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
python -m pip install torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
# Install warp-lang if required
pip install https://github.com/NVIDIA/warp/releases/download/v1.5.1/warp_lang-1.5.1+cu11-py3-none-manylinux2014_aarch64.whl
```
If the installation succeeded, the following check should print `True`:
```python
import torch
print(torch.cuda.is_available())
```
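Beyond the availability flag, a small computation exercises the CUDA path end to end. A minimal sketch (it falls back to the CPU, so it also runs on a machine without a GPU):

```python
import torch

# Run a small matrix product on the GPU when available, otherwise on the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.ones(2, 2, device=device)
# ones(2,2) @ ones(2,2) gives a matrix of 2s; summing the four entries gives 8
print((a @ a).sum().item())  # → 8.0
```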
### Robot
Now configure the computer on the robot to use the internet connection shared by the host computer. SSH into the robot's computer at IP address `192.168.123.18` with username `unitree` and password `123`. Note that the host computer should already have been given a static IP in the `192.168.123.x` range, where `x` is any value not already in use by the other devices (e.g. not `.18`).
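Once logged in, pointing the robot's default route and DNS at the host is enough to use the shared connection. A sketch, assuming the host computer took the static address `192.168.123.222` (hypothetical; substitute the address you actually assigned) and has connection sharing / IP forwarding enabled:

```bash
# On the robot's computer: send all non-local traffic through the host
sudo ip route add default via 192.168.123.222   # hypothetical host address
# Use a public resolver for DNS lookups
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
```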