# Use Trossen AI Bimanual with LeRobot

This tutorial explains how to use the Trossen AI Bimanual with LeRobot.

## Setup

Follow the documentation from Trossen Robotics for setting up the hardware.

## Install LeRobot

On your computer:

1. Install Miniconda:

```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
```

2. Restart your shell or run `source ~/.bashrc`.

3. Create and activate a fresh conda environment for LeRobot:

```bash
conda create -y -n lerobot python=3.10 && conda activate lerobot
```

4. Clone LeRobot:

```bash
git clone https://github.com/Interbotix/lerobot.git ~/lerobot
```

5. Install LeRobot with dependencies for the Trossen AI arms (`trossen-arm`) and cameras (`intelrealsense`):

```bash
cd ~/lerobot && pip install -e ".[trossen_ai]"
```

For Linux only (not Mac), install extra dependencies for recording datasets:

```bash
conda install -y -c conda-forge ffmpeg
pip uninstall -y opencv-python
conda install -y -c conda-forge "opencv>=4.10.0"
```
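
To confirm the environment is healthy before moving on, a quick check like the following can help (a minimal sketch; it only assumes `lerobot` and OpenCV are installed as above):

```python
# Sanity-check the install: both imports must succeed, and OpenCV's build
# info should report FFMPEG support, which recording relies on for video encoding.
import cv2
import lerobot  # noqa: F401  (the import alone verifies the package is usable)

print(cv2.__version__)  # expect >= 4.10.0 after the conda-forge install
info = cv2.getBuildInformation()
print([line for line in info.splitlines() if "FFMPEG" in line])  # expect 'YES'
```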

## Troubleshooting

If you encounter the following error:

```
ImportError: /xxx/xxx/xxx/envs/lerobot/lib/python3.10/site-packages/cv2/python-3.10/../../../.././libtiff.so.6: undefined symbol: jpeg12_write_raw_data, version LIBJPEG_8.0
```

There are two known system-specific solutions:

### System76 Serval Workstation (serw13) & Dell Precision 7670

```bash
conda install pytorch==2.5.1=cpu_openblas_py310ha613aac_2 -y
conda install torchvision==0.21.0 -y
```

### HP

```bash
pip install torch==2.5.1+cu121 torchvision==0.20.1+cu121 torchaudio==2.5.1+cu121 --index-url https://download.pytorch.org/whl/cu121
```
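
After applying either fix, you can verify that the pinned installs import cleanly (a quick check using standard PyTorch calls; nothing here is specific to LeRobot):

```python
# Verify torch/torchvision import without the libjpeg symbol error
# and report their versions.
import torch
import torchvision

print(torch.__version__, torchvision.__version__)
print(torch.cuda.is_available())  # True expected for the CUDA (HP) install
```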

## Teleoperate

By running the following code, you can start your first SAFE teleoperation:

```bash
python lerobot/scripts/control_robot.py \
  --robot.type=trossen_ai_bimanual \
  --robot.max_relative_target=5 \
  --control.type=teleoperate
```

By adding `--robot.max_relative_target=5`, we override the default value for `max_relative_target` defined in `TrossenAIBimanualRobot`. Setting it to 5 limits the magnitude of each relative movement for safety, at the cost of smooth teleoperation. When you feel confident, you can remove this limit by adding `--robot.max_relative_target=null` to the command line:

```bash
python lerobot/scripts/control_robot.py \
  --robot.type=trossen_ai_bimanual \
  --robot.max_relative_target=null \
  --control.type=teleoperate
```
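
For intuition, `max_relative_target` acts like a per-step clamp on how far the goal position may move from the present position. The sketch below is a conceptual illustration of that idea, not LeRobot's actual implementation; the function name and signature are made up:

```python
import numpy as np

def clamp_goal_position(
    present_pos: np.ndarray, goal_pos: np.ndarray, max_relative_target: float | None
) -> np.ndarray:
    """Hypothetical sketch: limit each motor's goal to +/- max_relative_target
    of its present position; None disables the safety clamp entirely."""
    if max_relative_target is None:
        return goal_pos  # no limit: pass the goal through unchanged
    delta = np.clip(goal_pos - present_pos, -max_relative_target, max_relative_target)
    return present_pos + delta
```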

## Record a dataset

Once you're familiar with teleoperation, you can record your first dataset with Trossen AI.

If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the Hugging Face settings:

```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```

Store your Hugging Face username in a variable to run these commands:

```bash
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
```

Record 2 episodes and upload your dataset to the hub:

```bash
python lerobot/scripts/control_robot.py \
  --robot.type=trossen_ai_bimanual \
  --robot.max_relative_target=null \
  --control.type=record \
  --control.fps=30 \
  --control.single_task="Grasp a lego block and put it in the bin." \
  --control.repo_id=${HF_USER}/trossen_ai_bimanual_test \
  --control.tags='["tutorial"]' \
  --control.warmup_time_s=5 \
  --control.episode_time_s=30 \
  --control.reset_time_s=30 \
  --control.num_episodes=2 \
  --control.push_to_hub=true
```

Note: If the camera fps is unstable, consider increasing the number of image writer threads per camera:

```bash
python lerobot/scripts/control_robot.py \
  --robot.type=trossen_ai_bimanual \
  --robot.max_relative_target=null \
  --control.type=record \
  --control.fps=30 \
  --control.single_task="Grasp a lego block and put it in the bin." \
  --control.repo_id=${HF_USER}/trossen_ai_bimanual_test \
  --control.tags='["tutorial"]' \
  --control.warmup_time_s=5 \
  --control.episode_time_s=30 \
  --control.reset_time_s=30 \
  --control.num_episodes=2 \
  --control.push_to_hub=true \
  --control.num_image_writer_threads_per_camera=8
```
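
Once recording finishes, you can sanity-check the result from Python. The snippet below assumes the `LeRobotDataset` API shipped with this lerobot version; the repo id is a placeholder for your own:

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Load the dataset you just recorded (replace with your ${HF_USER} repo id).
dataset = LeRobotDataset("YOUR_HF_USER/trossen_ai_bimanual_test")

print(dataset.fps)           # should match --control.fps (30)
print(dataset.num_episodes)  # should be 2
frame = dataset[0]           # one timestep: a dict of observation/action tensors
print(list(frame.keys()))
```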

## Visualize a dataset

If you uploaded your dataset to the hub with `--control.push_to_hub=true`, you can visualize it online by copy-pasting your repo id, which is given by:

```bash
echo ${HF_USER}/trossen_ai_bimanual_test
```

If you didn't upload it (i.e. you recorded with `--control.push_to_hub=false`), you can also visualize the dataset locally with:

```bash
python lerobot/scripts/visualize_dataset_html.py \
  --repo-id ${HF_USER}/trossen_ai_bimanual_test
```

## Replay an episode

Now try to replay the first episode on your robot:

```bash
python lerobot/scripts/control_robot.py \
  --robot.type=trossen_ai_bimanual \
  --robot.max_relative_target=null \
  --control.type=replay \
  --control.fps=30 \
  --control.repo_id=${HF_USER}/trossen_ai_bimanual_test \
  --control.episode=0
```
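
Under the hood, replay essentially streams the recorded actions back to the robot at the recorded fps. You can peek at what gets sent with a short script (again assuming the `LeRobotDataset` API; the repo id is a placeholder):

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("YOUR_HF_USER/trossen_ai_bimanual_test")

# Print the joint targets that replay would send for the first few timesteps.
for i in range(5):
    print(dataset[i]["action"])
```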

## Train a policy

To train a policy to control your robot, use the `lerobot/scripts/train.py` script. A few arguments are required. Here is an example command:

```bash
python lerobot/scripts/train.py \
  --dataset.repo_id=${HF_USER}/trossen_ai_bimanual_test \
  --policy.type=act \
  --output_dir=outputs/train/act_trossen_ai_bimanual_test \
  --job_name=act_trossen_ai_bimanual_test \
  --device=cuda \
  --wandb.enable=true
```

Let's explain it:

1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/trossen_ai_bimanual_test`.
2. We provided the policy with `policy.type=act`. This loads configurations from `configuration_act.py`. Importantly, this policy automatically adapts to the number of motor states, motor actions, and cameras of your robot, which have been saved in your dataset.
3. We provided `device=cuda` since we are training on an Nvidia GPU, but you could use `device=mps` to train on Apple silicon.
4. We provided `wandb.enable=true` to use Weights and Biases for visualizing training plots. This is optional, but if you use it, make sure you are logged in by running `wandb login`.

For more information on the train script, see the previous tutorial: `examples/4_train_policy_with_script.md`.

Training should take several hours. You will find checkpoints in `outputs/train/act_trossen_ai_bimanual_test/checkpoints`.
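
A checkpoint saved this way can also be reloaded directly in Python, e.g. to inspect it. This is a minimal sketch assuming lerobot's `ACTPolicy.from_pretrained` helper:

```python
from lerobot.common.policies.act.modeling_act import ACTPolicy

# Load the last checkpoint produced by the train script above.
policy = ACTPolicy.from_pretrained(
    "outputs/train/act_trossen_ai_bimanual_test/checkpoints/last/pretrained_model"
)
policy.eval()  # the policy is a torch.nn.Module
print(sum(p.numel() for p in policy.parameters()))  # total parameter count
```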

## Evaluate your policy

You can use the `record` function from `lerobot/scripts/control_robot.py` but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:

```bash
python lerobot/scripts/control_robot.py \
  --robot.type=trossen_ai_bimanual \
  --control.type=record \
  --control.fps=30 \
  --control.single_task="Grasp a lego block and put it in the bin." \
  --control.repo_id=${HF_USER}/eval_act_trossen_ai_bimanual_test \
  --control.tags='["tutorial"]' \
  --control.warmup_time_s=5 \
  --control.episode_time_s=30 \
  --control.reset_time_s=30 \
  --control.num_episodes=10 \
  --control.push_to_hub=true \
  --control.policy.path=outputs/train/act_trossen_ai_bimanual_test/checkpoints/last/pretrained_model \
  --control.num_image_writer_processes=1
```

As you can see, it's almost the same command as the one previously used to record your training dataset. Three things changed:

1. There is an additional `--control.policy.path` argument which indicates the path to your policy checkpoint (e.g. `outputs/train/act_trossen_ai_bimanual_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_trossen_ai_bimanual_test`).
2. The dataset name begins with `eval` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_trossen_ai_bimanual_test`).
3. We use `--control.num_image_writer_processes=1` instead of the default value (0). On our computer, using a dedicated process to write images from the 4 cameras to disk makes it possible to reach a constant 30 fps during inference. Feel free to explore different values for `--control.num_image_writer_processes`.
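
For reference, the evaluation loop that `record` runs when `--control.policy.path` is set looks roughly like the sketch below. This is a hypothetical simplification: `robot` and `policy` stand for a connected robot and a loaded policy, and the method names follow lerobot's robot/policy APIs (real control code also preprocesses images and paces the loop to the target fps):

```python
import torch

def run_one_episode(robot, policy, device="cuda", num_steps=900):
    """Sketch of one 30 s evaluation episode at 30 fps (30 * 30 = 900 steps)."""
    policy.reset()
    for _ in range(num_steps):
        obs = robot.capture_observation()  # dict of observation tensors
        obs = {k: v.unsqueeze(0).to(device) for k, v in obs.items()}  # add batch dim
        with torch.inference_mode():
            action = policy.select_action(obs)  # (1, action_dim) tensor
        robot.send_action(action.squeeze(0).to("cpu"))
```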

## More

Follow the previous tutorial for a more in-depth explanation.