Simulation has long been a cornerstone of medical imaging for addressing the data gap. In healthcare robotics, however, it has until now often been too slow, siloed, or difficult to translate into real-world systems. That's now changing. With new advances in GPU-accelerated simulation and digital twins, developers can design, test, and validate robotic workflows entirely in virtual environments, reducing prototyping time from months to days, improving model accuracy, and enabling safer, faster innovation before a single device reaches the operating room.
That's why NVIDIA introduced Isaac for Healthcare earlier this year, a developer framework for AI healthcare robotics that helps developers solve these challenges through integrated data collection, training, and evaluation pipelines that work across both simulation and hardware. Specifically, the Isaac for Healthcare v0.4 release provides an end-to-end SO-ARM-based starter workflow and the Bring Your Own Operating Room tutorial. The SO-ARM starter workflow lowers the barrier for MedTech developers to experience the full workflow from simulation to training to deployment and to start building and validating autonomous behaviors on real hardware right away.
In this post, we'll walk through the starter workflow and its technical implementation details to help you build a surgical assistant robot in less time than ever before.
The SO-ARM starter workflow introduces a new way to explore surgical assistance tasks and provides a complete end-to-end pipeline for autonomous surgical assistance.
This workflow gives developers a safe, repeatable environment to train and refine assistive skills before moving into the Operating Room.
The workflow implements a three-stage pipeline that integrates simulation and real hardware: teleoperated data collection (in simulation and on the physical arm), GR00T N1.5 post-training on the mixed dataset, and deployment of the trained policy for evaluation.
Notably, over 93% of the data used for policy training was generated synthetically in simulation, underscoring the strength of simulation in bridging the robotic data gap.
The workflow combines simulation and real-world data to address the fundamental challenge that training robots in the real world is expensive and limited, while pure simulation often fails to capture real-world complexities. The approach uses approximately 70 simulation episodes for diverse scenarios and environmental variations, combined with 10-20 real-world episodes for authenticity and grounding. This mixed training creates policies that generalize beyond either domain alone.
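As a rough illustration of that recipe, the sketch below simply pools simulation and real episodes into one shuffled training list. The episode counts mirror the article, but the directory layout and variable names are assumptions for illustration, not the workflow's actual data format.
# Illustrative only: pool ~70 simulation episodes and 10-20 real episodes
# into a single shuffled training list. The directory layout below is an
# assumption, not the workflow's actual on-disk format.
from pathlib import Path
import random

SIM_ROOT = Path("/path/to/sim_episodes")    # ~70 episodes collected in simulation
REAL_ROOT = Path("/path/to/real_episodes")  # 10-20 episodes from the SO-ARM101

sim_episodes = sorted(SIM_ROOT.glob("episode_*"))
real_episodes = sorted(REAL_ROOT.glob("episode_*"))

mixed = sim_episodes + real_episodes
random.shuffle(mixed)  # interleave domains so every training batch sees both

print(f"{len(sim_episodes)} sim + {len(real_episodes)} real = {len(mixed)} total episodes")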
The workflow requires compute for simulation, training, and deployment. Notably, developers can run all three stages (the three computers typically needed for physical AI) on a single DGX Spark.
For real-world data collection with SO-ARM101 hardware or any other version supported in LeRobot:
python /path/to/lerobot-record \
--robot.type=so101_follower \
--robot.port=<follower_port_id> \
--robot.cameras="{wrist: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}, room: {type: opencv, index_or_path: 2, width: 640, height: 480, fps: 30}}" \
--robot.id=so101_follower_arm \
--teleop.type=so101_leader \
--teleop.port=<leader_port_id> \
--teleop.id=so101_leader_arm \
--dataset.repo_id=<user>/surgical_assistance/surgical_assistance \
--dataset.num_episodes=15 \
--dataset.single_task="Prepare and hand surgical instruments to surgeon"
For simulation-based data collection:
# With keyboard teleoperation
python -m simulation.environments.teleoperation_record \
--enable_cameras \
--record \
--dataset_path=/path/to/save/dataset.hdf5 \
--teleop_device=keyboard
# With SO-ARM101 leader arm
python -m simulation.environments.teleoperation_record \
--port=<your_leader_arm_port_id> \
--enable_cameras \
--record \
--dataset_path=/path/to/save/dataset.hdf5
For users without physical SO-ARM101 hardware, the workflow provides keyboard-based teleoperation that maps keys to individual joint controls (a conceptual sketch follows below).
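As a conceptual sketch of what such a keyboard mapping looks like, each key press nudges a single joint by a small increment. The key bindings and joint names below are illustrative placeholders, not the workflow's actual controls.
# Conceptual sketch: each key press nudges one joint of the arm.
# Key bindings and joint names are placeholders, not the workflow's actual controls.
STEP_RAD = 0.05  # per-keypress joint increment, in radians

KEY_TO_DELTA = {
    "q": ("shoulder_pan", +STEP_RAD),  "a": ("shoulder_pan", -STEP_RAD),
    "w": ("shoulder_lift", +STEP_RAD), "s": ("shoulder_lift", -STEP_RAD),
    "e": ("elbow_flex", +STEP_RAD),    "d": ("elbow_flex", -STEP_RAD),
    "r": ("wrist_flex", +STEP_RAD),    "f": ("wrist_flex", -STEP_RAD),
    "t": ("wrist_roll", +STEP_RAD),    "g": ("wrist_roll", -STEP_RAD),
    "y": ("gripper", +STEP_RAD),       "h": ("gripper", -STEP_RAD),
}

def apply_key(joint_targets: dict, key: str) -> dict:
    """Return updated joint targets after a single key press."""
    if key in KEY_TO_DELTA:
        joint, delta = KEY_TO_DELTA[key]
        joint_targets[joint] = joint_targets.get(joint, 0.0) + delta
    return joint_targets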
After collecting both simulation and real-world data, convert and combine datasets for training:
# Convert simulation data to LeRobot format
python -m training.hdf5_to_lerobot \
--repo_id=surgical_assistance_dataset \
--hdf5_path=/path/to/your/sim_dataset.hdf5 \
--task_description="Autonomous surgical instrument handling and preparation"
# Post-train GR00T N1.5 on mixed dataset
python -m training.gr00t_n1_5.train \
--dataset_path /path/to/your/surgical_assistance_dataset \
--output_dir /path/to/surgical_checkpoints \
--data_config so100_dualcam
The trained model processes natural language instructions such as "Prepare the scalpel for the surgeon" or "Hand me the forceps" and executes the corresponding robotic actions. With the latest LeRobot release (v0.4.0), you will be able to post-train GR00T N1.5 natively in LeRobot!
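As a rough sketch of how that instruction-conditioned inference fits together at deployment time, the loop below takes camera frames, the current joint state, and a language instruction, and streams joint targets back to the arm. The policy wrapper is a placeholder, not the workflow's or GR00T's actual API.
# Illustrative deployment loop: observations + a language instruction in,
# a chunk of joint targets out. PlaceholderPolicy stands in for the
# post-trained GR00T N1.5 checkpoint; it is not the real API.
import numpy as np

class PlaceholderPolicy:
    def get_action(self, wrist_img, room_img, joint_state, instruction):
        # A real policy returns a short chunk of predicted joint targets;
        # here we repeat the current state so the sketch runs end to end.
        return np.tile(joint_state, (16, 1))

policy = PlaceholderPolicy()
instruction = "Prepare the scalpel for the surgeon"

joint_state = np.zeros(6)                            # six SO-ARM joints
wrist_img = np.zeros((480, 640, 3), dtype=np.uint8)  # wrist camera frame
room_img = np.zeros((480, 640, 3), dtype=np.uint8)   # room camera frame

action_chunk = policy.get_action(wrist_img, room_img, joint_state, instruction)
for joint_targets in action_chunk:
    pass  # send joint_targets to the follower arm at the control rate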
Simulation is most powerful when it's part of a loop: collect data → train → evaluate → deploy. Isaac Lab supports this full pipeline.
This reduces time from experiment to deployment and makes sim-to-real a practical part of daily development.
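A minimal sketch of the evaluation step in that loop, assuming a Gym-style simulation environment: the environment interface and "success" flag below are generic placeholders, not the workflow's actual evaluation entry point.
# Generic closed-loop evaluation sketch: roll the trained policy out in
# simulation and measure success rate before moving to hardware.
# The env interface and "success" flag are placeholders.
def evaluate(policy, env, num_episodes=20, max_steps=300):
    successes = 0
    for _ in range(num_episodes):
        obs = env.reset()
        for _ in range(max_steps):
            action = policy.get_action(obs)
            obs, reward, done, info = env.step(action)
            if done:
                successes += int(info.get("success", False))
                break
    return successes / num_episodes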
The Isaac for Healthcare SO-ARM Starter Workflow is available now. To get started, clone the repository:
git clone https://github.com/isaac-for-healthcare/i4h-workflows.git
Then run the environment setup script (tools/env_setup_so_arm_starter.sh).