We're thrilled to announce a series of significant advancements across LeRobot, designed to make open-source robot learning more powerful, scalable, and user-friendly than ever before! From revamped datasets to versatile editing tools, new simulation environments, and a groundbreaking plugin system for hardware, LeRobot is continuously evolving to meet the demands of cutting-edge embodied AI.
LeRobot v0.4.0 delivers a major upgrade for open-source robotics, introducing scalable Datasets v3.0, powerful new VLA models like PI0.5 and GR00T N1.5, and a new plugin system for easier hardware integration. The release also adds support for LIBERO and Meta-World simulations, simplified multi-GPU training, and a new Hugging Face Robot Learning Course.
We've completely overhauled our dataset infrastructure with LeRobotDataset v3.0, featuring a new chunked episode format and streaming capabilities. This is a game-changer for handling massive datasets like OXE (Open X-Embodiment) and DROID, bringing unparalleled efficiency and scalability.
We've also provided a conversion script to easily migrate your existing v2.1 datasets to the new v3.0 format, ensuring a smooth transition. Read more about it in our previous blog post. Open-source robotics keeps leveling up!
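To give a feel for the API, here is a minimal sketch of loading a v3.0 dataset. The import path is an assumption based on recent LeRobot layouts and may differ in your installed version; see the datasets docs for the exact entry points, including the streaming variant introduced in this release.

# Sketch: loading an episode-chunked v3.0 dataset from the Hub.
# The import path below is an assumption; it may differ by version.
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Download (or reuse the cached copy of) a dataset and index into it by frame.
dataset = LeRobotDataset("lerobot/pusht")
frame = dataset[0]  # dict of tensors: observations, actions, timestamps, ...
print(dataset.num_episodes, dataset.num_frames)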
Working with LeRobot datasets just got a whole lot easier! We've introduced a powerful set of utilities for flexible dataset editing.
With our new lerobot-edit-dataset CLI, you can now:
Delete specific episodes from existing datasets.
Split datasets by fractions or episode indices.
Add or remove features with ease.
Merge multiple datasets into one unified set.
lerobot-edit-dataset \
    --repo_id lerobot/pusht_merged \
    --operation.type merge \
    --operation.repo_ids "['lerobot/pusht_train', 'lerobot/pusht_val']"

lerobot-edit-dataset \
    --repo_id lerobot/pusht \
    --new_repo_id lerobot/pusht_after_deletion \
    --operation.type delete_episodes \
    --operation.episode_indices "[0, 2, 5]"
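Splitting follows the same pattern. The sketch below infers the split flag by analogy with the operations above, so treat it as an assumption and run lerobot-edit-dataset --help for the authoritative option names:

lerobot-edit-dataset \
    --repo_id lerobot/pusht \
    --operation.type split \
    --operation.splits "{'train': 0.8, 'val': 0.2}"  # flag name assumed; check --help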
These tools streamline your workflow, allowing you to curate and optimize your robot datasets like never before. Check out the docs for more details!
We're continuously expanding LeRobot's simulation capabilities to provide richer and more diverse training environments for your robotic policies.
LeRobot now officially supports LIBERO, one of the largest open benchmarks for Vision-Language-Action (VLA) policies, boasting over 130 tasks! This is a huge step toward building the go-to evaluation hub for VLAs, enabling easy integration and a unified setup for evaluating any VLA policy.
Check out the LIBERO dataset and our docs to get started!
We've integrated Meta-World, a premier benchmark for testing multi-task and generalization abilities in robotic manipulation, featuring over 50 diverse manipulation tasks. This integration, along with our standardized use of gymnasium ≥ 1.0.0 and mujoco ≥ 3.0.0, ensures deterministic seeding and a robust simulation foundation.
Train your policies with the Meta-World dataset today!
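To illustrate the deterministic-seeding guarantee, here is a minimal gymnasium ≥ 1.0 sketch. The environment ID is a placeholder, since the exact registered names depend on the LIBERO or Meta-World task you install:

import gymnasium as gym

# "EnvID-v0" is a placeholder: substitute the LIBERO or Meta-World task ID
# registered by your installation.
env = gym.make("EnvID-v0")

# gymnasium >= 1.0 seeds through reset(): the same seed reproduces the same
# initial state, which is what makes benchmark evaluations deterministic.
obs, info = env.reset(seed=42)
env.action_space.seed(42)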
We're making robot control more flexible and accessible, enabling new possibilities for data collection and model training.
Getting data from a robot to a model (and back!) is tricky. Raw sensor data, joint positions, and language instructions don't match what AI models expect. Models need normalized, batched tensors on the right device, while your robot hardware needs specific action commands.
We're excited to introduce Processors: a new, modular pipeline that acts as a universal translator for your data. Think of it as an assembly line where each ProcessorStep handles one specific job—like normalizing, tokenizing text, or moving data to the GPU.
You can chain these steps together into a powerful pipeline to perfectly manage your data flow. We've even created two distinct types to make life easier:
PolicyProcessorPipeline: Built for models. It expertly handles batched tensors for high-performance training and inference.
RobotProcessorPipeline: Built for hardware. It processes individual data points (like a single observation or action) for real-time robot control.
# Read raw sensor data from the robot.
obs = robot.get_observation()
# Preprocessing pipeline: turn the raw observation into model-ready form
# (normalize, batch, move to the right device, ...).
obs_processed = preprocess(obs)
# Run the policy on the processed observation.
action = model.select_action(obs_processed)
# Postprocessing pipeline: map the model's output back to the robot's action space.
action_processed = postprocess(action)
# Send the hardware-ready command to the robot.
robot.send_action(action_processed)
This system makes it simple to connect any policy to any robot, ensuring your data is always in the perfect format for every step of the way. Learn more about it in our Introduction to Processors documentation.
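If the assembly-line analogy is still abstract, here is the idea in plain Python. This is a conceptual sketch, not the real LeRobot API: the actual ProcessorStep and pipeline classes have richer interfaces, and all names below are illustrative.

from typing import Any, Callable

class Pipeline:
    """Chain steps so each one hands its output to the next."""

    def __init__(self, steps: list[Callable[[Any], Any]]):
        self.steps = steps

    def __call__(self, data: Any) -> Any:
        for step in self.steps:
            data = step(data)
        return data

def scale_images(obs: dict) -> dict:
    # One job per step: normalize pixel values to [0, 1].
    return {k: v / 255.0 for k, v in obs.items()}

def add_prefix(obs: dict) -> dict:
    # Another single job: rename keys to the format the policy expects.
    return {f"observation.{k}": v for k, v in obs.items()}

preprocess = Pipeline([scale_images, add_prefix])
print(preprocess({"image": 128.0}))  # {'observation.image': 0.5}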
Training large robot policies just got a lot faster! We've integrated Accelerate directly into our training pipeline, making it incredibly simple to scale your experiments across multiple GPUs with just one command:
accelerate launch \
--multi_gpu \
--num_processes=$NUM_GPUS \
$(which lerobot-train) \
--dataset.repo_id=${HF_USER}/my_dataset \
--policy.repo_id=${HF_USER}/my_trained_policy \
--policy.type=$POLICY_TYPE \
# ... More training configuration flags
Whether you're fine-tuning a policy or running large-scale experiments, LeRobot now handles the complexities of distributed training for you. In practice this gives near-linear speedups: roughly half the wall-clock time on 2 GPUs, a third on 3 GPUs, and so on, depending on your model and data-loading pipeline.
Check out the documentation to accelerate your robot learning!
In a major milestone for open-source robotics, we've integrated the pi0 and pi0.5 policies by Physical Intelligence into LeRobot! These Vision-Language-Action (VLA) models represent a significant leap toward open-world generalization in robotics.
You can find the models on the Hugging Face Hub: pi0.5_base, pi0_base, and their LIBERO-tuned counterparts. For more details, check out the Physical Intelligence research page.
In another exciting development, we've integrated NVIDIA's GR00T N1.5 into LeRobot, thanks to a fantastic collaboration with the NVIDIA robotics team! This open foundation model is a powerhouse for generalized robot reasoning and skills. As a cross-embodiment model, it takes multimodal input (like language and images) to perform complex manipulation tasks in diverse environments, marking another major leap in generalized robotics.
You can find the model on the Hugging Face Hub: GR00T-N1.5-3B. For more details, check out the NVIDIA research page and the official GitHub repository.
The native integration of these policies in LeRobot is a huge step forward in making robot learning as open and reproducible as possible. Try them out today, share your runs, and let's push forward the frontier of embodied AI together!
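As a quick start, pretrained policies on the Hub can typically be pulled with from_pretrained. The sketch below is hedged: the module path and class name are assumptions based on LeRobot's usual layout, so check the policy docs for the exact import in your version.

# Sketch: loading a pretrained policy from the Hub.
# Module path and class name are assumptions; they may differ by release.
from lerobot.policies.pi0.modeling_pi0 import PI0Policy

policy = PI0Policy.from_pretrained("lerobot/pi0_base")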
Big news for hardware enthusiasts! We've launched a brand-new plugin system to revolutionize how you integrate third-party hardware with LeRobot. Now, connecting any robot, camera, or teleoperator is as simple as a pip install, eliminating the need to modify the core library.
Learn how to create your own plugin in our documentation.
pip install lerobot_teleoperator_my_awesome_teleop
lerobot-teleoperate --teleop.type=my_awesome_teleop
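For plugin authors, the core of such a package is a Teleoperator subclass plus its configuration. The sketch below is illustrative only: the import path and method names are assumptions about LeRobot's teleoperator interface, so consult the plugin documentation for the real contract.

# Hypothetical driver a plugin package might export; names are assumptions.
from lerobot.teleoperators import Teleoperator  # import path may vary by version

class MyAwesomeTeleop(Teleoperator):
    """Hypothetical device driver shipped by the plugin package."""

    name = "my_awesome_teleop"

    def connect(self) -> None:
        # Open the serial / Bluetooth / network link to the device.
        ...

    def get_action(self) -> dict:
        # Poll the device and return its state as an action dictionary.
        return {}

    def disconnect(self) -> None:
        # Release the device.
        ...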
Thanks to our new plugin system, we've also added Reachy 2 from Pollen Robotics to LeRobot! Reachy 2 is available for both real robot control and simulation, enabling you to experiment with teleoperation and autonomous demos right away.
Thanks to our powerful new pipeline system, you can now teleoperate your follower arm right from your phone (iOS/Android). The phone acts as a teleoperator device, and our RobotProcessorPipeline handles all the transformations, allowing you to drive robots in different action spaces (like end-effector space) with ease. Check out the examples!
We're launching a comprehensive, self-paced, and entirely open-source course designed to make robot learning accessible to everyone! If you're curious about how real-world robots learn, this is the perfect place to start.
The course walks you through the robot learning workflow step by step, from working with datasets to training and evaluating policies.
Join the Hugging Face Robotics organization to follow along and start your journey!
For those who want to go deeper, we've also published a hands-on tutorial on the most recent advancements in robotics. This guide provides self-contained explanations, re-derives modern techniques from first principles, and includes ready-to-use code examples using LeRobot and Hugging Face.
The tutorial itself is hosted in a Space and features practical examples using LeRobot, with all models and datasets on the Hugging Face Hub. You can also check out our paper for a detailed overview.
Beyond these major features, this release is packed with numerous bug fixes, documentation improvements, updated dependencies, more examples and better infrastructure to make your experience with LeRobot smoother and more reliable.
We want to extend a huge thank you to everyone in the community for your invaluable contributions, feedback, and support. We're incredibly excited about the future of open-source robotics and can't wait to work with you on what's next!
Stay tuned for more to come 🤗 Get started here! – The LeRobot team ❤️