With tools like TRL, TorchForge, and verl, the open-source community has shown how to scale AI across complex compute infrastructure. But compute is only one side of the coin. The other side is the developer community: the people and tools that make agentic systems possible. That’s why Meta and Hugging Face are partnering to launch the OpenEnv Hub: a shared, open community hub for agentic environments.
Agentic environments define everything an agent needs to perform a task (the tools, APIs, credentials, and execution context) and nothing else. They bring clarity, safety, and sandboxed control to agent behavior.
These environments can be used for both training and deployment, and serve as the foundation for scalable agentic development.
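To make the idea concrete, here is a minimal sketch of what an environment declaration might capture. The names (`EnvSpec`, `allowed_tools`, `permits`, and so on) are illustrative inventions for this post, not the OpenEnv schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one way to model what an agentic environment declares.
# EnvSpec and its fields are illustrative, not part of the OpenEnv spec.
@dataclass
class EnvSpec:
    name: str
    allowed_tools: list[str] = field(default_factory=list)   # only these tools are exposed
    env_vars: dict[str, str] = field(default_factory=dict)   # credentials/config injected at runtime
    docker_image: str = "python:3.11-slim"                   # sandboxed execution context

    def permits(self, tool: str) -> bool:
        """An agent may call a tool only if the spec lists it."""
        return tool in self.allowed_tools

spec = EnvSpec(
    name="web-research",
    allowed_tools=["search", "fetch_page"],
    env_vars={"SEARCH_API_KEY": "dummy-key"},
)

print(spec.permits("search"))      # True: declared in the spec
print(spec.permits("send_email"))  # False: everything else stays out of reach
```

The point of the design is the default-deny posture: anything not explicitly declared simply does not exist from the agent's point of view, which is what makes the same definition safe to reuse for both training and deployment.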
Modern AI agents can act autonomously across thousands of tasks. However, a large language model alone isn’t enough to get those tasks to actually run: it needs access to the right tools. Exposing millions of tools directly to a model isn’t practical (or safe). Instead, we need agentic environments: secure, semantically clear sandboxes that define exactly what’s required for a task, and nothing more. These environments handle the critical details of tool access, credentials, and execution context.
To supercharge this next wave of agentic development, the PyTorch team at Meta and Hugging Face are partnering to launch a Hub for Environments: a shared space where developers can build, share, and explore OpenEnv-compatible environments for both training and deployment. The figure below shows how OpenEnv fits into the new post-training stack being developed at Meta, with integrations for other libraries like TRL, SkyRL, and Unsloth underway:

Starting next week, developers can build, share, and explore OpenEnv-compatible environments directly on the Hub.
Alongside this, we’re releasing the OpenEnv 0.1 Spec (RFC) to gather community feedback and help shape the standard.
In the current state of the repository, environment creators can build environments using the step(), reset(), and close() APIs, and the repo includes several examples showing how. Environment users can experiment with local, Docker-based versions of every environment already available in the repo. Several further RFCs are under review.
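The step()/reset()/close() contract above can be sketched with a toy environment. `GuessEnv` is our own illustration for this post, not an environment from the OpenEnv repo, and the observation/reward shapes are assumptions:

```python
import random

# Toy sketch of the step()/reset()/close() contract; not part of OpenEnv itself.
class GuessEnv:
    def reset(self):
        """Start a new episode and return the first observation."""
        self._target = random.randint(0, 9)
        return {"hint": "guess a digit 0-9"}

    def step(self, action: int):
        """Apply an action; return (observation, reward, done)."""
        done = action == self._target
        reward = 1.0 if done else 0.0
        obs = {"hint": "correct!" if done else "try again"}
        return obs, reward, done

    def close(self):
        """Release any resources (containers, connections, ...)."""
        pass

env = GuessEnv()
obs = env.reset()
total = 0.0
for guess in range(10):  # exhaustive policy: one guess must match the target
    obs, reward, done = env.step(guess)
    total += reward
    if done:
        break
env.close()
print(total)  # 1.0: exactly one guess matches
```

The same three-method loop is what an RL trainer drives at scale; swapping the toy logic for a Docker-sandboxed task is what the environments in the repo do.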
This is just the beginning. We’re integrating the OpenEnv Hub with Meta’s new TorchForge RL library, and collaborating with other open-source RL projects such as verl, TRL, and SkyRL to expand compatibility. Join us at the PyTorch Conference on Oct 23 for a live demo and walkthrough of the spec, and stay tuned for our upcoming community meetup on environments, RL post-training, and agentic development.
👉 Explore the OpenEnv Hub on Hugging Face and start building the environments that will power the next generation of agents.
👉 Check out the 0.1 spec, implemented in the OpenEnv project → we welcome ideas and contributions to make it better!
👉 Engage on Discord and talk with the community about RL, environments, and agentic development.
👉 Try it out yourself - We created a comprehensive notebook that walks you through an end-to-end example, and you can easily pip install the package via PyPI. The notebook covers the abstractions we’ve built, shows how to use existing integrations, and explains how to add your own - Try it out in Google Colab!
👉 Check out supporting platforms - Unsloth, TRL, Lightning.AI
Let's build the future of open agents together, one environment at a time 🔥!