Figure Launches 'Project Go-Big' to Train Humanoid Robots on Human Video Data
- Author: Humanoids daily (@humanoidsdaily)

SUNNYVALE, CA — Figure is betting that the best way to teach a robot to act like a human is to let it watch humans. The humanoid robotics company today announced "Project Go-Big," an ambitious initiative to build what it calls the "world’s largest and most diverse humanoid pretraining dataset" by capturing video of people in their everyday environments.
The project's goal is to solve one of the biggest bottlenecks in robotics: the lack of massive, real-world data needed to train capable, general-purpose AI. While fields like computer vision and natural language processing have benefited from vast internet datasets like ImageNet and Wikipedia, robotics has no direct equivalent.
Figure plans to create its own. Accelerated by its recently announced strategic partnership with Brookfield, a global asset manager with over 100,000 residential units, Figure has begun capturing video of people performing tasks in a wide variety of real homes.
"Every machine learning breakthrough has come from massive, diverse datasets," Figure CEO Brett Adcock stated in an announcement on X. "There is nothing like this for robotics so we are building our own."
The company also revealed a significant early result from this new data source: its Helix AI model can now power a robot to navigate cluttered home environments based on natural language commands after being trained exclusively on first-person video from humans.
From Human Video to Robot Action
According to Figure, this achievement marks a critical milestone in AI development, which it describes as "zero-shot human-to-robot transfer." In essence, the robot learned to navigate without any specific robotic training data. It didn't need to see another robot perform the task or be manually tele-operated through the space. Instead, the Helix model learned to translate human navigation strategies directly into robot control commands.
The company claims this is the first instance of a humanoid robot learning navigation end-to-end—from pixel and language inputs to low-level velocity commands—using only human video. After training on this new dataset, the robot can reportedly respond to conversational prompts like "Walk to the kitchen table" and autonomously find its way through a complex space.
This new navigation capability is not a separate, specialized system. Figure emphasized that it has been integrated into a single, unified Helix neural network that also handles the dexterous, upper-body manipulation tasks the company has demonstrated previously. This unified model is a key step toward creating a truly generalist robot that can both move through and interact with its environment seamlessly.
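Figure has not published Helix's internals, so as a rough illustration of what "one network, two action heads" means, here is a toy numpy sketch in which a single shared latent state drives both a low-level velocity-command head and a manipulation head. All dimensions, weights, and the `unified_policy` function are invented for illustration; a real system would use trained vision and language encoders rather than pre-encoded feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- Figure has not disclosed Helix's architecture.
IMG_DIM, LANG_DIM, LATENT_DIM = 64, 32, 128
NAV_DIM = 3      # e.g. forward velocity, lateral velocity, yaw rate
MANIP_DIM = 14   # e.g. upper-body joint position targets

# Randomly initialized weights stand in for a trained network.
W_img = rng.standard_normal((IMG_DIM, LATENT_DIM)) * 0.1
W_lang = rng.standard_normal((LANG_DIM, LATENT_DIM)) * 0.1
W_nav = rng.standard_normal((LATENT_DIM, NAV_DIM)) * 0.1
W_manip = rng.standard_normal((LATENT_DIM, MANIP_DIM)) * 0.1

def unified_policy(img_feat, lang_feat):
    """One forward pass: both action heads read the same shared latent."""
    latent = np.tanh(img_feat @ W_img + lang_feat @ W_lang)
    nav_cmd = latent @ W_nav        # low-level velocity commands
    manip_cmd = latent @ W_manip    # manipulation targets
    return nav_cmd, manip_cmd

# One control step on dummy pre-encoded inputs.
nav, manip = unified_policy(rng.standard_normal(IMG_DIM),
                            rng.standard_normal(LANG_DIM))
print(nav.shape, manip.shape)  # → (3,) (14,)
```

The point of the sketch is the shared latent: because navigation and manipulation branch from the same representation, one set of inputs (pixels and language) can produce both behaviors without a separate, specialized system.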
Solving the Data Problem at Scale
Traditionally, training robots has relied on costly and time-consuming methods like direct teleoperation or programming behaviors for specific, controlled environments. These approaches struggle to scale and often fail to capture the "messiness" of the real world.
Figure's strategy with Project Go-Big is to leverage the unique advantage of the humanoid form factor. Because the robots have a body plan, perspective, and kinematics that mirror a human's, it is theoretically more straightforward to transfer knowledge directly from human video.
The partnership with Brookfield is central to this effort, providing Figure with an unparalleled diversity of real-world training grounds. Data has already been collected in Brookfield residential units, and the program is set to scale in the coming months.
While the initial results focus on navigation, the long-term vision is to build a foundational dataset that can teach the Helix model a vast range of human behaviors, from simple locomotion to complex manipulation. By building its own "YouTube for robot behaviors," Figure is making an aggressive push to accelerate past R&D and toward its goal of deploying useful humanoids in commercial and domestic settings.
Read more here: https://www.figure.ai/news/project-go-big