1X Unveils Redwood AI: A Unified, Onboard Brain for Its NEO Humanoid Robots
- Authors
- Humanoids daily (@humanoidsdaily)

After teasing a major announcement with cryptic videos of its NEO humanoids in a forest, OpenAI-backed robotics firm 1X today revealed what it had been working on: Redwood, a new AI model designed to serve as the brain for its bipedal robots.
The name itself was the answer to the teaser. Redwood is a single, unified AI system that handles perception, navigation, and complex manipulation, all while running directly on the robot's onboard hardware.
A Single Brain for a Whole Body
According to a company blog post, Redwood is a vision-language-action (VLA) model designed to give the NEO robot the ability to perform end-to-end mobile manipulation tasks. In a demonstration video, a NEO robot responds to a voice command such as "Hey, NEO, can you hand me my beer?" by navigating an apartment, identifying the correct object, and delivering it.
The key technical distinction 1X emphasizes is "whole-body control." Unlike many robotic systems that treat locomotion (walking) and manipulation (using arms and hands) as separate problems, Redwood controls them jointly. This allows the robot to perform more fluid, human-like movements such as bending at the hips and spine to pick up clothes from the floor, or bracing a hand against a surface for stability while opening a heavy door.
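To make the distinction concrete, here is a minimal sketch, not 1X's code, of what a whole-body policy looks like in practice: a single model maps one observation to one action vector spanning legs, spine, arms, and hands, rather than handing locomotion and manipulation to separate controllers. The joint count, observation size, and linear "model" below are illustrative assumptions.

```python
import numpy as np

N_JOINTS = 30   # illustrative total: legs + spine/hips + arms + hands
OBS_DIM = 128   # illustrative flattened observation size (vision + proprioception features)

class WholeBodyPolicy:
    """One policy, one output vector for the whole body, so bending at the hips
    while reaching, or bracing a hand while pushing a door, is coordinated in a
    single step instead of across two separate locomotion/manipulation modules."""

    def __init__(self, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Stand-in for a trained VLA; here just a random linear map.
        self.weights = rng.normal(scale=0.01, size=(OBS_DIM, N_JOINTS))

    def act(self, observation: np.ndarray) -> np.ndarray:
        # One forward pass -> joint targets for legs, torso, arms, and hands together.
        return observation @ self.weights

policy = WholeBodyPolicy()
obs = np.zeros(OBS_DIM)        # placeholder observation
action = policy.act(obs)       # shape (N_JOINTS,)
print(action.shape)
```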
This integrated approach, the company claims, is essential for tackling the physical complexity of real-world home environments.
Onboard, Efficient, and Learning from Mistakes
Significantly, the entire Redwood AI runs locally on NEO’s onboard GPU. The 160-million-parameter transformer model operates at approximately 5 Hz, allowing the robot to function fully without a constant connection to the cloud. This is a critical factor for deploying autonomous robots in locations with unreliable internet, from basements to gardens.
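Running at roughly 5 Hz means each perception-to-action cycle has about 200 ms in which inference and control must complete. The toy loop below illustrates that budget; the timings, function names, and structure are assumptions for illustration, not 1X's actual stack.

```python
import time

CONTROL_HZ = 5
CYCLE_BUDGET_S = 1.0 / CONTROL_HZ   # 0.2 s per cycle at 5 Hz

def run_inference() -> list[float]:
    # Stand-in for a forward pass of the onboard model; it must finish well
    # under the 200 ms budget to leave time for low-level control.
    time.sleep(0.05)
    return [0.0] * 30               # placeholder joint targets

def control_loop(cycles: int = 5) -> None:
    for _ in range(cycles):
        start = time.monotonic()
        action = run_inference()
        # ... send `action` to the low-level joint controllers here ...
        elapsed = time.monotonic() - start
        if elapsed > CYCLE_BUDGET_S:
            print(f"overran budget: {elapsed * 1000:.0f} ms > {CYCLE_BUDGET_S * 1000:.0f} ms")
        time.sleep(max(0.0, CYCLE_BUDGET_S - elapsed))  # hold the 5 Hz cadence

control_loop()
```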
To achieve generalization—the ability to handle novel objects and situations—Redwood is trained on a diverse dataset of tasks performed by both human teleoperators and autonomous robots in 1X offices and employee homes.
Perhaps most interestingly, 1X highlights that the model learns from both successes and failures. "Failure data is really important for improving the model because it teaches the AI what not to do," a narrator states in the company's video. This approach acknowledges the reality of robot deployment, where not every attempt is perfect. By training on failures, the system can learn the boundaries of its capabilities and avoid repeating mistakes, a method that could lead to more robust and reliable behavior over time.
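1X does not spell out how failure episodes enter training. One common pattern, shown here purely as an illustration and not as Redwood's method, is to keep failed episodes in the dataset, label them by outcome, and weight or condition the training signal on that label rather than discarding them.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    observations: list   # per-step observations
    actions: list        # per-step actions
    success: bool        # did the task succeed?

def training_weight(episode: Episode) -> float:
    # Hypothetical weighting: successes teach what to imitate; failures still
    # contribute (e.g., as negative or recovery examples) at a lower weight.
    return 1.0 if episode.success else 0.3

dataset = [
    Episode(observations=[], actions=[], success=True),
    Episode(observations=[], actions=[], success=False),
]
weights = [training_weight(ep) for ep in dataset]
print(weights)  # [1.0, 0.3]
```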
The Road Ahead
While the demonstrations are compelling, 1X is candid about the system's current stage. "Redwood is still early in development," the video admits. "It doesn't always succeed on the first try."
The announcement stakes out 1X’s strategy in the increasingly competitive humanoid race: build an efficient, embodied intelligence that learns directly from physical interaction in the messy, unpredictable environments humans occupy. The company states an ambitious goal to "deploy a production-grade AI in as many homes as possible this year," a plan that will undoubtedly test Redwood's ability to generalize and scale.