Generalist AI CEO Pete Florence argues that terms like “VLA” and “World Model” are temporary crutches for the industry, revealing that GEN-1's architecture, trained 99% from scratch, is a bet on the eventual dominance of pure robotic data.
Generalist AI has released GEN-1, an embodied foundation model trained on 500,000 hours of interaction data. Boasting a 99% success rate and 3x faster task completion, the startup claims its system has reached a commercial threshold for “simple physical tasks” through emergent improvisation.
Generalist AI's chief scientist argues that the next leap in robotics won't come from internet text, but from scaling the “reflexive” intelligence of physical interaction.
In a technical addendum to its GEN-0 launch, Generalist AI reveals new details on its pretraining methodology, introducing metrics like "Reverse KL" to measure model creativity and claiming infrastructure capable of absorbing 6.85 years of robot experience per day.
Startup Generalist AI has unveiled GEN-0, an embodied foundation model it claims is trained on an unprecedented 270,000 hours of real-world manipulation data. The company reports its new architecture, "Harmonic Reasoning," and massive dataset have unlocked predictable scaling laws for robot intelligence.