Tesla AI Chief Details Unified 'World Simulator' for FSD and Optimus

Ashok Elluswamy, Tesla's Vice President of AI Software and the current head of the Optimus program, has published a detailed explanation of the company's end-to-end autonomy strategy, explicitly connecting the technology powering its vehicles directly to its humanoid robot.
In an abridged version of a talk given at the International Conference on Computer Vision (ICCV), Elluswamy argues that Tesla's single, end-to-end neural network—which learns from video, maps, and kinematic data—is the only scalable path to solving real-world robotics.
The most significant confirmation for the humanoid industry came in the final section, where Elluswamy writes: "The great thing about all the above points is that, they not just solve for vehicle autonomy, but also seamlessly transfer to Optimus."
Elluswamy's post included a video demonstrating this transfer, showing the Optimus robot navigating a Tesla Gigafactory inside the "neural world simulator" that the company uses to train and validate its driving AI. This provides the first concrete, technical look at the software foundation Tesla is building for its robot.
The 'Neural World Simulator'
Elluswamy's presentation offers a direct look at what Tesla believes is the solution to one of AI's biggest challenges: evaluation. He notes that "loss on open-loop predictions might not correlate to great performance in the real-world."
Tesla's answer is a "neural world simulator." This system is trained on the same "Niagara Falls of data" from its vehicle fleet and learns to synthesize new, high-fidelity video of the world in response to the AI's actions. This allows Tesla to:
- Run closed-loop simulations to evaluate new AI models.
- Validate models against historical data, allowing the AI to "diverge" and show what it would have done differently.
- Synthetically create new "adversarial scenarios" to test corner cases.
- Perform large-scale reinforcement learning to "achieve superhuman performance."
Crucially, Elluswamy demonstrated this simulator is not just for cars. The system can generate realistic video of the Optimus robot's actions, such as walking and turning within the factory, providing a powerful tool for training and testing the robot's AI in a safe, virtual environment.
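The key distinction here is between open-loop evaluation (scoring a model's predictions against logged data) and closed-loop evaluation (letting the model's own actions feed back into a simulated world). Tesla has not published its simulator's API, so the sketch below is purely illustrative: every class and function name is hypothetical, and a toy latent vector stands in for synthesized video.

```python
"""Illustrative sketch only; all names are hypothetical, not Tesla's API."""
import numpy as np

rng = np.random.default_rng(0)

class NeuralWorldSimulator:
    """Stand-in for a learned world model: maps (state, action) to a
    predicted next observation. A real system would synthesize video."""
    def __init__(self, state_dim=8):
        self.state = np.zeros(state_dim)

    def step(self, action):
        # Toy dynamics: decay the latent state, apply the action, add noise.
        self.state = 0.9 * self.state + action + 0.01 * rng.normal(size=self.state.shape)
        return self.state.copy()

def policy(observation):
    """Stand-in for an end-to-end driving/robot policy."""
    return -0.1 * observation  # nudge the latent state toward zero

def closed_loop_eval(sim, policy, horizon=100):
    """Closed-loop evaluation: the policy's actions feed back into the
    simulator, unlike open-loop loss computed on logged trajectories."""
    obs = sim.step(np.zeros_like(sim.state))
    cost = 0.0
    for _ in range(horizon):
        action = policy(obs)
        obs = sim.step(action)
        cost += float(np.square(obs).sum())
    return cost / horizon

score = closed_loop_eval(NeuralWorldSimulator(), policy)
print(f"mean closed-loop cost: {score:.4f}")
```

The same loop structure would support the other uses Elluswamy lists: swapping in an adversarial scenario generator for the initial state, or using the per-step cost as a reward signal for reinforcement learning.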

A Direct Answer to Industry Skeptics
This presentation appears to be a direct counter-argument to recent high-profile critiques of the humanoid robot industry. Just this week, Meta's AI Chief Yann LeCun claimed that most humanoid companies "have no idea" how to build the required AI and lack the fundamental "world models" needed for general-purpose use.
LeCun argued that true AI would come from systems trained on high-bandwidth video to build an internal understanding of the physical world.
Elluswamy's post presents Tesla's "neural world simulator"—a system trained entirely on video to predict future states—as exactly that: a functioning, scalable world model.
This represents a major strategic update. While CEO Elon Musk has frequently focused on the "immense" manufacturing challenge and non-existent supply chain for Optimus, Elluswamy is making an equally ambitious claim on the software side.
Why End-to-End?
Elluswamy frames Tesla's end-to-end architecture as a fundamental necessity, contrasting it with the "sensor-heavy, modular approach" used by most competitors. He argues that trying to "codify human values" in traditional programming logic is "incredibly difficult."
He cites two key examples:
- "Mini-Trolley Problems": A car deciding between driving over a large puddle or briefly entering a clear oncoming lane. Elluswamy notes this is a trade-off that is "rather straightforward for a human" but nearly impossible to hard-code. By training on human data, the AI "learns values that are aligned with what humans value."
- "Soft Intent": He shows two clips, one where FSD waits for chickens to cross the road and another where it drives around geese that are "just want[ing] to hang out." This "soft intent," he argues, "is best communicated in an end-to-end latent fashion" rather than through a rigid perception-to-planning interface.
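The architectural contrast Elluswamy draws can be made concrete with a toy sketch. This is not Tesla's code, just an illustration of why a hand-defined perception-to-planning interface can discard "soft intent" that a continuous latent preserves; all names here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
frames = rng.normal(size=(4, 8))  # toy stand-in for camera video
W = rng.normal(size=(8, 2)) * 0.1  # toy "learned" weights

# Modular pipeline: perception emits a discrete, hand-defined interface,
# and planning can only act on what that interface encodes. A nuance like
# "the geese just want to hang out" is lost at the boundary.
def perceive(frames):
    return {"obstacle_present": bool(frames.mean() > 0)}

def plan(scene):
    return "stop" if scene["obstacle_present"] else "go"

# End-to-end: one network maps raw video to controls through a continuous
# latent, so graded cues can influence the output without being squeezed
# into predefined categories.
def end_to_end(frames):
    latent = np.tanh(frames.mean(axis=0))  # toy learned representation
    steer, throttle = latent @ W
    return float(steer), float(throttle)

print(plan(perceive(frames)))
print(end_to_end(frames))
```

In the modular version, information flows through a fixed schema chosen by engineers; in the end-to-end version, the interface is learned, which is the property Elluswamy argues matters for conveying soft intent.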
This unified strategy has been Elluswamy's core thesis for years, but his new post is the most detailed public confirmation of how it all connects. It follows a comment he made earlier this month, after an Optimus "Kung Fu" demo, that unifying the AI models for self-driving and Optimus "is going to be fire."
By publicly detailing its "neural world simulator" and showing it running with Optimus, Tesla is asserting that it is not just solving the manufacturing problem, but that it has already built the foundational AI architecture that its critics claim is still years of research away.