The Era of Eka: New Startup Unveils Vision-Force-Action Model to Crack Dexterity

Humanoids Daily
Written by Humanoids Daily
  • Eka Robotics has exited stealth, debuting a Vision-Force-Action (VFA) foundation model designed to overcome the historical tradeoff between robotic generality and high-speed performance.
  • Co-founded by MIT professor Pulkit Agrawal and former DeepMind researcher Tuomas Haarnoja, the startup utilizes high-fidelity simulations and custom tactile grippers rather than human imitation data.
  • The VFA model treats force as the native language of the physical world, allowing robots to perform delicate tasks like screwing in light bulbs or handling varied food items with human-like improvisation.
  • Unlike competitors like Physical Intelligence or Generalist AI, Eka focuses on "superhuman" capabilities derived from autonomous practice in simulated environments.

In a field currently dominated by "LLM-pilled" reasoning and massive datasets of human video, a new challenger from Cambridge, Massachusetts, is betting that the secret to robotic dexterity isn't watching humans—it’s feeling physics. Eka Robotics, co-founded by MIT’s Pulkit Agrawal and DeepMind veteran Tuomas Haarnoja, officially launched today, promising a "GPT-1 moment" for the "last millimeter" of physical interaction.

[Image: extreme close-up of a black robotic gripper delicately touching a single red raspberry.]
Mastering the "last millimeter" of force: a 1/25x slow-motion still of an Eka robot swiftly capturing a delicate raspberry without crushing it.

The company’s core breakthrough is the Vision-Force-Action (VFA) model. While the industry has recently trended toward Vision-Language-Action (VLA) models—used by firms like Physical Intelligence and Rhoda AI to link text commands to visual tasks—Eka argues that language is a "helpful crutch" that misses the fundamental reality of force.

Force: The Native Language of Robotics

Eka’s philosophy centers on the idea that "trillions of dollars flow through the human hand." To capture that value, robots must move beyond simple pick-and-place maneuvers to master contact-rich tasks that require a "sense of touch."

"We’re building intelligence for the physical world in its native language: forces," Agrawal shared on X. The VFA model enables robots to understand mass, inertia, and friction, allowing for a level of fluid, reactive movement that appears more biological than mechanical. In live demonstrations, Eka’s robots have performed tasks that have long haunted the sector:

  • Precision Assembly: Gingerly grasping a light bulb and screwing it into a socket—a task requiring sub-millimeter precision and constant force adjustment.
  • Improvisational Sorting: Packing chicken nuggets into moving containers on a conveyor belt, including "tossing" items when time is short—a level of speed and adaptability typically reserved for human workers.
  • Tactile Recovery: Identifying when an object (like a hairbrush or a plush key ring) is slipping and adjusting the grip in real-time.
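Eka has not published its control code, but the tactile-recovery behavior above can be sketched in broad strokes: monitor tangential (shear) force at the fingertips, flag a rapid decay as slip, and tighten the grip in small increments. All function names, thresholds, and sensor values below are illustrative assumptions, not Eka's implementation.

```python
# Hedged sketch of force-based slip recovery. Thresholds and sensor
# readings are hypothetical; a real system would run this loop at
# high frequency against actual tactile-sensor data.

def detect_slip(shear_history, window=5, threshold=0.3):
    """Flag slip when shear force decays sharply over a short window."""
    if len(shear_history) < window:
        return False
    recent = shear_history[-window:]
    # A rapid relative drop in shear force suggests the object is
    # sliding within the grip rather than being held statically.
    return (recent[0] - recent[-1]) / max(recent[0], 1e-6) > threshold

def adjust_grip(normal_force, slipping, step=0.5, max_force=10.0):
    """Increase squeeze force in small steps while slip persists,
    capped so a delicate object is not crushed."""
    if slipping:
        return min(normal_force + step, max_force)
    return normal_force

# Example: shear force decaying across successive readings (newtons).
shear = [2.0, 1.9, 1.7, 1.2, 0.8]
grip = 3.0
if detect_slip(shear):
    grip = adjust_grip(grip, slipping=True)
```

The cap on normal force is the interesting design constraint: recovering a slipping hairbrush and not crushing a raspberry are the same control problem viewed from opposite ends.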

Closing the Sim-to-Real Gap

While Generalist AI and Sunday Robotics rely on hundreds of thousands of hours of real-world data collected via human-worn gloves, Eka is doubling down on simulation.

This approach targets the "data bottleneck" by allowing robots to practice for thousands of computer hours inside virtual worlds where they can invent their own solutions to physical puzzles. Eka claims their proprietary algorithms have finally bridged the "sim-to-real gap," allowing skills learned in a physics-perfect simulator to transfer seamlessly to a messy, unpredictable office or factory floor.
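Eka's proprietary sim-to-real algorithms are not public, but a standard technique in this space is domain randomization: varying the simulator's physics parameters every training episode so a policy cannot overfit to any single configuration. The parameter names and ranges below are illustrative assumptions, not Eka's actual pipeline.

```python
# Hedged sketch of domain randomization for sim-to-real transfer.
# Parameters and ranges are hypothetical examples of what a
# contact-rich manipulation simulator might vary per episode.
import random

def randomize_physics(rng):
    """Sample a fresh set of physics parameters for one episode."""
    return {
        "friction":    rng.uniform(0.4, 1.2),   # surface friction coefficient
        "mass_scale":  rng.uniform(0.8, 1.2),   # +/-20% object-mass error
        "latency_ms":  rng.uniform(0.0, 30.0),  # sensor/actuator delay
        "force_noise": rng.gauss(0.0, 0.05),    # force-sensor noise offset
    }

rng = random.Random(42)  # seeded for reproducible training runs
episodes = [randomize_physics(rng) for _ in range(1000)]
# A policy trained across all these variations must learn strategies
# robust to physics it has never seen exactly, which is what lets
# simulator-learned skills survive contact with the real world.
```

This is also where the "thousands of computer hours" framing pays off: each randomized episode is cheap, so the policy sees far more physical variety than any real-world data collection effort could provide.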

This strategy draws parallels to DeepMind’s AlphaZero, which learned superhuman board game strategies by playing against itself. By removing the "human-in-the-loop" requirement favored by companies like 1X Technologies, Eka aims for "superhuman" performance rather than mere human imitation.

A Competitive Shift in "Physical AI"

Eka’s emergence sharpens a growing divide in the robotics arms race:

  1. The Imitators: Companies like Rhoda AI that ingest massive amounts of human video to learn "what a task looks like."
  2. The Practitioners: Startups like Physical Intelligence that use Reinforcement Learning (RL) to let robots practice in the real world.
  3. The Simulators: Eka, which argues that high-fidelity digital practice is the fastest path to scaling dexterity across "objects, tasks, and environments."

The startup’s team is a "who’s who" of robotics research, with members hailing from MIT, Berkeley, Boston Dynamics, and DeepMind. By focusing on a foundation model that unites "generality, performance, and safety," Eka is positioning itself to be the intelligence layer for everything from e-commerce fulfillment to household chores.

As Agrawal noted, the goal isn't just to reach human levels of competence, but to surpass them. If dexterity truly becomes scalable through simulation and force sensing, Moravec's Paradox, which has limited robots for decades, may finally be nearing its end.
