The API-fication of Robotics: Physical Intelligence Unveils Real-World Performance Data with Weave and Ultra

A high-angle view of the white Isaac 0 robot using its dual arms and three-finger grippers to fold a red sweatshirt on a white table. A pile of colorful laundry is visible to the left, while a neatly folded navy blue t-shirt sits to the right.
In collaboration with Weave Robotics, the π0.6 foundation model demonstrated a significant step up in autonomy, reducing human interventions in laundry folding by 50% during live deployments.

In a move to standardize the "physical intelligence stack," Physical Intelligence (Pi) released a technical update on February 24, 2026, detailing how its foundation models are now being used as a plug-and-play "intelligence layer" for third-party hardware. By providing data from live deployments with home-robot startup Weave Robotics and industrial automation firm Ultra, Pi is betting that the future of robotics lies in a software-centric API model rather than custom, vertically integrated stacks.

The "Physical Intelligence Layer" Thesis

The central argument from Pi is that robotics currently lacks the ready-made intelligence layer that software developers take for granted. While an app developer can simply call an LLM API for language tasks, roboticists have historically been forced to build their own controllers and data pipelines from scratch.

Pi aims to change this with its family of Vision-Language-Action (VLA) models, including π0, π0.5, and π0.6. By providing these as a foundational layer, Pi claims it can significantly lower the barrier to entry for diverse robot morphologies, from stationary arms to mobile humanoids. This software-first strategy follows Pi’s massive $600 million funding round in late 2025, which established the firm as a primary challenger to vertically integrated efforts like Tesla’s Optimus or Figure AI.
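To make the analogy concrete, here is a minimal sketch of what an "intelligence layer" call might look like from a robot integrator's side. All identifiers (`PolicyClient`, `infer`, the `"pi-0.6"` model string) are invented for illustration; Physical Intelligence has not published this interface, and a real client would send observations to a hosted model rather than return a dummy action.

```python
from dataclasses import dataclass


@dataclass
class Observation:
    """Robot-side state handed to the policy each control step."""
    images: dict           # camera name -> image array
    joint_positions: list  # proprioceptive state


class PolicyClient:
    """Hypothetical stand-in for a remote VLA policy endpoint."""

    def __init__(self, model: str = "pi-0.6"):
        self.model = model

    def infer(self, obs: Observation, prompt: str) -> list:
        # A real client would POST the observation and the task prompt
        # to a hosted model and receive an action chunk back; here we
        # return a zero action of matching dimension as a placeholder.
        return [0.0] * len(obs.joint_positions)


# With the intelligence layer abstracted away, the robot-specific code
# shrinks to sensor I/O plus a single API call per control step.
client = PolicyClient()
obs = Observation(images={}, joint_positions=[0.0] * 7)
action = client.infer(obs, prompt="fold the red sweatshirt")
```

The point of the sketch is the shape of the division of labor: hardware vendors own sensing and actuation, while the prompt-conditioned policy call replaces the custom controllers and data pipelines described above.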

Weave: Scaling Laundry Autonomy in the Wild

For Weave Robotics, which recently pivoted to a stationary $7,999 laundry-folding unit called Isaac 0, the partnership with Pi has yielded measurable improvements in handling "deformable" materials. Folding laundry is notoriously difficult due to the high variability of fabric, shape, and color.

According to the new data, the transition from the π0.5 model to the newer π0.6 has significantly increased the "autonomy as a percentage of total time" for Weave’s robots. Key findings from live deployments in San Francisco laundromats include:

  • Pre-training Impact: Including Weave-specific data in the pre-training phase (+WPT) reduced missed grasp sequences by 42%.
  • Intervention Reduction: Human teleoperator interventions per full laundry load dropped by 50% when the model was trained on specialized data.
  • Performance: The robots currently fold an average load in 30 to 90 minutes, handling t-shirts, pants, and towels.
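"Autonomy as a percentage of total time" can be read as the share of a session the robot runs without a teleoperator in the loop. A minimal illustration, with made-up numbers (Pi did not publish per-load minutes):

```python
def autonomy_fraction(total_minutes: float, intervention_minutes: float) -> float:
    """Percentage of a session the robot runs without human teleoperation.

    The metric definition is our reading of "autonomy as a percentage of
    total time"; the numbers below are illustrative, not Pi's data.
    """
    return 100.0 * (1.0 - intervention_minutes / total_minutes)


# A 60-minute load with 6 minutes of teleoperated interventions:
print(autonomy_fraction(60, 6))   # -> 90.0
# Halving intervention time, as in the reported 50% drop:
print(autonomy_fraction(60, 3))   # -> 95.0
```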

Ultra: Tackling the Logistics "Long Tail"

The update also provided the first deep look at Ultra, a company building industrial AI robots for e-commerce packaging. While Humanoids Daily previously shared footage of Ultra’s robots packing orders on social media, the new data quantifies the "intelligence step up" provided by the Pi "brain."

Ultra’s robots are designed to slot into existing workstations, handling the "long tail" of packaging problems—such as deformable mailers and shifting item types—that traditional, rigid automation cannot solve. Pi’s data shows that the π0.6 model, when fine-tuned with Ultra-specific pre-training data (+UPT), led to a significant increase in order packing throughput (items per hour).

Pi also noted qualitative improvements in the π0.6 architecture:

  • Better Prompt Adherence: The model can now break down complex workflows into smaller, autonomous subtasks.
  • Diverse Recovery Strategies: In "edge cases" where a robot might typically fail, the π0.6 model selects from a more diverse pool of recovery strategies and shows higher commitment to task success.
A white industrial robotic arm with multiple joints and an orange-tipped gripper operating at a workstation next to a yellow conveyor belt. An inset video in the top-left corner shows a top-down view of the robot's workspace and sorting bins.
Through a partnership with Ultra, Physical Intelligence’s π0.6 model is being used to automate complex, 'long tail' e-commerce order packaging in live warehouse environments.

The Race for "Physical Commonsense"

This collaboration reinforces the industry-wide shift toward active, embodied learning. While competitors like Google DeepMind argue that a "big breakthrough" in robot data is still needed, Pi is betting that its Recap method—which combines imitation learning with autonomous reinforcement learning—is the key to reliability.

By allowing robots like the Isaac 0 and Ultra’s packaging fleet to "practice" and learn from their own mistakes, Pi is attempting to solve the compounding error problem that often causes autonomous systems to stumble. As Generalist AI’s Andy Zeng recently noted, this "physical commonsense" is the "dark matter" of robotics—the reactive intuition for forces and friction that transforms a machine from a novelty into a tool.
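Pi has described Recap only at a high level. As a generic illustration of the idea of combining imitation learning with learning from the robot's own practice, here is a textbook-style sketch in which a behavior-cloning term on demonstrations is added to an advantage-weighted term on autonomous rollouts. The function name, weighting scheme, and clipping are our assumptions, not Pi's published implementation.

```python
import math


def combined_loss(logp_demo, logp_rollout, advantages, beta=1.0):
    """Generic IL+RL objective sketch (not Pi's actual Recap algorithm).

    logp_demo:    policy log-probs of demonstrated actions
    logp_rollout: policy log-probs of the robot's own rollout actions
    advantages:   how much better each rollout did than expected
    """
    # Imitation term: maximize likelihood of human demonstrations.
    il = -sum(logp_demo) / len(logp_demo)
    # Advantage weighting: upweight the robot's own successful attempts
    # (clipped for numerical stability), downweight its mistakes.
    weights = [math.exp(min(max(a / beta, -5.0), 5.0)) for a in advantages]
    rl = -sum(w * lp for w, lp in zip(weights, logp_rollout)) / len(logp_rollout)
    return il + rl
```

The relevant design point is that the second term lets the policy improve from unsupervised practice, which is how a system can chip away at compounding errors instead of repeating them.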

With Weave and Ultra now reporting revenue-generating shifts at high autonomy levels, the industry may finally be moving past the "lab demo" era and into a phase where the software brain is as transferable as the code in a standard web application.
