Beyond the 60-Hour Mark: Figure AI’s Endurance Marathon Signals Playbook for Figure 4 and Supply Chain Independence

Humanoids Daily
Written by Humanoids Daily
  • Figure AI’s unedited autonomous livestream has shattered its initial milestones, surpassing 64 hours of continuous operation and processing 80,000 packages at a steady 2.9-second cadence.
  • CEO Brett Adcock flatly rejected growing teleoperation allegations, explaining that the robots' distinctive head-gesturing movements are a natural byproduct of the Helix-02 whole-body controller clearing arm pathways.
  • The company is aggressively onshoring its production, forecasting zero supply chain exposure to China by next quarter to eliminate geopolitical risks and tariff liabilities.
  • Figure has locked the architecture for Figure 4 following a critical design review, describing it as a ground-up redesign optimized for data collection and featuring a watchmaker-grade humanoid hand.
  • Supported by a newly deployed cluster of Nvidia Blackwell B200 GPUs, Figure has commenced training on its largest AI models to target true generalized autonomy in unseen environments.

The high-stakes transparency experiment at Figure AI shows no signs of slowing down. What began as a direct response to a social media challenge from RoboStrategy's Dr. Scott Walter regarding humanoid endurance has expanded far beyond the company's originally planned 8-hour shift. After blowing past its 26-hour milestone, the unedited live broadcast has officially crossed 64 hours of continuous, fully autonomous operation.

The rotating fleet of Figure 03 humanoids has sorted over 80,000 packages, maintaining a blistering pace of roughly 2.9 seconds per item. Capitalizing on the stream's viral reach, Figure launched a limited run of livestream merchandise priced at $24.07, which rapidly sold out. In a bid to demonstrate real-time adaptability, Adcock even appeared on camera to toss a packaged t-shirt into the sorting pile, which a robot promptly processed. The livestream fleet has also expanded: joining the initial trio of Bob, Frank, and Gary are two additional humanoids, Rose and Jim.
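The headline numbers are internally consistent, as a quick back-of-envelope check shows (simple arithmetic on the figures quoted above, not data from Figure):

```python
# Sanity check: 80,000 packages at 2.9 seconds each vs. the ~64-hour runtime.
packages = 80_000          # total packages sorted on the stream
seconds_per_item = 2.9     # quoted sorting cadence

total_hours = packages * seconds_per_item / 3600
print(f"{total_hours:.1f} hours")  # ≈ 64.4 hours, matching the reported runtime
```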

A screenshot from a Figure AI live video feed showing CEO Brett Adcock on the left holding up a black t-shirt enclosed in a clear plastic bag with a barcode tag. To his right, a charcoal-grey Figure 03 humanoid robot with a red 'BOB' name tag reaches for a package on a metal conveyor belt. Video overlays at the bottom display a runtime of 54:24:10 and a package count of 68,45X.
Figure AI CEO Brett Adcock introduces the company's newly launched livestream merchandise directly on camera, holding up a packaged t-shirt before dropping it onto the conveyor line. The autonomous Figure 03 unit, nicknamed 'Bob', processed the item as the continuous livestream marathon crossed the 54-hour mark with over 68,000 packages sorted.

Deconstructing the "Head Gesture" Skepticism

As the stream continues to draw a massive global audience, it has attracted intense technical scrutiny. Industry observers pointed to video segments where the humanoids frequently gesture toward their heads while working—a movement historically flagged as a telltale sign of human-in-the-loop teleoperation.

In an interview with Bloomberg Technology, Adcock addressed these allegations directly. "There’s absolutely no teleoperation in this," Adcock stated, clarifying that the characteristic head gesture occurs mechanically when the robot turns left to grab a package, triggering the whole-body controller to lift the left hand upward and out of the way.

Watch the interview below:

The entire operation runs on Helix-02, Figure's proprietary end-to-end vision-language-action neural network. Helix-02 operates completely locally, computing motor torques directly from raw camera pixels via an onboard computer housed in the robot's torso. Because inference is entirely edge-based, the system requires no network connection to execute actions, eliminating latency and connectivity risks. Adcock noted that the robots have suffered zero mechanical failures across the multi-day run.
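The edge-only control loop Adcock describes can be sketched in the abstract. The function names and actuator count below are hypothetical placeholders, not Figure's API; the point is simply that pixels go in and torques come out onboard, with no network hop in the control path:

```python
import numpy as np

def read_cameras() -> np.ndarray:
    """Hypothetical stand-in for grabbing raw frames from onboard cameras."""
    return np.zeros((2, 480, 640, 3), dtype=np.uint8)  # two RGB cameras (assumed)

def policy(frames: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the VLA network: pixels in, joint torques out."""
    return np.zeros(40, dtype=np.float32)  # one torque per actuator (assumed count)

def control_step() -> np.ndarray:
    # Everything runs on the torso computer: no network round trip, so
    # latency and connectivity failures are removed from the control path.
    frames = read_cameras()
    return policy(frames)
```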

The Fleet Playbook: 30-Second Swaps and Wireless Charging

Maintaining a continuous logistics line requires a complex orchestration protocol to maximize uptime. Each Figure 03 humanoid operates on roughly four hours of battery life. When a unit's state of charge runs low, it autonomously messages an idle robot in the fleet to take its place. The incoming robot steps up to the conveyor belt while the depleted unit backs off to dock at a wireless charging stand, drawing power through inductive coils built into its feet.

A screenshot from the Figure AI live video feed showing the charcoal-grey Figure 03 humanoid robot named 'Bob' pausing at a metal conveyor belt. Behind Bob, a second identical humanoid robot stands ready to step forward in a dimly lit facility background. In-video overlays at the bottom indicate a runtime of 56:36:01 and a package count of 71,205.
A real-time look at Figure’s autonomous fleet rotation protocol. As the unedited endurance marathon hits 56 hours and 36 minutes, the humanoid 'Bob' prepares to step away from the conveyor line to charge, clearing the workspace for the incoming unit to maintain continuous logistics throughput with over 71,000 packages processed.

This hot-swap sequence completes in under 30 seconds, ensuring the conveyor system suffers minimal downtime. If a robot encounters a deeper hardware or software anomaly, it automatically walks off the line to a dedicated maintenance area and summons a fresh replacement from Figure's extensive campus fleet.
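The rotation protocol described above can be sketched as a simple handoff routine. This is an illustrative sketch only, not Figure's code: the state names and the low-charge threshold are assumptions.

```python
from dataclasses import dataclass

LOW_CHARGE = 0.15  # assumed state-of-charge threshold for requesting relief

@dataclass
class Robot:
    name: str
    charge: float          # state of charge, 0.0 to 1.0
    state: str = "idle"    # idle | working | charging

def rotate(working: Robot, fleet: list) -> Robot:
    """Hand the conveyor station from a depleted robot to an idle one."""
    if working.charge >= LOW_CHARGE:
        return working                  # no swap needed yet
    relief = next(r for r in fleet if r.state == "idle")
    relief.state = "working"            # incoming unit steps up to the belt
    working.state = "charging"          # depleted unit docks at the stand
    return relief

bob = Robot("Bob", charge=0.10, state="working")
rose = Robot("Rose", charge=0.95)
active = rotate(bob, [rose])
print(active.name, bob.state)  # Rose charging
```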

Stay Ahead in Humanoid Robotics

Get the latest developments, breakthroughs, and insights in humanoid robotics — delivered straight to your inbox.

Onshoring the Supply Chain and Scaling Production

While industry competitors like Agility Robotics and Ultra have emphasized their real-world commercial deployments in customer facilities, Figure is focusing heavily on scaling its infrastructure. Operating from its "BotQ" manufacturing facility located on the Figure campus, the company is on track to manufacture between 60 and 70 humanoid robots this week alone, representing an annual production run rate in the thousands.

Backed by more than $1 billion in cash on its balance sheet, the company is leveraging its financial cushion to execute a massive geopolitical de-risking strategy. Recognizing the vulnerability of relying on foreign components, Figure has spent the past year aggressively moving its manufacturing pipelines out of China. Adcock revealed a striking forecast: by next quarter, Figure expects to have zero supply chain exposure to China, having successfully onshored or diversified the production of its custom motors, gearboxes, sensors, and printed circuit boards (PCBs).

Figure 4: An "iPhone 1 Moment" for Humanoids

The operational data harvested from the ongoing package-sorting marathon is feeding directly into Figure's next-generation platform. The company recently completed its critical design review for Figure 4, locking in a ground-up architectural overhaul designed entirely around data collection and the upcoming Helix-3 model architecture. Adcock described the upcoming machine as "unrecognizable" from prior iterations, comparing its impending launch to the industry's "iPhone 1 moment."

A focal point of this leap is a complete reimagining of the humanoid hand. During an appearance on the Over The Horizon podcast, Adcock admitted that Figure's early reliance on a forearm-actuated, tendon-driven hand for Figure 1 was a major engineering mistake that hit a local maximum in capability. In contrast, the new hand designed for Figure 4 packs more actuators than the entire rest of the robot's body combined. Built with watchmaker-level precision to overcome intense thermal, electrical, and spatial constraints, the hand achieves full range-of-motion parity with the human hand. According to Adcock, human-level hand kinematics are fundamental to solving generalized AI: limited degrees of freedom pollute training datasets because the robot cannot accurately mirror human demonstrations of complex manipulation tasks, such as folding socks.

Watch the Over The Horizon interview below:

Solving the Generalization Bottleneck

Despite the blistering 2.9-second throughput achieved in the current stream, critics note that flipping boxes on a single logistics line represents a narrow validation of AI. Figure acknowledges that the true hurdle for bipedal automation is generalization—the ability to drop a robot into an unseen environment, like a random Airbnb or an unfamiliar factory workstation, and have it successfully complete tasks based solely on verbal human instructions.

To break this bottleneck, Figure launched a dedicated 70-person AI laboratory called HARK last summer, which has already deployed an onboard, real-time speech-to-speech voice model across the current fleet. More crucially, the company is leveraging its deep partnership with Nvidia: having recently deployed a brand-new cluster of Blackwell B200 GPUs, Figure commenced a massive pre-training run last week for its Helix-3 architecture, scaling up its models several-fold.

While Adcock admits that general robotics remains heavily data-constrained, the company is building an architectural pipeline in which a breakthrough on one task upgrades the entire fleet via over-the-air updates. That fleet-level coordination was on display in a recent collaborative bed-making demonstration, where two robots managed sheet tension entirely through visual cues and head nods, without explicit wireless messaging. While the timeline for true generalized autonomy remains fluid, Figure aims to demonstrate the first foundational building blocks of this capability later this year, targeting a broader commercial rollout over the next one to three years.

Watch the livestream below:

