Xpeng Demos 'Iron' Robot Dancing, Credits 'Human-Like Spine' and New AI for Rapid Learning

Following its controversial AI Day 2025 demonstration, Xpeng CEO He Xiaopeng has released new footage showing the next-generation 'Iron' humanoid robot dancing in a lab setting. The video, which features a stripped-down, bare-metal R&D version of the robot, aims to explain the "advanced technology" behind the human-like gait that sparked widespread online debate last week.
In the short video, the CEO claims the robot's ability to dance showcases the flexibility of its 82 degrees of freedom. More significantly, he details a new AI training method that dramatically accelerates the learning process.
"We have adopted a comprehensive imitation of learning training method," He Xiaopeng explained. "As long as you input human dance data, it can directly learn the corresponding movements."
He claims the dance routine shown in the video was trained in only "two hours." This, he argues, is a major leap from past methods. "In the past, reinforcement learning took us weeks and had no generalisation at all," he said. "Under the new large model, Iron can definitely perform more actions, more generalised and more human-like."
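The approach He describes, mapping captured human motion data directly to robot joint targets rather than discovering movements through trial and error, is commonly called behavioral cloning. The sketch below illustrates the principle only; all names, dimensions, and the synthetic data are hypothetical and not Xpeng's actual system (beyond the reported 82 degrees of freedom):

```python
import numpy as np

# Behavioral cloning sketch: learn a direct mapping from human pose
# observations to robot joint targets. Dimensions are illustrative only
# (Iron is reported to have 82 degrees of freedom).
rng = np.random.default_rng(0)
N_FRAMES, OBS_DIM, DOF = 2000, 60, 82

# Synthetic stand-in for captured human dance data: tracked pose
# features per frame, plus the corresponding robot joint angles.
obs = rng.normal(size=(N_FRAMES, OBS_DIM))
true_map = rng.normal(size=(OBS_DIM, DOF))
joints = obs @ true_map + 0.01 * rng.normal(size=(N_FRAMES, DOF))

# Supervised fit: a least-squares "policy" mapping observations to
# joints. A real system would use a neural network, but the principle
# is the same: imitation is a supervised learning problem, unlike
# reinforcement learning, which must explore for a reward signal.
policy, *_ = np.linalg.lstsq(obs, joints, rcond=None)

# The learned policy reproduces the demonstrated motion closely.
pred = obs @ policy
mse = float(np.mean((pred - joints) ** 2))
print(f"reconstruction MSE: {mse:.4f}")
```

Because the policy is fit directly to demonstrations, training is fast, which is consistent with the "two hours" claim; generalisation to unseen motions is the harder part, and is where the "large model" framing comes in.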
The IRON robot, in its full bare metal glory! XPeng's CEO shares an inside look at the humanoid in the lab.
The new footage also provides a clearer explanation for the "model-like" walk from the AI Day keynote, which was so fluid it led some observers to speculate it was a human in a suit. The CEO directly attributes the robot's convincing hip-swaying motion to new hardware.
"This is because we added a human-like spine design, which increases the degrees of freedom in the robot's waist," He said. "Therefore, its movement can drive the hip creating a joint motion like a human."
This "human-like spine" was one of several new hardware features, alongside "dexterous hands" and an all-solid-state battery, that Xpeng detailed during its AI Day presentation. The robot's "brain" is a multi-modal AI system combining VLT (Vision-Language-Task), VLA (Vision-Language-Action), and VLM (Vision-Language-Model) components.
The R&D robot in the video is seen in a lab, at times connected to an overhead safety harness or a wheeled support rig—common practice in robotics development to prevent the machine from being damaged during testing.
This new demo appears to be a direct continuation of Xpeng's efforts to prove the authenticity of its new platform. After the initial "human-in-a-suit" speculation, the company released follow-up videos showing staff cutting away the robot's suit to reveal the mechanical "skeleton" underneath. This latest video from the CEO goes a step further, attempting to explain how its new hardware and AI models work together to achieve the human-like locomotion that caused the debate in the first place.