Friday, March 13, 2026
The martial arts performances by humanoid robots during this year’s Chinese Spring Festival Gala went viral across social media platforms. These robots demonstrated significantly enhanced motion control capabilities through kicks, flips, and coordinated movements, providing the public with their first intuitive experience of the rapid technological progress in humanoid robotics. This momentum carried over to the Smart Factory & Automation World 2026 (AW 2026) recently held in Seoul, where humanoid robots emerged as one of the most prominent highlights of the exhibition.
This event also marked the first collective overseas appearance of multiple Chinese humanoid robot enterprises, including AGIBOT, Fourier Intelligence, Leju Robotics, Unitree Robotics, and Huawei. Observations from the on-site demonstrations reveal that humanoid robots are no longer merely mechanical assemblies; they are rapidly evolving into heterogeneous computing platforms that integrate hardware architecture, AI models, sensory data, and cloud computing.
As AI progressively transcends the boundaries of the software world, embodied AI, which enables direct interaction with the physical environment, is being recognized as a critical development direction for next-generation AI. During the inaugural “China Humanoid Robot Conference” at AW 2026, experts from academia and industry pointed out that embodied AI is shifting from an algorithm-centric research field to an engineering system capable of perception, decision-making, and task execution within real-world environments.
However, as AI enters the physical realm, engineering challenges inevitably surface: In the era of embodied AI, where AI and robotics converge, how can technical bottlenecks be overcome to transition robots from the laboratory into large-scale applications?
The data flywheel accelerates technological iteration
In his opening keynote, Hyung-kwan Shin, CEO of the China Capital Markets Lab research institute and moderator of the forum, noted that the development of humanoid robots doesn’t rely on any single technological breakthrough but is built upon a continuously iterating data flywheel.
He explained that this data flywheel is composed of hardware, data, and algorithms, forming a cyclical and self-reinforcing iterative mechanism. When robots operate in real-world environments, they constantly generate perceptual data, which is then used to train new AI models. Conversely, more powerful models enhance robots’ perception and manipulation capabilities, allowing them to be deployed in a wider variety of scenarios and, in turn, to generate even more data.
As this cycle accelerates, robotic capabilities will gradually improve, eventually achieving the large-scale generalization required for embodied AI. This data-centric development model is steadily replacing traditional robot programming methods that rely on predefined instructions.
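The self-reinforcing loop described above can be sketched in a few lines of code. This is purely an illustrative toy model: the growth rates, the capability score, and the update rules are invented assumptions, not figures from any of the companies mentioned.

```python
# Toy sketch of the hardware-data-algorithm flywheel: deployed robots generate
# data, data improves the model, a better model unlocks more deployments.
# All quantities and growth rates are illustrative assumptions.

def run_flywheel(cycles: int, deployments: float = 10.0) -> list:
    capability = 1.0  # abstract model-capability score
    history = []
    for cycle in range(cycles):
        data = deployments * capability         # fielded robots generate perceptual data
        capability += 0.1 * data ** 0.5         # training on more data improves the model
        deployments *= 1.0 + 0.05 * capability  # better models unlock more scenarios
        history.append({"cycle": cycle, "data": data,
                        "capability": capability, "deployments": deployments})
    return history

for step in run_flywheel(5):
    print(step)
```

Even with modest per-cycle gains, each pass through the loop compounds on the last, which is why the article frames data, not any single component, as the accelerant.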
Hierarchical control and heterogeneous computing architectures
Faced with a physical world characterized by high complexity and uncertainty, the architecture of embodied AI systems is moving toward a multi-layered heterogeneous computing model. Weixin Yan, chief scientist at the Shanghai Jiao Tong University Institute of Robotics, pointed out that from a control system perspective, humanoid robots can be understood through an architecture that divides labor between a “brain” and a “cerebellum.” The brain is responsible for semantic understanding, logical operations, and task decision-making, while the cerebellum handles low-latency motion control, such as gait generation, dynamic balance, and joint coordination.
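The brain/cerebellum division above is, at its core, a split between a slow deliberative loop and a fast reactive one. The sketch below makes that concrete; the loop periods and task labels are assumptions chosen for illustration, not specifications from Shanghai Jiao Tong University.

```python
# Illustrative sketch of the "brain"/"cerebellum" split: a slow loop handles
# semantic understanding and task decisions while a fast loop handles
# low-latency motion control. Periods are made-up illustrative values.

def simulate(duration_ms: int, brain_period_ms: int = 100,
             cerebellum_period_ms: int = 2) -> dict:
    """Count how often each layer runs over a fixed time horizon."""
    counts = {"brain": 0, "cerebellum": 0}
    for t in range(duration_ms):
        if t % brain_period_ms == 0:
            counts["brain"] += 1       # semantic understanding, task decisions
        if t % cerebellum_period_ms == 0:
            counts["cerebellum"] += 1  # gait generation, balance, joint coordination
    return counts

print(simulate(1000))  # {'brain': 10, 'cerebellum': 500}
```

The fast loop runs two orders of magnitude more often than the slow one, which is why the two layers are typically placed on different compute hardware rather than sharing one scheduler.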
From the perspective of embedded AI and computing platforms, Huawei is pushing this architecture further toward deep “end-edge-cloud” synergy. Daniel He, senior AI architect at Huawei, noted that humanoid robots are essentially extremely complex embedded computing platforms that must simultaneously handle multiple tasks, including visual perception, voice interaction, and real-time motion control. If all computations were performed locally on the robot’s body, they would face severe power consumption and hardware constraints.
To address this, Huawei proposed the Robot-to-Cloud (R2C) protocol architecture, which distributes computing tasks through three-tier synergy across the end, edge, and cloud. The cloud handles large-scale AI model training and updates, edge nodes manage real-time inference, and the robot terminal focuses on low-latency motion control. This distributed architecture maintains the robot’s real-time responsiveness to the environment while reducing onboard power consumption, providing a foundation for the large-scale deployment of embodied AI.
“Humanoid robots are, in essence, complete AI computing platforms. Without a new embedded AI architecture, it will be difficult for humanoid robots to truly enter the industrialization stage,” Huawei’s He said.
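One way to think about the three-tier synergy described above is as a placement policy that routes each workload to the tier that can meet its deadline. The sketch below is a hypothetical policy in that spirit; the latency thresholds, task names, and routing logic are assumptions, since the R2C protocol itself is not publicly specified in this article.

```python
# Hypothetical end-edge-cloud task placement in the spirit of the three-tier
# synergy described above. Thresholds and task names are illustrative
# assumptions, not part of the actual R2C protocol.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_ms: float   # how quickly a result is needed
    compute_cost: float  # abstract compute units

def place_task(task: Task) -> str:
    """Route a task to the tier that can meet its deadline."""
    if task.deadline_ms <= 10:   # hard real-time: keep on the robot itself
        return "end"
    if task.deadline_ms <= 100:  # interactive inference: nearby edge node
        return "edge"
    return "cloud"               # training and model updates: no tight deadline

tasks = [
    Task("joint_control_loop", deadline_ms=2, compute_cost=1),
    Task("visual_inference", deadline_ms=50, compute_cost=20),
    Task("model_training", deadline_ms=60_000, compute_cost=10_000),
]
for t in tasks:
    print(t.name, "->", place_task(t))
```

The design intent matches the article's description: only the latency-critical control loop has to live on the power-constrained robot body, so everything else can be offloaded.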
Conquering the chopstick problem
One of the most challenging capabilities in the development of embodied AI is fine manipulation. Yan pointed out that the current bottleneck in robotics is shifting from visual navigation toward tactile and force sensing—often referred to as the chopstick problem. Relying solely on visual systems makes it difficult for robots to perform delicate grasping and manipulation tasks like humans. Therefore, it’s essential to integrate high-frequency tactile and force feedback with visual information to form a complete perception-control closed loop.
Addressing this challenge, Bin Zhou, co-founder of Fourier Intelligence, shared material and sensing innovations integrated into the company’s GR-3 humanoid robot. “The prerequisite for humanoid robots to enter home applications lies in safety and warmth. This is not just a software algorithm issue but involves material and hardware design,” Zhou said.
Fourier has transferred the high-precision drive technology it accumulated in medical rehabilitation and exoskeletons to the GR-3 humanoid platform. The platform features a body encased in soft materials and integrates a full-body tactile sensing array, allowing the robot to automatically adjust joint torque based on force feedback when grasping irregular objects. Through whole-body control, the system can adjust motion strategies in real time, shifting the robot from pre-set trajectory control to more flexible, real-time interactive operations. This adaptive control method based on force feedback is a critical technical foundation for enabling fine manipulation.
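The force-feedback grasping idea described above can be reduced to a simple closed loop: tighten the grip until the measured contact force reaches a target, rather than replaying a fixed trajectory. The sketch below is a minimal proportional controller under that assumption; the gain, limits, and object stiffness are made-up values, not GR-3 parameters.

```python
# Illustrative force-feedback grasp loop: adjust joint torque until the
# measured contact force reaches a target, instead of following a pre-set
# trajectory. Gain, torque limit, and stiffness are made-up values.

def grasp_step(measured_force: float, target_force: float,
               torque: float, gain: float = 0.5,
               torque_limit: float = 5.0) -> float:
    """One control step: proportionally adjust joint torque toward the target force."""
    error = target_force - measured_force
    torque += gain * error
    # Clamp so a compliant object is never crushed by runaway torque.
    return max(0.0, min(torque, torque_limit))

# Simulate a soft object whose contact force grows with applied torque.
torque, stiffness = 0.0, 2.0
for _ in range(20):
    force = stiffness * torque
    torque = grasp_step(force, target_force=4.0, torque=torque)
print(round(stiffness * torque, 2))  # 4.0 — the loop settles at the target force
```

Because the controller reacts to the measured force rather than to a position plan, the same loop works whether the object is rigid or soft, which is the point of closing the loop on touch instead of vision alone.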
From demonstration platforms to industrial applications
Whether humanoid robots can truly be implemented ultimately depends on cost, reliability, and engineering capabilities. Chengyi Jiang, head of solutions at Unitree Robotics, pointed out that environmental adaptability is the first step. “In the past, AI progress was mostly confined to logical reasoning in the digital world, whereas the core of embodied AI lies in achieving a physical closed loop of perception, understanding, and execution,” Jiang said.
Jiang believes that since over 90% of global infrastructure is designed based on human standards, the humanoid form remains the optimal shape for general-purpose robots. Unitree’s humanoid platform adopts a three-tier control architecture—body, cerebellum, and brain—combined with reinforcement learning and natural gait generation technology, enabling the robot to maintain stable movement in unstructured environments such as ramps and stairs.
Regarding engineering reliability, Guangjie Ren, head of solutions at Leju Robotics, defined robotic maturity through engineering metrics: mean time between failures (MTBF) and continuous operation capability. According to factory test data provided by Leju, its latest platform’s MTBF has exceeded 1,000 hours, and it can operate continuously for 9.5 hours. “This means that humanoid robots are crossing from demonstration platforms into the category of productivity tools capable of actual work,” Ren said.
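MTBF, the metric Ren cites, is conventionally computed as total operating time divided by the number of failures observed. The sketch below shows the calculation; the fleet numbers in the example are invented for illustration and are not Leju's test data.

```python
# Mean time between failures (MTBF) as conventionally defined: total operating
# time divided by the number of failures. The sample figures are invented
# for illustration, not Leju's factory test data.

def mtbf_hours(total_operating_hours: float, failures: int) -> float:
    """MTBF in hours; treated as infinite if no failures were observed."""
    if failures == 0:
        return float("inf")
    return total_operating_hours / failures

# e.g. a hypothetical fleet logging 5,200 hours with 4 failures:
print(mtbf_hours(5200, 4))  # 1300.0 — above the 1,000-hour figure cited
```

Note that MTBF is a fleet-level average, so a 1,000-hour MTBF does not mean each robot runs 1,000 hours without fault; that is why Ren pairs it with a separate continuous-operation figure.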
Additionally, through 5G remote control interfaced with MES systems, operators can even control robot operations from 1,200 kilometers away, with end-to-end latency kept under 20 milliseconds. Ren stated that the role of humanoid robots is not to replace traditional industrial robotic arms but to fill the last mile in industrial automation lines that are difficult to standardize.
Leju’s industrial humanoid robot automatically adjusts its height between 1 and 2.6 meters to pick up materials accurately.
The starting point of the embodied AI industry
AW 2026 has emerged as a strategic vanguard for embodied AI. Beyond the simultaneous showcasing of hallmark platforms—such as AGIBOT’s X2 and G2, Unitree’s G1, and Leju’s Kuavo 4 Pro—the event drew more than 50 global robotics enterprises. Together, they mapped out a comprehensive technological spectrum, ranging from foundational motion control to high-level multimodal sensor fusion.
Although humanoid robots are still in the early stages of moving from research platforms to industrial applications, their technological maturity has improved significantly. From AI chip power management and sensor fusion processing to high-efficiency actuator design, humanoid robotics is gradually forming a new engineering ecosystem.
In the coming decade, as hardware, data, and AI models continue to iterate, humanoid robots may become the next major technological platform driving the electronics and AI industries, following in the footsteps of smartphones and autonomous driving.
By: DocMemory Copyright © 2023 CST, Inc. All Rights Reserved