Generative AI that mimics human movement.

Walking and running are notoriously difficult to recreate in robots. Now, an international group of researchers has overcome some of these challenges with an innovative method that combines central pattern generators (CPGs), the neural circuits in the spinal cord that generate rhythmic patterns of muscle activity, with deep reinforcement learning (DRL). The method not only imitates walking and running motions, but also generates motion for frequencies where no motion data is available, enables smooth transitions from walking to running, and allows adaptation to environments with unstable surfaces.

Details of their breakthrough were published in the journal IEEE Robotics and Automation Letters on April 15, 2024.

We may not think much about it, but walking and running involve natural biological reflexes that let us adapt to the environment and change our walking or running speed. Given this complexity, reproducing these human-like movements in robots is notoriously challenging.

Existing models often struggle to adapt to unknown or challenging environments, making them less efficient and effective. This is because AI is best suited to problems with one or a small number of correct solutions. With organisms and their movements, there is not just one correct pattern to follow: there is a whole range of possible movements, and it is not always clear which one is best or most efficient.

DRL is one way researchers have tried to overcome this. DRL extends traditional reinforcement learning by leveraging deep neural networks to handle more complex tasks and learn directly from raw sensory input, enabling more flexible and powerful learning capabilities. Its disadvantage is the huge computational cost of searching a wide input space, especially when the system has many degrees of freedom.
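The article stays high-level, but the core loop of policy-gradient DRL can be sketched compactly. Below is a minimal REINFORCE sketch in Python/NumPy on a made-up one-dimensional task; the task, the linear policy (standing in for a deep network), and every hyperparameter are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((2, 1))   # linear policy over a 1-D observation (stand-in for a deep net)
alpha = 0.05           # learning rate (assumed)

def policy(obs):
    logits = W @ obs                   # shape (2,)
    p = np.exp(logits - logits.max())
    return p / p.sum()                 # softmax over the two actions

for episode in range(2000):
    grads, rewards = [], []
    for _ in range(10):                # 10 steps per episode (assumed)
        obs = rng.uniform(-1.0, 1.0, size=(1,))
        p = policy(obs)
        a = rng.choice(2, p=p)
        # toy reward: +1 when the action matches the sign of the observation
        rewards.append(1.0 if (a == 1) == (obs[0] > 0) else 0.0)
        # gradient of log pi(a | obs) for a softmax-linear policy
        g = -np.outer(p, obs)
        g[a] += obs
        grads.append(g)
    # REINFORCE update: weight each log-prob gradient by the return that follows it
    returns = np.cumsum(rewards[::-1])[::-1]
    for g, ret in zip(grads, returns):
        W += alpha * ret * g
```

Even on this toy task, the sample-by-sample search over the observation space hints at why the cost balloons once a system has many joints and sensors.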

Another method is imitation learning, in which a robot learns from motion measurement data recorded while a human performs the same motion task. Although imitation learning is good at learning in stable environments, it struggles when it encounters situations or environments it was not exposed to during training. Its ability to adapt and navigate effectively is limited by the narrow scope of its learned behaviors.
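A minimal behavioral-cloning sketch illustrates both the idea and the weakness just described: fit a model to demonstrated state-action pairs, and it has no mechanism for recovering in states the demonstrations never covered. The sinusoidal "expert" gait and the Fourier-feature regressor below are assumptions chosen to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for motion-capture demonstrations: gait phase -> expert joint angle
phases = rng.uniform(0.0, 2.0 * np.pi, size=(500, 1))
expert_actions = np.sin(phases)          # assumed sinusoidal "expert" gait

# Ridge regression on Fourier features (standing in for a neural network)
def featurize(x):
    return np.hstack([np.sin(k * x) for k in (1, 2, 3)]
                     + [np.cos(k * x) for k in (1, 2, 3)])

F = featurize(phases)
w = np.linalg.solve(F.T @ F + 1e-3 * np.eye(F.shape[1]), F.T @ expert_actions)

def cloned_policy(phase):
    # tracks the expert well in-distribution; has no corrective behavior
    # for states the demonstrations never visited
    return featurize(np.atleast_2d(phase)) @ w
```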

“We overcame many of the limitations of these two methods by combining them,” explains Mitsuhiro Hayashibe, a professor at Tohoku University's Graduate School of Engineering. “Imitation learning was used to train a CPG-like controller, and instead of applying deep learning to the CPG itself, we applied it to a reflex neural network that supports the CPG.”

CPGs are neural circuits located in the spinal cord that, like biological conductors, generate the rhythms of muscle activity. In animals, a reflex circuit works in tandem with the CPGs, providing the feedback that lets them adjust their speed and their walking or running movements to the terrain.
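A common way to model a CPG computationally is as coupled phase oscillators whose output drives the muscles, with sensory feedback nudging the rhythm. The sketch below (Python/NumPy) keeps two leg oscillators in antiphase and slows each one while its leg bears load; the coupling strength, frequency, and reflex rule are all illustrative assumptions, not the circuit from the paper.

```python
import numpy as np

dt = 0.01
omega = 2.0 * np.pi * 1.5       # ~1.5 strides per second (assumed)
phase = np.array([0.0, np.pi])  # left/right legs start half a cycle apart

def cpg_step(phase, ground_contact):
    # Kuramoto-style coupling holds the two legs in antiphase
    coupling = 2.0 * np.sin(phase[::-1] - phase - np.pi)
    # reflex-like feedback: slow an oscillator while its leg bears load
    reflex = -0.5 * omega * ground_contact
    phase = (phase + dt * (omega + coupling + reflex)) % (2.0 * np.pi)
    activation = np.maximum(0.0, np.sin(phase))   # rectified muscle drive
    return phase, activation

for _ in range(1000):
    contact = (np.sin(phase) > 0.0).astype(float)  # crude stance detector
    phase, activation = cpg_step(phase, contact)
```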

By adopting the structure of the CPG and its reflexive counterpart, the adaptive imitated CPG (AI-CPG) method achieves remarkable adaptability and stability while generating human-like motion.
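Read that way, the division of labor can be sketched as an oscillator supplying the rhythm while a trained network supplies the reflex feedback. In the sketch below the "reflex network" has random placeholder weights where a DRL-trained network would go; the sensor vector, network size, and gains are assumptions, so this shows the architectural split rather than the authors' controller.

```python
import numpy as np

rng = np.random.default_rng(0)
# placeholder weights where the DRL-trained reflex network would go
W1 = 0.1 * rng.normal(size=(8, 3))
W2 = 0.1 * rng.normal(size=(1, 8))

def reflex_network(sensors):
    # tiny MLP standing in for the learned reflex circuit
    return (W2 @ np.tanh(W1 @ sensors)).item()

phase, dt, omega = 0.0, 0.01, 2.0 * np.pi
for _ in range(1000):
    # assumed sensor vector: body attitude estimate and a contact flag
    sensors = np.array([np.sin(phase), np.cos(phase), 1.0])
    # the CPG supplies the rhythm; the learned reflex modulates it
    phase = (phase + dt * (omega + reflex_network(sensors))) % (2.0 * np.pi)
    joint_command = np.sin(phase)   # rhythmic drive sent to a joint
```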

“This breakthrough sets a new standard in robotics for generating human-like motion, with unprecedented environmental adaptability,” Hayashibe says, adding, “Our approach represents a key step in the development of generative AI technologies for robot control, with potential applications in various industries.”

The research group included members of Tohoku University's Graduate School of Engineering and École Polytechnique Fédérale de Lausanne, or Swiss Federal Institute of Technology in Lausanne.
