AI startup Figure demonstrates a conversational robot built with OpenAI tech

Robotics developer Figure made waves on Wednesday when it shared a video demonstration of its first humanoid robot engaging in a real-time conversation, thanks to generative AI from OpenAI.

“With OpenAI, Figure 01 can now have full conversations with people,” Figure said on Twitter, highlighting the robot’s ability to understand and react to human interactions in real time.

The company explained that its recent alliance with OpenAI brings high-level visual and language intelligence to its robots, allowing for “fast, low-level, skilled robot actions.”

In the video, Figure 01 interacts with Corey Lynch, the company’s senior AI engineer, who puts the robot through several tasks in a makeshift kitchen, including identifying an apple, a pot, and a cup.

Figure 01 identified the apple as food when Lynch asked the robot to give him something to eat. Lynch then had Figure 01 collect trash into a basket while simultaneously asking it questions, demonstrating the robot’s multitasking capabilities.

On Twitter, Lynch explained the Figure 01 project in more detail.

“Our robot can describe its visual experience, plan future actions, reflect on its memory, and verbally explain its reasoning,” he wrote in an extensive thread.

According to Lynch, the team feeds images from the robot’s cameras, along with text transcribed from speech captured by its onboard microphones, into a large multimodal model trained by OpenAI.

Multimodal AI refers to artificial intelligence that can understand and process different types of data such as text and images.

Lynch emphasized that Figure 01’s behavior was learned, operated at normal speed, and not remotely controlled.

“The model processes the entire conversation history, including past images, to come up with language responses that are spoken back to the human via text-to-speech,” Lynch said. “The same model is responsible for deciding which learned, closed-loop behavior to execute on the robot to fulfill a given command, loading specific neural network weights on the GPU and executing the policy.”
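The loop Lynch describes can be sketched roughly as follows: one multimodal model reads the full conversation history (text plus images) and produces both a spoken reply and the name of a learned low-level policy to run. This is a minimal illustrative sketch; all the names, types, and the trivial stand-in model are assumptions, not Figure’s actual API.

```python
# Hedged sketch of the control loop described above: a single multimodal
# model consumes the conversation history (text + images) and produces
# both a spoken reply and a choice of learned closed-loop policy.
# Every identifier here is illustrative, not Figure's real implementation.
from dataclasses import dataclass


@dataclass
class Turn:
    image: bytes  # camera frame captured alongside this utterance
    text: str     # transcribed speech


@dataclass
class ModelOutput:
    reply: str        # sent to text-to-speech
    policy_name: str  # which learned behavior to load and execute


def multimodal_model(history):
    # Stand-in for the OpenAI-trained model; here just a trivial rule.
    last = history[-1].text.lower()
    if "eat" in last:
        return ModelOutput(reply="Here is an apple.",
                           policy_name="hand_over_apple")
    return ModelOutput(reply="Okay.", policy_name="idle")


def step(history, frame, utterance):
    history.append(Turn(image=frame, text=utterance))
    out = multimodal_model(history)  # sees whole history, incl. past images
    # speak(out.reply)                        # text-to-speech (hypothetical)
    # load_and_run_policy(out.policy_name)    # load weights on GPU, execute
    return out


history = []
out = step(history, b"<frame>", "Can I have something to eat?")
print(out.policy_name)  # hand_over_apple
```

The key design point in Lynch’s description is that one model handles both language and behavior selection, rather than a separate planner choosing policies.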

Lynch explained that Figure 01 is designed to describe its surroundings concisely and apply common-sense reasoning to decisions, such as inferring that dishes belong in a drying rack. It can also translate ambiguous statements, such as an expression of hunger, into actions, such as offering an apple, all while explaining what it is doing.

The video sparked an enthusiastic response on Twitter, with many impressed by Figure 01’s capabilities — and more than a few adding it to their list of milestones on the way to the Singularity.

“Please tell me your team has seen every Terminator movie,” one replied.

“We need to find John Connor as soon as possible,” added another.

For AI developers and researchers, Lynch provided a number of technical details.

“All behaviors are driven by neural network visuomotor transformer policies, mapping pixels directly to actions,” Lynch said. “These networks take in onboard images at 10Hz and generate 24-DOF actions (wrist poses and finger joint angles) at 200Hz.”
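The rate structure Lynch cites implies that each camera frame is reused across many control steps: at 10 Hz vision and 200 Hz actions, one frame covers 20 control ticks. The sketch below illustrates only that timing arithmetic; the zero-valued stand-in policy and all names are assumptions, not Figure’s code.

```python
# Illustrative timing sketch of the rates Lynch describes: images arrive
# at 10 Hz while the visuomotor policy emits 24-DOF actions at 200 Hz,
# so each camera frame backs 20 consecutive control steps.
IMAGE_HZ = 10
ACTION_HZ = 200
DOF = 24  # wrist poses plus finger joint angles
STEPS_PER_FRAME = ACTION_HZ // IMAGE_HZ  # 20 control ticks per image


def policy(frame, tick):
    # Stand-in for the visuomotor transformer: pixels -> 24-DOF action.
    return [0.0] * DOF


def control_loop(frames):
    actions = []
    for frame in frames:                      # one iteration per 10 Hz frame
        for tick in range(STEPS_PER_FRAME):   # 200 Hz inner loop
            actions.append(policy(frame, tick))
    return actions


# One second of operation: 10 frames yield 200 actions of 24 DOF each.
acts = control_loop(frames=[b"<frame>"] * IMAGE_HZ)
print(len(acts), len(acts[0]))  # 200 24
```

Running the policy faster than the camera is a common pattern in learned manipulation: the high-rate inner loop keeps the arm motion smooth between visual updates.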

Figure 01’s influential debut comes as policymakers and global leaders seek to mainstream the spread of AI tools. While much of the discussion has been around large language models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude AI, developers are also exploring ways to give AI a physical humanoid robotic body.

Figure AI and OpenAI did not immediately respond to Decrypt’s request for comment.

“There’s a kind of utilitarian goal that Elon Musk and others are trying to achieve,” UC Berkeley industrial engineering professor Ken Goldberg previously told Decrypt. “A lot of the work going on right now, and why people are investing in companies like Figure, is the hope that these things can work and be compatible,” he said, particularly in the realm of space exploration.

Along with Figure, others working to integrate AI with robotics include Hanson Robotics, which debuted its Desdemona AI robot in 2016.

“Even just a few years ago, I would have thought that fully interacting with a humanoid robot while it plans and executes its own fully learned behaviors would be something we would have to wait decades to see,” Lynch said on Twitter. “Obviously, a lot has changed.”

Edited by Ryan Ozawa.
