MIT robotics pioneer Rodney Brooks believes that people are vastly overestimating generative AI.

When Rodney Brooks talks about robotics and artificial intelligence, you should listen. Currently the Panasonic Professor Emeritus of Robotics at MIT, he has co-founded three major companies, including iRobot, Rethink Robotics, and his current warehouse-robotics venture. Brooks also ran the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade, beginning in 1997.

In fact, he likes to make predictions about the future of AI and keeps a scorecard on his blog tracking how well he's doing.

He knows what he's talking about, and he thinks it may be time to pump the brakes on the screaming hype that is generative AI. Brooks believes it's impressive technology, but perhaps not quite as capable as many are suggesting. "I'm not saying LLMs are not important, but we have to be careful [with] how we evaluate them," he told TechCrunch.

The problem with generative AI, he says, is that while it's perfectly capable of performing a certain set of tasks, it can't do everything a human can, and humans tend to wildly overestimate its capabilities. "When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that," Brooks said. "And they're usually very over-optimistic, and that's because they use a model of a person's performance on a task."

The problem, he added, is that generative AI is neither human nor even human-like, and it's a mistake to try to assign human capabilities to it. He says people find it so capable that they want to use it even for applications that don't make sense.

Brooks offers his latest company, which runs a warehouse robotics system, as an example. Someone recently suggested to him that it would be cool and useful to tell his warehouse robots where to go by building an LLM into the system. In his estimation, however, that is not a reasonable use case for generative AI and would actually slow things down. It's far simpler to just connect the robots to a stream of data coming from the warehouse management software.

"When you have 10,000 orders that just came in that you have to ship in two hours, you have to optimize for that. Language is not going to help; it's just going to slow things down," he said. "We have massive data processing and massive AI optimization techniques and planning, and that's how we get the orders completed fast."

Another lesson Brooks has learned when it comes to robots and AI is not to try to do too much. You should solve a solvable problem where robots can be integrated easily.

"You need to automate in places where things have already been cleaned up. So the example of my company is we're doing pretty well in warehouses, and warehouses are actually pretty constrained. The lighting doesn't change with those big buildings. There's no stuff lying around on the floor, because the people pushing carts would run into it. There are no plastic bags floating around to trip up the robots," he said.

Brooks explains that it's also about robots and humans working together, so his company designed its robots for practical purposes related to warehouse operations, as opposed to building a human-looking robot. In this case, they look like shopping carts with a handle.

"So the form factor we use is not humanoids walking around, even though I have built and delivered more humanoids than anyone else. These look like shopping carts," he said. "They've got a handlebar, so if there's a problem with a robot, a person can grab the handlebar and do what they wish with it."

After all these years, Brooks has learned that it's about making the technology accessible and purpose-built. "I always try to make the technology easy for people to understand, and therefore we can deploy it at scale, and always look at the business case; the return on investment is also very important."

Even with that, Brooks says we have to accept that when it comes to AI there will always be hard-to-solve outlier cases, ones that could take decades to work out. "Without carefully boxing in how an AI system is deployed, there is always a long tail of special cases that take decades to discover and fix. Paradoxically, all those fixes are AI-complete themselves."

Brooks added that there's a mistaken belief, thanks largely to Moore's Law, that technology always progresses rapidly; the thinking goes that if ChatGPT 4 is this good, imagine what ChatGPT 5, 6, and 7 will be like. He sees the flaw in that logic: Moore's Law notwithstanding, technology does not always grow exponentially.

He uses the iPod as an example. Over a few iterations, it did in fact double in storage size, from 10 GB up to 160 GB. Had it continued at that pace, he figured, we would have had an iPod with 160 TB of storage by 2017, but of course we didn't. The models being sold in 2017 actually came with 256 GB or 160 GB because, as he pointed out, nobody actually needed more than that.
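Brooks' projection is simple exponential extrapolation. A minimal sketch of the arithmetic (the dates here are illustrative assumptions, not from the article: the 160 GB iPod shipped around 2007, and we assume one doubling per year thereafter):

```python
def extrapolate_storage(start_gb: int, start_year: int, end_year: int) -> int:
    """Project storage capacity assuming it doubles once per year."""
    doublings = end_year - start_year
    return start_gb * 2 ** doublings

# If the ~2007-era 160 GB iPod had kept doubling annually until 2017:
projected_gb = extrapolate_storage(160, 2007, 2017)
print(f"{projected_gb} GB = {projected_gb // 1024} TB")  # 163840 GB = 160 TB
```

Ten more doublings turn 160 GB into 160 TB, which is exactly the absurd endpoint Brooks uses to show why "the curve will just continue" is a bad default assumption.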

Brooks acknowledges that LLMs could help at some point with domestic robots, where they could perform specific tasks, especially with an aging population and not enough people to care for them. But even that, he says, comes with its own set of unique challenges.

"People say, 'Oh, large language models are going to make robots be able to do things they couldn't do.' That's not where the problem is. The problem with being able to do stuff is about control theory and all sorts of other hardcore math optimization," he said.

Brooks explains that this could eventually lead to robots with language interfaces that are useful to people in caregiving situations. "It's not useful in the warehouse to tell a robot in language to go out and get one item for one order, but it could be useful for eldercare in the home, for people to be able to say things to the robots," he said.
