Is there a long-term future for artificial intelligence?

NBC anchor Lester Holt interviewed two of Silicon Valley's biggest CEOs during the Aspen Ideas Festival on Wednesday. OpenAI's Sam Altman and Airbnb's Brian Chesky have been friends since they met in 2008.

Artificial intelligence is not a new technology, as Vivian Schiller — executive director of the Aspen Institute's Aspen Digital — pointed out multiple times during the Aspen Ideas Festival, which ended Saturday.

AI models have been around for decades, but they recently captured the public imagination in a new way after one model, OpenAI's ChatGPT, made the technology accessible and usable for the masses. The result has been a “hype cycle,” as Airbnb co-founder and CEO and Ideas guest Brian Chesky called it, that has drawn more attention than anything since the advent of the Internet.

The hype cycle can be overwhelming, Chesky said, noting that AI isn't even an essential component of most phone apps. Similarly, Schiller repeatedly cited “Amara's Law,” which states that humans overestimate the short-term effects of new technology and underestimate the long-term effects.

But with the rapid pace of AI development and overall technology innovation, Ideas experts said the long-term effects may not be so far off.

During a panel discussion on Monday, Schiller asked University of Manchester professor and historian David Olusoga how quickly new technologies lead to widespread societal disruption. Olusoga agreed that technologies can take a long time to reach the masses and change the world; James Watt's steam engine, for example, was invented in the 1760s but did not transform the world until the 1830s. Now, however, Olusoga said, new technologies are adopted faster.

“We can see a narrowing of the gap between innovation and disruption in the 20th century,” Olusoga said, arguing that the adoption of electricity and the Internet moved faster than that of the steam engine.

Despite his skepticism of the hype, Chesky pointed out in his panels that 21st-century Internet platforms have rapidly moved from innovation to mass disruption, changing the way Silicon Valley operates. Chesky argued that attitudes surrounding the technological revolutions of the 2000s have already shifted from starry-eyed naivety to wary caution.

Behavioral changes

When they first met at Silicon Valley startup accelerator Y Combinator in 2008, Chesky said, he and OpenAI co-founder and CEO Sam Altman were part of a fast-paced, move-first, think-later culture that was largely naive about the negative impact big tech companies could have.

“When I came to Silicon Valley, the word 'technology' was practically a synonym for 'good,'” Chesky said. “Facebook was a way to share photos with your friends, YouTube was cat videos, Twitter was talking about what you were doing that day. I think there was a general innocence.”

Now, Chesky said, that culture has changed. In the years since the two tech titans' time at Y Combinator, the world has seen social media help overthrow governments in the Middle East and interfere in elections in the United States. American politicians now regularly talk about the mental health effects of social media on children, and governments have passed sweeping regulations on big tech firms.

“I think over time we've realized that when you put a tool in the hands of millions of people, they're going to use it in ways you didn't intend,” Chesky said.

Hard-nosed tech journalist Kara Swisher agreed on her panel that attitudes are changing in Silicon Valley. Swisher said she has enjoyed meeting young tech entrepreneurs in recent years who often “have a better understanding of the risk in the world we live in.”

These attitudes have translated into consternation and controversy surrounding the advent of publicly accessible large language models.

Altman, who spoke during an “afternoon of conversation” on Wednesday, was fired from OpenAI in November because board members at the time were concerned about how fast its AI was developing. Former board members have since said that Altman repeatedly lied to them about the company's safety practices. Altman later returned to the company, which now has a new board.

He called the ordeal “extremely painful” before an Ideas audience Wednesday, but said he understood the former board members, whom he described as “terrified by the continued development of AI.” Altman disagreed that the technology was advancing too quickly.

“Even though I strongly disagree with their views, what they've said since then and how they've acted, I think they're generally good people who are nervous about the future,” Altman said.

'Lots of confidence to earn'

Whether “too” fast or not, Ideas experts certainly agreed that technology is moving fast. Government officials and private sector actors alike said that technology is moving faster than governments can regulate it.

“Policy doesn't keep pace with technology,” said Karen McCluskey, deputy director of technology at the United Kingdom's Department for Business and Trade. “If technology is all about moving fast and breaking things, diplomacy is all about moving slowly and fixing things. They are opposite ideas. But that has to change.”

Tech is moving so fast, some experts said, that many technologists worry they'll run out of data to train AI models (Altman doubts that will be a big problem). The dilemma is serious enough that some experts have proposed using “synthetic data” to train models. And while the computing power and electricity required to run the models make them prohibitively expensive, experts say that cost is certain to decrease in the near future, potentially making development faster and more competitive.

Tech leaders claim to be meeting unprecedented speed with unprecedented caution. Instead of fighting to speed up slow adoption of their new tech, Ideas executives said they are deliberately delaying product releases while they run safety checks. Altman said that OpenAI has sometimes declined to release products, or has taken a “long time” to review them before releasing them.

“What will our lives be like when it's not just that a computer understands us and knows us and helps us do these things, but we can ask it to discover physics or start a great company?” Altman said. “That's a lot of confidence we have to earn as stewards of this technology. And we're proud of our track record. If you look at the systems we've designed, their generally accepted robustness, and the time and care we've taken to bring them to a level of safety, it's far beyond what people thought we'd be able to do.”

Chesky compared speed in tech to driving.

“If you imagine you're in a car, the faster the car goes, the more you need to look ahead and anticipate the corners,” he said.

Some of those corners are already flying past the window, Ideas speakers said. In a session on the role of AI in elections, Schiller pointed to several examples of voter deception or election interference using AI-generated fake information and media. So-called “bad actors” have used AI to deceive voters in Slovakia, Bangladesh and New Hampshire.

The Russian government has also used AI to create fake documentary ads mocking the Olympic Committee, which banned Russia, and the upcoming Paris Olympics, said Ginny Badanes, general manager of Microsoft's Democracy Forward program. The video uses a faked Tom Cruise voiceover as its narrator.

NBC anchor Lester Holt, who interviewed Chesky and Altman, used a different vehicle metaphor than Chesky: “Most of us are passengers on this bus. We see people doing these incredible things, and hear you compare it to the Manhattan Project, and wonder, 'Where is this going?'”

Michigan Secretary of State Jocelyn Benson discusses the role of artificial intelligence in elections at the Aspen Ideas Festival on Friday. Michigan has launched a campaign to educate voters about the possibility of bad actors using fake videos and photos to influence elections. Jason Charm/Aspen Daily News

Some successes

Despite its rapid progress, experts say AI is still far from the revolution it promises.

While the breakthroughs have been significant (one company, New York-based EvolutionaryScale, can now use AI to develop specialized proteins for personalized cancer care), AI still doesn't play a significant role in most of our lives. For a technology that has been compared to the Internet and even the harnessing of fire, experts say we're only seeing the beginning of its potential impact.

“If you look at your phone, and you look at your home screen and ask which apps are fundamentally different because of generative AI, I would say basically none. Algorithms are a little different,” Chesky said.

But while AI may not have changed the world yet, executives said it certainly has changed the world for some individuals.

“One of the most exciting parts of the job is getting an email every day from people who are using these tools in amazing ways,” Altman said. “People say, like, 'I was able to diagnose this health problem that I'd had for years but couldn't figure out, and it was making my life miserable, and I just typed my symptoms into ChatGPT, got an idea of what it might be, went to see the doctor, and now I'm completely cured.'”

Holt asked Altman where he would like to be in the next five years.

“Still on the same path,” he replied.

