IT pioneers examine the future impact of AI.


A project spanning nearly a quarter of a century is now exploring how artificial intelligence could disrupt the future.

From 2004 to 2023, hundreds of Elon University students and faculty, along with Pew Research staff members, surveyed experts to collect tens of thousands of predictions about "the challenges and opportunities of the digital evolution…"

But this year the university's "Imagining the Digital Future" research center announced an "expanded research agenda," gathering predictions about the impact of AI through 2040 from more than 300 technology experts. "We understand that the proliferation of AI systems has profound implications for individuals and organizations," Lee Rainie, the research center's new director, said in an email interview with The New Stack.

Rainie joined the center after 24 years at the Pew Research Center, where he directed its internet and technology research. This year's report was also augmented by a poll of 1,021 Americans (conducted Oct. 20-22). "We think we've asked questions that are new," Rainie said in a recent interview with Elon University President Connie Book, "but we've also done it in a new way."

According to that interview, the resulting report took nine months to produce, and the effort represents an attempt to focus serious academic attention on the sudden technological change facing the world.

The effort tries not only to understand artificial intelligence, but to anticipate its disruptive effects, both good and bad.

Upheaval and benefits

Both experts and poll respondents see “tremendous upheaval” ahead, while many experts also “accept the idea that the spread of AI will bring significant benefits,” according to the center’s news release.

The predicted upheaval takes many forms. "Global experts predict that as these tools advance we will have to rethink what it means to be human," the center said in a statement, adding that experts also predicted we "must reinvent or transform large institutions to achieve the best possible future."

According to a public statement from Rainie, both experts and poll respondents expressed concerns about wealth inequality, politics and elections, and the level of civility in society and in personal relationships. "At the same time, there are more promising ideas about how AI can make life easier, more efficient and safer in some important respects." For example, the poll found that more Americans envision positive impacts from AI on healthcare systems and the quality of medical treatment by 2040 (36%), and on their daily tasks and activities (31%).

But the poll also found many Americans concerned that AI will have a negative impact in multiple areas by 2040:

  • Further erosion of personal privacy (66%)
  • Their employment opportunities (55%)
  • How these systems can change our relationships with others (46%)
  • Potential impact of AI applications on fundamental human rights (41%)

In his February interview, Rainie said Americans seem to be "all in" for AI-powered healthcare assessments and public-data analysis tools that are much faster than humans "and sometimes very accurate… So there's a really mixed picture, and that's part of the really interesting story we're telling. It's not an 'all good' or 'all bad' situation with public opinion. It's subtle, sometimes scary, but there are plenty of ways people are showing some signs of hope."

"There is no prevailing view of the overall impact of AI," according to the center's announcement. "On a broad question about AI ethics, 31% say it is possible to design AI programs that can consistently make decisions in the best interest of people in complex situations, while exactly the same proportion say that it's not possible. Some 38% say they're not sure."

Even the title of the report captures this “inconclusive” assessment. (“A New Age of Enlightenment? A New Threat to Humanity?: The Impact of Artificial Intelligence by 2040…”)

Vint Cerf and Esther Dyson

Contributions came from a variety of experts, from investor/founder Esther Dyson to the “Father of the Internet” Vint Cerf (now Vice President of Google).

In a lengthy interview, Cerf made some specific recommendations, touching on the need for transparency and visibility into the data used to train AI, the problem of data being intentionally falsified with AI, and the need for more disclosure when AI-generated output is being used.

Esther Dyson introduced the concept of the "information supply chain," which includes knowing who chooses to produce content (so that readers can better understand their motivations), and argued that AI may have a role in identifying these sources.

In a lengthy interview, Dyson noted that "this is a war that will never be won," and she addressed the role of humanity. "I just wrote a piece, basically, 'Don't worry about training your AIs. Train your kids.' Because we need to train people to be skeptical, but not cynical — to be self-aware, and also to be aware of the motives of others…"

Digital warlords

Perhaps the most dire warning came from William L. Schrader (an Internet Hall of Fame inductee who co-founded one of the world’s first Internet service providers).

"Wake up and smell the bullets," Schrader wrote, warning that AI "adds more speed to the incitement of humanity's troubles. Almost all governments will be dominated by fascists. AI will lead to dangerous military operations and intelligence." Schrader's dystopian vision was surprisingly succinct, predicting worsening pandemics and warning that "AI will make the rich richer, the poor poorer. And by 2040, this gap will widen considerably."

And in the same document, T-Mobile's director of privacy and data security, Chuck Cosson, also predicted an increase in "misinformation and other deadly corruption," resulting in challenges to "what we know…"

Elsewhere in the document, Devin Fidler, founder of the consulting firm Rethinkery, argued that while we worry about AI getting out of control, there are far more pressing challenges, including AI "potentially fueling the growth of new kinds of digital warlords."

“It’s like worrying about an asteroid impact when your house is in the path of an incoming wildfire.”

But not all predictions were dire. Futurist author Jonathan Kolber envisions AI eliminating the need for most work, with humans enjoying "unlimited material abundance" from asteroid mining (and clean energy). Kolber sees fully immersive VR offering all the experiences that used to require physical equipment, while benevolent self-aware AI saves mankind from nuclear war. (Kolber's book A Celebration Society expands on his vision in more detail.)

What should be done now?

How do we steer away from the dystopias? The problems may be at least partly organizational, says Amy Sample Ward, CEO of the nonprofit NTEN, which offers training programs in the equitable use of technology. In a contribution to the report, Sample Ward suggested "redirecting" the current movement toward concentrating power in fewer and fewer systems, governments, and individuals.

Or, as Dyson said in her interview, "Where we're going is up to us."

In his February interview, Rainie expressed a hope that "in this age of artificial intelligence, people will be people, and engage with others, ask questions, offer their expertise, and learn from those things. That's what we're trying to put together for them."

And Rainie told The New Stack that he sees the report as a start. "The reasons we do research like this and ask questions like this are to help start public and policy conversations about the findings." So while there is no specific policy agenda driving the work, "we hope that new data on new topics will push some of these issues into the public sphere…

"Our view is that the best conversations, and policies, are made when they are informed by data and input obtained through public opinion surveys and the views of diverse experts."

Rainie ultimately sees the research being used by the policy-making community, the tech communities that are building and creating these tools, the broader business community, and "scholars and other analysts who are concerned about the appropriate use of AI," including those thinking about how AI can be deployed in educational settings and classrooms. And of course, he also hopes it will see use by "interested workers and citizens who will be affected by the deployment of AI systems in their jobs and their communities."

And the future that awaits may be stranger than we can imagine. In a February interview, Rainie was asked what the center would explore next, and he said AI-powered augmented reality was already "on our minds." But beyond that, Rainie said that "the integration of artificial intelligence with things in your body is absolutely going to happen; even experts don't think it's worth debating because it's so obvious that it's going to happen."

"The other thing that's going to happen is a change in how we communicate, in all the ways that it changes us as human beings."
