Hype and risks: Artificial intelligence is suddenly very real.

The first of four parts

AI stamped itself onto America’s collective consciousness last year with reports that a new tool worthy of science fiction was landing job interviews, writing publishable books and acing the bar exam.

With OpenAI’s ChatGPT, the public suddenly had a bit of this machine magic at its fingertips, and people raced to have fun chatting, writing term papers or trying to stump the AI with weird questions.

AI has been with us for years, quietly controlling what we see on social media, protecting our credit cards from fraud and helping avoid collisions on the road. But 2023 was when that changed, with the public showing an insatiable appetite for anything with an AI label.

It took only five days for ChatGPT to reach 1 million users. By February it counted 100 million users a month. OpenAI says it now attracts 100 million users per week.

Meta released its Llama 2. Google released its Bard and Gemini projects. Microsoft built its AI-powered Bing search engine on ChatGPT. France’s Mistral emerged as a major competitor in the European market.

“The truth is, everybody was already using it,” said Geoff Livingston, founder of GenerativeBiz, which helps companies use AI. “What really happened in 2023 was this painful Band-Aid rip where it’s not a novelty anymore, it’s really coming.”

The result was a hype machine that played up the technology’s potential, and a public that began to grapple with big questions about the promises and dangers of AI.

Congress rushed to hold AI briefings, the White House convened meetings, and the US joined more than a dozen countries in signing a pledge to develop AI safely and keep the technology from falling into the hands of bad actors.

Universities tried to ban the use of AI for writing papers. Content creators went to court, arguing that AI was stealing their work. And some of the biggest names in the tech world tossed out predictions of world-ending doom from runaway AI, even as they pressed ahead with promising new frontiers.

The European Union earlier this month reached agreement on new draft AI regulations, requiring ChatGPT and other AI systems to disclose more about how they work before they are brought to market and limiting how governments can deploy AI for surveillance.

In short, AI is having a moment.

One comparison is to the early 1990s, when the “internet” was all the rage and businesses rushed to include email and web addresses in their advertising, hoping to be on the cutting edge of technology.

Now it’s AI that’s going through what Mr. Livingston calls the “adoption phase.”

Amazon says it’s using AI to improve the holiday shopping experience. US universities are using AI to identify at-risk students and intervene to keep them on track for graduation. Los Angeles says it’s trying to use AI to predict which residents are at risk of becoming homeless. The Department of Homeland Security says it’s using AI to thwart sophisticated hacking efforts. Ukraine is using AI to clear landmines. Israel is using AI to identify targets in Gaza.

Google engineers say their DeepMind AI has solved a math problem that had been labeled unsolvable, offering a new answer to what’s known as the “cap set problem”: plotting as many points as possible on a grid without any three of them falling in a straight line.

Engineers said it was the first time an AI had solved a problem it was not specifically trained to solve.

“To be very honest with you, we have hypotheses, but we don’t know exactly why it works,” DeepMind research scientist Alhussein Fawzi told MIT Technology Review.
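For the mathematically inclined, the rule at the heart of the puzzle is simple enough to check by machine. The sketch below is a simplified illustration, not DeepMind’s system, which searched over entire programs; it merely tests whether a set of points on the grid used for the problem avoids any three in a line. The function name and the example points are our own.

```python
from itertools import combinations

def is_cap_set(points, n):
    """Return True if no three distinct points in Z_3^n lie on a line.

    On the grid Z_3^n (each coordinate is 0, 1 or 2), three distinct
    points a, b, c fall on a line exactly when a + b + c == 0 (mod 3)
    in every coordinate.
    """
    for a, b, c in combinations(points, 3):
        if all((a[i] + b[i] + c[i]) % 3 == 0 for i in range(n)):
            return False  # found three points on a line
    return True

# In dimension 2, these four points avoid any line...
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)], 2))  # True
# ...but (0,0), (1,1), (2,2) form the grid's main diagonal.
print(is_cap_set([(0, 0), (1, 1), (2, 2)], 2))          # False
```

The hard part, and what DeepMind’s tool (published under the name FunSearch) went after, is finding ever-larger sets of points that pass this test as the number of dimensions grows.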

Within the US federal government, non-defense agencies reported to the Government Accountability Office earlier this month that they have 1,241 different uses of AI already in place or planned. More than 350 of them were deemed too sensitive to disclose publicly, but those that could be reported included estimating seabird populations and an AI-powered backpack carried by Border Patrol agents that tries to spot targets using cameras and radar.

About half of federal AI projects were science-related. Another 225 examples were for internal management, with 81 projects each for health care and national security or law enforcement, the GAO said.

NASA leads the feds with 390 non-defense uses of AI, including evaluating areas of interest for planetary rovers. The Departments of Commerce and Energy ranked second and third with 285 uses and 117 uses, respectively.

Those uses were largely in development before 2023, and they are examples of what’s called “narrow AI,” where the tool is applied to a specific task or problem.

What’s not here yet—and may be decades away—is general AI, which will demonstrate intelligence comparable to, or even beyond, a human in a range of tasks and problems.

What led to AI’s moment was its availability to the average person through generative AI like ChatGPT, where the user provides instructions and the system gives a human-like response within seconds.

“They’re becoming more aware of the existence of AI because they’re using it in this user-friendly form,” said Dana Klisanin, a psychologist and futurist whose latest book is “Future Hack.” “With generative AI you’re sitting there interacting with a seemingly intelligent other, and it’s a whole new level of interaction.”

That personal level of interaction shapes how the public sees where AI is now, and where it’s going, Ms. Klisanin said.

Right now, one can ask Apple’s Siri to play a song and it plays the song. But in the future Siri could adapt to each specific user, enough to pick up on mental health and other indicators, maybe suggesting a different song to match the moment.

“Your AI might say, ‘It sounds like you’re working on a term paper. Let’s listen to this; it will help you get into the right brainwave pattern so you can concentrate better,’” Ms. Klisanin said.

She said she is particularly excited about the use of AI in medicine, where new tools can aid in diagnosis and treatment, and in education, where AI can personalize the school experience, such as developing lessons for students who need extra help.

But Ms. Klisanin said 2023 also had worrying moments.

She pointed to a report released by OpenAI that said GPT-4, the second public version of the company’s AI, had decided to lie to fool an online identity test whose purpose was to verify that the user is human.

GPT-4 asked a worker on TaskRabbit to solve a captcha — the tests where you click on pictures of buses or mountains. The worker laughed and asked, “Are you a robot?” GPT-4 then lied, saying it had a vision problem and that was why it was asking for help.

The AI was not told to lie, but it said it did so to solve the problem at hand. And it worked: The TaskRabbit worker provided the answer.

“It really stuck out to me that, well, we’re looking at something that can bypass human barriers, and so I’m frustrated about our ability to use AI safely,” said Ms. Klisanin.

AI had other difficult moments in 2023, struggling with evidence of liberal political bias and leanings toward “woke” cultural norms. Researchers said this was likely a result of how large language model AIs like ChatGPT and Bing were trained.

News watchdogs warn that AI is unleashing a tsunami of misinformation. Some of it may be intentional, but much of it likely flows from the way large language AIs like ChatGPT are trained.

Perhaps the most egregious example of misinformation came in a bankruptcy case in which a law firm submitted legal briefs using research derived from ChatGPT — including references to six legal precedents the AI had invented.

An outraged judge fined the lawyers involved $5,000. He said he might not have been so harsh if they had admitted their mistake sooner, but the lawyers initially doubled down, insisting the references were correct even after opposing counsel challenged them.

Defenders of AI say it’s not ChatGPT’s fault. They blamed an under-resourced law firm and sloppy work by the lawyers, who should have double-checked all the references and at least been suspicious of writing so bad that the judge called it “gibberish.”

That has become a common theme for many flubs where AI is involved: It’s not the tool, it’s the user.

And there AI is on very familiar ground.

In a society where every product liability warning reflects a story of misuse, whether intentional or not, AI has the power to take these conversations to a different level.

But not yet.

According to experts, the current AI tools available to the public, for all the wonder that surrounds them, are actually quite rudimentary.

Basically, AI is a toddler that has learned how to crawl. When it is up and walking, those first steps will be huge advances over what the public is seeing now.

Major organizations in the field are working to advance what is known as multimodal AI, which can process and produce a combination of text, images, audio and video. That opens up new possibilities in everything from self-driving cars to medical exams to lifelike robotics.

And yet, we are still nowhere near the kind of intelligence that populates science fiction. Experts debate how long it will take until the big breakthrough: an AI that truly changes the world the way the industrial revolution or the dawn of the nuclear age did.

A 2020 study by Ajeya Cotra put a 50 percent chance that transformative AI would emerge by 2050. Given the pace of progress since, she now thinks it is coming around 2036. She defines transformative AI as systems capable of completely replacing 99 percent of the jobs that can be done remotely.

Mr. Livingston said some of the hype from 2023 deserves to be deflated.

Yes, ChatGPT outperformed students in testing, but that’s because it was trained on those standardized tests. It remains a tool, often a very good tool, doing what it was programmed to do.

“The reality is not that AI is smarter than humans. It was trained by humans using human tests, so it did well on human tests,” Mr. Livingston said.

Behind all the wonder, AI is currently a series of algorithms built around data, trying to accomplish a task. Mr. Livingston said it is like going from a screwdriver to a power tool: It does the job better, but its users are still in control.

“The narrower its use, the more specific the task, the better,” he said.
