AI is not ready for prime time.

(CNN) — AI tools like ChatGPT have become mainstream, and the companies behind the technologies are pouring billions of dollars into the bet that they will change the way we live and work.

But with that promise comes a steady stream of headlines, some highlighting AI’s tendency to produce biased or mistaken responses to our questions and commands. Generative AI tools, including ChatGPT, have been accused of copyright infringement. Some, disturbingly, have been used to create nonconsensual intimate imagery.

More recently, the concept of “deepfakes” came into sharp focus when pornographic, AI-generated images of Taylor Swift spread across social media, underscoring how mainstream artificial intelligence technology can be used to create malicious images.

During his 2024 State of the Union address, President Joe Biden urged Congress to pass legislation to regulate artificial intelligence, including banning “AI voice impersonation and more.” He said lawmakers need to “seize the promise of AI and protect us from its threat,” warning of the technology’s dangers to Americans if left unchecked.

His statement came after a recent fake robocall campaign that impersonated his voice and targeted thousands of New Hampshire primary voters in what officials described as an AI-powered attempt at election meddling. Yet even as disinformation experts warn of AI’s dangers to elections and public discourse, few expect Congress to pass legislation reining in the AI industry during a divisive election year.

That’s not stopping big tech companies and AI firms, which continue to roll out new features and capabilities to consumers and businesses.

More recently, OpenAI, the creator of ChatGPT, introduced a new AI model called Sora, which it claims can create “realistic” and “imaginative” 60-second videos from quick text prompts. Microsoft has integrated its AI assistant Copilot, which runs on the technology behind ChatGPT, into its suite of products, including Word, PowerPoint, Teams and Outlook, software used by many businesses around the world. And Google introduced Gemini, an AI chatbot that has begun to replace the Google Assistant feature on some Android devices.

Concerned experts

Artificial intelligence researchers, professors and legal experts worry about the mass adoption of AI before regulators have the ability or desire to rein it in, citing safety and liability concerns. Many of them recently signed an open letter urging generative AI companies to allow independent evaluation of their systems.

“Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable, with the threat of legal action, cease-and-desist letters, or other methods of imposing chilling effects on research,” the letter said.

It added that some generative AI companies have suspended researcher accounts and changed their terms of service to prevent certain types of evaluation, noting that “disempowering independent AI researchers is not in the companies’ own interest.”

The letter comes less than a year after some big names in tech, including Elon Musk, called on artificial intelligence labs to pause training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” (No such pause happened.)

“One of the most worrisome things I see around AI is the continuing gap between promise and practice,” Suresh Venkatasubramanian, a computer scientist and professor at Brown University, told CNN. “Companies keep promising the moon when it comes to AI and still deliver green cheese.”

Venkatasubramanian, who was appointed to the White House Office of Science and Technology Policy in 2021 to help advise on AI policy, is among the experts who signed the latest letter.

“Access to large generative AI systems in widespread use is controlled by a few companies,” said Venkatasubramanian, noting that these systems can easily make mistakes and produce harmful content. “Without the ability to independently evaluate AI, and with the fear of legal risks, it will be very difficult for researchers to do the important work of evaluating the safety, security and reliability of generative AI systems, so that policymakers can be better informed to take action.”

He said he looks to the work being done by policymakers, including the White House Office of Management and Budget, which oversees the implementation of President Biden’s vision across the executive branch, to clarify and set the “rules of the road.”

Arvind Narayanan, a professor of computer science at Princeton who also signed the letter, agreed, telling CNN that he is particularly concerned about the pace at which AI is advancing, one much faster than our ability to adapt to it.

“Tech companies have gotten rich with a business model where they profit from the benefits of new technologies while society bears their costs,” Narayanan said, acknowledging that this was the case long before AI.

“There are guardrails needed for specific vulnerabilities, but they won’t fix the underlying business model,” he added.

He believes bolder reforms may also be necessary, such as taxing AI companies to fund a social safety net.

For now, users of today’s generative AI tools must understand the limitations and challenges of products that remain far from where they should be.

Asking AI about AI

When CNN asked ChatGPT whether it and other generative AI tools are ready for mass adoption, it replied “yes” but added a caveat: “Ongoing efforts to address ethical, social, and regulatory challenges are critical to responsible and beneficial mass adoption.”

Google’s Gemini AI tool, formerly named Bard, gave a similar answer but with a bit more caution: “Generative AI is having a moment, but there are mixed signals about mass adoption.

“Despite widespread use, studies have yet to show a major increase in productivity,” Gemini wrote. “Employees may need more training to fully utilize generative AI.”

Gemini also nodded to ethical issues: “Bias in training data can lead to biased AI output,” it wrote. “[And] there are concerns about responsible use and accountability.”

CNN’s Brian Fung contributed to this report.

CNN Wire
™ & © 2024 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
