OpenAI seems awfully defensive about its AI voice engine.

Is OpenAI on the defensive about its new text-to-speech tool?
Jaap Arriens/Getty

  • OpenAI on Friday released a statement on its security efforts for Voice Engine, its text-to-speech model.
  • Voice Engine generates natural-sounding speech that could be used for deepfakes.
  • The technology has raised concerns among lawmakers.

For the second time in a few months, OpenAI has found itself explaining its text-to-speech tool, reminding everyone that it's not widely available, and may never be.

“It's important that people around the world understand where this technology is headed, whether or not we ultimately use it widely ourselves,” the company said in a statement posted on its website on Friday. “That's why we want to explain how the model works, how we use it for research and education, and how we're implementing our safeguards around it.”

Late last year, OpenAI shared Voice Engine, which relies on text inputs and 15-second audio clips of human voices to produce “natural-sounding speech,” with a small group of users outside the company. The tool can create voices that sound convincingly human in many languages.

At the time, the company said it was choosing to preview the technology but not release it broadly, in order to “strengthen social resilience” against the risk of “ever more convincing” generative models.

As part of these efforts, OpenAI said it is pushing to phase out voice-based authentication for access to bank accounts, exploring policies to protect the use of human voices in AI, educating the public about the risks of AI, and working to accelerate progress on tracking audiovisual content so that users know whether they are interacting with real or synthetic content.

But despite all these efforts, fears about the technology persist.

Bruce Reed, President Joe Biden's AI chief, has said that voice cloning is something that keeps him up at night. And the Federal Trade Commission said in March that fraudsters are using AI to enhance their schemes, relying on voice-cloning tools that make it difficult to distinguish AI-generated voices from human ones.

In its latest statement on Friday, OpenAI sought to address these concerns.

“We continue to engage with US and international partners in government, media, entertainment, education, civil society, and beyond to ensure we are incorporating their feedback as we build,” the company said.

The company also noted that once Voice Engine is paired with its latest model, GPT-4o, it will introduce new threats. Internally, the company said, it is “actively red-teaming GPT-4o to identify and address known and unexpected threats in various areas such as social psychology, bias and fairness, and misinformation.”

The big question, of course, is what will happen when the technology is released at scale. And OpenAI appears to be bracing itself as well.

OpenAI did not immediately respond to Business Insider's request for comment.

