FCC to consider rules on AI-generated political ads on TV and radio, but may not touch streaming


FILE PHOTO: The logo of the Federal Communications Commission (FCC) is seen before an FCC net neutrality hearing on February 26, 2015 in Washington, DC. Photo by Yuri Gripas/Reuters

NEW YORK (AP) – The head of the Federal Communications Commission on Wednesday introduced a proposal that would require political advertisers to disclose when they use artificial intelligence-generated content in broadcast television and radio ads.

If adopted by the five-member commission, the proposal would add a layer of transparency that many lawmakers and AI experts believe is needed as rapidly advancing generative AI tools produce lifelike images, videos and audio clips that threaten to mislead voters in upcoming U.S. elections.

Yet the country's top telecommunications regulator only has jurisdiction over TV, radio and some cable providers. The new rules, if adopted, would not cover the enormous growth in advertising on digital and streaming platforms.

“As artificial intelligence tools become more accessible, the Commission wants to ensure that consumers are fully informed when the technology is used,” FCC Chair Jessica Rosenworcel said in a statement Wednesday. “Today, I shared a proposal with my colleagues that makes it clear that consumers have a right to know when AI tools are being used in the political ads they see, and I hope that they will act on this issue expeditiously.”

The proposal marks the second time this year that the commission has taken significant steps to tackle the growing use of artificial intelligence tools in political communications. The FCC previously confirmed that AI voice-cloning tools in robocalls are prohibited under existing law. That decision followed an incident during New Hampshire's primary election in which automated calls used voice-cloning software to impersonate President Joe Biden in an effort to discourage voters from going to the polls.

Read more: Net neutrality was restored as the FCC passed measures to regulate Internet providers.

If adopted, the proposal announced Wednesday would require broadcasters to verify with political advertisers whether their content was produced using AI tools, such as text-to-image generators or voice-cloning software. The FCC has authority to regulate political advertising on broadcast channels under the Bipartisan Campaign Reform Act of 2002.

Several details of the proposal are up for debate by commissioners, including whether broadcasters would have to disclose AI-generated content in an on-air message or only in a TV or radio station's political files, which are public. Commissioners would also be tasked with agreeing on a definition of AI-generated content, a challenge given that retouching tools and other AI advancements are increasingly embedded in all kinds of creative software.

Rosenworcel hopes the regulations will be in place before the election.

Jonathan Uriarte, a spokesman and policy adviser for Rosenworcel, said she wants to define AI-generated content as anything produced using computational technology or machine-based systems, “including, in particular, AI-generated voices that sound like human voices, and AI-generated actors that appear to be human actors.” But he said the draft definition would likely change through the regulatory process.

The proposal comes as political campaigns have already experimented heavily with generative AI, from building chatbots for their websites to creating videos and photos using the technology.

Last year, for example, the RNC released an entirely AI-generated ad intended to depict a dystopian future under a second Biden administration. It featured fake but realistic images of storefronts, armored military patrols on the streets and panicked waves of migrants.

Read more: FCC Fines Insurance Telemarketers $225 Million for One Billion Robocalls

Political campaigns and bad actors have also weaponized highly realistic images, videos and audio content to deceive, mislead and disenfranchise voters. In India's elections, recent AI-generated videos misrepresenting Bollywood stars as critical of the prime minister are one example of a trend AI experts say is spreading in democratic elections around the world.

Rob Weissman, president of the advocacy group Public Citizen, said he was pleased to see the FCC “taking steps to address the threats posed by artificial intelligence and deepfakes, including specifically to election integrity.”

He urged the FCC to require on-air disclosure for the public's benefit and criticized another agency, the Federal Election Commission, for delays as it also weighs whether to regulate AI-generated deepfakes in political ads.

As generative AI becomes more affordable, accessible and easy to use, bipartisan groups of lawmakers have called for legislation to regulate the technology in politics. With just over five months until the November elections, they have yet to pass a bill.

A bipartisan bill introduced by Sen. Amy Klobuchar, Democrat of Minnesota, and Sen. Lisa Murkowski, Republican of Alaska, would require political ads to carry a disclaimer if they were created using AI or significantly altered by it. It would also require the Federal Election Commission to respond to violations.

Uriarte said Rosenworcel realizes the FCC's ability to act on AI-related threats is limited but wants to do what it can before the 2024 election.

“This proposal sets forth the maximum transparency standards that the commission can enforce within its jurisdiction,” Uriarte said. “It is our hope that other government agencies and legislators will build on this important first step toward establishing transparency standards for the use of AI in political advertising.”

