Going after the dark side of AI

OpenAI CEO Sam Altman speaks remotely during a keynote conversation with Atlantic CEO Nicholas Thompson at the International Telecommunication Union (ITU) AI for Good Global Summit in Geneva on Thursday. (Photo by FABRICE COFFRINI/AFP via Getty Images)

New Hampshire voters received a barrage of robocalls from a computer-generated impersonation of President Biden discouraging them from voting in the January primary. While the acknowledged mastermind was slapped with serious charges and a proposed FCC fine, his scheme is just one front in law enforcement's struggle to rein in a fast-moving modern technology: artificial intelligence.

The world of computer-generated “deepfakes” can not only impersonate a person's voice and face, but can be used to manipulate the sexual privacy and reputations of individuals and to deceive the public at large.

BOSTON, MA – Acting U.S. Attorney Joshua Levy speaks during a roundtable discussion with the media at the federal courthouse. (Nancy Lane/Boston Herald)

“I think AI will impact everything on a daily basis for everyone in this room, and it certainly will affect the work of the Department of Justice,” Massachusetts Acting U.S. Attorney Joshua Levy said during a reporter roundtable in his office on Wednesday. “How exactly that will play out, time will tell.”

Of particular concern to Levy was the technology's ability to introduce new skepticism toward forms of forensic evidence long considered time-tested at trial.

“We rely heavily on audio tape, video tape in prosecution cases,” he said. “We have to convince 12 strangers (jurors) beyond a reasonable doubt of someone's guilt. And when you introduce AI and the skepticism that creates, that's a challenge for us.”

Legislators across the country and around the world are trying to keep up with the rapidly advancing technology, and legal analysis of it has become a hot academic topic.

Federal moves

“We're going to see more technological change in the next 10, maybe the next five years than we've seen in the last 50 years and that's a fact,” President Biden said before signing an executive order in October. “The most consequential technology of our time, artificial intelligence, is accelerating this transformation.”

“AI is all around us,” Biden continued. “To fulfill the promise of AI and avoid risk, we need to govern this technology.”

Among many other regulations, the order directs the Commerce Department to develop a system for labeling AI-generated content to “protect Americans from AI-driven fraud and deception,” and calls for strengthening privacy protections by funding research in the area.

In February, the U.S. Department of Justice — of which Levy's office is a regional part — appointed its first “artificial intelligence officer” to lead the department's efforts to understand and keep pace with rapidly emerging technologies.

“The Justice Department must keep pace with rapidly evolving scientific and technological developments in order to fulfill our mission to uphold the rule of law, keep our country safe, and protect civil rights,” Attorney General Merrick Garland said in the announcement.

The DOJ explained that its AI officer, Jonathan Mayer, an assistant professor at Princeton University, will join a team of technical and policy experts who will advise leadership on technical areas such as cybersecurity and AI.

Across the Atlantic, the European Union passed its AI regulatory framework, the AI Act, in March after five years of development.

Romanian lawmaker Dragos Tudorache, one of the legislative leaders on the issue, said before the vote that the act “pushes the future of AI in a human-centric direction, in a direction where humans are in control of the technology,” according to the Associated Press.

Sam Altman, CEO and cofounder of OpenAI — the creator of the hugely popular ChatGPT service powered by AI large language models — called on Congress to regulate the industry in May of last year.

“There should be limits on what a deployed model is capable of and then what it actually does,” he told a Senate hearing, calling for an agency to license large AI operations, develop standards, and audit compliance.

State-level initiatives

Biden's executive order is not permanent legislation. In the absence of federal laws, states are making their own moves to adapt to the technology.

Software industry advocacy group BSA The Software Alliance tracked 407 AI-related bills across 41 U.S. states as of Feb. 7 of this year, more than half of which were introduced in January alone. While the bills covered a scattering of AI-related issues, about half of them — 192 — were related to regulating deepfakes.

In Massachusetts, Attorney General Andrea Campbell issued an “advisory” in April to guide “developers, suppliers, and users of AI” on how to make their products work within the commonwealth's existing regulatory and legal framework, including its consumer protection, anti-discrimination, and data security laws.

“There is no doubt that AI has tremendous and exciting potential to benefit society and our commonwealth in many ways, including fostering innovation and increasing efficiency and cost savings in the marketplace,” Campbell said in the announcement. “However, those benefits do not outweigh the real risk of harm that, for example, bias and lack of transparency within AI systems could cause our residents.”

The Herald asked the offices of both Campbell and Gov. Maura Healey about new developments on the AI regulation front. Healey's office referred the Herald to Campbell's office, which did not respond by deadline.

On the other coast, California is trying to lead the way in regulating the technology that's proliferating through virtually every sector at light speed—but not so tightly that it drives away the tech firms that generate so much of the state's wealth.

“We want to dominate this space, and I'm too competitive to suggest otherwise,” California Gov. Gavin Newsom said Wednesday in San Francisco, announcing a summit where the state will consider AI tools to tackle thorny issues such as homelessness. “I think the world looks to us in many respects for leadership in this space, and so we feel a deep sense of responsibility to get that right.”

Risks: Manipulation

The Democratic Party consultant who said he was behind the voice-cloning robocalls mimicking Biden reportedly did it very cheaply and without elite technology: by paying a New Orleans street magician $150 to create the voice on his laptop.

The novel plot did not escape existing criminal codes. New Hampshire's attorney general on May 23 indicted mastermind Steven Kramer on 13 counts each of voter suppression and candidate impersonation. The Federal Communications Commission that same day proposed a $6 million fine for violating the “Truth in Caller ID Act,” because the calls spoofed the number of a local party operative.

Just a day earlier, FCC Chairwoman Jessica Rosenworcel announced proposals to add transparency to AI-manipulated political messaging, but stopped short of banning the content.

The announcement states that “AI is expected to play a significant role in the creation of political advertising in 2024 and beyond” and that the public interest obligates the commission to “protect the public from false, misleading, or deceptive programming.”

A look at the academic literature on the subject over the past several years turns up abundant examples of manipulation by bad actors operating in foreign countries or here in the United States.

“While deep-fake technology will bring certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases,” authors Bobby Chesney and Danielle Citron wrote in the California Law Review in 2019.

“Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well,” continued the paper, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.”

Since 2021, a TikTok parody account called @deeptomcruise has illustrated how powerful the technology has become by grafting Hollywood superstar Tom Cruise's face onto the bodies of others and cloning his voice. The playful exercise still required sophisticated graphics processing and abundant footage to train the AI on Cruise's face.

“Over time, such videos will become cheaper to produce and require less training footage,” wrote author Todd Helmus in a 2022 RAND Corporation primer on the technology and the disinformation it enables.

“The Tom Cruise videos followed a series of other high-profile deepfake videos, for example, a 2018 deepfake of Barack Obama using profanity and a 2020 deepfake of a Richard Nixon speech — a speech Nixon never gave,” Helmus wrote. “With each passing iteration, the quality of the videos becomes increasingly lifelike, and the artificial components are more difficult to detect with the naked eye.”

As for the risks of the technology, Helmus says, “the answer is only limited by one's imagination.”

“Given the extent of trust that society places in video footage and the myriad uses for such footage, it is not difficult to imagine the many ways in which deepfakes can affect not only society but also national security,” he wrote.

Chesney and Citron's paper included a long bulleted list of possible manipulations similar to the Biden-impersonating robocalls, including “fake videos (in which) government officials accept bribes, act racist, or engage in adultery,” or in which officials and leaders are shown discussing war crimes.

Risks: Sexual privacy

In a separate article for the Yale Law Journal, Citron, then a professor at Boston University, examined the harms of deepfake pornography.

“Machine learning technologies are being used to create ‘deep fake' sex videos – where people's faces and voices are inserted into real pornography,” she wrote. “The end result is a realistic-looking video or audio that is increasingly difficult to debunk.”

“Yet deep-fake videos do not depict the actual genitalia (and other private parts) of the people featured,” she continued. “They hijack people's sexual and intimate identities. … They are an affront to the sense that people's intimate identities are their own to share or keep to themselves.”

Her paper included some horrifying examples, in which celebrities such as Gal Gadot, Scarlett Johansson, and Taylor Swift were subjected to AI-generated obscenities, sometimes in degrading contexts. Others solicited detailed help in creating such images of their former intimate partners. Fake porn of an Indian journalist was created and widely circulated to destroy her reputation because its creators didn't like her coverage.

Citron concluded with a survey of legal remedies that could be pursued, but noted that “traditional privacy law is ill-equipped to deal with some of today's invasions of sexual privacy.”

At Wednesday's roundtable, U.S. Attorney Levy found the technology's pornographic applications as troubling as its other implications.

“I'm not an expert on child pornography law, but if it's a synthetic image, I think it raises serious questions about whether it's actionable under federal law,” he said. “I'm not going to comment on that, but it's a concern I think about.”

In this photo illustration, a phone screen displays a statement from the head of security policy at Meta regarding a fake video of Ukrainian President Volodymyr Zelensky calling for his troops to surrender. (Photo by OLIVIER DOULIERY/AFP via Getty Images)

OpenAI, creator of ChatGPT and the image generator DALL-E, said it is testing “Sora,” seen here in a February illustration, which will allow users to create realistic videos with a simple prompt. (Photo by DREW ANGERER/AFP via Getty Images)

University of Maryland Law School professor Danielle Citron and OpenAI policy director Jack Clark testify before the House Intelligence Committee about “deepfakes,” digitally manipulated video and still images, during a hearing at the Longworth House Office Building on Capitol Hill on June 13, 2019 in Washington, D.C. (Photo by Chip Somodevilla/Getty Images)
