Algorithms are driving AI-generated lies to alarming heights. How do we stop it?

Generative artificial intelligence (AI) tools are supercharging the problem of misinformation, disinformation and fake news. OpenAI’s ChatGPT, Google’s Gemini, and various image, sound, and video generators have made it easier than ever to produce content, while making it harder to tell what’s real or not.

Malicious actors seeking to spread misinformation can use AI tools to automate the creation of persuasive and misleading text at scale.

This raises important questions: How much of the content we consume online is authentic and how can we determine its authenticity? And can anyone stop it?

This is not an idle concern. Organizations secretly trying to sway public opinion or influence elections can now take their operations to an unprecedented scale with AI. And their content is being widely disseminated through search engines and social media.

Read more: What is Sora? A new generative AI tool could change video production and increase the risks of disinformation.

Fake everywhere

Earlier this year, a German study on search engine content quality noted a “trend towards simple, repetitive and possibly AI-generated content” on Google, Bing and DuckDuckGo.

Traditionally, news media readers could rely on editorial control to uphold journalistic standards and verify facts. But AI is rapidly changing this space.

In a report published this week, Internet trust organization NewsGuard identified 725 untrustworthy websites that publish AI-generated news and information “without any human oversight.”

Last month, Google released an experimental AI tool to a select group of independent publishers in the United States. Using generative AI, a publisher can summarize articles pulled from a list of external websites that produce news and content relevant to their audience. As a condition of the trial, users have to publish three such articles per day.

By both hosting content and developing generative AI, platforms are blurring the traditional lines that enable trust in online content.

Can the government step in?

Australia has already seen a tussle between the government and online platforms over the exposure and moderation of news and content.

In 2019, the Australian government amended the Criminal Code to mandate the rapid removal of "abhorrent violent material" by social media platforms.

An Australian Competition and Consumer Commission (ACCC) inquiry into the power imbalance between Australian news media and digital platforms led to the enactment of a bargaining code in 2021 that forced platforms to pay news media for their content.

While these may be considered partial successes, they also reflect the scale of the problem and the difficulty of taking action.

Our research shows that in these conflicts, online platforms were initially open to changes but later resisted them, while the Australian government held off on implementing mandatory measures in favor of voluntary actions.

Ultimately, the government realized that relying on platforms’ “trust us” promises would not yield the desired results.

Read more: Why Google and Meta owe news publishers more than you think — and billions more than they’d like to admit

The takeaway from our study is that once digital products become integral to millions of businesses and daily lives, they work as leverage for platforms, AI companies and big tech to push back against government expectations.

With that in mind, it is right to be skeptical of the early calls for regulation of generative AI made by tech leaders like Elon Musk and Sam Altman. Such calls have faded as AI takes hold of our lives and online content.

One challenge lies in the sheer speed of change, which is so rapid that safeguards are not yet in place to mitigate potential risks to society. Accordingly, the World Economic Forum's 2024 Global Risks Report identified misinformation and disinformation as the biggest threats over the next two years.

The problem is compounded by AI's ability to create multimedia content. Based on current trends, we can expect an increase in deepfake incidents, although social media platforms like Facebook are responding to these issues: they aim to automatically identify and label AI-generated images, video and audio.

Read more: The story of OpenAI shows how big corporations dominate shaping our technological future

What can we do?

Australia’s eSafety Commissioner is working on ways to control and mitigate the potential harm caused by generative AI while balancing its potential opportunities.

One key idea is "safety by design," which requires tech firms to put safety considerations at the core of their products.

Other countries, such as the US, are further ahead in regulating AI. For example, US President Joe Biden's recent executive order on the safe deployment of AI requires companies to share safety test results with the government, conduct red-team testing (simulated hacking attacks), and watermark AI-generated content.

We call for three steps to help protect against the risks of AI-driven misinformation.

1. Regulation needs to lay down clear principles without allowing for "best effort" objectives or "trust us" approaches.

2. To protect against mass disinformation operations, we need to teach media literacy the same way we teach math.

3. Safety tech or “safety by design” needs to be a non-negotiable part of every product development strategy.

People are aware that AI-generated content is on the rise. In theory, they should adjust their information habits accordingly. However, research shows that consumers generally underestimate their own risk of believing fake news compared to the perceived risk to others.

Finding trustworthy content shouldn't have to mean sifting through AI-generated material to figure out what's real.
