Google researchers publish paper on how AI is ruining the internet

Google is trying to find the man responsible for all this.

Isn't that ironic?

Google researchers have published a new paper warning that generative AI is ruining vast swaths of the internet with fake content, which is painfully ironic given that Google has been working hard to push the same technology onto its enormous user base.

The study, a yet-to-be-peer-reviewed paper viewed by 404 Media, found that the vast majority of generative AI users are using the technology to "blur the lines between authenticity and fraud" by posting fake or doctored AI content, such as images or videos, online. The researchers also reviewed previously published research on generative AI and nearly 200 news articles reporting on its misuse.

The researchers concluded that "manipulating human likeness and falsifying evidence are the most prevalent tactics in real-world abuse cases," and that most of these were deployed "with the discernible intent to influence public opinion, enable scam or fraudulent activity, or make a profit."

Compounding the problem, generative AI systems are advancing rapidly and are readily available, requiring "minimal technical expertise," according to the researchers, which is distorting people's collective understanding of "sociopolitical reality or scientific consensus."

Missing from the paper, as far as we can tell? Any reference to Google's own embarrassing missteps with the technology, which, coming from one of the biggest companies on Earth, have sometimes been massive.

Forecast: Cloudy

If you read the paper, it's hard not to conclude that the "misuse" of generative AI often looks a lot like the technology working as intended. People are using generative AI to churn out fake content because it's so good at the job, and as a result the internet is flooded with AI slop.

And this situation is enabled by Google itself, which has allowed this fake content to spread, or has even been the source of it, whether fake images or misinformation.

According to the researchers, the deluge is also testing people's ability to distinguish what's real from what's fake.

"Similarly, the mass production of low-quality, spam-like, and malicious artificial content threatens to exacerbate public skepticism about digital information altogether and overload users with verification tasks," they write.

And chillingly, as we continue to be inundated with fake AI material, the researchers say there are already instances in which "high-profile individuals are able to explain away unfavorable evidence as AI-generated, shifting the burden of proof in costly and inefficient ways."

As companies like Google continue to roll AI into all of their products, expect more of the same.

More on Google: Google is manually taking down its weird AI answers.
