How Seniors Are Falling for AI-Generated Photos on Facebook


As AI-generated content proliferates online and clutters social media feeds, you may have noticed more images that evoke the uncanny valley effect: relatively ordinary scenes containing unreal details such as extra fingers or nonsense words.

In these misleading posts, younger users have spotted some clearly false images (for example, skiing dogs and toddlers, astonishing “hand-carved” ice sculptures, and massively crocheted cats). But AI-generated art isn’t obvious to everyone: Older consumers, generally those of Generation X and above, seem to fall for these visuals on social media. The evidence isn’t just a quick glance at TikTok videos and your mom’s Facebook activity; there’s data behind it.

Facebook has become increasingly popular among seniors seeking entertainment and companionship as younger users leave for flashier apps like TikTok and Instagram. Recently, Facebook’s algorithm has been pushing spammy AI-generated images onto users’ feeds to sell products and collect followers, according to a preprint paper posted March 18 by researchers from Stanford University and Georgetown University.

Take a look at the comments section of any of these AI-generated images and you’ll find it filled with comments from older users calling them “beautiful” or “amazing,” often decorated with heart and prayer emojis. Why do older adults not only fall for these pages, but seem to enjoy them?

Currently, scientists don’t have definitive answers about the psychological effects of AI art, because generators like DALL-E, Midjourney, and Adobe Firefly (which run on artificial intelligence models trained on millions of images) have only been publicly available for the last two years.

But experts have theories, and the explanation isn’t as simple as you might expect. Knowing why our older friends and relatives may be fooled can provide clues to prevent them from falling prey to scams or harmful misinformation.

Why AI images fool older adults

Cracking that code is especially important because tech companies like Google ignore older users during internal testing, Björn Herrmann, a cognitive neuroscientist at the University of Toronto who studies the effects of aging on communication, told The Daily Beast.

“Often in these kinds of spaces, things move forward without an aging perspective in mind,” he said.

Although cognitive decline seems like a plausible explanation for this machine-driven mismatch, preliminary research suggests that a lack of experience and familiarity with AI may better explain the comprehension gap between younger and older audiences. For example, in an August 2023 AARP and NORC survey of nearly 1,300 American adults age 50 and older, only 17 percent of participants said they had read or heard “a lot” about AI.

So far, the few experiments analyzing how older adults perceive AI seem to follow Facebook’s trend. In a study published last month in the journal Scientific Reports, scientists showed 201 participants a mix of AI- and human-generated images and gauged their responses based on factors such as age, gender, and attitudes toward technology. The team found that older participants were more likely to believe that AI-generated images were created by humans.

“This is something that I don’t think anyone else has found before, but I don’t quite know how to interpret it,” study author Simone Grassini, a psychologist at the University of Bergen in Norway, told The Daily Beast.

Although overall research on people’s perceptions of AI-generated content is sparse, researchers have found similar results with AI-generated audio. Last year, Herrmann reported that older subjects were less able than younger subjects to discriminate between human and AI-generated speech.

“I didn’t really expect that,” he said, because the purpose of the experiment was to determine whether AI-generated speech could be used to study how older people perceive speech amid background noise.

Overall, Grassini believes any type of AI-generated media, whether audio, video, or images, can trick older viewers more easily through a broader “blanket effect.” Both Herrmann and Grassini suggest that older generations may not be aware of the hallmarks of AI-generated content and may not encounter it in their daily lives, leaving them more vulnerable when it appears on their screens. Cognitive decline and hearing loss (in the case of audio) may play a role, but Grassini saw the effect even in people in their late forties and early fifties.

In addition, young people have grown up in an era of online misinformation and are accustomed to doctored photos and videos, Grassini added. “We live in a society that is constantly becoming more and more fake.”

How to Help Friends and Relatives Spot Bots

Despite the challenges of identifying the fake content that accumulates online, older people generally have a clearer view of the bigger picture. In fact, they may recognize the dangers of AI-powered content more readily than younger generations.

A 2023 MITRE-Harris poll of more than 2,000 people indicated that greater proportions of Baby Boomer and Gen X participants are concerned about the consequences of deepfakes than Gen Z and Millennial participants. Older age groups also had higher shares of participants who called for regulation of AI technology and for more investment from the tech industry to protect the public.

Research has also shown that older adults can more accurately distinguish between false headlines and stories than younger adults, or at least recognize them at comparable rates. Older adults also consume more news than their younger peers and may have accumulated more knowledge on specific topics over their lifetimes, making them harder to fool.

“The context of the content is really important,” Andrea Hickerson, dean of the University of Mississippi’s School of Journalism and New Media, who has studied cyberattacks on seniors and deepfake detection, told The Daily Beast. “If it’s a topic that someone knows a lot about, older adult or not, you’re going to bring your ability to look up that information.”

Regardless, scammers have increasingly used sophisticated generative AI tools to target older adults. They can use deepfaked audio and photos pulled from social media to impersonate a relative calling for bail money from jail, or to fake a relative’s appearance on a video call.

Fake video, audio, and images could also influence older voters ahead of elections. This can be especially damaging because people 50 and over make up the majority of all American voters.

To help the older people in your life, Hickerson said, it’s important to spread awareness about generative AI and the dangers it poses online. You can start by teaching them the telltale signs of AI-generated photos, such as overly smooth textures, odd-looking teeth, or perfectly repeating patterns in the background.

She added that we can explain what we do and don’t know about social media algorithms and the ways they target older people. It is also helpful to make clear that misinformation can come from friends and loved ones. “We need to convince people that it’s okay to trust Aunt Betty and not her content,” she said.

No one is safe

But as deepfakes and other AI creations become more advanced by the day, even the most savvy techies may find them difficult to spot. Even if you think of yourself as particularly knowledgeable, these models may already stump you: The website This Person Does Not Exist features eerily convincing AI-generated faces that show scarcely any traces of their computer origins.

And although researchers and tech companies have built algorithms to automatically detect fake media, they don’t work with 100 percent accuracy, and rapidly evolving generative AI models may eventually outpace them. (Midjourney, for example, long struggled to create realistic hands before improving in a new version released last spring.)

To combat the rising tide of convincing fake content and its social consequences, Hickerson said regulation and corporate accountability are key. As of last week, there were more than 50 bills across 30 states aimed at curbing deepfake threats. And since the start of 2024, Congress has introduced a number of bills to address deepfakes.

“It really comes down to regulation and responsibility,” Hickerson said. “We need to have a more public conversation about this — including all races.”
