Generative AI is making an old problem much, much worse.

Earlier this year, sexually explicit images of Taylor Swift were shared repeatedly across social media. The images were almost certainly created with generative-AI tools, demonstrating how easily the technology can be put to nefarious ends. The case echoes many other seemingly similar examples, including fake photos purporting to show the arrest of former president Donald Trump, AI-generated images of Black voters supporting Trump, and fabricated pictures of Dr. Anthony Fauci.

Media coverage tends to focus on the source of that imagery, because generative AI is a new technology that many people are still trying to wrap their heads around. But that focus obscures why the images matter: they spread across social media networks.

Facebook, Instagram, TikTok, X, YouTube, and Google Search determine how billions of people experience the internet every day. That fact has not changed in the generative-AI era. Indeed, the role of these platforms as gatekeepers is becoming more apparent as it gets easier for more and more people to produce text, videos, and images on command. For synthetic media to reach millions of views, as the Swift images did in just hours, it needs large, aggregated networks that allow it to find an initial audience and then scale beyond it. As the amount of available content grows with the widespread use of generative AI, social media's role as curator will become even more important.

Online platforms are marketplaces for human attention. A user can be exposed to only as many posts as they have time to view. On Instagram, for example, Meta's algorithms choose from countless pieces of content for each post that actually appears in a user's feed. With the rise of generative AI, there may be an order of magnitude more candidates for platforms to choose from, meaning that each individual video or image creator must compete ever more aggressively for the audience's time and attention. After all, consumers won't have more hours in the day to spend even as the volume of content available to them grows exponentially.

So what is likely to happen as generative AI becomes more widespread? Without major changes, we should expect more cases like the Swift images. But we should also expect more of everything. The shift is already under way, as a flood of synthetic media trips up search engines like Google. AI tools may lower barriers for content creators by making production faster and cheaper, but the reality is that most people will struggle even more to get noticed on online platforms. News organizations, for example, won't have more news to report even if they adopt AI tools that speed up delivery and reduce costs; as a result, their content will take up proportionally less space. Already, a small subset of content captures most of the attention: on TikTok and YouTube, for instance, the bulk of views are concentrated on a small percentage of uploaded videos. Generative AI can only widen the gulf.

To address these issues, platforms could explicitly change their systems to favor human creators. That is easier said than done, and tech companies are already under fire for their role in deciding who gets attention and who doesn't. The Supreme Court recently heard a case that will determine whether sweeping state laws in Florida and Texas can effectively require platforms to treat all content equally, even when that means actively promoting inaccurate, low-quality, or otherwise objectionable political content that most users don't want. Central to these disputes is the concept of "free reach": a supposed right to have your speech promoted by platforms like YouTube and Facebook, even though there is no such thing as a "neutral" algorithm. Even chronological feeds, which some advocate, prioritize recent content over what users might prefer or what matters most. Curation, via feeds, "up next" default recommendations, and search results, is what makes platforms useful.

Platforms' past responses to similar challenges are not encouraging. Last year, Elon Musk replaced X's verification system with one that lets anyone buy a blue "verified" badge for greater exposure, abandoning the checkmark's original core function of preventing the impersonation of high-profile users. The immediate result was predictable: opportunistic abuse by influence peddlers and scammers, and a worse feed for consumers. My own research suggested that Facebook failed to limit activity among abusive "superusers" who figured heavily in algorithmic promotion. (The company disputed part of that finding.) TikTok places far more emphasis on the viral engagement of individual videos than on account history, making it easier for new accounts with little credibility to gain significant attention.

So what to do? There are three possibilities.

First, platforms can reduce their heavy focus on engagement (the amount of time and activity users spend each day or month). Whether through regulation or different choices by product leaders, such a shift would directly reduce the incentive to spam and to upload low-quality, AI-generated content. Perhaps the simplest way to achieve this is to give more weight to direct user assessments of content in ranking algorithms, as in the sketch below. Another is to elevate externally validated creators, such as news sites, and demote the accounts of abusive users. Other design changes would also help, such as curbing spam by imposing stronger rate limits on new accounts.
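As a rough illustration of what such a re-weighting might look like, here is a minimal sketch in Python. Every detail, from the signal names and weights to the rank_feed helper, is a hypothetical assumption for illustration; real platform ranking systems are vastly more complex.

```python
from dataclasses import dataclass

@dataclass
class Post:
    engagement: float            # hypothetical normalized clicks/watch-time signal, 0..1
    user_feedback: float         # hypothetical normalized "was this worth your time?" survey score, 0..1
    creator_validated: bool      # e.g., an externally validated news organization
    creator_account_age_days: int
    creator_posts_today: int

def score(post: Post) -> float:
    """Toy ranking score that weights direct user feedback above raw engagement."""
    s = 0.3 * post.engagement + 0.7 * post.user_feedback
    if post.creator_validated:
        s *= 1.2   # modest boost for externally validated creators
    # crude rate limit: brand-new accounts posting heavily get demoted
    if post.creator_account_age_days < 30 and post.creator_posts_today > 10:
        s *= 0.5
    return s

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order candidate posts by the toy score, highest first."""
    return sorted(posts, key=score, reverse=True)
```

The design point of the sketch is simply that the relative weights are a product choice: shifting weight from engagement toward explicit user feedback changes which content wins the competition for attention.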

Second, we should use public-health tools to regularly assess how digital platforms affect at-risk populations, such as adolescents, and insist on changes when the harms are too great. This process would require more transparency around the product-design experiments that Facebook, TikTok, YouTube, and others are already running, which would give us insight into how platforms make trade-offs between growth and other goals. Once we have that transparency, experiments could be required to include metrics such as mental-health assessments. Proposed legislation such as the Platform Accountability and Transparency Act, which would allow qualified researchers and academics to access more platform data in partnership with the National Science Foundation and the Federal Trade Commission, offers an important starting point.

Third, we can consider direct product integration between social media platforms and large language models, but we must do so with our eyes open to the risks. One approach that has gained attention is labeling: the idea that distribution platforms should publicly mark any post created with an LLM. Just last month, Meta indicated it was moving in that direction, with automatic labels for posts it suspects were created with generative-AI tools, as well as incentives for posters to self-disclose whether they used AI to create content. But this is a losing proposition over time. The better LLMs get, the harder it will be for anyone, including platform gatekeepers, to distinguish the real from the synthetic. Indeed, what we consider "real" will change, just as tools ranging from airbrushing to Photoshop have become loosely accepted over time. It is possible that future walled-garden versions of distribution platforms like YouTube and Instagram will require verified provenance for content, including labels, before it can be widely surfaced. It seems certain that some form of this approach will emerge on at least some platforms, catering to users who want a more curated experience. At scale, however, what would that mean? It would mean still more emphasis on the decisions of distribution platforms, and even greater reliance on their gatekeeping.
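To make the walled-garden idea concrete, here is a minimal sketch of how a platform might gate distribution on provenance labels. The fields and the policy are assumptions for the sake of illustration, not a description of any real platform's rules or of provenance standards.

```python
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    VERIFIED_HUMAN = "verified_human"        # provenance attested by a trusted capture device or signer
    SELF_DISCLOSED_AI = "self_disclosed_ai"  # poster declared generative-AI use
    SUSPECTED_AI = "suspected_ai"            # platform classifier flagged likely AI generation
    UNKNOWN = "unknown"

@dataclass
class Post:
    post_id: str
    provenance: Provenance

def distribution_policy(post: Post) -> str:
    """Toy gatekeeping rule: label or limit reach based on provenance."""
    if post.provenance is Provenance.VERIFIED_HUMAN:
        return "distribute"
    if post.provenance in (Provenance.SELF_DISCLOSED_AI, Provenance.SUSPECTED_AI):
        return "distribute_with_label"
    return "limit_reach"  # unknown provenance gets reduced distribution
```

Even in this toy form, the trade-off is visible: every branch of the policy is a judgment made by the platform, which is exactly the gatekeeping power the article describes.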

All of these approaches come back to a fundamental reality we have experienced over the past decade: in a world of nearly unlimited production, one might hope for more power in the hands of consumers. But the impossible scale means users instead face choice paralysis, which puts the real power in the hands of platform defaults.

While there will undoubtedly be specific incidents that demand immediate attention, whether from state-backed networks of coordinated inauthentic users, from profit-driven content producers, or from prominent political candidates, we should not let them distract us from these bigger dynamics. They are the ones we cannot afford to ignore.
