CAMBRIDGE, Mass. (AP) — Post a comment on Reddit, answer coding questions on Stack Overflow, edit a Wikipedia entry or share a baby photo on your public Facebook or Instagram feed, and you could be helping train the next generation of artificial intelligence.
Not everyone is fine with that — especially as the same online forums they've spent years contributing to are increasingly filled with AI-generated commentary that mimics what real humans might say.
Some longtime users have tried to delete their past contributions or rewrite them in vulgar terms, but the protests haven't had much effect. A handful of governments have also tried to step in, including Brazil's privacy regulator on Tuesday.
“A significant portion of the population just feels kind of helpless,” said Reddit volunteer moderator Sarah Gilbert, who also studies online communities at Cornell University. “There's nowhere to go but to go completely offline or not contribute in ways that have value for them and value for others.”
Platforms are responding — with mixed results. Take Stack Overflow, a popular hub for computer programming tips. It initially banned answers generated by ChatGPT because of their frequent errors, but it is now partnering with AI chatbot developers and has punished some of its own users who tried to erase their past contributions in protest.
It's one of several social media platforms grappling with user discontent — and occasional revolts — as they try to adapt to the changes brought about by generative AI.
Andy Rotering, a software developer in Bloomington, Minnesota, has used Stack Overflow daily for 15 years and says he fears the company “could be inadvertently harming its greatest resource” – the community of contributors who have given their time to help other programmers.
Encouraging contributors to keep providing answers should be paramount, he said.
Stack Overflow CEO Prashanth Chandrasekar said the company is trying to balance the growing demand for instant chatbot-generated coding assistance with the desire for a community “knowledge base” where people still want to post and “get recognition” for their contributions.
“Fast forward five years – there will be all kinds of machine-generated content on the web,” he said in an interview. “There will be very few places where there is truly authentic, original human thought. And we are one of those places.”
Chandrasekar describes Stack Overflow's challenge as something he learned about at Harvard Business School: how a business survives — or doesn't — after a disruptive technological change.
For more than a decade, users typically landed on Stack Overflow after typing a coding question into Google, then looked up an answer and copied and pasted it. The responses they were most likely to see came from volunteers who had earned points measuring their reputation — which in some cases could help land them a job.
Now programmers can easily ask an AI chatbot — some of which are already trained on everything posted on Stack Overflow — and it can respond instantly.
The debut of ChatGPT in late 2022 threatened to put Stack Overflow out of business. So Chandrasekar assembled a team of 40 people within the company to race to launch its own AI chatbot, called OverflowAI. The company then signed deals with Google and with OpenAI, maker of ChatGPT, allowing the AI developers to tap into Stack Overflow's Q&A archive to further refine their large language models.
Maria Roche, an assistant professor at Harvard Business School, said such a strategy makes sense, but it may be too late. “I'm surprised Stack Overflow wasn't working on this earlier,” she said.
When some Stack Overflow users tried to delete their past contributions after the OpenAI partnership was announced, the company responded by suspending their accounts, citing terms of service under which all contributions are licensed to Stack Overflow on a “permanent and irrevocable” basis.
“We quickly addressed it and said, 'Look, this is not acceptable behavior,'” Chandrasekar said, describing the protesters as a small minority in the “low hundreds” of the platform's 100 million users.
Brazil's National Data Protection Authority took action on Tuesday to ban social media giant Meta Platforms from training its AI models on Brazilians' Facebook and Instagram posts. It established a daily fine of 50,000 reais ($8,820) for non-compliance.
In a statement, Meta called the decision “a step backwards for innovation” and said it has been more transparent than many industry peers training similar AI on public content, and that its practices comply with Brazilian law.
Meta has also faced resistance in Europe, where it recently paused its plans to start feeding people's public posts into AI training systems – a rollout that had been due to begin last week. In the U.S., where there is no national law protecting online privacy, such training is already taking place.
“The vast majority of people have no idea their data is being used,” Gilbert said.
Reddit has taken a different approach – partnering with AI developers such as OpenAI and Google, while also making clear that commercial entities without the platform's approval cannot take its content in bulk “without regard to user rights or privacy.” The deals helped bring in the money Reddit needed to debut on Wall Street in March, when investors valued the company near $9 billion the moment it began trading on the New York Stock Exchange.
Reddit has not tried to punish protesting users — nor could it easily do so, since volunteer moderators have a big say in what happens in their specialized forums, known as subreddits. But what worries Gilbert, who helps moderate the “AskHistorians” subreddit, is the growing flow of AI-generated commentary that moderators must decide whether to allow or ban.
“People come to Reddit because they want to talk to people, they don't want to talk to bots,” Gilbert said. “There are apps where they can talk to bots if they want to. But historically Reddit has been about connecting with humans.”
She said it was ironic that the AI-generated content threatening Reddit is derived from the comments of millions of human Redditors, and that “there is a real risk that it could eventually push people out.”