Facebook’s AI failure wipes out Kansas Reflector links
Even Facebook doesn’t know what went wrong. (Kansas Reflector)


TOPEKA — Facebook’s unproven artificial intelligence misclassified a Kansas Reflector article about climate change as a security threat, then blocked the domains of news sites that published the article, in a cascade of failures, according to technology experts interviewed for this story and Facebook’s public statements.

That assessment is consistent with an internal review by the Kansas Reflector’s parent organization, States Newsroom, which faulted Facebook for its AI flaws and its lack of accountability for its mistakes.

It’s unclear why Facebook’s AI identified the structure or content of the article as a threat, and experts say Facebook itself may not know which features caused the misfire.

“It appears that Facebook used overzealous and unreliable AI to incorrectly identify the Reflector article as a phishing attempt,” said Chris Fitzsimon, president and publisher of States Newsroom. “Facebook’s response to this incident has been confusing and difficult to understand. Just as troubling as the misinformation provided to our readers, that our content somehow posed a security risk, damaging our reputation and credibility, is that they still haven’t made it right with our followers on their platform.”

On April 4, Facebook refused to allow the Kansas Reflector to share an opinion column written by Dave Kendall about his climate change documentary, then removed every user post linking to the Kansas Reflector website, whether or not it referred to the column. Facebook restored the posts about seven hours later, but continued to block the column.

The next day, the Kansas Reflector attempted to share the column as it was published by News From The States, operated by States Newsroom, and The Handbasket, a newsletter run by freelance journalist Marisa Kabas. Facebook rejected those posts, then removed all links pointing to the two sites, as it had done with the Kansas Reflector a day earlier.

In removing the posts, Facebook sent users notifications that misidentified the news sites as cybersecurity threats.

Meta, the company behind Facebook, Instagram and Threads, issued a public apology for the “security error.” But a Meta spokesperson said Facebook will not notify users to correct the false information.

The Kansas Reflector continues to hear from readers who are confused about the situation. Facebook’s actions also disrupted the Kansas Reflector’s newsgathering operations during the final days of the legislative session and had a chilling effect on other news media.

Daniel Kahn Gillmor, a senior staff technologist at the American Civil Liberties Union, said Facebook’s actions show the danger of society relying too heavily on a single communications platform to determine what is worth discussing.

“It’s just not their core competency,” Gillmor said. “On some level, you can look at Facebook as someone who’s gotten out over their skis. Facebook was originally a hookup app for college students, and all of a sudden we’re asking it to help sort fact from fiction.”

Social media posts reflect Facebook’s moves to block news sites that published Dave Kendall’s column. (Illustration by Sherman Smith/Kansas Reflector)

‘Welcome to AI’

Meta’s head of Instagram, Adam Mosseri, attributed the glitch to machine learning classifiers, a type of AI trained to recognize features associated with phishing schemes, which try to trick people into revealing personal information.

Classifiers review millions of pieces of content every day, Mosseri said in a Threads post, and sometimes they get it wrong. Mosseri did not respond to a Threads post seeking more details.

Jason Rogers, CEO of Invary, a cybersecurity company that uses NSA-licensed technology and has ties to the University of Kansas Innovation Park, reviewed the Kendall column as it appeared in the Kansas Reflector, News From The States and The Handbasket.

Rogers said Facebook’s filters could be sensitive to things like the large number of hyperlinks included in the column, or the resolution of the images that appeared on the page. Still, he said, it’s “strange that it would be flagged as a ‘cyber’ threat by AI.”

“Welcome to AI, and why it’s not as ‘ready’ as some people make it out to be,” said Rogers.

He said it’s possible that the Kansas Reflector’s efforts to circumvent Facebook’s filter, by prompting people to read Kendall’s column at KansasReflector.com and then trying to share the same column from other sites, tipped off the AI that this resembled the behavior of a phishing scam, causing it to block the domains of all three sites.
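The failure mode Rogers describes can be illustrated with a toy sketch. This is purely hypothetical: the feature names, weights, and threshold below are invented for illustration and are not Meta's actual system, which has never been disclosed.

```python
# Toy illustration of a feature-threshold phishing "classifier" misfiring.
# All signals and weights here are hypothetical, chosen only to show how a
# legitimate article can superficially resemble a scam.

def phishing_score(features):
    """Score a post from crude signals; higher means more suspicious."""
    score = 0.0
    # Phishing pages often stuff in many outbound links.
    score += 0.1 * features.get("hyperlink_count", 0)
    # The same text suddenly shared from several different domains can
    # look like a coordinated scam campaign.
    score += 0.5 * features.get("domains_sharing_same_text", 1)
    return score

def classify(features, threshold=3.0):
    return "flagged" if phishing_score(features) >= threshold else "ok"

# A heavily cited column republished by three outlets trips the same
# thresholds a scam would: a false positive on benign content.
legit_column = {"hyperlink_count": 20, "domains_sharing_same_text": 3}
print(classify(legit_column))  # prints "flagged"

ordinary_post = {"hyperlink_count": 2, "domains_sharing_same_text": 1}
print(classify(ordinary_post))  # prints "ok"
```

Because such a system scores surface features rather than reading the article, the "explanation" for a block is just a number crossing a threshold, which is consistent with the experts' point that the company may be unable to say which feature raised the alarm.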

Sagar Samtani, director of the Data Science and Artificial Intelligence Lab at Indiana University’s Kelley School of Business, said it’s common for this type of technology to produce false positives and false negatives.

Facebook is going through a “learning process,” he said, trying to evaluate how people around the world view different types of content and to protect the platform from bad actors.

“Facebook is just trying to figure out what’s going to be appropriate content,” Samtani said. “So in the process, there’s always an ‘oops,’ like, ‘we shouldn’t have done that.’ ”

And, he said, Facebook may not be able to say why its technology misclassified the Kansas Reflector as a threat.

“Sometimes it’s actually very difficult for them to say something like that because sometimes the models don’t necessarily say what the features are that might raise an alarm,” Samtani said. “It may be something that is not within their technical ability to do.”

On April 4, the Kansas Reflector noticed that all of its Facebook posts had been deleted. The platform also prevented users from sharing links to the site. The disruption expanded to include two other sites on Friday. (Sherman Smith/Kansas Reflector)

‘Where is the accountability?’

Kendall’s column was critical of Facebook because the platform refused to allow him to buy ads to promote his climate change film. Facebook told him the topic was too controversial.

In a pair of phone calls on April 5, Meta spokesman Andy Stone insisted that Facebook’s actions against the three news sites that published the Kendall column had nothing to do with the column’s content.

ACLU technologist Gillmor questioned this explanation.

“They’re acting as a filter for their readers and trying to keep readers away from what they perceive to be harmful, whatever that means,” Gillmor said. “I would be deeply shocked if there was nothing that a normal human would consider ‘content’ that could trigger these detectors.”

It would actually be difficult, he said, to program an AI to ignore the words of an article.

“They know how people react to the media they read,” Gillmor said. “They know the amount of time people spend on an article. They know a lot of information. I don’t know how or why they would leave that out of their rankings.”

He also said that the AI systems may not be able to provide an explanation that “any normal human would understand” for why they rejected Kendall’s column and blocked the domains of the news sites that published it.

Stone, a Meta spokesman, declined to answer questions for this story, including: How does Facebook think it should be held accountable for its mistake? Does Facebook actually know what caused the error? What changes were made to prevent the error from happening again? Is Facebook’s Oversight Board assessing the situation?

Gillmor’s work with the ACLU focuses on the ways technology can affect civil liberties such as freedom of speech, freedom of association, and privacy.

“This is a great example of one of the big problems of relying too heavily on a single ecosystem to distribute information,” Gillmor said. “And the explanation you’re getting from them is, ‘Well, we screwed up.’ Well, you screwed up, but there are consequences for everybody.”

“Where is the accountability here?” he added. “Is Facebook going to hold AI systems accountable?”

