AI expert says Princess Kate photo scandal shows our “shared sense of reality” is eroding.

London — The European Parliament passed the world's first comprehensive law regulating the use of artificial intelligence on Wednesday, as controversy swirled over an edited image of Catherine, Princess of Wales, which experts say illustrates how awareness of new AI technologies is affecting society.

"The reaction to this image, had it been released before the big AI boom we've seen in the last few years, would probably be: 'That's a really bad editing or Photoshop job,'" Henry Ajder, an AI and deepfakes specialist, told CBS News. "But because of the talk about Kate Middleton's absence from the public eye and the kind of conspiracy thinking it encourages, when that comes together with this new, broader awareness of AI-generated images… the conversation is very, very different."

Princess Kate, as she is often known, admitted to "editing" an image of herself and her three children that was posted on her official social media accounts on Sunday. Neither she nor Kensington Palace provided any details on what had been changed in the photo, but a royal watcher told CBS News it could have been a composite made from multiple images.

Ajder said AI technology, and the rapid increase in public awareness of it, mean that people's "sense of shared reality, I think, is eroding as much or faster than ever before."

Countering that, he said, will require work from companies and individuals.

What’s in the EU’s new AI Act?

The EU’s new AI Act adopts a risk-based approach to technology. For low-risk AI systems such as spam filters, companies may choose to follow a voluntary code of conduct.

For technologies deemed high-risk, where AI is involved in power networks or medical devices, for example, the new law will have stricter requirements. Some uses of AI, such as police scanning people’s faces using AI technology in public places, will be banned outright except in exceptional circumstances.

The EU says the law, which is expected to enter into force by early summer, will “guarantee the protection and fundamental rights of people and businesses when it comes to AI.”

Losing “our trust in content”?

Millions of people view dozens of photos every day on their smartphones and other devices. Especially on smaller screens, it can be very difficult to spot the inconsistencies that may indicate tampering or AI use, if they can be detected at all.

"It shows our vulnerability in terms of content and how we create our own realities," Ramak Molavi Vaseghi, a digital rights advocate and senior researcher at the Mozilla Foundation, told CBS News. "If we can't trust what we see, that's really bad. We already have a lack of trust in institutions, we have a lack of trust in the media, we have a lack of trust even in big tech… and in politicians. So this is really bad for democracies and can be destabilizing."

Molavi Vaseghi co-authored a recent report that looked at the effectiveness of different methods of marking and identifying whether a piece of content has been generated using AI. She said there are many possible approaches, including educating users and technologists and watermarking and labeling images, but none of them is perfect.

"I'm afraid that the pace at which development is happening is too fast. We can't understand and really govern and control the technology, which, in a way, doesn't create the problem in the first place; the speed does, escalating and dividing the problem," Molavi Vaseghi told CBS News.

"I think we have to rethink the whole information ecosystem that we have," she said. "Societies are built on trust at a private level, at a democratic level. We need to rebuild our trust in content."

How do I know if what I’m seeing is real?

Beyond the broader goal of working toward ways to create transparency around AI in our technologies and information ecosystems, it's hard for individuals to tell whether AI has been used to alter or create any given piece of media, Ajder said.

That makes it critical, he said, for media consumers to identify sources with clear quality standards.

"In a landscape where legacy media is increasingly distrusted and dismissed, this is a time when traditional media is actually your friend, or at least is more likely to be your friend than ever before, compared with getting your news from random people tweeting things or, you know, TikTok videos where a guy in his bedroom gives you an analysis of why a video is fake," Ajder said. "That is where trained, rigorous investigative journalism will be better resourced and will generally be more reliable."

Building a "lie detector" for deepfakes

He said that tips for identifying AI-generated imagery, such as watching how often someone blinks in a video, can quickly become outdated because the technology is advancing at lightning speed.

His advice: “Try to recognize the limits of your knowledge and your ability. I think some humility about information is necessary at this point.”
