Scarlett Johansson's AI row echoes the bad old days of Silicon Valley.



By Zoe Kleinman, Technology Editor

“Move fast and break things” is a slogan that continues to haunt the tech sector, nearly 20 years after it was coined by a young Mark Zuckerberg.

Those five words came to symbolize Silicon Valley at its worst: a combination of ruthless ambition and breathtaking arrogance, prioritizing profit-driven innovation without fear of consequence.

I was reminded of that phrase this week when actor Scarlett Johansson clashed with OpenAI. Ms Johansson claimed that both she and her agent had declined to be the voice of OpenAI's new product for ChatGPT – and that when it was unveiled, it sounded like her anyway. OpenAI denies that the resemblance was intentional.

This is a classic example of exactly what the creative industries are so worried about – being imitated and eventually replaced by artificial intelligence.

Last week, the world's largest music publisher, Sony Music, wrote to Google, Microsoft and OpenAI demanding to know whether any of its artists' songs had been used to develop AI systems, and saying that they had no permission to do so.

All this has echoes of the macho Silicon Valley giants of old – asking for forgiveness rather than permission as an unofficial business plan.

But the tech firms of 2024 are eager to distance themselves from that reputation.

OpenAI was not created from that template. It was originally set up as a non-profit organization that would reinvest any extra profits back into the business.

In 2019, when it formed a for-profit arm, OpenAI said the for-profit side would be led by the non-profit side, and that there would be a cap on the returns investors could earn.

Not everyone was happy with the change – it was said to be a major reason for original co-founder Elon Musk's decision to leave.

When OpenAI CEO Sam Altman was abruptly fired by his own board late last year, one theory was that he wanted to move further away from that original mission. We will never know for sure.

But whether or not OpenAI has become more profit-driven, it still has to face up to its responsibilities.

In the policymaking world, almost everyone agrees that clear boundaries are needed to keep companies like OpenAI in line before disasters strike.

So far, the AI giants have largely played ball on paper. Six months ago, at the world's first AI Safety Summit, a group of tech giants signed a voluntary pledge to build responsible, safe products that would maximize the benefits of AI technology and minimize its risks.

The dangers, as originally identified by the event's organizers, were the stuff of nightmares. When I asked at the time about the more down-to-earth risks of AI tools discriminating against people, or taking their jobs, I was told quite firmly that the gathering was dedicated to discussing only the most extreme scenarios – Terminator, doomsday, AI-goes-rogue-and-destroys-humanity territory.

Six months later, when the summit reconvened, the word “safety” had been removed from the conference title entirely.

Last week, a draft UK government report compiled by a group of 30 independent experts concluded that there is “no evidence yet” that AI can create biological weapons or launch sophisticated cyber attacks. It said the possibility of humans losing control over AI was “highly contentious”.

Some in the field have long argued that the more immediate threat posed by AI tools is that they will replace jobs or fail to recognize skin tones. These are “real problems”, says AI ethicist Dr Rumman Chowdhury.

The AI Safety Institute declined to say whether it had safety tested any of the new AI products launched in recent days – specifically OpenAI's GPT-4o and Google's Project Astra, both of which are among the most powerful and advanced publicly available generative AI systems I have seen so far. Meanwhile, Microsoft has unveiled a new laptop range with AI hardware built in – the start of AI tools being physically embedded in our devices.

The independent report also states that there is currently no reliable way to understand why AI tools produce the output they do – even among their developers – and that the established practice of red-teaming, a security testing exercise in which evaluators deliberately try to get an AI tool to misbehave, has no agreed best-practice guidelines.

At a follow-up summit co-hosted by the UK and South Korea in Seoul this week, firms pledged to withdraw a product if it does not meet certain safety thresholds – but those thresholds will not be set until the next gathering, in 2025.

Some fear that all these pledges and promises do not go far enough.

“Voluntary agreements are essentially just a means for firms to mark their own homework,” says Andrew Strait, associate director of the Ada Lovelace Institute, an independent research organization. “They are no substitute for the legally binding and enforceable rules that are needed to incentivize the responsible development of these technologies.”

OpenAI has just published a 10-point safety process that it says it is committed to – but one of its senior safety-focused engineers recently resigned, writing on X that his department had been “sailing against the wind” internally.

“Over the past years, safety culture and processes have taken a back seat to shiny products,” posted Jan Leike.

Of course, OpenAI has other teams that continue to focus on safety and security.

Currently, though, there is no official, independent oversight of what any of them are actually doing.

“We have no guarantee that these companies keep their promises,” says Professor Dame Wendy Hall, one of Britain's leading computer scientists.

“How do we hold them accountable for what they say, like we do with drug companies or other sectors where there's a lot of risk?”

We may also find that these powerful tech leaders become rather less amenable once push comes to shove and the voluntary agreements become a little more enforceable.

When the UK government said it wanted the power to block the rollout of security features from major tech companies if they could compromise national security, Apple threatened to withdraw services from the UK, describing the move as an “unprecedented overreach” by lawmakers.

The legislation passed and, so far, Apple is still here.

The EU's AI Act has just been signed into law and is both the first and strictest piece of legislation out there, with stiff penalties for firms that fail to comply. But it creates more legwork for AI users than for the AI giants themselves, says Gartner VP analyst Nader Henein.

“I would say the majority [of AI developers] overestimate the impact the Act will have on them,” he says.

He points out that any company using AI tools will have to categorize them and score their risk – and the AI firms that supplied those tools will have to provide enough information for them to be able to do so.

But that doesn't mean they're off the hook.

“We need to move towards legal regulation over time but we cannot rush it,” says Professor Hall. “Setting up global governance rules that everyone signs up to is really hard.”

“We also have to make sure it's truly worldwide and not just the Western world and China that we're protecting.”

Attendees at the AI Seoul Summit say they found it useful. It was “less flashy” than Bletchley but with more debate, one participant said. Interestingly, the event's closing statement was signed by 27 countries but not by China, although its representatives were there in person.

The dominant problem, as always, is that regulation and policy move much more slowly than innovation.

Professor Hall believes the “stars are aligning” at government level. The question is whether the tech giants can be persuaded to wait for them.

