Elon Musk vs OpenAI: Tech Companies Are Fueling Existential Fears to Avoid Scrutiny

By Kenan Malik


In 1914, on the eve of the First World War, HG Wells published The World Set Free, a novel imagining, 30 years before the Manhattan Project, the creation of nuclear weapons that allow "one man [to] carry in a handbag enough latent energy to destroy half a city". A world war breaks out, resulting in a nuclear holocaust. Establishing peace requires the establishment of a world government.

What concerned Wells were not just the dangers of a new technology but also the dangers of democracy. Wells' world government was not created by democratic will but imposed as a benign dictatorship. "Sovereigns will quietly consent," remarks King Egbert of England. For Wells, the "common man" was "a violent idiot in social and public affairs". Only an educated, scientifically minded class could "save democracy from itself".

A century later, another technology provokes similar awe and fear: artificial intelligence. From the boardrooms of Silicon Valley to the backrooms of Davos, political leaders, tech moguls and academics are excited about the immense benefits AI will bring, yet fear it could also herald the demise of humanity as superintelligent machines come to rule the world. And, as a century ago, at the heart of the debate are questions of democracy and social control.

In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI, the tech company that burst into public consciousness in 2022 with the release of ChatGPT, a seemingly human-like chatbot. A galaxy of Silicon Valley heavyweights, fearful of the potential consequences of AI, had created the company as a non-profit charitable trust aimed at developing the technology ethically, to benefit "humanity as a whole".

Levy questioned Musk and Altman about the future of AI. "There are two schools of thought here," Musk mused. "Do you want many AIs, or a small number of AIs? We think probably many is good."

"If I'm Dr Evil and I use it, won't you be empowering me?" Levy asked. Dr Evil is more likely to be empowered, Altman replied, if only a few people control the technology: "Then we're in a really bad place."

In reality, that "bad place" is being created by the tech companies themselves. Musk, who resigned from OpenAI's board six years ago to develop his own AI projects, is now suing his former company for breach of contract, alleging that it has put the pursuit of profit above the public good and failed to develop AI "for the benefit of humanity".

In 2019, OpenAI created a for-profit subsidiary to raise money from investors, notably Microsoft. When it released ChatGPT in 2022, the inner workings of the model were kept under wraps. Another OpenAI founder and the company's chief scientist at the time, Ilya Sutskever, defended the secrecy, claiming it prevented those with malicious intent from using the model to do a great deal of damage. Fear of the technology had become a cover for shielding it from scrutiny.

In response to Musk's lawsuit, OpenAI last week published a series of emails between Musk and other board members. These make clear that, from the beginning, all board members agreed that "OpenAI" could not be truly open.

As AI develops, Sutskever wrote to Musk, "it will make sense to start being less open. The Open in OpenAI means that everyone should benefit from the fruits of AI after its [sic] built, but it's totally OK to not share the science". "Yup," Musk replied. Whatever his lawsuit may claim, Musk is no more committed to openness than other tech moguls. The legal challenge to OpenAI is less an attempt at accountability than a power struggle within Silicon Valley.

Wells wrote The World Set Free at a time of great political crisis, when many questioned the wisdom of enfranchising the working class.

"Was it desirable, was it even safe," the Fabian Beatrice Webb wondered, to entrust the masses with the ballot box, "creating and controlling the government of Great Britain with its immense wealth and its far-flung dominions"? At the heart of Wells' novel lay the same question: to whom can the future be entrusted?

A century later, we are once again debating the merits of democracy. For some, the political turmoil of recent years is the result of too much democracy, of allowing the irrational and the uneducated to make important decisions. "It is unfair to thrust on to unqualified simpletons the responsibility to take historic decisions of great complexity and sophistication," as Richard Dawkins put it after the Brexit referendum, a sentiment with which Wells would have agreed.

For others, it is precisely this disdain for ordinary people that has helped create a democratic deficit, in which large sections of the population feel deprived of a say in how society is run.

It is a disaffection that speaks to technology, too. As in The World Set Free, the AI debate centres on questions not just of technology but of openness and control. Despite the alarm, we are far from "superintelligent" machines. Today's AI models, such as ChatGPT, or Claude 3, released last week by another AI company, Anthropic, are so good at predicting what the next word in a sequence should be that they can fool us into believing they think like humans. Yet they are not intelligent in any human sense, have negligible understanding of the real world, and are not about to extinguish humanity.

The problems that AI creates are social, not existential. From algorithmic bias to mass surveillance, from disinformation and censorship to copyright theft, our concern should not be that machines might one day exercise power over humans, but that they already act in ways that reinforce inequality and injustice, providing tools by which those in power can entrench their authority.

This is why what we might call the "Egbert manoeuvre" – the insistence that some technologies are so dangerous they must be controlled by a select few, beyond democratic pressure – is so pernicious. The problem is not just Dr Evil, but those who use the fear of Dr Evil to shield themselves from scrutiny.

Kenan Malik is an Observer columnist.
