Can AI work for everyone in the future?


On June 5, 1944, a courier from Britain’s Bletchley Park code-breaking center interrupted a D-Day planning session and delivered a top secret message to General Dwight Eisenhower. After reading the slip of paper, the Supreme Allied Commander in Europe announced: “Tomorrow we go.”

The message contained a decrypted German radio transmission from Adolf Hitler telling his senior commander in France that any Allied invasion of Normandy was bound to fail. The subsequent delay in the redeployment of German troops proved crucial in allowing the Allies to secure their beachheads. The technology that enabled the decryption was the world’s first electronic programmable computer. Called Colossus, it was designed by Tommy Flowers, an extraordinary English Post Office engineer.

Nigel Toon, in his new book on artificial intelligence, How AI Thinks, suggests that this event was the first instance of computers having a decisive impact on the history of the world. But it was only a foretaste of what was to follow. In the eight decades since, computers have become exponentially more powerful and have extended their reach into almost every aspect of our lives.

A revolution in computer hardware has been followed by a similar revolution in software – most recently, the rapid development of AI. Since the San Francisco startup OpenAI launched its ChatGPT chatbot in November 2022, millions of users have experienced the near-magical powers of generative AI. At the click of a mouse, it is now possible to conjure up an intelligible Shakespearean sonnet about a goldfish, create fake images of the Pope in a puffer jacket, or translate code from one programming language into another. All three books reviewed here highlight the technology’s great promise, but also warn of the dangers of its misuse.

Toon is a card-carrying enthusiast, arguing for AI’s benefits in fields as diverse as weather forecasting, drug discovery and nuclear fusion. “Artificial intelligence is the most powerful tool we have ever created,” he writes. “Those who take the time to understand how AI thinks are set to inherit the Earth.”

As co-founder of Graphcore, the British semiconductor startup that designs chips for AI models, Toon works at the forefront of the technology, yet he admits that even he is constantly surprised by how quickly it evolves. How AI Thinks provides a brisk and accessible introduction to how AI has developed since the term was first coined in 1955, how it is used today and how it can be controlled.

Modern semiconductor devices are the most advanced products ever created by humans. Since the invention of the first integrated circuit in 1960, the number of transistors that can fit on a single chip has increased 25bn times. “If your car had improved by as much, you could now travel at 200 times the speed of light,” Toon writes.

He is also adept at explaining the accompanying software revolution that enabled AI researchers to move beyond rules-based computing and expert systems to the pattern-recognition and “learning” capabilities of the neural networks that power today’s AI models. When let loose on the vast amounts of data generated since the creation of the World Wide Web, these models can do amazing things. By 2021, our connected devices were generating about 150 times more digital information a year than they did in 1993.

Yet no matter how powerful computers have become, they still struggle to match the extraordinary processing power of the 86bn neurons in the average human brain. Humans have an uncanny ability to generalize from scraps of data and to put seemingly random information into context. Toon recalls sitting in the back of a London black cab in 2021 when the driver said to him: “Have you heard about Ronaldo? He’ll play a lot better now, don’t you think? I bet City are really upset the gaffer did it.”

Intuitively, Toon understood that the driver was referring to the world-famous footballer Cristiano Ronaldo, who had just been lured back to Manchester United by former manager Sir Alex Ferguson, upsetting the club’s rivals Manchester City.

At least for the moment, an AI system would struggle to make sense of such a remark. As Toon points out, driving itself is another great example of the flexibility of human intelligence. A learner driver takes around 20 hours of tuition to become consciously competent. By contrast, Waymo, Alphabet’s autonomous driving company, logged 2.9mn driving miles in California in 2022 and has yet to match that level of competence.

Where Toon is less sure-footed is in exploring the regulatory and policy debates that surround the use of AI. This is where Verity Harding picks up the baton in AI Needs You. A former special adviser to Nick Clegg when he was UK deputy prime minister, and a former head of policy at Google DeepMind, Harding is bilingual in politics and technology. Her aim is to examine how we have managed key technologies in the past as a guide to how we might better manage AI in the future.

The three high-profile international examples she chooses – the Cold War space race, in vitro fertilization and the spread of the Internet – all contain important lessons, shedding interesting light on how we should approach AI. Harding praises the 1967 UN Outer Space Treaty, which established space as the “province of all mankind”, as a remarkable example of international cooperation. Signed at a time when tensions between the United States and the Soviet Union were near their peak, the treaty was dubbed the “Magna Carta of space”, prohibiting the militarization of space and ensuring that no nation could claim sovereignty over a celestial body.

For Harding, the treaty holds three lessons. Political leadership matters, and courageous politicians can negotiate mutually beneficial international agreements, even in times of geopolitical tension. Rival powers can set limits on the worst excesses of war. And science can and should be used to encourage international cooperation. In that spirit, AI researchers should work on projects that benefit humanity as a whole rather than simply advancing “techno-national fence-building”.

The debates about embryo research and in vitro fertilization in the 1970s and 1980s raised very different issues. But in many ways, Harding argues, they anticipated many of the moral, ethical and technical questions now surrounding AI. The philosopher Mary Warnock, who chaired a committee to consider these dilemmas, did a remarkable job in her report, published in 1984, of laying out clear moral lines and practical paths to regulation, and encouraged the development of a vibrant life-science industry. Contrary to the familiar trope that regulation stifles innovation, Harding argues that the political, moral and legal clarity provided by the Warnock commission in fact promoted investment and economic growth.

Harding’s third example is the extraordinarily influential but little-known technocratic organization called the Internet Corporation for Assigned Names and Numbers (Icann). By maintaining the Internet’s “plumbing” and resisting interference by nation-states and powerful private companies, Icann has preserved the World Wide Web as an open and dynamic space. “It is a trust-based, consensus-based, global institution with limited but absolute power. In an age of cynicism and bitter, divisive politics, it is a marvel,” she writes.

Calling her book a “love letter” to the unglamorous, laborious work of policymaking in a democracy, Harding urges politicians and civil society to engage in debates about the use of AI and to help shape its future in a positive way. Martin Luther King Jr’s warning in his 1964 Nobel Peace Prize acceptance speech about the need for moral progress should be taped to every tech CEO’s wall: “When scientific power outruns moral power, we end up with guided missiles and misguided men.”


The authors of As If Human are also concerned with the human dimension of technology and with making sure that machines do our bidding and do not slip out of our control. Sir Nigel Shadbolt, professor of computer science at Oxford University, and the economist and former civil servant Roger Hampson explore the ethics of AI in their elegant and scholarly book. Their claim is that we should always treat machines as if they were connected to humans, and hold them to the same, if not higher, standards of accountability: “We should judge them morally as if they were human.”

The pair argue that we need better technological tools to manage our personal data, as well as new public institutions, such as data trusts and cooperatives, that can act as stewards of the common good. “It is outrageous that a potentially civilization-changing technology has been launched at the behest of large corporations, with no consultation with the public, governments or international agencies,” they write.

They conclude with seven “proverbs” suggesting how we should approach our AI future, emphasizing the need for transparency, respect and accountability. Their basic principle is that “a thing must say what it is and be what it says” and must always be accountable to humans. But one maxim in particular encapsulates the spirit of their book: “Decisions that affect many people should involve many people.”

Although these three books differ in focus, tone and emphasis, they reach similar conclusions. All of the authors emphasize the benefits that AI can bring if used wisely, but are concerned about the societal pressures that would result from rapid or careless deployment of the technology. All of them discount, if not dismiss, the existential threats that some AI researchers have flagged, seeing them for the moment as speculative rather than here-and-now concerns. Of greater concern to them is the excessive, and unprecedented, concentration of corporate power in the hands of a small group of West Coast executives.

The overwhelming message that emerges from these books, ironic as it may seem, is a renewed appreciation of the collective powers of human creativity. We rightly marvel at the wonders of AI, but even more astonishing are the capabilities of the human brain, which weighs 1.4 kilograms and uses only 25 watts of power. For good reason, it has been called the most complex object in the known universe.

As the authors acknowledge, humans are also deeply flawed and capable of great stupidity and perverse cruelty. For this reason, Silicon Valley’s techno-evangelical wing actively welcomes the rise of AI, believing that machine intelligence will soon supplant humanity and lead to a more rational and harmonious universe. But fallibility may, paradoxically, be bound up with intelligence. As the computer pioneer Alan Turing noted, “If a machine is expected to be infallible, it cannot also be intelligent.” How intelligent do we want our machines to be?

How AI Thinks: How We Built It, How It Can Help Us, and How We Can Control It by Nigel Toon, Penguin £22, 320 pages

AI Needs You: How We Can Change AI’s Future and Save Our Own by Verity Harding, Princeton University Press £20, 288 pages

As If Human: Ethics and Artificial Intelligence by Nigel Shadbolt and Roger Hampson, Yale University Press £20, 272 pages

John Thornhill is the FT’s innovation editor.

Join our online book group on Facebook at FT Books Café and subscribe to our podcast Life and Art wherever you listen.

