Among the AI Doomsayers | The New Yorker

Katja Grace’s apartment, in West Berkeley, is in a converted machine factory, with roofs and windows at odd angles. It has terra-cotta floors and no central heating, which can make it seem as if you’ve stepped out of the California sun and into a shadowy place, somewhere long ago or far away. Yet there are also some quietly futuristic touches. High-capacity air purifiers humming in the corners. Nonperishables stocked in the pantry. A sleek white machine that performs lab-grade RNA tests. The kinds of objects that could portend a future of technology-driven ease, or one of constant vigilance.

Grace, a lead researcher at a nonprofit called AI Impacts, describes her work as “thinking about whether AI will destroy the world.” She spends her time writing theoretical papers and blog posts on complex questions in a subfield known as AI safety. She has a nervous smile; she’s an oversharer, a bit of a grumbler. She’s in her thirties, but she could pass for a teenager, with a middle part and a round, open face. The apartment is full of books, and when a friend of Grace’s came over, one November afternoon, he stared for a while, with worried indecision, at some of the spines: “Jewish Divorce Ethics,” “The Jewish Way in Death and Mourning,” “The Death of Death.” Grace, as far as she knows, is neither Jewish nor about to die. She let the ambiguity linger for a moment. Then she explained: her landlord had wanted the previous occupant’s belongings, those of his recently deceased ex-wife, to stay put. “Kind of a relief, honestly,” Grace said. “A set of decisions I don’t have to make.”

She was spending the afternoon preparing a dinner for six: a yogurt-and-cucumber salad, Impossible-beef gyros. On one corner of a whiteboard, she had painstakingly broken her pre-party tasks into small steps (“Chop salad,” “Mix salad,” “Mold meat,” “Cook meat”); on other parts of the whiteboard, she had written more gnomic prompts (“food area,” “objects,” “substances”). Her friend, an Android cryptographer named Paul Crowley, wore a black T-shirt and black jeans and had dyed his hair black. I asked how they knew each other, and he replied, “Oh, we’ve crossed paths over the years, as part of the scene.”

The “scene,” it was understood, refers to a few interconnected subcultures known for their exhaustive debates about recondite issues (DNA-synthesis screening, shrimp welfare) that members consider urgently important but that most normal people know nothing about. For two decades or more, one of those issues has been whether artificial intelligence will elevate or destroy humanity. Pessimists are called AI safetyists, or decelerationists — or, when they’re feeling especially panicked, AI doomers. They find each other online and often live together in group houses in the Bay Area, sometimes co-parenting and home-schooling their children. Before the dot-com boom, the neighborhoods of Alamo Square and Hayes Valley, with their rows of pastel Victorians, were associated with staid domesticity. Last year, referring to the AI “hacker houses” there, the San Francisco Standard semi-ironically called the area Cerebral Valley.

A camp of techno-optimists dismisses AI doomerism with old-fashioned libertarian boosterism, insisting that all the hand-wringing about existential risk is a kind of mass hysteria. They call themselves “effective accelerationists,” or e/accs (pronounced “e-acks”), and they believe AI will usher in a utopian future — interstellar travel, the end of disease — as long as the worriers get out of the way. On social media, they troll doomsayers as “decels,” “psyops,” “basically terrorists,” or, worst of all, “regulation-loving bureaucrats.” “We must steal the fire of intelligence from the gods [and] use it to propel humanity to the stars,” a well-known e/acc recently tweeted. (And then there are the normies, anywhere other than the Bay Area or the Internet, who have mostly tuned the debate out, writing it off as sci-fi froth or corporate hot air.)

Grace’s dinner parties, semi-underground meetups for doomers and the doomer-curious, have been described as “the nexus of the Bay Area AI scene.” At such gatherings, it’s not unusual to hear someone open a conversation by asking, “What are your timelines?” or “What’s your p(doom)?” Timelines are predictions of how quickly AI will pass certain benchmarks, such as writing a Top Forty pop song, making a Nobel-worthy scientific breakthrough, or achieving artificial general intelligence, the point at which a machine can perform any cognitive task that a person can. (Some experts believe AGI is impossible, or decades away; others expect it to arrive this year.) P(doom) is the probability that, if AI does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet. For years, even in Bay Area circles, such speculative conversations were marginalized. Last year, when OpenAI released ChatGPT, a language model that can sound remarkably natural, they suddenly burst into the mainstream. Now there are a few hundred people working full time to save the world from AI destruction. Some advise governments or corporations on their policies; some work on the technical aspects of AI safety, approaching it as a set of complex math problems. Grace works at a think tank that researches “high-level questions,” such as “What role will AI systems play in society?” and “Will they pursue ‘goals’?” When they’re not lobbying in D.C. or meeting at an international conference, they often cross paths in places like Grace’s living room.

The rest of her guests arrived one by one: an authority on quantum computing; a former OpenAI researcher; the head of an institute that studies forecasting. Grace offered wine and beer, but most people opted for non-alcoholic canned drinks that defied easy description (a fermented energy drink, a “hopped tea”). They took their Impossible gyros to Grace’s couch, where they talked until midnight. They were polite, non-aggressive, and surprisingly patient about rethinking basic assumptions. “You can reduce the essence of the problem, I think, to a really simple two-step argument,” Crowley said. “Step one: we’re building machines that might become smarter than us. Step two: that sounds pretty dangerous.”

“Are we sure, though?” said Josh Rosenberg, the CEO of the Forecasting Research Institute. “About intelligence being dangerous?”

Grace noted that a more intelligent species isn’t always a threat to a less intelligent one: “There are elephants, and yet mice are still doing fine.”


“Rabbits are definitely smarter than myxomatosis,” said Michael Nielsen, the quantum-computing expert.

Crowley’s p(doom) was “above eighty percent.” Others, wary of committing to a number, deferred to Grace, who said that, “given my deep confusion and uncertainty about this—which I think almost everyone has, at least everyone who’s being honest,” she could only narrow her p(doom) down to “between ten and ninety percent.” Still, she continued, a ten-percent chance of human extinction is, if you take it seriously, obviously unacceptably high.

They agreed that, among the thousands of reactions to ChatGPT, one of the most refreshing had come from Snoop Dogg, during an onstage interview. Crowley pulled up a transcript and read it aloud. “It’s not safe, ’cause the AIs got their own brains, and these motherfuckers will start doing their own shit,” Snoop said, paraphrasing the AI-safety argument. “Shit, what’s the matter?” Crowley laughed. “I have to admit, it captures the emotional state much better than my two-step argument,” he said. And then, as if to atone for the levity, he read another quote, this one from a 1948 essay by C. S. Lewis: “If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things — praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts — not huddled together like frightened sheep.”

Grace once worked for Eliezer Yudkowsky, a bearded guy with a fedora, a stoic demeanor, and a p(doom) of ninety-nine percent. Raised as an Orthodox Jew in Chicago, he dropped out of school after eighth grade, taught himself calculus and atheism, started blogging, and, in the early two-thousands, made his way to the Bay Area. His best-known works include “Harry Potter and the Methods of Rationality,” a piece of fan fiction running to more than six hundred thousand words, and “The Sequences,” a sprawling series of essays on how to sharpen one’s thinking. The informal collective that grew up around these writings—first in the comments, then in the physical world—became known as the rationalist community, a small subculture dedicated to avoiding “the common failings of human reason,” often by arguing from first principles or anticipating potential risks. Nathan Young, a software engineer, told me, “I remember hearing Eliezer, who was known as a heavy guy, onstage at a rationalist event, asking the crowd to predict whether he could lose a bunch of weight. Then came the big reveal: he took off the fat suit he was wearing. He had already lost the weight. He was ostensibly making some point about how hard it is to predict the future, but mostly I remember thinking, What an absolute legend.”

Yudkowsky was a transhumanist: human brains were going to be uploaded into digital brains during his lifetime, and this was great news. He told me recently that “Eliezer, ages sixteen through twenty,” assumed that AI “would be a lot of fun for everyone forever, and wanted it to be built as soon as possible.” In 2000, he co-founded the Singularity Institute for Artificial Intelligence, to help accelerate the AI revolution. Still, he decided to do some due diligence. “I didn’t see why an AI would kill everyone, but I felt compelled to study the question systematically,” he said. “When I did, I went, Oh, I guess I was wrong.” He wrote white papers detailing how AI could wipe us all out, but his warnings went unheeded. Eventually, he renamed his think tank the Machine Intelligence Research Institute, or MIRI.

The existential threat posed by AI had always been a central concern of the rationalists, but it emerged as a dominant theme around 2015, following rapid advances in machine learning. Some rationalists were in touch with Oxford philosophers, including Toby Ord and William MacAskill, founders of the effective-altruism movement, which studied how to do the greatest good for humanity (and, by extension, how to keep it from ending). The boundaries between the movements increasingly blurred. Yudkowsky, Grace, and a few others traveled the world to EA conferences, where they could talk about the threat of AI without being laughed out of the room.

Doomer philosophizing relies on broad, sci-fi-inspired assumptions. Grace introduced me to Joe Carlsmith, an Oxford-trained philosopher who had just published a paper about “scheming AIs” that might convince their human handlers they’re safe, then move to take charge. He smiled shyly as he described a thought experiment in which a hypothetical person is forced to stack bricks in a desert for a million years. “It could be a lot, I realize,” he said. Yudkowsky argues that a superintelligent machine could come to see us as a threat, and decide to kill us (by commandeering existing autonomous weapons systems, say, or by building its own). Or our demise could come about “in passing”: you ask a supercomputer to improve its own processing speed, and it concludes that the best way to do so is to turn all nearby atoms into silicon, including the atoms that are currently people. But basic AI-safety arguments don’t require imagining that the current crop of Verizon chatbots will suddenly turn into Skynet, the digital supervillain from “Terminator.” To be dangerous, AGI doesn’t need to be sentient, or wish for our destruction. If its goals conflict with human flourishing, even in subtle ways, then, the doomers say, we’re doomed.
