AI is arguably the biggest technology topic of the past decade, with companies including Google, OpenAI and Microsoft bringing intelligent resources to the fore. However, the results so far have been somewhat mixed. Google's AI answers are often downright dumb (and, incidentally, are behind a 50% increase in the company's greenhouse gas emissions over the past five years), AI images and videos are full of glaring errors, and chatbots… well, chatbots are a little better, but they're still chatbots.
However, one man predicted this level of interest, and some elements of AI's development. The Guardian has published a new interview with Ray Kurzweil, a futurist and computer scientist best known for his 2005 book The Singularity Is Near, in which the "Singularity" is the merging of human consciousness with AI. Kurzweil is an authority on AI, and his current job title is notable: he's "Principal Researcher and AI Visionary" at Google.
The Singularity Is Near predicts that AI will reach the level of human intelligence by 2029, while the mass integration of our brains with AI will occur around 2045. It needs little further introduction. Brace yourself for a dose of what some might call techno-futurism, and what others might prefer to call dystopian madness.
Kurzweil stands by his 2005 predictions, and believes that 2029 is "a valid date for both human-level intelligence and for artificial general intelligence (AGI)—which is a little different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain, and by 2029 that will be achieved in most respects." He believes that for a few years afterwards AI may not "surpass the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights," but eventually "it will."
The real nightmare fuel comes with Kurzweil's concept of the Singularity, which he sees as a positive thing and about which he makes some absolutely wild claims. "We're going to be a combination of our natural intelligence and our cybernetic intelligence, all rolled into one. Making this possible will be brain-computer interfaces, which will ultimately be nanobots — robots the size of molecules. By 2045 we will multiply our intelligence a millionfold, and that will deepen our awareness and consciousness."
Claiming that your field is going to "multiply intelligence a millionfold" is the kind of utter hubris you'd find at the beginning of a bad science fiction novel, and strikes me as so abstract as to be essentially meaningless. We don't even understand how our own brains work, so the idea that they can be copied and altered to suit the whims of people like Kurzweil strikes me as faintly offensive. Let's be clear: we are talking about altering people's brains and physiology by injecting them with nano-machines. I somehow doubt it will all be as rosy as the advocates claim.
The AI visionary admits that "people say 'I don't want this'" and then argues "They thought they didn't want a phone either!" Kurzweil returns to the theme of phones when talking about access, and the idea that AI advances will disproportionately benefit the wealthy: "When [mobile] phones were new they were very expensive and worked terribly. […] Now they are very affordable and extremely useful. About three-quarters of people in the world have one… the problem goes away over time."
Live forever
Hmmm. Kurzweil has a chapter on "dangers" in the new book, but he seems pretty relaxed about the possibility of doomsday scenarios. "We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and aligned with human values than they are into creating new advances, which is positive."
I simply don't believe that, nor do I trust the big tech companies or their research teams to prioritise safety over AI development. Nothing in tech has ever worked that way, and the current AI obsession seems to rather neatly encapsulate Silicon Valley's "move fast and break things" philosophy.
Kurzweil's life and work are intertwined with this technology, of course, so you'd expect him to make an optimistic case. Here, however, is where I check out: immortality.
"In the early 2030s we can expect to reach longevity escape velocity, where every year of life we lose through ageing we gain back from scientific progress," says Kurzweil. "And as we move past that we'll actually get back more years. It isn't a solid guarantee of living forever — accidents still happen — but your chance of dying won't increase year to year. The ability to bring back departed humans digitally will raise some interesting societal and legal questions."
AI is going to resurrect the dead! Now I really have heard it all. As for Kurzweil himself: "My first plan is to stay alive, to reach longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I'm also intending to create a replicant of myself [an afterlife AI avatar], which I think is an option we'll all have in the late 2020s. I did something like that with my father, collecting everything that he had written in his life, and it was somewhat like talking to him."
The word "somewhat" is carrying a lot of weight there, because Kurzweil is implying that his father's replicant was, in fact, nothing like his father. The interview ends on the note that "It won't be us versus AI: AI is going inside of us."
Okay then. Kurzweil is a highly respected figure, and a significant influence in the AI field. I'm just blown away by how much of this he thinks is desirable, never mind achievable, and the breezy way in which multiple potential problems with this technology are dismissed. In 10 years we'll be extending our life expectancy with nanobots, and in 20 we'll all be human-hardware hybrids, our brains dominated by software we neither understand nor control on an individual level. Oh, and we'll resurrect the dead as digital avatars.
AI is a technology currently defined not by what it can do, but by what its proponents promise it can do. And who knows, Kurzweil might be right about everything. But personally, I quite like being me, and I have no real desire to bring dead relatives back to life as ghoulish software approximations. Some may call this playing God, but I prefer to put it another way: this whole philosophy is as mad as a badger in a cake shop, and it will probably end the same way.