SALT LAKE CITY – What is intelligence? As it turns out, the rapidly developing technology of artificial intelligence and large language models can be not only an asset in advancing new scientific discoveries, but also an important resource for understanding the nature and origins of intelligence itself.
This is according to David Eagleman, a neuroscientist who teaches at Stanford University.
“The question is, will things get weird when we enter a world with another intelligence?” Eagleman, a best-selling author and host of the TV series “The Brain with David Eagleman,” asked an audience at the University of Utah’s Kingsbury Hall on Tuesday. He spoke as part of the Natural History Museum of Utah’s 2024 lecture series centered on the nature of intelligence.
Eagleman explained to the audience that AI appears to be smart only because it can instantly synthesize information from virtually any document on the internet and answer questions eloquently using large language models – a form of artificial neural network that allows a program to condense information into fluent statements that mimic natural speech through a learned model of language and syntax.
“What you should consider is that ingesting text – ingesting trillions of pages of text and running massive statistical models on them – is not intelligence in itself, because ChatGPT has no idea what it is saying,” Eagleman said, explaining that programs like ChatGPT collect information at an incredible rate from a near-unlimited library of sources.
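To make the phrase “running massive statistical models on text” concrete, here is a toy sketch (not anything Eagleman presented, and vastly simpler than a real large language model): a bigram model that predicts the next word purely from counted co-occurrences. It produces plausible-looking continuations while having no idea what any word means.

```python
from collections import defaultdict, Counter

# Toy bigram model: language prediction as pure statistics over text.
# It predicts the next word only from counted word pairs -- no meaning involved.
corpus = "the brain is a network the brain is plastic the mind is a mystery".split()

# Count which word follows which.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "brain" -- seen twice, versus "mind" once
print(predict_next("is"))   # "a" -- seen twice, versus "plastic" once
```

Scaled up to trillions of pages and billions of parameters, this kind of statistical next-word prediction is the core of how systems like ChatGPT generate fluent text – which is exactly why, in Eagleman’s view, fluency alone does not demonstrate understanding.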
To illustrate this point, Eagleman used the “Chinese room” thought experiment, which argues that a program can appear intelligent without actually understanding anything, no matter how human-like its programmer makes it.
The premise of the Chinese room thought experiment involves two people: one who speaks fluent Chinese and one who knows no Chinese at all. The person who does not know Chinese is shut in a room full of books containing instructions on what to do with Chinese symbols. When the fluent speaker passes messages in Chinese into the room, the person inside looks up the matching symbols and follows the instructions to produce the symbols that form an appropriate response.
Eagleman argued that the library of rules in the room is vast enough – covering the full written Chinese language – to convince the fluent speaker that the person inside knows Chinese. But that does not mean the person inside actually knows Chinese; it only means they can convincingly retrieve the right responses.
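The Chinese room can be sketched in a few lines of code. The tiny rulebook below is invented for illustration (a real room would need rules for the entire language); the point is that following it requires zero comprehension of Chinese.

```python
# Toy "Chinese room": a lookup table mapping incoming symbols to scripted
# replies. The entries are illustrative, not part of Searle's original setup.
rulebook = {
    "你好": "你好！",          # "hello" -> "hello!"
    "你会说中文吗": "会。",    # "do you speak Chinese?" -> "yes."
}

def person_in_room(message):
    """Match symbols against the rulebook; understand nothing."""
    return rulebook.get(message, "对不起")  # fallback reply: "sorry"

print(person_in_room("你好"))  # a fluent-looking reply, with no comprehension
```

From outside the room, these replies are indistinguishable from a fluent speaker’s – which is Eagleman’s point about large language models appearing intelligent.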
“The question is: Have we proven that computers can have intelligence? In fact, these computers are just playing statistical games,” Eagleman said, explaining that earlier intelligence tests, such as the Turing test or the Lovelace test, examine skills that large language models have long since mastered.
“What I proposed in the literature a few months ago is a new test for intelligence – if a system is truly intelligent, it should be able to make scientific discoveries,” he said. “One of the most important things humans do is science, so the day our AI can make real discoveries, that’s the day I’ll call it genius.”
One of the most important things humans do is science, so the day our AI can make real discoveries is the day I’ll call it genius.
– David Eagleman, neuroscientist and author
Eagleman further elaborated his point by distinguishing two types of intelligence: Level 1 scientific discovery and Level 2 scientific discovery. The former involves recombining pre-existing facts or concepts to find a solution that works, while the latter involves arriving at a scientific discovery by conceptualizing original ideas.
Examples of Level 2 scientific discoveries cited by Eagleman were Albert Einstein’s theory of relativity and Charles Darwin’s theories of evolution.
However, Eagleman also said that he believes AI has reached a point where it has proven itself capable of being creative.
“The human brain is very good at recombining information – bending, breaking and blending ideas – and the point is that what we’ve seen is that these LLMs (large language models) are quite good at it,” Eagleman said. “I feel like, at this point, they’re probably as creative as we are. What they’re not good at is filtering out the things that humans would care about.”
As for the near future of a technology that seems to grow more sophisticated by the week, Eagleman said he sees a bright future in which AI will make scientific discoveries alongside its human cohorts and provide people with invaluable personal resources – for example, an AI therapist tailored to your needs and available for consultation at all hours of the day.