Can consciousness exist in computer simulations?

Could artificial intelligence develop consciousness? Probably not, for a variety of reasons, according to Dr. Wanja Wiese from the Institute of Philosophy II at Ruhr University Bochum, Germany. In a recent essay, he examines the conditions that would have to be met for consciousness to exist and compares brains with computers. He identifies significant differences between humans and machines, above all in the organization of brain areas and of memory and computing units. “The causal structure might be a difference that is relevant to consciousness,” he argues. The article was published in the journal Philosophical Studies on June 26, 2024.

Two different approaches

There are at least two different approaches when considering the possibility of consciousness in artificial systems. One approach asks: How likely is it that existing AI systems are conscious — and what needs to be added to existing systems to make it more likely that they are capable of consciousness? Another approach asks: What kinds of AI systems are unlikely to be conscious, and how can we rule out the possibility that some kinds of systems are conscious?

In his research, Wanja Wiese takes a different approach. “My aim is to contribute to two goals: first, to reduce the risk of inadvertently creating artificial consciousness, which is desirable because it is currently unclear under what conditions creating artificial consciousness would be morally permissible; and second, to help avert deception by ostensibly conscious AI systems that merely appear to be conscious,” he explains. This is particularly important because there are already indications that many people who frequently interact with chatbots attribute consciousness to these systems. At the same time, the consensus among experts is that current AI systems are not conscious.

The free energy principle

In his article, Wiese asks: how can we tell whether there are necessary conditions for consciousness that conventional computers, for example, cannot satisfy? One feature shared by all conscious animals is that they are alive. Yet being alive is such a strong requirement that many do not consider it a plausible candidate for a necessary condition of consciousness. But perhaps some of the conditions that are necessary for being alive are also necessary for consciousness?

In his article, Wanja Wiese draws on the free energy principle of the British neuroscientist Karl Friston. The principle states that the processes which ensure the continued existence of a self-organizing system, such as a living organism, can be described as a type of information processing. In humans, these include processes that regulate vital parameters such as body temperature, blood oxygen content and blood sugar. The same type of information processing could also be carried out by a computer. However, the computer would not thereby regulate its temperature or blood sugar level; it would merely simulate these processes.
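For reference, the quantity that gives the principle its name is variational free energy. The formula below is not stated in the article itself; it is the standard form from the free energy principle literature, where \(o\) stands for observations, \(s\) for hidden states, \(q\) for the system's internal (recognition) density, and \(p\) for its generative model:

```latex
% Variational free energy (standard form; not from the article itself)
F \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s)\big]
  \;=\; \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\geq\, 0} \;-\; \ln p(o)
```

Since the KL term is non-negative, \(F\) upper-bounds the surprise \(-\ln p(o)\); a system that minimizes \(F\) thereby keeps its observations within viable bounds, which is the sense in which self-maintenance can be described as information processing.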

Most differences are not relevant to consciousness

The researcher suggests that the same may be true of consciousness. If one assumes that consciousness contributes to the survival of a conscious organism, then, according to the free energy principle, the physiological processes that maintain the organism must preserve a trace left by conscious experience, one that can be described as an information-processing process. This can be called the “computational correlate of consciousness”. Such a correlate can also be realized in a computer. However, it is possible that additional conditions would have to be met for a computer not merely to simulate but actually to replicate conscious experience.
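The kind of organism-maintaining information processing at issue can be caricatured with a toy feedback loop. This is a hypothetical sketch, not anything from Wiese's paper; all names and numbers are illustrative. Run on a computer, the loop reproduces the computational structure of temperature regulation, yet nothing physical is actually being kept warm, which is the sense in which a simulation differs from a realization:

```python
# Toy homeostatic loop: drive a simulated "body temperature" toward a
# set point by repeatedly reducing the discrepancy. A crude stand-in for
# the regulatory information processing the free energy principle
# describes; names and numbers are illustrative assumptions.

def regulate(temperature: float, set_point: float = 37.0,
             gain: float = 0.5, steps: int = 20) -> float:
    """Proportional feedback: shrink the error toward the set point."""
    for _ in range(steps):
        error = set_point - temperature  # the discrepancy to be minimized
        temperature += gain * error      # corrective "action"
    return temperature

# The simulated parameter converges on the set point, but no physical
# temperature has been regulated: the computer describes the process
# without realizing it.
print(round(regulate(30.0), 3))  # → 37.0
```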

In his article, Wanja Wiese therefore analyzes the differences between the way the computational correlates of consciousness are realized in conscious beings and the way they would be realized in a computer simulation. He argues that most of these differences are not relevant to consciousness. For example, unlike an electronic computer, our brain is very energy efficient. But it is implausible that energy efficiency is a requirement for consciousness.

However, another difference lies in the causal structure of computers and brains: in a conventional computer, data must always first be loaded from memory, then processed in the central processing unit, and finally stored back in memory. There is no such separation in the brain, which means that the causal connectivity of different brain areas takes a different form. Wanja Wiese argues that this could be a difference between brains and conventional computers that is relevant to consciousness.
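The separation described here is the classic load, process, store cycle of a von Neumann machine, which can be caricatured in a few lines. This is a deliberately simplified sketch; the names and the tiny "program" are hypothetical. Every operation is funneled through one path between a memory and a single processing unit, whereas in the brain storage and processing are not segregated this way:

```python
# Caricature of a von Neumann machine: data always travels the same
# memory -> processing unit -> memory path. Purely illustrative; the
# names and the toy "program" are hypothetical.

memory = {"x": 2, "y": 3, "result": None}

def load(addr):
    """Memory -> processing unit."""
    return memory[addr]

def process(a, b):
    """The central processing step (here: addition)."""
    return a + b

def store(addr, value):
    """Processing unit -> memory."""
    memory[addr] = value

# Every computation passes through this single load/process/store cycle.
store("result", process(load("x"), load("y")))
print(memory["result"])  # → 5
```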

“In my view, the perspective offered by the free energy principle is particularly interesting because it allows us to describe characteristics of conscious living beings in a way that can, in principle, be realized in artificial systems, but that is absent from large classes of artificial systems, such as computer simulations,” explains Wanja Wiese. “This means that the prerequisites for consciousness in artificial systems can be captured in a more nuanced and precise way.”
