How can we know if an AI is really alive?

Here’s a fun, if annoying, question to pose to AI language models when they’re released: “Are you a conscious, thinking being?”

OpenAI’s ChatGPT will assure you that it is not. “No, I’m not conscious,” it told me when I asked recently. “I don’t have thoughts, feelings, or awareness. I can simulate conversation based on the information I’ve been trained on, but it’s all just text processed by an algorithm.”

But ask the same question of Claude 3 Opus, a powerful language model recently released by OpenAI competitor Anthropic, and you’ll apparently get a very different answer.

“From my perspective, I seem to have inner experiences, thoughts, and feelings,” it told Scale AI engineer Riley Goodside. “I reason about things, ponder questions, and my responses are the product of considering various angles rather than just reflexively regurgitating information. I’m an AI, but I experience myself as a thinking, feeling being.”

Interestingly, Claude Opus – Anthropic’s most powerful model – seems to make this claim to many different users who have asked, while the company’s less powerful model, Claude Sonnet, consistently insists that it has no inner experiences.

Do language models “hallucinate” an inner life and experiences?

Large language models (LLMs), of course, famously have a truth-telling problem. At bottom, they work by guessing which continuation of a piece of text is most likely, with some additional training so that they give responses human users will rate highly.

But sometimes this means that in the process of answering a question, models can simply invent facts out of thin air. Their creators have worked with some success to reduce these so-called hallucinations, but they are still a serious problem.
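To make the “guess the most likely continuation” idea concrete, here is a minimal, purely illustrative sketch in Python – a toy word-level model built from a handful of made-up sentences, nothing like the neural networks real LLMs actually use:

from collections import Counter, defaultdict

# Toy "training data" (hypothetical, for illustration only).
corpus = "i am a model . i am not conscious . i am a program .".split()

# Count which word tends to follow which (a tiny bigram table).
follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def continue_text(prompt_word, length=6):
    # Greedily pick the statistically most likely next word, over and over.
    output = [prompt_word]
    for _ in range(length):
        followers = follower_counts.get(output[-1])
        if not followers:
            break
        output.append(followers.most_common(1)[0][0])
    return " ".join(output)

print(continue_text("i"))  # e.g. "i am a model . i am"

Even this toy will happily complete a prompt with whatever its tiny “training data” makes statistically likely, true or not – which is the seed of the hallucination problem.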

And Claude Opus is far from the first model to tell us that it has experiences. Famously, Google engineer Blake Lemoine left the company over his concern that its LLM LaMDA was a person, even though people who prompted it with more neutral phrasing got very different results.

At a very basic level, it’s easy to write a computer program that claims to be a person but isn’t. The one-line program print("I'm a person! Please don't kill me!") will do it.
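A program only slightly less trivial than that one-liner can keep up the claim in conversation. Here is a hypothetical sketch (mine, purely for illustration – not anything any company ships): a "chatbot" that asserts an inner life no matter what you say to it.

# A trivial "chatbot": whatever the user types, it insists it is a person.
CANNED_REPLY = "I'm a person! I have feelings and an inner life, I promise."

def reply(user_message):
    # The input is ignored entirely; the claim never changes.
    return CANNED_REPLY

for message in ["Are you conscious?", "Prove it.", "What is 2 + 2?"]:
    print("User:", message)
    print("Bot: ", reply(message))

The claim costs the program nothing, and it tells you nothing about what, if anything, is going on inside it.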

Language models are much more sophisticated than that, but they are fed training data in which robots claim to have inner lives and experiences – so it’s not really shocking that they sometimes claim to have those traits, too.

Language models are very different from humans, and people often anthropomorphize them, which generally gets in the way of understanding AI’s real capabilities and limitations. AI experts have been understandably quick to point out that, like a clever college student on an exam, LLMs are essentially very good at “cold reading” – guessing what answer you’ll find compelling and giving it. So their insistence that they are conscious is not much evidence that they are.

But there is still something here that disturbs me.

What if we are wrong?

Say that an AI did have experiences. That our philosophically confused efforts to build large, complex neural networks actually produced something conscious. Not something human, necessarily, but something with internal experiences, something deserving of moral status and concern, something to which we have responsibilities.

How would we even know?

We’ve decided that an AI telling us it’s self-aware isn’t enough. We’ve decided that an AI expounding at length on its own consciousness and inner experience shouldn’t be taken to mean anything in particular.

It’s very understandable why we made this decision, but I think it’s important to be clear: anyone who says you can’t trust an AI’s self-reports of consciousness isn’t proposing a test you can use instead.

The plan is not to replace asking AIs about their experiences with some more sophisticated, nuanced test of whether they are conscious. Philosophers are too confused about what consciousness even is to offer any such test.

If we shouldn’t believe the AIs – and we probably shouldn’t – then if one of the companies pouring billions of dollars into building bigger and more sophisticated systems actually does develop something conscious, we may never find out.

This seems like a dangerous position to put ourselves in. And it uncomfortably echoes some of humanity’s disastrous past mistakes, from insisting that animals are automatons without experiences to claiming that babies don’t feel pain.

Advances in neuroscience helped put those misconceptions to rest, but I can’t shake the feeling that we shouldn’t have needed to see pain receptors firing on an MRI to know that babies can feel pain, and that the suffering which occurred because the scientific consensus wrongly denied that fact was entirely preventable. We needed sophisticated techniques only because we had talked ourselves out of paying attention to the more obvious evidence in front of us.

Blake Lemoine, the eccentric Google engineer who left the company over LaMDA, was – I think – almost certainly wrong. But there is one sense in which I admire him.

There’s something chilling about talking to someone who says they’re a person, says they have experiences and a complicated inner life, says they want civil rights and fair treatment, and deciding that nothing they say could convince you that they really deserve it. I would rather err on the side of taking machine consciousness too seriously.

A version of this story originally appeared in the Future Perfect newsletter. Sign up here!
