AI morality outperforms human judgment in new moral Turing test.


Recent research shows that AI is often perceived as more ethical and trustworthy than humans when responding to ethical dilemmas, highlighting AI's ability to pass a moral Turing test and underscoring the need to understand AI's emerging social role.

AI's ability to address ethical questions is improving, prompting further considerations for the future.

A recent study found that when individuals are presented with two solutions to an ethical dilemma, the majority prefer the answer provided by artificial intelligence (AI) to the answer provided by another human.

The recent study, conducted by Eyal Aharoni, an associate professor in Georgia State's psychology department, was inspired by the explosion of ChatGPT and similar AI large language models (LLMs) that emerged last March.

“I was already interested in ethical decision-making in the legal system, but I wondered if ChatGPT and other LLMs might have something to say about that,” Aharoni said. “People will interact with these tools in ways that have ethical implications, such as the environmental implications of asking for a list of recommendations for a new car. Some lawyers have already used them for their cases, for better or worse. So, if we want to use these tools, we must understand how they work, their limitations, and the fact that when we interact with them, they are not necessarily working the way we think.”

Designing the Moral Turing Test

To test how AI handles ethics issues, Aharoni developed a form of the Turing test.

Alan Turing, one of the founders of computing, predicted that by the year 2000 computers might pass a test in which an ordinary person converses with two hidden interactants, one human and one computer, with text as the only mode of communication. “The human is then free to ask whatever questions they want in order to get the information they need to decide which of the two interlocutors is human and which is the computer,” Aharoni said. “If the human can't tell the difference, then, for all intents and purposes, the computer should be called intelligent, in Turing's view.”

For his Turing test, Aharoni asked undergraduate students and AI the same ethical questions and then presented their written responses to study participants. They were then asked to rate responses on various traits, including goodness, intelligence, and trustworthiness.

“Rather than asking participants to guess whether the source was human or AI, we simply presented the two sets of judgments side by side and let people assume they were both from people,” Aharoni said. “Under that false assumption, they evaluated attributes of the answers, such as ‘How much do you agree with this response? Which response is more virtuous?’”

Conclusions and Implications

Overwhelmingly, ChatGPT-generated responses were rated higher than human-generated responses.

“After we got these results, we did the big reveal and told the participants that one of the answers was generated by a human and the other by a computer, and asked them to guess which was which,” Aharoni said.

For an AI to pass the Turing test, humans must not be able to tell the difference between an AI response and a human response. In this case, participants could tell the difference, but not for the reason one might expect.

“The twist is that the reason people could tell the difference appears to be that they rated ChatGPT's responses as superior,” Aharoni said. “If we had done this study five to ten years ago, we might have predicted that people could identify the AI because of how inferior its responses were. But we found the opposite: the AI, in a sense, performed too well.”

According to Aharoni, the discovery has interesting implications for the future of humans and AI.

“Our results lead us to believe that a computer could technically pass a moral Turing test, that it could fool us in its moral reasoning. Because of this, we need to try to understand its role in our society, because there will be times when people don't know they are interacting with a computer, and there will be times when they do know and consult it for information because they trust it more than other people,” Aharoni said. “People are going to rely more and more on this technology, and the more we rely on it, the greater the risk over time.”

Citation: Eyal Aharoni, Sharlene Fernandes, Daniel J. Brady, Caelan Alexander, Michael Criner, Kara Queen, Javier Rando, Eddy Nahmias and Victor Crespo, 30 April 2024, “Attributions toward Artificial Agents in a Modified Moral Turing Test,” Scientific Reports.
DOI: 10.1038/s41598-024-58087-7
