Our brains respond differently to human and AI-generated speech, but we still struggle to tell them apart.

Spectrogram showing similarities between human and AI voices. Credit: FENS Forum / Christine Skjegstad


According to research presented today (Tuesday) at the Federation of European Neuroscience Societies (FENS) Forum 2024, people are not very good at distinguishing between human voices and voices generated by artificial intelligence (AI), but our brains respond differently to the two kinds of voice.

The study was presented by doctoral researcher Christine Skjegstad and conducted by Skjegstad and Professor Sascha Frühholz, both of the Department of Psychology at the University of Oslo (UiO), Norway.

“We already know that AI-generated voices have become so advanced that they are almost indistinguishable from real human voices. It is now possible to clone a person's voice from just a few seconds of recording, and fraudsters have used this technology to imitate a loved one in distress and trick victims into transferring money,” Skjegstad said. “While machine learning experts are developing technical solutions to detect AI voices, much less is known about the human brain's response to these voices.”

The study involved 43 people who were asked to listen to human and AI-generated voices expressing five different emotions: neutral, anger, fear, happiness, and joy. They were asked to identify each voice as synthetic or natural while their brain activity was measured using functional magnetic resonance imaging (fMRI), which detects changes in blood flow within the brain, indicating which parts of the brain are active. Participants were also asked to rate the characteristics of the voices they heard in terms of naturalness, believability, and authenticity.

Participants correctly identified human voices only 56% of the time and AI voices 50.5% of the time, meaning they were about equally poor at identifying both types of voice.

People were more likely to correctly identify a 'neutral' AI voice as AI (75%, compared with 23% who correctly identified a neutral human voice as human), suggesting that people assume neutral voices sound more AI-like. Neutral female AI voices were correctly identified more often than neutral male AI voices. For happy human voices the correct identification rate was 78%, but for happy AI voices it was only 32%, suggesting that people associate happiness with being human.

Neutral voices, whether AI or human, were perceived as the least natural, believable, and authentic, while happy human voices were perceived as the most natural, believable, and authentic.

Looking at the brain imaging, however, the researchers found that human voices elicited stronger responses in areas of the brain associated with memory (the right hippocampus) and empathy (the right inferior frontal gyrus), while AI voices elicited stronger responses in areas related to error detection (the right anterior medial cingulate cortex) and attention regulation (the right dorsolateral prefrontal cortex).

“My research shows that we are not very accurate in identifying whether a voice is human or AI-generated,” Skjegstad said. Participants also often remarked on how difficult it was for them to tell the voices apart, suggesting that current AI voice technology can mimic human voices to the point where people can no longer reliably distinguish them.

“The results also indicate a bias in perception, where neutral voices are more likely to be identified as AI-generated and happy voices are more likely to be identified as human, regardless of whether they actually are. This was especially the case for neutral female AI voices, which may be because we are familiar with female voice assistants such as Siri and Alexa.

“Although we're not very good at telling human and AI voices apart, there does seem to be a difference in the brain's response. AI voices may create heightened alertness, while human voices may create a sense of connection.”

The researchers now plan to study whether personality traits, for example extraversion or empathy, make people more or less sensitive to the difference between human and AI voices.

Professor Richard Roche, Chair of the FENS Forum Communications Committee and Deputy Head of the Department of Psychology at Maynooth University, Maynooth, County Kildare, Ireland, was not involved in the research. He said: “Investigating the brain's responses to AI voices is critical as this technology continues to advance. This research will help us to understand the potential cognitive and social implications of AI voice technology, which may inform policies and ethical guidelines.

“The dangers of this technology being used to deceive and fool people are obvious. However, it also has potential benefits, such as providing replacement voices for people who have lost their natural voice due to their health circumstances.”

More information:
PS07-29AM-602, “Neural dynamics of natural and digital emotional voice processing”, by Christine Skjegstad, Poster Session 07 – Late Breaking Abstracts, Saturday, June 29, 09:30–13:00, Poster Area, fenstractservers2024. .com/pr … s/presentations/4774

Provided by the Federation of European Neuroscience Societies

Reference: Our brains respond differently to human and AI-generated speech, but we still struggle to tell them apart (2024, June 24). Retrieved June 24, 2024 from https://medicalxpress.com/news/2024-06-brains-differently-human-ai-generated.html

This document is subject to copyright. No part may be reproduced without written permission, except for any fair dealing for the purpose of private study or research. The content is provided for informational purposes only.
