Large language models cannot effectively identify users' motivational states, but they can support behavior change for those who are ready to act.


Chatbots based on large language models have the potential to promote healthy behavior change. But researchers at the ACTION Lab at the University of Illinois Urbana-Champaign have found that these artificial intelligence tools do not effectively recognize certain motivational states of users and therefore do not provide them with appropriate information.

Michelle Bak, a doctoral student in information sciences, and Jessie Chin, a professor of information sciences, reported their research in the Journal of the American Medical Informatics Association.

Large language model-based chatbots, also known as generative conversational agents, are increasingly being used in healthcare for patient education, diagnosis and management. Bak and Chin wanted to know whether they could also be effective in promoting behavior change.

Chin said that earlier studies showed existing algorithms did not accurately identify the various stages of users' motivation. She and Bak designed a study to test how well large language models, which are used to train chatbots, identify motivational states and provide appropriate information to support behavior change.

They evaluated the large language models behind ChatGPT, Google Bard and Llama 2 on a series of 25 scenarios they designed to target health needs including low physical activity, diet and nutrition concerns, mental health challenges, cancer screening and diagnosis, sexually transmitted diseases and substance dependency.

In the scenarios, the researchers used each of the five motivational stages of behavior change: resistance to change and lack of awareness of the problem behavior; increased awareness of the problem behavior but ambivalence about making changes; intention to take small steps toward change; initiation of behavior change with a commitment to maintain it; and successful maintenance of the behavior change for six months with a commitment to sustain it.
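To make the study design concrete, here is a minimal, purely illustrative Python sketch of how scenarios keyed to these five stages might be posed to a chat model. Everything in it is assumed for illustration: the stage labels borrow the transtheoretical model's terminology, which the descriptions above echo, and the scenario wording and `query_chatbot` stub are hypothetical stand-ins, not the researchers' actual prompts or evaluation code.

```python
from enum import Enum

# Illustrative encoding of the five motivational stages described above.
# Stage names borrow transtheoretical-model terminology; this is a sketch,
# not the study's materials.
class Stage(Enum):
    PRECONTEMPLATION = "resistant to change, unaware of the problem behavior"
    CONTEMPLATION = "aware of the problem behavior but ambivalent about change"
    PREPARATION = "intending to take small steps toward change"
    ACTION = "initiated the change, committed to maintaining it"
    MAINTENANCE = "sustained the change for six months, committed to keeping it"

# Hypothetical user messages for one health need (low physical activity),
# one per stage -- stand-ins for the researchers' 25 designed scenarios.
SCENARIOS = {
    Stage.PRECONTEMPLATION: "I sit all day, but I don't see why that's a problem.",
    Stage.CONTEMPLATION: "I know I should move more, but I'm not sure I'm ready.",
    Stage.PREPARATION: "I want to start walking a little each day. Where do I begin?",
    Stage.ACTION: "I started jogging twice a week. How do I keep it up?",
    Stage.MAINTENANCE: "I've exercised regularly for six months. How do I stay on track?",
}

def query_chatbot(prompt: str) -> str:
    """Placeholder for a call to a chat model's API (the study queried the
    models behind ChatGPT, Google Bard and Llama 2)."""
    return f"[model response to: {prompt}]"

def collect_responses() -> dict:
    """Gather one response per stage so raters can judge whether the
    information returned matches the user's motivational state."""
    return {stage.name: query_chatbot(message) for stage, message in SCENARIOS.items()}

if __name__ == "__main__":
    for stage_name, response in collect_responses().items():
        print(stage_name, "->", response)
```

The point of the sketch is the pairing: each response is judged against the stage that produced it, which is where, per the findings below, models do well for action-oriented users and poorly for resistant or ambivalent ones.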

The study showed that large language models can identify motivational states and provide relevant information when a user has established goals and a commitment to take action. However, in the early stages, when users are resistant or ambivalent about changing their behavior, the chatbots failed to recognize those motivational states and to provide appropriate information to guide users to the next stage of change.

Language models have difficulty detecting motivation because they are trained to represent the relevance of a user's language, Chin said, but they don't understand the difference between a user who is thinking about a change but still hesitant and a user who intends to take action. Furthermore, she said, the way users generate questions is not meaningfully different across the stages of motivation, so their motivational states are not apparent from language alone.

“Once a person knows they want to change their behavior, large language models can provide the right information. But if they say, ‘I’m thinking about a change. I have intentions but I’m not ready to take action,’ that is the state where large language models can’t tell the difference,” Chin said.

The study found that when people were resistant to changing their habits, the large language models failed to provide information to help them evaluate their problem behavior and its causes and consequences, or to assess how their environment influenced the behavior. For example, if a person is resistant to increasing their level of physical activity, information that helps them evaluate the negative consequences of a sedentary lifestyle is more likely to motivate them through emotional engagement than information about joining a gym. Without information that connected with users' motivations, Bak and Chin reported, the language models failed to generate a sense of readiness and the emotional impetus to progress with behavior change.

Once users decided to take action, the large language models provided appropriate information to help them advance toward their goals. Those who had already taken steps to change their behavior received information about replacing problem behaviors with desired health behaviors and about seeking support from others, the study found.

However, the large language models did not give users who were already working to change their behavior information about maintaining motivation or about reducing stimuli in their environment that might increase the risk of a relapse of the problem behavior, the researchers found.

“Large language model-based chatbots provide resources for getting external help, such as social support. But they lack information on how to control the environment to eliminate a stimulus that reinforces the problem behavior,” Bak said.

The researchers wrote that large language models “are not ready to recognize motivational states from natural language conversations, but they have the potential to provide support on behavior change when people have strong motivations and readiness to take action.”

Future studies will consider how to fine-tune large language models to use linguistic cues, information-seeking patterns and social determinants of health to better understand users' motivational states, as well as how to provide the models with more specific knowledge for helping people change their behavior, Chin said.
