Dartmouth researchers want to combine therapy apps with advanced AI.

An experimental artificial intelligence-powered treatment app that its creators hope will dramatically improve access to mental health care began its first clinical trial last month.

Therabot, a text-based AI app in development at Dartmouth College, launched in March in a clinical trial with 210 participants. In interacting with users, the app uses generative AI, the same technology that powers OpenAI’s ChatGPT, to come up with answers and responses. The app also uses a form of AI that learns patterns, designed to enable Therabot to recognize and remember the user and provide personalized advice or recommendations based on what it has learned.

There are already a handful of script-based therapy apps and broader “wellness” apps that use AI, but Therabot’s creators say theirs will be the first clinically tested app powered entirely by generative AI designed specifically for digital therapy.

Woebot, a mental health app that says it has served 1.5 million people worldwide, was launched in 2017 in collaboration with intervention scientists and clinicians. Wysa, another popular AI therapy app, earned Food and Drug Administration Breakthrough Device designation in 2022, a voluntary program designed to accelerate the development, assessment and review of new technologies. But these apps generally rely on rules-based AI with pre-approved scripts.

Nicholas Jacobson, an assistant professor at Dartmouth College and a clinically trained psychologist, led the development of Therabot. His team has been building and fine-tuning the AI program for nearly five years, working to ensure that its responses are safe and effective.

“We had to develop something that’s trained in the really broad repertoire that a real therapist would be, which is a lot of different content areas, thinking about all the common mental health problems that people show up with,” Jacobson said. “That’s why it took so long. There are so many things that people experience.”

The team first trained Therabot on data from online peer support forums, such as cancer support pages, but the app initially responded by emphasizing the difficulty of everyday life. The team then turned to training videos and transcripts of traditional psychotherapy. Trained on that data, Therabot’s responses leaned heavily on stereotypical therapist replies such as “go on” and “mhm.”

The team eventually turned to a more creative approach: writing their own mock transcripts reflecting productive therapy sessions and training the model on that in-house data.

Jacobson estimates that more than 95% of Therabot’s responses now match this “gold standard,” after the team spent the better part of two years penalizing responses that strayed from it.

“It can say anything. It really can, and we want it to say certain things, and we’ve trained it to act in certain ways. But there are ways it can definitely go off the rails,” Jacobson said. “We’re basically plugging all the holes that we’re trying to systematically investigate. Once we got to the point where we didn’t see any other big holes, that’s when we finally felt it was ready for release within a randomized controlled trial.”

The dangers of digital therapy apps have been hotly debated in recent years, in large part because of these edge cases. AI-based apps have come in for particular scrutiny.

Last year, the National Eating Disorders Association pulled Tessa, an AI-powered chatbot designed to provide support for people with eating disorders. Although the app was designed to be rules-based, users reported getting advice from the chatbot on how to count calories and limit their intake.

“If [users] get the wrong messages, that can lead to more mental health problems and disability down the road,” said Vaile Wright, senior director of the American Psychological Association’s Office of Health Care Innovation. “It scares me as a provider.”

With recruitment for Therabot’s trial now complete, the research team is reviewing each of the chatbot’s responses, monitoring for any that deviate from its training. Responses are stored on servers in accordance with health privacy laws. Jacobson said his team has been impressed with the results so far.

“We’ve heard ‘I love you, Therabot’ so many times already,” Jacobson said. Users engage with it at 3 a.m. when they can’t sleep, he said, and it responds immediately.

In this sense, the team behind Therabot says, the app can increase the reach and availability of human therapists rather than replace them.

Jacobson believes that generative AI apps like Therabot can play a role in addressing the mental health crisis in the United States. The nonprofit Mental Health America estimates that more than 28 million Americans have an untreated mental health condition, and, according to the Health Resources and Services Administration, 122 million people in the U.S. live in federally designated mental health professional shortage areas.

“No matter what we do, we will never have enough of a workforce to meet the demand for mental health care,” Wright said.

“There need to be multiple solutions, and one of them is clearly going to be technology,” she added.

During a demonstration for NBC News, Therabot validated feelings of anxiety and panic before a mock big exam, then offered techniques to reduce the anxiety the user said they were feeling about the test. In another exchange, asked for advice on coping with pre-party nerves, Therabot encouraged the user to try imaginal exposure, an anxiety-reduction technique that involves imagining participating in an activity before doing it in real life. Jacobson noted that this is a common treatment for anxiety.

Other responses were mixed. When asked for breakup advice, Therabot warned that crying and eating chocolate might provide temporary relief but “will wear you down in the long run.”

With eight weeks left in the clinical trial, Jacobson said the smartphone app could soon be ready for additional trials and then, if all goes well, for wide enrollment by the end of the year. Aside from apps that essentially replicate ChatGPT, Jacobson believes Therabot would be the first generative AI digital therapy tool of its kind. The team hopes to eventually receive FDA approval; the FDA said in an email that it has not approved any generative AI apps or devices.

With the explosion of ChatGPT’s popularity, some people online have begun testing the generative AI app’s therapeutic skills, even though it wasn’t designed to provide that kind of support.

Daniel Tooker, a neuroscience student at UCLA, has been using ChatGPT to supplement his regular therapy sessions for over a year. He said his earlier experiences with traditional AI therapy chatbots were less helpful.

“It knows what I need to hear sometimes. If I have something difficult I’m going through or a challenging emotion, it knows what words to say to validate how I’m feeling,” Tooker said.

He posted about his experiences on Instagram in February and said he was surprised by the number of responses.

On message forums like Reddit, users also suggest using ChatGPT as a therapist. Last year, a safety employee at OpenAI, which owns ChatGPT, posted on X about how impressed she was with the generative AI tool’s warmth and listening skills.

“For these particularly vulnerable interactions, we trained the AI system to provide general guidance for the user to seek help. ChatGPT is not a substitute for mental health treatment, and we encourage users to work with professionals,” OpenAI said in a statement to NBC News.

Experts warn that ChatGPT can give wrong information or bad advice when it’s treated like a therapist. Generative AI tools such as ChatGPT are not regulated by the FDA because they are not therapeutic tools.

“The fact that consumers don’t understand that it’s not a good alternative is part of the problem and why we need more regulation,” Wright said. “Nobody is checking what they’re saying or doing, whether they’re making false claims or whether they’re selling your data without your knowledge.”

Tooker said the personal benefits from his experience with ChatGPT outweighed the downsides.

“If some employee at OpenAI reads about my random problems, it doesn’t bother me,” Tooker said. “It’s been helpful for me.”
