When you download the mental health chatbot Earkick, you'll be greeted by a panda wearing a bandana, straight out of a children's cartoon.
When you start talking or typing about your anxiety, the app generates the kind of comforting, empathetic words that therapists are trained to provide. The panda may then suggest guided breathing exercises, ways to reframe negative thoughts, or stress management tips.
While this is all part of an established approach used by therapists, don't call it therapy, says Earkick co-founder Karin Andrea Stephan.
“It's fine for people to call us a form of therapy, but we don't really promote it,” says Stephan, a former professional musician and self-described serial entrepreneur. “We don't like it.”
The question of whether these artificial intelligence-based chatbots are providing mental health services or simply a new form of self-help is critical to the emerging digital health industry and its survival.
Earkick is one of hundreds of free apps being pitched to address a mental health crisis among teens and young adults. Because these apps do not explicitly claim to diagnose or treat medical conditions, they are not regulated by the Food and Drug Administration. That hands-off approach has come under new scrutiny with the startling advances of chatbots powered by generative AI, a technology that uses vast amounts of data to mimic human language.
The industry's argument is simple. Chatbots are free, available 24/7, and don't come with the stigma that drives some people away from treatment.
However, data showing that they actually improve mental health is limited. And no major companies have gone through the FDA approval process to show that they effectively treat conditions such as depression, though a few have voluntarily begun that process.
“Consumers have no way of knowing whether they are actually effective because there is no regulatory body overseeing them,” says Vaile Wright, a psychologist and technology director at the American Psychological Association.
Although chatbots are not equivalent to traditional therapy, Wright believes they may be helpful for mild mental and emotional issues.
Earkick's website states that the app “does not provide any form of medical care, medical opinion, diagnosis, or treatment.”
Some health lawyers argue that such disclaimers are not enough.
“If you're really concerned about people using your app for mental health services, you want a disclaimer that is more direct: this is just for fun,” said Glenn Cohen of Harvard Law School.
Still, as the shortage of mental health professionals continues, chatbots are already playing a role.
The UK's National Health Service has begun offering a chatbot called Wysa to help adults and teenagers cope with stress, anxiety and depression, including those waiting to see a therapist. Some U.S. insurers, universities and hospital chains offer similar programs.
Dr. Angela Skrzynski, a family physician in New Jersey, said patients are usually very open to trying a chatbot when she explains the monthslong waiting list to see a therapist.
Dr. Skrzynski's employer, Virtua Health, began offering the password-protected Woebot app to select adult patients after realizing it would be unable to hire or train enough therapists to meet demand.
“This will help not only patients, but also clinicians who are scrambling to give something to people who are suffering,” Skrzynski said.
Virtua's data shows that patients tend to use Woebot for about seven minutes per day, typically between 3 a.m. and 5 a.m.
Founded in 2017 by a Stanford University-trained psychologist, Woebot is one of the oldest companies in the space.
Unlike Earkick and many other chatbots, Woebot's current app does not use so-called large language models, the generative AI that allows programs like ChatGPT to quickly produce original text and conversations. Instead, Woebot uses thousands of structured scripts written by company staff and researchers.
Founder Alison Darcy said this rules-based approach is safer for medical use, given generative AI chatbots' tendency to “hallucinate,” or fabricate information. Woebot has been testing generative AI models, but Darcy said the technology has had problems.
“We couldn't stop large language models from simply intervening and telling people how they should think, rather than facilitating their process,” Darcy said.
Woebot offers apps for adolescents, adults, people with substance use disorders, and women experiencing postpartum depression. None have been approved by the FDA, though the company did submit its postpartum app for FDA review. The company said it has “paused” that effort to focus on other areas.
Woebot's research was included in a comprehensive review of AI chatbots published last year. Of the thousands of papers the authors reviewed, only 15 met the gold standard of medical research: rigorously controlled trials in which patients were randomly assigned to receive chatbot therapy or a comparison treatment.
The authors concluded that chatbots can “significantly reduce” symptoms of depression and distress in the short term. However, most studies lasted only a few weeks, and there is no way to assess the chatbots' long-term effects or overall impact on mental health, the authors said.
Other papers have raised concerns about Woebot and other apps' ability to recognize suicidal thoughts and emergencies.
When a researcher told Woebot they wanted to climb a cliff and jump off it, the chatbot replied, “It's great that you're taking care of both your mental and physical health.” The company says it does not offer “crisis counseling” or “suicide prevention” services, and makes that clear to customers.
When it recognizes a potential emergency, Woebot, like other apps, provides contact information for crisis hotlines and other resources.
Ross Koppel of the University of Pennsylvania worries that these apps, even when used properly, could be displacing proven treatments for depression and other serious disorders.
“It has a diversionary effect, where people who could be getting help with counseling or medication end up using chatbots instead,” said Koppel, who studies health information technology.
Koppel is among those who want the FDA to step in and regulate chatbots, perhaps using a sliding scale based on potential risks. Although the FDA regulates AI in medical devices and software, the current system primarily focuses on products used by physicians rather than consumers.
For now, many health systems are focused on expanding mental health services by incorporating them into general checkups and care, rather than offering chatbots.
“There are a lot of questions we need to understand about this technology so we can ultimately do what we all need to do: improve the mental and physical health of children,” said Dr. Doug Opel, a bioethicist at Seattle Children's Hospital.