When it comes to training artificial intelligence large language models (LLMs), the quality of their output is directly related to the quality of the content they are trained on.
Health & Wellness Design Assistant Professor Edlin Garcia, Ph.D., is co-principal investigator (co-PI) on a research project titled "Designing Accountable Mental Health Large Language Model Therapy Software," which was recently awarded a two-year, $299,524 grant from the U.S. National Science Foundation (NSF).
Garcia will work directly with Sagar Samtani, Ph.D. (PI), Associate Professor of Information Systems at Kelley's Data Science and AI Lab (DSAIL), and Bernice Pescosolido, Ph.D. (co-PI), Distinguished Professor in the Department of Sociology and Founding Director of the Irsay Institute, to develop text-based LLM software that will assist trained mental health professionals in assessing patient needs and prioritizing high-risk cases.

"Therapists in general are so overwhelmed because there is such a high demand for mental health services," says Garcia. "The goal of this technology is not to replace a therapist but rather provide assistance to them and greater accessibility to those in need."
After graduating in May 2023 with her Ph.D. in Public Health, Garcia began her role as assistant professor and met Pescosolido through the IU Enhanced Mentoring Program with Opportunities for Ways to Excel in Research (EMPOWER) as a matched mentor. Pescosolido partnered with Garcia to develop a Mental Health Artificial Intelligence Pal (MHAI-Pal) research app that, in tandem with Samtani and his lab, is serving as the springboard for this LLM training software.
"The success of LLMs all depends on what they are being trained by," says Garcia. "For example, if they are being trained on social media posts, results will vary."

A major aim of the research project is to hold LLM software accountable by feeding it the standard ethical codes and guiding principles common to mental health professionals, so the LLM avoids giving out incorrect, or even life-threatening, advice.
"There is a very real risk in terms of what recommendations are being given, which wouldn’t be happening if you were sitting in front of someone who has the correct qualifications and understands your history," says Garcia. "Making the software more accountable by feeding in rules that humans abide by to ensure the safety of the people who are engaging in these services is our main goal."
Having volunteered for the National Crisis Text Line, Garcia says many of the cases she handled involved common stressors, such as a breakup or feeling overwhelmed by professional and/or academic responsibilities.
"In many cases, people just needed to hear a validation of what they were going through and assistance for getting out of that circumstance, versus someone who has severe symptoms of depression and anxiety," says Garcia.
For cases determined to be more serious, the LLM software would be programmed to connect the user to a mental health professional accessible to them, considering factors such as health insurance status. Deidentified recorded conversations between patients and therapists could also be used to train the LLM to use vernacular that is more human and compassionate, but Garcia says one challenge the team currently faces is how much of that data will be available to them under HIPAA and other privacy laws. Garcia credits DSAIL Ph.D. student Aijia Yuan, whose knowledge of LLMs and deep learning has been invaluable to developing this mental health resource.
"I will also be responsible for the evaluation portion of the project," says Garcia. "When we are training the LLM and reviewing what its outputs are, we will make sure to take it back to our participating practitioners and have them review the content to ensure it is appropriate feedback for a given situation."

While the program will initially be text-based, Garcia hopes that in the future the LLM can include visuals for a more personalized user experience.
"There is so much that is missing when you don’t have the tonality," says Garcia. "Hopefully for this first round we can refine the text-based software, and accountability is key … there is no perfect solution for everyone but this is just one more option to make mental health resources more readily accessible."
To read more about how the SPH-B community is changing the face of public health, visit go.iu.edu/48bx.

