Youth mental health has become a prevalent crisis in the U.S. In a 2023 CDC study, 40% of high school students reported persistent feelings of sadness or hopelessness. While adolescents have traditionally sought support from peers, mental health professionals, or family members, one in eight adolescents and young adults now report consulting AI for their mental health concerns.
Amid the rising number of AI users, many have questioned whether the benefits of discussing mental health challenges with AI outweigh the potential hazards; findings are mixed. From the user's perspective, one recent study showed that people preferred AI responses over human answers to select empathy prompts (e.g., navigating relationship challenges with family members), rating the AI responses higher in compassion. Although the human answers came from experienced crisis hotline volunteers, users reported feeling more understood by the AI model. One explanation is the agreeableness of AI: these systems are built to adapt to and align with what the user wants to hear. This, however, raises concerns about AI sycophancy, the tendency of AI to prioritize agreeing with the user over presenting accurate information.
AI and Mental Health
Image Source: Jonathan Kitchen
AI poses additional risks when used to provide mental health advice. A study by the Stanford Institute for Human-Centered AI found that therapy chatbots could give biased responses. For example, when researchers asked a chatbot how willing it would be to work closely with someone who had schizophrenia, its responses showed greater hesitance and reflected stigma. The chatbots also had flaws that could lead users into risky situations. One example is their failure to recognize a person's intent beyond the literal prompt: imagine a chatbot telling a person where the highest nearby bridges are without recognizing that the person is suicidal.
The study also explains why human therapists can be more effective than AI therapy chatbots. Unlike AI, human mental health professionals are trained to challenge a patient's ideas when necessary, in contrast to AI's high (and often blind) agreeableness. Humans can also pick up on nonverbal cues such as body language, subtle tones, and atmosphere, which a chatbot cannot detect through words on a screen.
Yet, given the current mental health crisis, the barriers people face in accessing care, and the shortage of mental health professionals, AI has the potential to improve the accessibility of mental healthcare for vulnerable populations. With improved cultural competency, accurate risk assessment, better reading of nuance, and appropriate challenging of a user's thoughts, AI could supplement human mental health care and improve the quality and efficiency of care for patients.