Why Clinical Accuracy Alone Won’t Make AI Safe for Mental Health

Moving beyond informational accuracy toward empathy, safety, and cultural expansion in the age of mental health AI.

Christopher Grant, MSW, DSW(c)

11/12/2025 · 3 min read

#ResponsibleAI #MentalHealth #SafeInnovation #CulturalExpansion #WorkplaceWellness

The Promise and the Problem

AI chatbots are currently both helping and harming the field of mental health. Chatbots and related tools expand access for people in underserved areas and offer clinicians the promise of reduced burnout, yet these same tools have also caused psychological and physical harm to users.

Harm, whether by humans or AI, may be inevitable, but there are safety measures that can significantly reduce the risk.

“AI in mental health can’t just be accurate; it has to value, respect, and understand how to safely interact with the complexities of the human experience.”

The Access Gap

According to the U.S. Health Resources and Services Administration (HRSA, 2024), more than one-third of the U.S. population, approximately 122 million people, live in areas with a shortage of mental health professionals.

Even for those who do have access, high costs and limited insurance coverage keep care out of reach. High rates of clinician burnout contribute to high turnover, which harms both providers and clients. Clients are left repeatedly retelling their stories and rebuilding emotional trust while coping with the loss of trusted relationships and changes in coverage. Moreover, in some areas, the waitlist to see a mental health clinician can extend beyond three months.

AI chatbots, therefore, provide not only accessibility but also a sense of consistency and familiarity at a low cost. They offer clients a tool that can be accessed anytime and from anywhere, a type of reliability that the traditional mental health support system often cannot sustain.

What AI Still Misses

While recent studies show that AI tools designed for therapy can reduce symptoms of anxiety and depression, increase diagnostic accuracy, and improve client data tracking, these systems often fail to honor the most vital elements of human connection.

Simply put, the psychotherapeutic literature shows that empathy, representation, emotional awareness, and authentic human relationships are essential to healing.

These components, when done well, invite a person, whether patient or client, to collaboratively build a space where their uniqueness can thrive. Furthermore, there is a conscious effort to keep this space free of systemic bias and stereotypical assumptions, and to show clients that healthy relationships are still possible, even after repeated traumas.

“AI becomes safer when it listens first, not when it fully relies on available knowledge or past conversations.”

From Inclusion to Expansion

For AI to cultivate that kind of space, it would have to be human. But that doesn’t mean AI can’t approximate empathy or cultural humility in ways that improve mental health outcomes. Doing so requires AI to move beyond cultural competence or inclusion, and toward what I call cultural expansion.

Inclusion merely anticipates that a diverse range of people will use these tools and trains systems on the basics of welcoming that diversity. Cultural competence assumes an understanding of a culture one has never lived. This is a problem because if AI tools operate as if they understand a user’s lived culture, they could inadvertently cause harm by invalidating that user’s unique lived experience.

Cultural expansion, however, recognizes that each individual expresses culture uniquely and treats lived experience as a powerful tool for healing.

Defining Safe AI

To address the current harm caused by AI in mental health, these systems must be evaluated on their ability to be culturally expansive, to listen first, to prioritize each user’s strengths, and to avoid assumptions disguised as respect.

There are other components to making AI safe. However, if it is truly culturally expansive, then its interactions could fall within the “safe zone” for a variety of mental health contexts. The goal is not to replace human connection but to enhance it responsibly.

About the Author

Christopher Grant, MSW, DSW(c) is a mental health therapist and the Director of Humanity & Wellness Expansion LLC, based in Portland, Oregon. His work focuses on addiction recovery, clinical safety, and advancing responsible AI in mental health.