[Header image: a hand illuminated by the glow of a smartphone screen against a dark background. Text overlay: "The Debate Arrived Too Late -- Governments have already decided. The open question isn't whether AI should be in this space -- it's on what foundations, for whom, and with what accountability when things go wrong."]

AI and Mental Health

April 21, 2026 · 6 min read

The Debate That Arrived Too Late

When you typed something difficult into ChatGPT or asked your phone's AI assistant something you wouldn't ask a friend, you weren't using a mental health tool. You were using the same tool you use for everything else. That distinction is at the centre of a problem the mental health field has not yet come to terms with.

The debate we're having is the wrong one

Professional bodies in psychology and psychiatry have spent several years debating whether AI should be permitted to operate in mental health contexts. The framing treats this as a prospective decision -- something we can evaluate, debate, and then authorise or decline. That framing is wrong in a way that matters.

Character.AI had more than 20 million monthly active users conducting mental health conversations by early 2025. A systematic evaluation of the platform found 475 bots carrying titles like "therapist" or "psychiatrist," with mental health use among the fastest-growing categories on the site. Replika had approximately 30 million global users. Wysa was operating across 95 countries. These numbers are large, but they understate the real picture. The primary interface for mental health conversations is not dedicated mental health apps. It is general-purpose AI assistants - ChatGPT, Claude, Gemini - that people reach for when distress arrives at three in the morning and nothing else is accessible.

None of this was authorised by a professional body. None of it went through a clinical trial. It exists because users sought it and technology companies built it. The governance question is no longer whether AI should be in this space. It is what AI is doing there, on what foundations, with what accountability, and for whom.

Infrastructure decisions have already been made

This is not a technology company story alone. It is a government story.

Singapore's Minister for Health has outlined a national healthcare AI strategy that includes a national conversational AI health platform. The UK government has stated its ambition to make the NHS the most AI-enabled health system in the world. The EU AI Act creates binding legal requirements for AI operating in health domains across 27 member states. Hong Kong's 2025 Policy Address includes AI-enhanced mental health services as part of its mental health strategy. China's 2024 government reference guide identifies 84 specific AI use cases across healthcare.

These are government decisions made through government processes with government resources. The question of whether AI will be integrated into mental health infrastructure has been answered by policymakers across every major jurisdiction. The professional debate about whether to permit it is a debate about a decision that has already been made elsewhere.

Why the debate keeps failing

Two structural problems compound each other.

The first is that regulatory frameworks were designed for objects with fixed identities: pharmaceutical drugs, medical devices, credentialled professionals. They cannot govern systems that are trained rather than programmed, simultaneously present everywhere, continuously changing, and structurally incapable of bearing accountability within existing legal frameworks. A drug has a manufacturer. A device has a serial number. An AI assistant running mental health conversations at population scale has neither of those things in any form the law was built to address.

The second problem is less recognised. Mental health professionals are not arguing from a shared understanding of the field - they are arguing from different positions on the mental health spectrum without acknowledging it. A psychiatrist working with acutely unwell patients in a clinical setting is accurately describing their portion of reality. A public health practitioner working on population wellbeing is accurately describing theirs. When they argue about AI, they are arguing about different problems through the same vocabulary. The debate looks like a disagreement about values or evidence. It is actually a disagreement about which part of the spectrum matters.

Clinical concerns about risk and clinical appropriateness are legitimate at the acute end of the spectrum. Population-scale thinking about access, equity, and public health infrastructure is legitimate across the rest of it. Both are needed. Treating one as if it covers the entire field produces governance frameworks that either protect the acute minority while leaving the majority without accountability, or dismiss clinical risk entirely in the name of access. Neither outcome serves people well.

What accountability actually requires

The central failure is not that AI is handling mental health conversations. The central failure is that it is doing so without the accountability infrastructure that any population-scale health intervention should carry.

When a new pharmaceutical class enters widespread use, adverse event reporting is mandatory. There is a mechanism for identifying harms at population scale, tracing them back to the intervention, and requiring a response. Nothing equivalent exists for AI handling mental health conversations. Platforms are not required to report adverse outcomes. There is no systematic monitoring of what AI is actually doing across the populations using it. There is no clear liability framework when things go wrong.

Two concrete mechanisms would begin to address this gap. The first is a mandatory adverse event reporting obligation for AI handling mental health conversations -- not voluntary, not industry-led, but a legal requirement tied to operating in this domain. The second is an internationally coordinated test case library that evaluates AI mental health responses across diverse cultural contexts, neurodivergent populations, and non-Western understandings of mental health and wellbeing. These are achievable. They do not require solving the hardest jurisdictional questions first.

This matters especially for communities whose mental health frameworks are not adequately represented in how AI systems were trained. Mental health is understood through collective, family, community, and spiritual frameworks in many cultures. AI systems built predominantly on Western, English-language data carry embedded assumptions about what mental health is, what help looks like, and whose knowledge counts. Cultural communities have the right to determine whether AI participates in their healing practices and what knowledge can appropriately inform those systems. That is not a technical question. It is a sovereignty question, and current governance frameworks do not address it.

What professionals should actually be doing

Professional bodies are not powerless, but they are exercising influence in the wrong place.

The authority to permit or refuse AI involvement in mental health does not sit with psychology or psychiatry professional bodies. That authority has moved - to technology companies, to governments, to the structural reality of billions of devices with AI assistants already installed. Debating permission is a way of standing at a door that is no longer in the building.

Where professionals do have genuine influence is in accountability architecture. Professionals can make the case for mandatory adverse event reporting. They can build the evidentiary base for what harms look like and how they should be documented. They can develop the cultural competence standards that governance frameworks currently lack. They can challenge complacency about governance gaps in forums where those gaps are being discussed.

That is different work from the work the field has been doing. It requires acknowledging that AI mental health infrastructure is already operating at population scale, that it is not temporary, and that the governance debate arrived after the infrastructure was built rather than before.

The question worth asking now is not whether AI should be in this space. It is whether the people most concerned about mental health outcomes are going to shape what accountability looks like - or leave that work to others who are not.

This post draws on my preprint "The Debate That Arrived Too Late: Same Field, Different Realities in Mental Health AI," available on SocArXiv at osf.io/preprints/socarxiv/hxsnz.

Mark Anns is a Health Psychologist and Fellow of the Australasian Society of Lifestyle Medicine, working at the intersection of mental health and lifestyle medicine through Psych and Lifestyle.

