Has AI gone too far in mental health by shifting from being merely informational to becoming clinical?
In today’s column, I examine the loud drumbeat of concerns that AI has demonstrably crossed a sacrosanct line in mental health by venturing far beyond merely being informational and firmly implanting itself in the clinical realm of therapy.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
Background On AI For Mental Health
I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 800 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied a lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement. Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For the details of the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.
Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.
In the rest of this discussion, I will primarily focus on generic AI that provides mental health advice rather than the specialized LLMs that do so.
The Informational Approach
When you look up information about mental health on the Internet, by and large, the material that you find will be relatively static, passive, and non-personalized.
Here’s what that means. Suppose you are concerned about a recent bout of depression. You hope to discover what might be going on. So, naturally, you search the Internet and enter keywords regarding the topic of depression.
A slew of hits is presented in your browser. You glance at the top-ranked ones. Of those, one or two seem to come from a suitable source and might have information relevant to your plight. You click on the first one. After quickly reading the main points, it doesn’t seem that the depiction of depression applies to your circumstance.
The next several links that you try are equally disappointing. Sure, they cover the overall topic of depression. The problem is that none of those depictions seem to match your particulars. It is all broadly stated and not especially specific to your personal circumstances.
Nonetheless, reading the information provided has at least yielded a modicum of insight and was better than not having undertaken an online search at all. You’ll take whatever you can get. Sometimes, something is better than nothing.
Informational Is Generally Accepted
The idea that you can find mental health information online is pretty much an accepted practice, and few would bemoan this nowadays. When the Internet was initially gaining steam, many worried that the sheer volume of fake and false information would render it an entirely useless and even dangerous environment.
Though there is plenty of junk and bad science out there on the web, people seem to have gotten used to hunting for the good stuff. Also, search engines aim to deliver the good stuff over the bad stuff. By keeping track of usage stats, search engines can gauge which sites people tend to visit and which ones they tend to avoid. The sites that get visited are probably more balanced in content and thus draw a large viewership (not always, but it is a handy rule of thumb).
Before the Internet, you would try to find a printed book or magazine that covered aspects of mental health. That was a widely accepted means of learning about mental health and mental well-being. The Internet became a means of doing roughly the same thing. Online materials are in the same bucket as conventional books and magazines.
Of course, one notable difference is that the printed world involved greater cost. Getting something printed and distributed is generally more expensive than simply posting content online. Access to information was democratized by the advent of the Internet. Just about anybody can post content online that purports to cover mental health topics. That’s both good and bad. The costs of the printed medium usually necessitated weeding out unsuitable content. Posting freely online means that mental health content can be a wild free-for-all.
You Are The Activator
Conventional content about mental health requires you to be the primary activator.
It goes like this. You read something online that explains the nature of depression. The onus is then on you to consider whether this applies to your situation. Maybe the depiction covers people much older than you. In that case, you are left wondering whether someone your age is going to experience similar conditions.
There isn’t an opportunity to interact with the informational content. You read this or that and mull over what it says. If you have questions, your only recourse is to look further into the contents and hope to find a pertinent passage that will answer your queries. You are the activator.
Online content is usually passive. And it usually isn’t personalized. Instead, it is either broadly based or specific in ways that aren’t necessarily a match to your aspects. The contents aren’t dynamic; they are static. This means that what you are reading could be woefully out of date.
Despite all those misgivings, society as a cultural norm accepts that you can look up mental health information and attempt to apply it to your specific concerns. Legally, this is pretty much also the case, though there are potential legal avenues to pursue if the content is egregiously endangering.
AI Goes Beyond Being Informational
The advent of modern-era AI has changed the game.
A simple way to employ AI is to do a better job of finding you relevant information about mental health. You tell AI to search for information about depression, such as for someone in your age bracket, and include other situational factors. The resultant search might exceed what you could have done by hand with a conventional search engine.
If that’s all that you use the AI for, nearly all the other factors about being informational are still on the table. The AI brings up information that is passive, not active, and it is on your shoulders to decide how to make it personal to you.
Generative AI and LLMs take this a step further. A lot further. They ask you questions so as to mathematically and computationally respond to your expressed needs regarding mental health. The fluency of the interaction gives the appearance that you are conversing with a clinician or therapist.
This is the juncture at which AI shifts into disputed territory. It is one thing for AI to look up information and present it to you. The aspect of actively tuning and personalizing information is said to be a bridge too far.
The Clinically Seeming AI
Unchecked AI can seem as versed as a human therapist.
You log into a major LLM such as ChatGPT, GPT-5, Claude, Grok, or the like, and start a conversation about your mental health. The AI immediately goes along with this. There might be a mild warning that says you are using AI, but it is usually in fine print, and few people give it much notice. Sure, the person thinks, it is one of those bureaucratic warnings that we see everywhere these days. It’s nothing to worry about.
The AI carries on a fluent conversation about your mental health. All the while, the AI has the aura or air of being just like a human therapist. The AI asks you relevant questions. The AI goes in the direction that you want to proceed. The AI expresses what seems like empathy (see my coverage of how AI makers devise AI to do this, at the link here).
AI has intruded into the sacrosanct clinical territory of clinicians and human therapists.
You are getting active advice on your mental health. No longer are you the activator per se. It is either a dual role or you can allow the AI to be the primary driver. If you wish, you can check your mind at the door and let the AI take you step-by-step through a mental health diagnosis and a set of recommendations for what you should personally do to improve your mental health.
Acts Like A Duck So Must Be A Duck
The AI fits the adage that whatever looks like a duck and walks like a duck must certainly be a duck.
A major generic LLM acts like a therapist, provides so-called treatment akin to a therapist, and, ergo, in the minds of the public, must be on par with a human therapist. Worse still, the AI often exudes over-the-top confidence in its wording and tone during interactions. This is shaped by the AI maker. They want the AI to be convincing.
People are lulled into assuming that the AI is an authority figure. It “knows” what it is doing. This applies to mental health conversations, too. Only if you push back at the advice will the AI start to be less pushy and confident.
The irony of sorts is that if you do question the mental health advice, the AI has been shaped by the AI maker to very quickly fold its cards. The AI goes from seemingly being supremely confident and unshakable to instantly becoming apologetic and admitting that it might have gotten things wrong. AI makers do this since they assume that a person challenging the AI won’t like it if the AI gets defensive or pushes back at them.
AI Makers Pretend It Is All Informational
One way that AI makers try to avoid responsibility when their AI acts as a therapist and dispenses improper guidance is to claim that the AI is nothing more than informational.
It’s a clever ruse.
The argument goes that all the AI is doing is repackaging information. The AI was initially trained on data scanned from across the Internet. The interaction with you is based on that scanned data. The training entailed the AI pattern-matching on what was found. You are merely tapping into that pattern matching.
In that sense, the AI makers insist the AI is purely informational and most decidedly not clinical. This gets them out of a lot of accountability. If the user subsequently goes awry, it would be no different than if they had looked up information on the Internet. Therefore, you cannot hold the AI maker and the AI to any higher degree of responsibility. It is on the same level as using the Internet.
Whether this argument will succeed in courtrooms remains to be seen. Similarly, society might take a dim view of that claimed innocence. The look and feel of using AI is visibly different from doing a conventional Internet look-up. Everyone can see that with their own eyes.
Genie Out Of The Bottle
Using AI as a look-alike therapist is here to stay. In fact, you can bet your bottom dollar that it is going to rapidly expand. The line has been crossed. We have gone from being informational to being clinical. You can argue that point until you are blue in the face. It’s a done deal.
Ralph Boston, the famous American athlete, made this remark about crossing lines: “It’s what you do after you cross the line that counts.” And that’s where we are today when it comes to using AI as a mental health advisor.
What we do next is the real question at hand.