In today's column, I examine the emerging phenomenon of generative AI and large language models (LLMs) handing out mental health therapy micro-bursts, which is new terminology for the fact that you can get real-time snippets of psychological therapy when using AI. Some are referring to this as cognitive snacking.
In the modern era of AI, you can access generative AI such as ChatGPT and ask for mental health advice, doing so anywhere and at any time of the day or night. Instantly, the AI provides you with a small burst of psychological insight. Compare this to seeing a human therapist. With a human therapist, you typically see them for about an hour, once per week, and that's the extent of your direct interaction to get mental health therapy. The gist is that AI provides micro-bursts of mental health guidance whenever you want, while seeing a human therapist requires scheduling and a limited time window of therapeutic dialogue.
Are mental health micro-bursts good for society or potentially adverse, especially since this is happening on a massive global scale?
Let's talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well-over one hundred analyses and postings, see the link here and the link here.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS's 60 Minutes, see the link here.
Background On AI For Mental Health
I'd like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of 2025 accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.
Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.
Today's generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.
The Norm Of Seeing Human Therapists
Let's explore how people tend to utilize therapy that is provided by human mental health professionals. I will contrast this to how we nowadays use AI for mental health purposes.
Suppose you decide to see a human therapist. The odds are that you will do so once per week. A typical therapeutic session is around 45 minutes to perhaps an hour in length. That is the time window in which you actively carry on a dialogue with the therapist. After the session, you might be given readings to do or be asked to keep a diary about your mental status in preparation for the next session a week later.
If you have a mental emergency during the intervening time, therapists often provide a special means of reaching them or will put you in touch with an on-call substitute therapist. But, other than that possibility, by and large, you don't have much, if any, interaction with the therapist in between sessions. Some are willing to text with you, although this is usually done sparingly. It isn't the norm.
The mainstay is that you get a session of approximately an hour in length on a once-a-week basis to discuss your mental health and receive therapeutic advice. Period, end of story.
AI Provides Micro-Bursts Of Therapy
How do people typically use AI for mental health purposes?
They do so whenever they want. They do so for as long or as short as they want. There isn't a set timetable. No specific time window restricts their access to mental health advisement. A person can use AI every day of the week. Weekdays are fine. Weekends are fine. Daytime is good. Nighttime is good. It's all the time and anytime.
My analysis suggests that people do not focus on one-hour blocks of time. They tend to get in and get out. A "session" might be a few minutes to perhaps 20-30 minutes in length. I'm not saying that people only do short bursts. There are certainly some who will go longer, possibly up to many hours at a time.
On the whole, I would wager that people usually keep their mental health discussions with AI to a relatively brief interval of time. A typical approach might be like this. A person confers with AI for a few minutes on Monday, doing so a couple of times throughout the day. The same happens on Tuesday. Maybe on Wednesday, they have spare time in the evening and continue their dialogue for an hour or so. On Thursday, the person does a quick check-in with AI. And on it goes.
The crux is that AI usage for mental health looks like this:
* Not just once per week, but instead a multitude of times per week.
* Not just for an hour at a time, but instead highly variable from a few minutes to possibly lengthy interactions.
* Not just during normal daytime work hours (which is the case for access to human therapists), but any moment of the day or night.
* Not restricted to just an hour in total per week, but could amount to many hours in total across the span of an entire week.
I have coined the term therapy micro-bursts for this type of AI usage for mental health.
The Role Of Cognitive Snacking
Is the use of micro-bursts for mental health a good aspect or a bad aspect?
That's a tough question to answer on an across-the-board basis. For some people, AI providing these micro-bursts can be quite helpful and uplifting mentally. That's the good news. The bad news is that there is also a chance that micro-bursts are not helpful and could undermine mental health. This is the duality associated with contemporary generic AI (if a person is using a specialized AI that is devised for mental health guidance, the presumed impact is that micro-bursts are good for them).
I've had some tell me that they worry that this is a form of cognitive or psychological snacking. Whereas meeting with a human therapist is thought to be a full meal of therapy, the micro-bursts via AI are construed as snacks. They are short. They are easy.
There is a general view that snacking of any kind, such as grabbing a candy bar at work, is bad for you. On the other hand, if the snack is nutritious, there is an argument to be made that snacking can be extremely beneficial. It all depends on what the snack consists of, how often you rely on it, and so on.
Another perspective on psychological snacking is that it might be fine if done under the supervision of a human therapist. I've previously predicted that we are heading away from the traditional dyad of therapist-client and heading to a new triad, the therapist-AI-client relationship. The idea is simply that savvy therapists are incorporating AI into the therapeutic process; see my coverage at the link here.
Imagine then that a therapist assigns the use of AI to a client and empowers the client to use AI on a snacking or micro-burst basis. This would be the equivalent of hiring a dietician or nutritionist who sets up a means for you to make use of balanced snacks. All in all, we probably wouldn't have heartburn about people using AI in micro-bursts if we knew this was being done under the watchful eye of a human therapist.
Comparing Micro-Bursts To One-Hour Sessions
Can empirical studies possibly reveal the therapeutic impacts of AI-based micro-bursts versus one-hour human therapist sessions?
Kind of.
The problem with performing this type of experiment is that you are going to be comparing apples to oranges. The nature and quality of therapy that a human therapist provides is seemingly on a completely different scale than what you would get from using generic AI. Trying to make a conventional head-to-head comparison is problematic.
One approach is that you could set up an experiment whereby the people in the study are only allowed to use AI as a mental health advisor on a once-per-week basis for one hour. Thus, in theory, you have restricted the AI usage to an equivalent time basis as when seeing a human therapist. That appears to even things out.
The thorny issue is that you are forcing AI usage into a box that is unlike how AI usage truly occurs. It is not the real world. Any arising claims about whether AI usage is not as useful as therapist access are based on a fake scenario. You are tying the AI's hands behind its back. An unfair comparison.
An alternative would be to go in the other direction and have an experiment whereby the people in the study are able to access a therapist at any time of the day, whenever they wish to do so. That's closer to the way that AI usage occurs. In theory, this seems to even things out.
Sorry to say that this is once again an unrealistic scenario. Would the ordinary person have unlimited access to a human therapist? I don't think so. The cost is prohibitive. Maybe a wealthy person could afford this type of situation. But not the average person. It just wouldn't make sense to try to extrapolate from an utterly contrived experimental setup.
Broad Basis Comparison
We can at least conceptually contrast the AI micro-bursts to traditional therapy, doing so via three key factors:
* (1) Temporal structure
* (2) Cognitive mode
* (3) Behavioral mode of care
Let's take a look at each of those factors.
On a temporal basis, here's how the two avenues of therapy compare:
* (1a) Traditional therapy: Fixed cadence, fixed duration, prior scheduling, physical or virtual presence.
* (1b) AI therapy micro-bursts: On-demand, asynchronous, immediate, highly variable duration, no scheduling needed, no session limits.
On a cognitive mode basis, here's a mainstay comparison:
* (2a) Traditional therapy: Tends toward deep reflection, narrative reconstruction, and emotional processing that unfolds gradually, future-oriented.
* (2b) AI therapy micro-bursts: Tends toward tactical regulation, such as calming or grounding, usually narrow in scope, provides rapid cognitive offloading, here-and-now.
On a behavioral mode of care basis, here's a general comparison:
* (3a) Traditional therapy: The therapist establishes pace and structure, professional gatekeeping is occurring, clear role differentiation of therapist and client, and explicit therapeutic goals are being pursued.
* (3b) AI therapy micro-bursts: Usually user-initiated and user-ended, no commitment, typically based on impulsive self-determined need, blurring of the role of the AI as therapist versus companion.
I have been identifying and showcasing these differences throughout my various analyses on the role of AI in mental health.
The World We Are In
Let's end with a big picture viewpoint.
It is incontrovertible that we are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI is being made available nationally and globally, and it is overtly or insidiously providing mental health guidance of one kind or another. It does so either at no cost or at a minimal cost. It is available anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.
We need to decide whether we need new laws, can employ existing laws, or both, to stem the potential tide of adverse impacts on society-wide mental health. The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed. Prevent or mitigate the downsides, and make the upsides as widely and readily available as possible.
A final thought for now.
Albert Schweitzer famously made this remark: "The result of the voyage does not depend on the speed of the ship, but on whether or not it keeps a true course." You might contend that the same is true of using AI for mental health purposes. It's not necessarily whether the pace differs from human-provided therapy; it's a matter of whether it keeps a true course toward mental health and mental wealth. The true course is the metric at hand.