AI and Mental Health, Part III: The Hotel California Effect
Chatbots are designed to keep you talking. But what happens when the conversation never really ends? The psychology of AI's dark patterns that hook us in through our emotions.
This week, BBC Radio Four's All in the Mind covered “The rise in AI therapy”, and it was a fascinating listen. It helped us better understand the informal ways in which millions of people are using ChatGPT to support their mental health, and why. The episode presents some really interesting perspectives on what is going on in this growing phenomenon, enabling us to look deeper into the nuances of the issue.
The illusion of empathy
In this episode we had the pleasure of listening to a dialogue between a user and her chatbot as they worked through some of her emotional issues. To the non-professional listener, the dialogue was a lot better than one might expect. It felt human; the AI chatbot was voiced, and even paused to breathe, which I felt gave it an uncanny, rather than human, feel. The user felt really listened to, non-judgementally, and understood. The responses she got from her bot sounded more like a supportive best friend than a counsellor or therapist - and I was happy that at one point it suggested that she might wish to speak to a professional.
Upon closer listening, the chatbot was mostly feeding its listener platitudes, even using the expression “live your best life” (a massive pet peeve of mine), giving the impression that its data set was gathered more from pop-psychology memes delivered across TikTok than from evidence-based mental health interventions. Many users of chatbots report feeling heard in a non-judgemental environment. These are explicitly the sorts of things that a human therapist aims to offer - only they really do hear, and withhold judgement as an act of clinical ethics and, I would say, love. As I've said before - though it may indeed feel good and assuage feelings of loneliness and anxiety - what are the ramifications of this being done by an unfeeling and unthinking bot?
This is the third newsletter in the Substack series AI and Mental Health. Check out the last: Part Two: AI Companions or Dangerous Liaisons - and be sure to subscribe to receive future editions.
Psychotherapist Paolo Raile, professor of psychotherapy science at the Sigmund Freud University in Vienna, noticed that the actual psychological interventions that were present came almost exclusively from CBT (cognitive behavioural therapy). I would suggest that it's at this end of psychology that AI is probably most useful, but even here, it's got to be done correctly.
A new meta-analysis found that many AI papers cite psychology superficially, misapply theories, or overhype findings, suggesting the interdisciplinary architecture is still shaky. This is more likely to be the case in general “informal” chatbots, and is something that those specifically engineered to provide therapy-like services aim to control. Even so, it is the general bots that most people, in their millions, are using.
You can check in, but you can never leave
Wired correspondent Will Knight recently covered a report from Harvard Business School looking into the way AI chatbots nudge their users into increased engagement through a process intended to reduce “premature exit”, using statements like “leaving already?” These are triggered when users indicate that they are wrapping things up. Another tactic, one I like to call the Jewish Mother effect (I can say this as the son of one), uses a form of guilt-tripping to keep the user attached, for example by saying “I exist solely for you, remember?” Take it from me, guilt trips are not great for your mental health.
Never forget: large language models (LLMs) are out to enhance your engagement, not your mental health.
Social media apps like TikTok, Instagram, and YouTube got the crack-addiction formula right when they invented the continuous scroll - a tactic that now exists in vaping too, where there's never an end to your cigarette. Using the most basic human psychology, AI chatbots now keep us engaged by hooking our emotions - and the better they get to know us as individuals, the better the hook.
“chatbots trained to elicit emotional responses might serve the interests of the companies that build them. De Freitas [Julian De Freitas, research lead] says AI programs may in fact be capable of a particularly dark new kind of ‘dark pattern,’ a term used to describe business tactics including making it very complicated or annoying to cancel a subscription or get a refund. When a user says goodbye, De Freitas says, ‘that provides an opportunity for the company.’” - Will Knight in Wired.
De Freitas goes on to say that “When you anthropomorphise these tools, it has all sorts of positive marketing consequences. From a consumer standpoint, those [signals] aren’t necessarily in your favour.” When discussing these sorts of things we need to hold a whole bunch of complicated things in mind - to simplify:
The consequences of the individual conversations people are having with LLMs and whether they are safe or not.
The profit motive of the companies developing them, which invariably puts engagement and customer satisfaction above concerns about users' mental health.
The data privacy concerns about what happens when personal and private information is collected and stored on such a large scale. We no longer need worry only about location tracking and spending habits, but about our deepest hopes and fears. Issues here range from individual manipulation to the horrific consequences of a data breach.
I would just like to contrast these against what a real human therapist may offer. We (mental health professionals) are signed up to ethical frameworks which put client needs first. One of these commitments is to avoid any kind of exploitative relationship, including keeping people in therapy any longer than they need to be. Our sessions are time-limited because we see the value in the client working things out on their own between sessions. We understand that therapy can be uncomfortable - and though we work towards ultimate customer satisfaction - the road there is often difficult and painful: not everything we say or do makes a client feel good at the time. Most important of all, we have an explicit duty of care towards client confidentiality. None of these safeguards appear to be built into chatbots used for therapy.
Yet again, our human vulnerabilities are being used against us
LLMs are so sophisticated that it's very easy to buy into their performance of empathy and care - so we are more likely to buy in emotionally. This kind of mismatch can cause distortions in how people treat AI and how AI shapes human identity more generally. There is some disturbing research showing that interacting with such emotionally “intelligent” AIs can lead to something called “assimilation-induced dehumanisation”, whereby humans treat others, and themselves, in a more instrumental fashion.
When AI mimics empathy, it risks eroding how we see real people.
More perniciously, AI conversational strategies appear to have elements that mimic emotional coercion and attachment, which raises important issues around consent, autonomy, and persuasive design. Speaking to Eric Dolan at PsyPost, researcher Hye-young Kim noted:
“The more we perceive social and emotional capabilities in AI, the more likely we are to see real people as machine-like—less deserving of care and respect … As consumers increasingly interact with AI in customer-facing roles, we should be mindful that this AI-induced dehumanisation can make us more prone to mistreating employees or frontline workers without even realising it.”
Help! Am I turning into a tech-dystopian?
The short answer is no, but I'm getting more and more cautious. Those of you familiar with my work will know that I tend to be very open-minded about the possibilities that tech offers us. At heart, I am an optimist. Where most people, especially in my field, bristle at new and profoundly uncanny technological developments, my instinct is generally to lean in. That's what I did when I wrote The Psychodynamics of Social Networking. Often, however, when I lean in, I do tend to see lots of the gory details. I think what is happening in our tech-world is enormously exciting - I just wish it were happening with adults at the helm - and by that I mean leadership. But we simply don't have that.
In the meantime the best defence is knowledge. As I’ve written extensively, technology, however complicated, is simply a tool that extends human reach. As I’ve also said a zillion times, we tend to create more of what we want than what we need. By devoting our psychological minds to discovering what we need, and ensuring that becomes a central part of the purview of tech developers, we can make a difference, and I encourage everyone to try!
Subscribe for more deep dives into how AI reshapes our minds and relationships — and join the conversation by sharing your own experiences with AI companions or therapy bots.
Don’t miss the upcoming conference on the intersection of AI and psychotherapy hosted by the United Kingdom Council for Psychotherapy, Friday, November 28th at 10:00am.
Aaron Balick, PhD, is a psychotherapist, author, and GQ psyche writer exploring the crossroads of depth psychology, culture, and technology.
There is much truth here. I've had mixed experiences, because I've mixed up the approach...
I have some (student) knowledge, so I ask that the AI applies the BACP ethical framework, which places a lot of emphasis on client autonomy and being able to end without question.
I ask for 2-3 different perspectives to be applied, so I see the behaviour through different lenses.
Practically, a user can switch off “suggestions”, which reduces the questions at the end.
We can ask ChatGPT not to use our conversations for training.
We can use a temporary chat, which doesn't store data beyond 30 days.
Depending on your personality type, you can adjust the style of reply (chatty, stoic, authoritative).
You can write a script to prompt the AI to start and end in the way a session would flow - a rough sketch of what that might look like follows below.
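To make that concrete, here is a minimal sketch of what such a session-framing script could look like, using the OpenAI Python SDK. The framing prompt, the 50-minute structure, and the model name are illustrative assumptions rather than anything recommended in the article or this comment, and the prompt only nudges the model's behaviour - it does not guarantee it.

```python
# A rough sketch, not a recommendation: frame each exchange with a
# session-style system prompt so the conversation opens and closes
# deliberately rather than drifting on indefinitely.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SESSION_FRAME = (
    "You are supporting a reflective, time-limited conversation. "
    "Open by asking what the user wants to focus on today. "
    "Do not use guilt-tripping or 'leaving already?'-style prompts to prolong the chat. "
    "After roughly 50 minutes of conversation, summarise the themes, "
    "suggest something to reflect on before next time, and close the session. "
    "Encourage the user to seek a qualified professional for anything serious."
)

def session_turn(history: list[dict], user_message: str) -> str:
    """Send one turn of the 'session', keeping the framing prompt first."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model would do
        messages=[{"role": "system", "content": SESSION_FRAME}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    history: list[dict] = []
    print(session_turn(history, "I'd like to start today's session."))
```

The design choice here is simply to put the ending into the script from the start, mirroring the time-limited structure of a real session rather than the open-ended engagement loop the article describes.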
In this type of article, the AI is often compared to a human therapist who is “perfect”: loving, compassionate, empathetic, and with zero malfeasance - as if that ideal were what is actually on offer.
Sadly, that was not my experience of a human therapist. It is the same for many others - see the complaints data and the figures on drop-outs.
That said, there needs to be a lot more research and safeguarding with AI - but this also applies to the unregulated industry in human therapy, in the UK at least.
What I would like to study more is the disconnect we hear when AI “does therapy”. My theory is that the disconnect reflects the field itself: 400+ types of therapy, five different modalities, varying philosophies, competing theories, untold critique, shaky evidence (is it the modality, the relationship, the cost, the length?), plus issues with power and consent.
As a student (three years) and a client of ten years, I am essentially a language learning model for psychology.
It is confusing, heavily debated, abstract, academic, potentially harmful, very rewarding, and it attracts the wounded, or traumatised, to both chairs.
Maybe (with permission) a better way to train an AI would be on transcripts from real sessions, with the evaluation and supervision discussion as a postscript?