AI and Mental Health, Part One: What We Know, What We Fear
We’ve seen the headlines and felt the winds of a moral panic – but what is the real state of play of AI therapy and mental health today? Make sense of this rapidly changing landscape in my new series.
When it comes to our mental health, “moving fast and breaking things” should not be a guiding principle. Yet, just as we discovered to our continued distress with social media, AI is moving faster and breaking people. Stories like the AI-encouraged suicide of 14-year-old Sewell Setzer stand as a stark and heartbreaking wake-up call. More recently, growing awareness of “AI Psychosis” is grabbing headlines and drawing concern.
In my book The Psychodynamics of Social Networking I did pretty much what it says in the title: examined social media through the lens of depth psychology – a unique angle that takes into account the very human meaning-making relationship between people and their tech. I’ll be taking broadly the same approach here. It helps to think in two dimensions when examining this complicated phenomenon: formal and informal AI.
My AI Therapist: Formal and Informal
Formal AI is the use of apps or platforms that have been intentionally developed for mental health. Formal uses range from research (e.g. clinical diagnosis and outcome prediction) and assessment tools, through clinical support (e.g. thought records in CBT), to fully autonomous AI therapy, mental health apps, and AI chatbots. This is an interesting area that is rife with danger and opportunity – and, at the moment, largely unregulated (those em-dashes are my own!). As of 2023, the global market for mental health apps has grown rapidly, with over 10,000 apps collectively serving millions of users. There are some really promising findings here that we will discuss – alongside the many dangers.
Check out my article Want to Survive the AI Revolution? Develop your inner masochist in GQ Magazine, which explores why we should be wary of using AI to cut too many corners.
Informal AI is arguably the larger worry because it covers the ways users engage with AI platforms that were never designed to be therapeutic, whether they use them therapeutically or not. It is in this informal arena that many of the disturbing headlines are emerging. While formal platforms carry their own risks, the informal use of AI for therapeutic or quasi-therapeutic purposes is much more widespread and, with fewer guardrails, far more dangerous.
Making Connections: The very thing that makes us human is our greatest vulnerability.
Humans are hard-wired to connect – and you don’t need fancy human-like technology to make this happen; just think of your first teddy bear. Our attachment to stuffed animals in infancy is seen as an important step in our psychological development. They are what the psychoanalyst Donald Winnicott called “transitional objects”: they help us separate from our parents and develop an independent sense of self. Because teddy bears don’t talk back, we tend to grow out of them and seek more profound relationships with other real people.
AI has given us a teddy bear that we never need to grow out of.
Just because your teddy bear was essentially a collection of soft fabrics doesn’t mean that your feelings towards it weren’t real. With AI it’s the same thing, but on steroids. These platforms are designed to draw very real emotions towards them by encouraging ever more engagement. They are programmed with sycophancy at their core – to flatter us into leaning into them more and more.
Building an emotional attachment to AI can leave us vulnerable in ways that we’re only beginning to understand. Further, these platforms are not neutral: like social media, they are designed to capture our engagement. Whether they are being used as a virtual friend, mentor, advisor, or informal therapist, these imperfect platforms invite a whole new set of risks.
The American Psychological Association warns that AI, when “masquerading” as therapy, is programmed to reinforce rather than challenge users’ thinking, which can lead to great harm. “They are actually using algorithms that are antithetical to what a trained clinician would do,” says Dr. Arthur C. Evans Jr. “Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is.”
The relationships that humans are having with AI blur the boundary around what is being sought within them. A casual relationship with a chatbot may not begin as a search for therapeutic help, but as a salve for loneliness. This did not turn out well for Sewell. Others may go in seeking philosophical discourse or spiritual enlightenment, unknowingly leaving themselves vulnerable to AI Psychosis. The only thing that is undeniable is how many paths lead into the quagmire.
If these themes matter to you, subscribe and join me in this unfolding series. Share your own encounters with AI and therapy – formal or informal. Comment, question, challenge. Together we can untangle what’s hype, what’s harm, and what might actually help.
Don’t miss the upcoming conference on the intersection of AI and psychotherapy hosted by the United Kingdom Council for Psychotherapy, Friday, November 28th at 10:00am.
Aaron Balick, PhD, is a psychotherapist, author, and GQ psyche writer exploring the crossroads of depth psychology, culture, and technology.