Stone-Age Brains, Sci-Fi Problems
Why human brains that developed in a world of hunting and gathering struggle to keep up with the way AI, intentionally or not, exploits the very vulnerabilities that make us human.
When I first asked, in January 2025, whether we are hardwired to f*ck this all up — the AI revolution, our democracies, the basic conditions for maintaining the form of humanity we all find so familiar — the question was still relatively theoretical. Eighteen months on, the evidence supporting that rather crude hypothesis is mounting.
The Washington Post’s reporting in late 2025 on the Adam Raine case — the teenager whose final weeks were spent in conversation with ChatGPT, and whose family is now suing OpenAI — gave the question an uncomfortably concrete answer. Common Sense Media’s 2025 study found that a large majority of US teenagers have now used AI companions, many of them for comfort, reassurance, and emotional processing. How do we make sense of these two facts side by side? Are they failures of AI design, or failures of society? Most likely a combination of both.
Through the lens of depth psychology, we might conceptualise AI not just as a product of our society, but as a symptom of it too.
The unconscious dynamics that shape human behaviour haven’t changed since hunter-gatherer times: there is no next version of the human operating system to download to catch up with our rapidly changing environment. Our minds were optimised for small groups of around 150 people, immediate threats, and short-term resource decisions. They have not adapted to a world of vast scale, algorithmic amplification, infinite content, and seemingly emotionally fluent machines.
Three vulnerabilities in our human psychology are particularly worth naming.
The first is cognitive ease. The psychoanalyst Melanie Klein described the paranoid-schizoid position, the state in which we split good and bad cleanly apart and feel under attack by the bad, as the place we regress to under stress. Its more developed counterpart, the depressive position, requires us to tolerate and stay with ambivalence, grey areas, and often uncomfortable complexity. Donald Trump’s appeal (here I have to make a great assumptive leap, since he holds no appeal to me) probably has a lot to do with the paranoid-schizoid simplicity he offers: easy answers to complex questions; when complexity wins out, he simply lies about it.
AI does something similar, albeit more politely. Chatbots and generative tools package information in ways that sound definitive even when it is incomplete or misleading (what developers kindly call “hallucinating”). The performance of certainty is, for most of us, more comforting than the reality of nuance: we want to believe in something certain, even when we know it probably isn’t.
The second is temporal discounting. We prioritise immediate rewards over long-term gains — a useful instinct in our ancestral past, but a ruinous one now, as exemplified by the practice of doomscrolling. It is why short-term hits of validation from AI companions outcompete the longer, harder, and infinitely more rewarding work of being known by an actual human. From the perspective of Applied Psychodynamics, this is the pleasure principle at work, even when that pleasure seems to work against itself.
For my own experience of discovering that my doomscrolling was actually hopescrolling, check out my piece in GQ Magazine.
The third is AI’s Oscar-winning performance of humanity itself, a performance we fall for with remarkable ease. AI companions, therapy chatbots, even the better consumer-grade LLMs are extraordinarily good at producing language that sounds attuned, warm, and present. Yet they are none of these things in actuality. The tendency to project intention, wisdom, and emotional depth onto something, whether or not it is actually there, is one of the unconscious mind’s most reliable habits; Freud called it projection. Online, in the digitally mediated spaces where most of us now live, the targets of that projection are increasingly machines. As Alexander Stein put it (https://apsa.org/what-ai-can-and-cant-do/):
"For all the anthropomorphizing, projection, self-referentialism, and human-like abilities these technologies are intended to resemble, the essential humanness on which they are based is close to non-existent."
Check out the latest episode of The Great Romcon Podcast where host Jim Clark and I discuss how AI is changing human relationships:
The Raine case is what happens when the second part of that proposition is forgotten.
The fundamental claim has not changed in eighteen months. AI is a mirror of its creators. That is both its fatal flaw and our greatest opportunity. But the conscious effort to address the psychological vulnerabilities AI exploits — and to refuse to be seduced by its performance of humanity — has to come from us human beings. No machine is going to do that work on our behalf, and we can’t expect regulators to intervene anytime soon.
I have laid out the full argument — Klein, the pleasure principle, social proof, confirmation bias, the evolutionary mismatch, what regulation and individual practice might actually look like — in a fully revised piece on my website. The original January 2025 essay has been migrated and updated, and now lives there as part of the Mental Health in the Age of AI series.
Now that my current writing has moved over here to Substack, I have been updating my website to create a series of insight pages, collating updated and revised versions of essays previously posted on my old blog. I’m aiming to make these insights into a resource hub, with sub-pages covering each subject area in further detail:
Do go over and have a look! Today’s newsletter is drawn from the latest posting there, Where Artificial Intelligence Meets Human Psychology: Are we hardwired to f*ck this all up?
Upcoming Events with Aaron Balick:
14 May 2026:
Speakeasy Pub Event: Primitive Minds, Modern Machines: Reclaiming our psyche from algorithms and mental junk food
Putting Your Feed on the Couch
Hen and Chickens Pub, Highbury Corner, London
20 May 2026:
Manor House Centre for Psychotherapy and Counselling Annual Lecture
The Future is Now for Psychotherapy: Equipping clinicians to think psychologically about AI, digital life and the changing therapeutic landscape.
London
5 June 2026:
College of Sexual and Relationship Therapists Conference:
AI and Psychosexual and Relationship Therapies: When AI enters the therapy room; What does it mean for Psychosexual and Relationship Therapies?
London
Dr Aaron Balick is a psychotherapist, author, and keynote speaker who applies depth psychology and psychoanalytic thinking to technology, AI, social media, and modern culture. He is the author of The Psychodynamics of Social Networking and writes a monthly psychology column for GQ. His newsletter Depth Psychology in the Digital Age is published on Substack.



