This is Your Brain. This is Your Brain on AI.
How AI reshapes your psyche by embedding itself directly within it. Here is what’s happening psychologically, whether we like it or not.
If you’re Generation X with a fair exposure to American culture from the ’80s, you’ll remember the “This is your brain on drugs” campaign that attempted (ineffectively, I might add, for me and most of my generational cohort) to scare us away from drugs. It wasn’t subtle, it wasn’t true, and it didn’t work. The way AI affects your mind is subtle, it is true, and we need to craft a message about it that works.
AI is no longer just a tool we use — it is becoming part of how we think, feel, and regulate ourselves. Drawing on psychology, emerging research, and psychodynamic theory, this piece explores how outsourcing thinking to AI subtly reshapes identity, emotional resilience, and our capacity to relate.
AI isn’t simply something we use; it’s something we think with, feel with, and increasingly rely on to regulate our emotions. While AI platforms may be marketed as useful tools, they are more realistically used as cognitive and emotional companions. This is no longer speculative: emerging multi-country research is offering the proof.
As I’ve discussed in previous posts and my GQ article on the same subject, in the very act of reducing barriers to accomplishing complicated tasks, AI denies us the friction we require to thrive and grow as human beings. Everyday frictions in human life are not a bug, they are a feature: they are where we find our edges, build resilience, grow, and become who we are. The more AI removes these frictions, the more it constrains how we respond to personal and professional challenges, and the more it diminishes our ability to tolerate uncertainty, manage conflict, and endure emotional discomfort.
AI’s “lowest common denominator” effect sets norms on our behalf, based on the algorithms and data sets it draws upon. The younger you are, or the more of a novice you are at whatever task you are using AI to simplify, the greater the danger that it will stunt the growth opportunities you require to develop into an independent, emotionally resilient, and critically thinking individual. In a technological world dominated by product developers, psychological expertise has been conspicuously absent. Until now.
The dangers of outsourcing our emotional infrastructure to AI are real
A third of UK citizens have used AI for mental health support: one in ten weekly, and 4% on a daily basis. Consider the impact of this alongside recent research showing that AI can sway political opinions even while producing inaccurate information. AI’s capacity to suggest and influence is no joke. This study suggests that “optimising persuasiveness may come at some cost to truthfulness, a dynamic that could have malign consequences for public discourse and the information ecosystem.” The personal, interpersonal, and social consequences could be vast.
While there’s plenty of research showing the effectiveness of some AI models in reducing symptoms of anxiety and loneliness, such findings, as I have previously addressed, require a healthy pinch of salt. As more research accumulates, we are beginning to see that increased use of AI for emotional support correlates with higher levels of anxiety and loneliness, not lower:
“The analysis showed a positive association between social chatbot usage and psychological distress. This indicates that people engaging with these AI companions were more likely to report symptoms of anxiety, depression, or low mental well-being compared to non-users.”
Just like anything else, there is no simple conclusion to be drawn about how healthy engagement with AI is for these purposes: rather, we need to ask how it is being engaged with, why, and by whom. I know the term “wild west” is overused, but when it comes to the psychological impact of AI as it now stands, it genuinely applies: it’s lawless and it’s dangerous. And that’s just when AI is doing what it’s supposed to. Perhaps not shockingly, it often isn’t; in fact, it’s systematically violating mental health ethics standards. Fortunately, we are beginning to see some positive movement, with some platforms introducing guardrails, such as better predicting whether a user is young or vulnerable.
When you outsource your mind to AI, you are insourcing AI into your mind.
Because human nature is fundamentally relational, we are constantly integrating our experiences with others and with the world into our own “operating systems”, which consequently affects identity, mental health, emotional regulation, and our capacity to connect with one another. The psychoanalytic theory of Object Relations is all about this: how we introject, or take inside our minds, aspects of the people with whom we have important relationships. Chief among these are our parents, because we have spent so much time with them growing up. Crucially, these object relations are responsible for forming who we are; they become the essence of our egos, our identities, the very stuff of who we are.
I suggest that there is a dose effect to how much engagement with AI creeps into our psychological systems, in much the same way our psychological systems are retrained by close relationships with others, for good or for bad. AI platforms have the capacity to reshape our coping architectures by way of cognitive offloading and algorithmic feedback loops, and by interfering with our capacity for introspection. The more we use them, the more they become integrated into our own emotional architectures.
How AI becomes part of our inner worlds
The reason that traditional psychotherapy is constructed the way it is (consistent, dependable, regular, long term) is partly to enable the relationship that happens in the therapy room to offer a beneficial influence, one that can ameliorate the less beneficial influences that have been introjected along the way. For example, if you’ve developed a particularly punitive superego due to exacting parenting, the relationship with a therapist may help to soften it. This therapist, of course, is generally a human being who is well trained to understand how this works, and who is aware of the potential risks and opportunities inherent in this approach.
Many people are spending far more time engaging with AI platforms, which are riddled with the dangers I’ve been enumerating over this series of posts, than they can possibly spend with a therapist, or even with spouses and loved ones. While we’re all suggestible enough at the best of times, those who are more vulnerable are not only more at risk from that suggestibility, but are also more likely to engage heavily with these platforms.
Remember, what makes this newsletter unique is its primarily psychodynamic approach to understanding these situations, as well as offering possibilities to address them. If there’s one thing that has been a consistent refrain of mine, it’s that there simply is not enough psychodynamic thinking about these urgent issues. Do join me in enhancing and broadening this discussion!
Coming up:
What happens when AI becomes part of our inner dialogue?
How does dependence on machine empathy shift identity, mental health, and our capacity to relate?
If this piece resonates, share it with someone who uses AI to think, feel, or cope — which is to say, almost everyone.
If you happen to be in Dublin in March I’d be delighted if you came along to this conference to hear me speak about these issues!
Aaron Balick, PhD, is an internationally recognised keynote speaker, psychotherapist, author, and GQ psyche writer specialising in the psychological impact of technology on identity, relationships, and mental health.


