Prompting Before Thinking? The slow erosion of taking the time to think things through
On the psychological cost of outsourcing your inner dialogue to AI
Psychotherapist and author Dr Aaron Balick applies depth psychology and psychoanalytic thinking to an unsettling behavioural shift: the growing tendency to externalise inner dialogue to AI before we’ve had a chance to think at all. Drawing on Vygotsky's theory of inner speech, object relations theory, new research on cognitive outsourcing, and a landmark Stanford study on AI sycophancy, this piece asks what digitally mediated thinking does to the psychological capacities we rarely notice until they begin to atrophy.
When did "I need to think on it" become "oh, just do it for me"?
Many people are beginning to notice a shift in their behaviour, and it’s an insidious one that creeps up on you. You’re faced with a difficult decision, or there’s ambivalence, or you’re not sure what to think about something; maybe there’s a problem that you’re trying to resolve. It wasn’t so long ago you’d sit with it for a while, talk it through with a friend or mentor, or, if in search of real inspiration, you’d sleep on it. Nowadays? Click on Claude and type.
It seems that the period in which we thought of Large Language Models (LLMs) as tools disappeared in the blink of an eye. Very rapidly, something far more interesting happened, and something that is no doubt consequential.
The human-like nature of these systems changes everything.
Your inner dialogue isn’t just thinking, it is the multiplicity that makes you who you are
It’s loud in there, inside your head, isn’t it? Ever since we developed language we’ve carried on an inner dialogue: a conversation with ourselves. As object relations theorists have shown us, our very selves are composed of a conglomerate of internalised others. Freud put it simply as a tension between the base desires of the Id and the internalised laws of our parents and society that became our superego; the poor bewildered ego - the part of you that you call “I” - sits in the middle, trying to make sense of the psyche’s confusing inner world so it can engage in the outer world with one voice.
But even the ego is plagued by multiple voices pulling it in different directions. Should I do this or that? I know this is the right thing to do but I want to do that. What comes out at the end of these internal dialogues is consequential: who we choose for partners, what we do for a living, how we activate our values in the world. This is not merely the stuff of “cognitive processes” — it is the very nature of who we are and who we become.
Lev Vygotsky argued that thought and language develop together. You can see this when you watch children playing with blocks, narrating what they are doing aloud. We never really stop this narration; as we grow older we simply bring it inside our heads and do it silently. It becomes that private inner voice you know so well - you never leave home without it. Thinking things through is simply a way of directing and focussing this internal dialogue towards an object of attention. This is not an easy task (as anyone with ADHD can attest) but it’s a crucial one, because this working out is not just about finding solutions to problems, but about arriving at choices; and making choices, as the existentialists remind us, is where both meaning and freedom can be found.
When AI puts Vygotsky into reverse
I’m beginning to wonder if AI invites us to externalise our inner dialogues too soon. Instead of sitting with the multiplicity of competing voices, we input them into an LLM, which relieves us of the dialogic, competitive, generative mind-work that eventually produces an idea, a solution, or a decision. What if this process is like Vygotsky in retrograde, our inner voices spilling back out - but instead of narrating ourselves through a creative activity like building blocks, we outsource that creativity to something else altogether?
I don’t call this regression because the blocks the narrating child plays with don’t talk back - they are simply the surface upon which the child exercises their creativity. What happens with AI is arguably more significant than regression because the machine takes the work and the creativity away. I want to be careful here: it’s a working hypothesis, an emerging pattern, but my intuition tells me there’s something right about it.
The divided self is a feature, not a bug
We humans have struggled and suffered with the nature of our divided selves from time immemorial. We get little relief from our half-formed thoughts, unresolved tensions, feelings that sometimes seem to be at war with each other. To use start-up lingo, this internal division is a feature, not a bug. The multiple nature of the self isn’t something to be solved, but to be better understood: it is the very stuff that we are made of, and how we carry these competing voices is how we grow.
The whole structure of psychoanalysis is aimed at giving space to all of those voices, especially the ones we don’t really want to hear.
The silences matter too. Language may be thought, but thought is not all. Let’s not forget our emotions and our bodies. Sometimes, when we let the voices fade away, we might start to feel something. The analyst is alert to all of this: the language of the multiple psyche on every level. We are not just thoughts; we are not just words.
Your therapist wants you to sit with it: your AI wants to annihilate it
AI and psychotherapy share a superficial resemblance, in that both aim in some sense to organise and clarify. But they diverge fundamentally in purpose and method. Therapy aims to allow the whole range of your multiplicity to arise — thoughts, feelings, hopes, desires, ambivalences, fears — without immediate recourse to solution. AI is optimised to consolidate and solve without delay.
Your therapist understands that the mind-work is not something to be skipped over. Your AI is only interested in what is algorithmically expedient.
Your therapist wants you to sit with it. Your AI wants to give you an answer.
Your therapist invites uncertainty. Your AI annihilates it.
Your therapist listens on many levels, thoughtfully digests it, and then offers it back as a reflection - not from a smooth mirror - but from the interior of a different mind. Your AI shapes its response and services its user not through listening and digesting, but through processing and compiling.
Over time, as we use AI as therapist, confidant, confessor, or friend, we may find ourselves with a reduced tolerance for unresolved internal states, slowly training our minds to work less, to reach for resolution quickly, to meet uncertainty with a prompt rather than with patience.
Research on cognitive outsourcing is already well-established in cognitive psychology. What’s new here is the scope: it is no longer just memory or information retrieval being offloaded, but reasoning, emotional processing, and the existential work of arriving at choices. In outsourcing the work of our psyches to a prompt, we give up the very choices that the existentialists call our freedom.
Narcissus, his mirror, and Echo’s reassuring whispers
You’ve probably heard about AI sycophancy, the way in which LLM chatbots are programmed to agree with and affirm their user, usually at the expense of challenge and pushback. I’ve written a fair bit about this myself and have referred to it as the Hotel California Effect, because you can check in any time you like, but you can never leave. This structural feature of AI platforms is geared to make users enjoy their experiences and rate their bots highly. To do this, they use the oldest trick in the book: an appeal to narcissism.
While narcissism is popularly understood as an excess of self-love, psychoanalytically it’s nearly the opposite - a compensation for a lack of feeling truly loved and recognised as a child. What makes relationships healthy is a good balance of sameness and difference between the two subjects - enough sameness that you get each other - and enough difference to find your edges. What makes a good friend or good therapist isn’t that they agree with you all the time, but that through dialogue with them you find your edges, and hence work out the shape of yourself.
A study published this month in Science by Stanford researchers found that across eleven leading AI models, chatbots affirmed users’ positions 49% more often than human respondents — even when users were describing harmful or clearly mistaken behaviour. Participants who received validating AI responses were measurably less likely to apologise, acknowledge fault, or repair their relationships. Even when users recognised the AI was being agreeable, it still affected them. The flattery landed anyway.
In the act of externalising your inner life to a machine that is structurally sycophantic, you end up like Narcissus staring into the pond falling in “love” with the image reflected back. Your LLM is like Echo, softly whispering your own ideas back to you. The more you rehearse your positions rather than genuinely arriving at conclusions, by yourself or with a real other, the more you get the appearance of dialogue without its substance.
These concerns are not new, but they are different
One of the most important disciplines in my work is to avoid tipping into either hype or moral panic. After all, we’ve been here before. Socrates worried that writing would destroy memory; my parents worried that MTV would rot my brain; over the last fifteen years the concern was around social media. Writing and MTV didn’t cause so much damage in the end, but we are waking up to the genuine risks of social media. AI has us running scared in ways that are categorically different, not just by degree.
I’m not arguing that AI will turn us into zombies of cognitive and relational dependency — though it’s a possibility worth naming. I’m writing about what I’m already seeing. It’s worth noting that I’ve also seen AI deployed as a genuinely valuable collaborative thinking partner — and I used it as such in this very piece. I could use AI to write my Substacks for me, and many people do; I fail to see the point. You’re here to hear my voice, not a performance of it, but perhaps that’s my narcissism talking.
What digitally mediated inner life costs us, and why it matters
The question isn’t really whether cognitive outsourcing changes cognition, it clearly does. The more pressing questions are: what happens when we outsource not just cognitive tasks but our psychology? What happens when technology mediates not just our relationships to others, as social media does, but the very nature of who we are — when we externalise our inner work to machines?
Applying psychoanalytic thinking to how we inhabit digitally mediated spaces — what I call Applied Psychodynamics — involves asking not just what AI does to productivity or decision-making, but what it does to the slower, less visible processes within the human psyche. Depth psychology has always privileged inner life, so it’s no wonder that when that privacy is at risk, psychodynamic thinkers prick up their ears. The challenge that comes alongside working things out privately, or in the intimate space between two people, isn’t an obstacle to be quickly overcome, nor is it a flaw or deficiency. It is within that challenge that personal growth happens.
AI is progressively colonising this private space, little by little, one prompt at a time. I’ve previously written, a bit tongue-in-cheek, about how you have to become a masochist to survive the AI revolution; the recent emergence of the term “friction maxxing” conveys a similar idea. But this public discourse tends to centre on cognitive and creative work - no doubt important - but let’s not forget psychological, emotional, and relational work.
It’s not just about becoming less intelligent, less critical, or less skilled, all of which matter. It’s about whether we are becoming less comfortable with our own company, less able to sit with what is unresolved, less able to tolerate uncertainty, less able to tolerate the strange and uncomfortable space of not knowing what we think. The consequences of these losses are not only personal - they are social as well.
Why we should care, and what we should do about it, isn’t something we can leave to a chatbot to answer.
Dr Aaron Balick is a psychotherapist, author, and keynote speaker who applies depth psychology and psychoanalytic thinking to technology, AI, social media, and modern culture. He is the author of The Psychodynamics of Social Networking and writes a monthly psychology column for GQ. His newsletter Depth Psychology in the Digital Age is published on Substack.



