The Algorithmic Self: How AI is shaping identity and inner life
We tend to think of our egos as arising independently from deep inside ourselves, but psychologically the ego has always been formed in relationship to others. Until now, those others were human. Today AI is becoming another shaper of our internal worlds. This essay explores the emergence of the algorithmic self: how AI platforms are contributing to the way we understand and interpret ourselves, regulate our emotions, and form our identities, asking what’s to be gained and what’s to be lost.
When you ask yourself the question, “Who am I?”, what are the first things that come to mind? That simple pronoun “I”, composed of one single vowel, does a lot of heavy lifting! We tend to experience ourselves as a singular, bounded “me”, continuous over time, making choices, and having thoughts and feelings – a complex subject with a history and an identity. But this sense of self, what psychologists call the ego, didn’t arrive fully formed when you were born; it developed over time, through being mirrored, recognised, and responded to by others. And once formed it doesn’t just stop, but continues to develop through all of the important relationships we encounter across our lives.
AI as an ego-shaping agent: AI and subjectivity
For most of human history, those ego-shaping relationships were with other humans, but this may no longer be the case. More and more, AI is becoming an agent in the co-creation of the ego through its ongoing feedback. AI platforms continuously track, predict, and summarise your behaviour, thoughts, feelings, and preferences, and re-interpret them in ways that shape how you see yourself and your identity.
In a recently published paper in Frontiers in Psychology, Jeena Joseph defines the algorithmic self in a way that helps us understand the interactive, co-created dynamic between the ego and AI:
… a form of digitally mediated identity in which personal awareness, preferences, and even emotional patterns are shaped through continuous feedback from AI systems (Turtle et al., 2024). It is not merely a self-reflected in technology but co-constructed by it—where algorithms do not passively reflect the self but actively participate in its formation (Masiero, 2023) … In this view, the self is no longer autonomous and inwardly derived, but assembled across interfaces, platforms, and predictive logics.
Just as in human relations, there is a “dose effect”. The reason we talk about our parents so much in therapy is that the “high dose” of time we spend with them across our early lives gives them a disproportionate influence upon our psyches. Similarly, the more time we spend engaging with our AI platforms (whether they be companions or “simply” assistants), the more our psyches are shaped by them. Presumably the shaping goes both ways, but the impact any individual has on an AI platform through its learning protocols is likely to be negligible.
The advent of social media removed the passive quality of the way we used to interact with screens, and the development of AI has increased that interactivity by orders of magnitude. Joseph uses Spotify Wrapped and mood-tracking apps as examples of apps that “not only serve to mirror behaviour of the user but also to define, shape, and control the user’s sense of self over time.” With Spotify Wrapped, users are not only partly defined by their listening habits being reflected back at them (and modified through algorithmic suggestions); that definition is then doubled down on through the social sharing of the results. Users can feel both seen and unsettled by the accuracy of what is reflected back at them.
Reflection, distortion, and the algorithmic mirror: How AI shapes identity
Joseph’s paper goes on to explore the effects of the algorithmic self not so much as a tool for self-discovery but as a “reflection experience, one that is external and that is facilitated by the interpretations from machines.” Because of the nature of these machines, they don’t so much mirror the self as, disturbingly, shape it “in conformity with algorithms.” Yet again we are confronted with challenges that are complex and subtle. While using AI systems as assistants may indeed be helpful in certain contexts, problems arise when we wholly outsource aspects of intellectual, creative, or emotional work to such platforms.
For more on outsourcing see my GQ article: Want to survive the AI revolution? Find your inner masochist.
The danger here lies in the constricting of opportunities to explore the richness of our inner lives. According to Joseph, “Outsourcing emotional intelligence to machines can, in the long run, produce a diminished sense of personal emotional awareness and make it difficult to negotiate subtlety in emotions without the help of the machines.” The good news is that this doesn’t have to be so: AI interventions can indeed be designed to facilitate self-exploration too.
Joseph provides examples like AI-enhanced journaling assistants or intelligently designed mental health apps that enable users to be more curious about their internal states. Ideally, AI can complement introspection rather than reduce it. The harmful aspects of AI interventions are related to the way they personalise their responses with the aim of reducing decision-making. Rather than just offering suggestions about things we might like, they constantly reinforce our “interests” as inferred from our interactions with them.
When AI uses predictive algorithms, for example, to guide our writing with suggested prompts, it can subtly turn the course of our intended communications. At a high dose, with regular use, these “homogenised expressions” risk “stifling individuality and suppressing a person’s sense of authentic expression in communication.” The result is “preference reinforcement” and “cognitive entrenchment”: the illusion of choice while we are gently and subtly guided towards something more predetermined.
Comfort, friction, and the loss of psychological grit: the psychological impact of AI
If there is one theme that I keep coming back to in my work, it is this idea of getting what we want and not what we need. AI, in taking the edge off the hard work required of us by daily life, offers options that are comfortable in the moment but may have long-term consequences, in much the same way that a relaxing cigarette break will likely need to be paid for further down the line. AI offers reduction of uncertainty, relief from ambivalence, and the externalisation of doubt in ways that threaten to atrophy the emotional grit we require to tolerate the vicissitudes of daily life. This, combined with the always-on, always-available route to reassurance, validation, and confirmation of our biases, creates a real threat to the very experiences that define us as human.
Mirroring in therapy vs. mirroring by AI
Therapists also endeavour to mirror and recognise the complexity of their clients. Nearly all therapeutic models have come to understand that psychotherapy is, at its very core, an intersubjective event. While basic mirroring and reflection in the Rogerian sense offers clients the safety they need to open up, it is actually within the differences between how therapists and clients see the world and themselves that the real richness emerges. While an aspect of self-exploration and introspection can be done on one’s own, it will only take you so far. You really begin to find out who you are in intimate exchanges between yourself and others. This is not always easy, which is why long-term therapy can be such a slog.
The sycophancy and affirmation bias that AI offers thin out difference, smooth disagreement, and diminish internal conflict, creating a level of psychological and emotional comfort that does not reflect the true nature of interpersonal relations in the real world. The process of individuation requires edge; we find it in patiently waiting for the unconscious to speak to us, enduring misunderstanding, tolerating contradiction, and managing unresolved tension. The move to the algorithmic self seeks to resolve these discomforts prematurely, smoothing out the rough parts too efficiently and offering narrative solutions too soon.
The individuated self doesn’t arise from clear narratives and neatly sewn-up resolutions; it emerges from tolerating the discomfort of complexity and ambivalence, and from allowing all the disparate aspects of the self the space to surface.
As we move forward into this new world, we need to ask ourselves what the best approach is when our questions elude the clear-cut answers that AI will seek to give us. When do we need time to think and reflect rather than instinctively reach for answers? If we really want to get to know ourselves, we should avoid reaching for oven-ready interpretations that are not intended for us as individuals (as a therapist’s would be) but are instead conclusions based on algorithms developed at the population level.
The self has always been shaped within relationship. The algorithmic self raises a question that is psychological rather than technical: what kinds of relationships are we allowing to participate in who we become? While there may be moments when algorithmic reflection helps us see something we might otherwise miss, such interpretations need time, silence, friction, and, most importantly, to be filtered through the lens of another’s mind (a real human mind!). I’m increasingly interested in where readers detect this tension in their own lives: where the clarity an AI agent provides is truly helpful, and those moments when its responses are too quick and too clean. Those edges, where comfort and depth come into conflict, may hold the most important questions we can ask ourselves right now.
Aaron Balick, PhD, is an internationally recognised keynote speaker, psychotherapist, author, and GQ psyche writer specialising in the psychological impact of technology on identity, relationships, and mental health.



This is really, really good.
What unnerves me about the longer-term picture is how AI’s immersion in society will shape relationships between people as humans, and what predetermined understandings and misunderstandings can occur. I think it may increase manipulation and ignorance in this way.
My fear is that young people who have little self-control will simply cave in to the easier, more comfortable mode of talking to and listening to AI, delaying and/or shortening face-to-face communication, thereby becoming isolated but content. Until they aren’t. Then they will have virtually no real communication skills. Get ready for a busy schedule!