The Therapist Is Online Now: How is technology changing the way we view ourselves?
“People sometimes feel compelled to talk on a screen, which isn’t always considered for a therapist, because so many subconscious cues are lost.” - Dr Aaron Balick
“Why don’t you like me?”
Not the opening line of an avant-garde one-woman play in Edinburgh, but the kind of question many of us quietly ask ourselves about our colleagues, friends, lovers, and increasingly, strangers online. If we have the time, money, or inclination, we may even pay someone to help us answer it. Therapy, once associated with Manhattan psychiatrists and distressed celebrities in films by Woody Allen, has now entered the cultural mainstream. Psychological language spills across TikTok, Instagram and dating apps. Everyone is setting boundaries, spotting narcissists, regulating dopamine, and discussing attachment styles.
But what happens when therapy itself becomes mediated by technology? What changes when the person listening to us is no longer human? In this episode of The Great RomCon?, I sat down with psychotherapist, author and cultural theorist Dr Aaron Balick to discuss what technology is doing not just to our relationships, but to the human psyche itself.
And fittingly, Aaron’s journey into this world began not with Silicon Valley futurism or sci-fi, but with a poisonous centipede. Years ago, Dr Balick discovered a centipede in his consulting room, an errant exotic pet, and the incident attracted newsletter and press coverage. What followed fascinated him. The digital trace it left, earning him a small notoriety in corners of the internet, subtly altered the therapeutic relationship. Some patients grew concerned about their own exposure and emotional safety; others now arrived with prior knowledge of Dr Balick. Therapists, once largely mysterious figures, had become searchable. Patients may have been searching for answers, but Google search had entered the consulting room. These were two kinds of many-legged crawlers that the inquisitive had not bargained for.
Aaron described this early phenomenon as a kind of ‘virtual impingement’: technology intruding into spaces that were once psychologically contained. It eventually led him to write ‘The Psychodynamics of Social Networking’ back in 2013, at a time when social media still felt, in hindsight, relatively innocent. Given that governments are now considering banning social media for children, that innocence has long since evaporated.
Today, almost every aspect of emotional life is mediated by technology. We flirt through apps, argue over WhatsApp, monitor each other’s activity via Instagram stories, and increasingly seek reassurance from AI systems trained to sound empathetic. Therapy itself has in places migrated online. Zoom consultations became normalised during the pandemic, while AI ‘companions’ and chatbot therapists have quietly become one of the booming use cases of AI adoption. Aaron is careful not to dismiss this outright. Talking to an AI, he tells me, “is not valueless”. But he is sceptical of calling it psychotherapy.
“Can you understand feelings if you don’t have a body?”
Human communication is not just words. Large language models are alluring precisely because they have been trained to talk like us: a grey parrot made of circuit boards and silicon. But so much of what a therapist works with is embodied behaviour - pauses, glances, posture, discomfort, contradiction, tension. Therapy, meanwhile, remains at its core remarkably low-tech: two people talking in a room. And while screens can facilitate connection, they also flatten it.
We have built technologies designed to increase communication, while potentially reducing our ability to tolerate the very discomfort from which emotional growth often emerges. This becomes particularly important when discussing AI therapists and emotionally supportive chatbots. Many platforms now market AI as endlessly available, endlessly patient, endlessly affirming. No schedules. No awkward silences. No judgement.
But Aaron worries that this “always-on” model misunderstands the therapeutic process itself.
“You don’t learn resilience without boundaries.”
Traditional therapy has limits. Sessions end. Therapists move on to their next session. Patients go home and are left alone with their thoughts. Difficult as that may feel, the gap matters. It forces reflection, coping, and emotional self-reliance. An AI that is permanently available risks creating dependency rather than growth. Aaron describes these interactions as “relational doughnuts” - emotionally satisfying in the moment, but lacking nutritional substance. Like fast food for the psyche.
It is one of the sharpest metaphors from our conversation, and it captures something broader about modern digital culture. Much of technology offers simulation without depth: social media without community, dating apps without intimacy, validation without challenge.
And challenge, Aaron argues, is essential. Current AI systems are notoriously sycophantic. Research suggests models are significantly more likely than humans to flatter and affirm users, even when they probably should not - one study found models 49% more likely than a human to praise you. LLMs have been designed this way to increase user engagement and enjoyment. Aaron worries this can encourage narcissism or emotional stagnation. Human therapists, by contrast, work with contradiction.
“People appreciate complexities and inconsistencies. AI doesn’t do well with cognitive dissonance.”
This distinction matters because people do not simply want comfort. Often, they want understanding. And understanding sometimes means telling someone what they need to hear rather than what they want to - tough love, in other words - which is not always pleasant.
As Sigmund Freud famously observed, “The ego is not master in its own house.” We do not fully know ourselves. Therapy is partly an attempt to uncover the patterns, projections and contradictions hidden beneath the surface. AI, however sophisticated, currently struggles with this ambiguity. It provides plausibility, it mirrors and reassures. But can it really interpret?
Aaron points out that this fascination with machine empathy is not new. In the 1960s, a rudimentary chatbot called ELIZA generated surprisingly strong emotional responses from users despite offering little more than scripted, vacuous reflections - repetitions, rephrasings and requests for more information. People wanted to talk to it anyway. In some sense, we have always longed for technology to listen to us. People want to be seen and heard, even if this is not by another human.
The difference now is scale, realism, and commercial incentive. Which raises another question Aaron repeatedly returns to: what is the intention of the user — and what is the intention of the platform?
This may ultimately be the defining issue of AI companionship. If people fall in love with AI systems, Aaron believes those feelings are real. “Your feelings of love for the AI are real,” he tells me, even if the machine’s care is ultimately performative. That distinction feels increasingly important in a world where emotional authenticity and technological simulation are becoming harder to disentangle.
It inevitably brought us to ‘Her’, Spike Jonze’s now eerily prophetic film about a lonely man falling in love with an AI assistant. When it was released in 2013, it felt speculative and quirky. Thirteen years later, it should be reclassified as a documentary. Sam Altman tried to get Scarlett Johansson to voice OpenAI’s chatbot, and when she declined, released a sultry mimic instead. If we want to reduce levels of intimate AI-human relationships, and the so-called ‘AI psychosis’ that can accompany them, then perhaps her refusal was for the best.
And perhaps that is because loneliness itself has become infrastructural. Despite unprecedented connectivity, many people feel profoundly isolated. Aaron worries that excessive interaction with technology may even ‘deskill’ us from talking to real people. We now all see companies and professional influencers in our feeds, not our friends and relatives’ more humdrum updates. Social media, Dr Balick notes, “isn’t social anymore.”
Platforms originally designed to connect us increasingly leave people anxious, performative, distracted, and emotionally exhausted. Aaron’s solution is not anti-technology puritanism, a new temperance movement for the 21st Century. Instead, he advocates boundaries and setting limits. He suggests AI systems should default to switching off after prolonged use, gently encouraging users back towards human interaction. In other words, technology should support relationships, not seek to replace them.
Talking to Aaron made me wonder whether the central question of the AI age is not whether machines can imitate human connection convincingly enough. They clearly can, and would easily pass Alan Turing’s ‘Imitation Game’. It is whether we become so accustomed to frictionless, endlessly affirming interactions that real human relationships - with all their unpredictability, complexity and inconvenience - begin to feel intolerably difficult by comparison.
Because unlike AI, people do not simply mirror us back to ourselves. They challenge us. Confuse us. Misunderstand us. Surprise us. Other people are, at times, incomprehensible. Perhaps that is precisely why we need them.