Botocracy: Is AI going to take over all aspects of public life?

“AI needs to take the form of a social and practical good for society.” - Dr Susan Oman

Artificial intelligence has been part of the cultural wallpaper for decades, but something shifted after 2022. What had once been a niche research field suddenly became a mainstream organising principle: not just of how we work, but how we understand ourselves, how we communicate, and increasingly, how we are governed. In this episode of The Great RomCon?, I sat down with Dr Susan Oman of Sheffield University to explore what happens when the systems we build to reflect society begin to represent it. And, perhaps more importantly, what happens when the public is asked to trust technologies they don’t fully understand, deployed by politicians they don’t fully trust. This is a conversation about democracy, but also about intimacy: the fragile relationship between people and the institutions that claim to speak for them.

When Matt Clifford set out the Government’s AI Opportunities Action Plan in early 2025, the vision was relatively benign: automate the boring, repetitive tasks in your personal and professional life. Free people from admin. Allow humans to focus on what matters. Let people be people. Render therefore unto the machines the things that are the machines’. After all, no one likes admin. But somewhere between automating spreadsheets and answering emails, we appear to have drifted into automating something more fundamental: public and political representation itself.

In 2025, Labour MP Mark Sewards launched an AI version of himself - “AI Mark” (Mark’s Yorkshire accent combined with an image in the style of The Sims 2) - allowing constituents to message a chatbot and receive instant responses on policy, local issues and even, somewhat bizarrely, what Mark’s dating profile might include. Tony Blair may have joked that ‘lapsing into complete honesty’ was breaking “the first rule of politics”, but AI Mark informed budding inquirers that he (AI Mark or Real Mark?) is: “Passionate about social justice and equality, just like my love for local food spots in Leeds, seeking someone who shares a heart for community.” For Dr Oman, “AI Mark” wasn’t just a quirky footnote in parliamentary history. It was a sign of a deeper shift: a growing willingness to outsource the messy, relational work of democracy to systems that promise efficiency, neutrality and convenience. But convenience for whom? And at what cost?

It was framed as a boon for accessibility, efficiency and innovation. The chatbot would be available at all times. No queues. No voicemail. No waiting. And yet, there were concerns about this digital approach. For one, the system stumbled over reality in surprising ways. Sewards represented a seat in Leeds, yet the bot struggled to understand a strong Yorkshire accent - a small reminder that even our most advanced systems cannot (for the moment) cope with the texture of real life. More troubling was the question of what the system actually knew, and why.

“What data had AI Mark been trained on, and what was it collecting from constituents?” - Dr Oman

At one point, the chatbot offered ‘Mark’s’ views on the Iraq War that felt plausible but not necessarily factual (he had never spoken publicly about it): the kind of thing a vaguely centre-left politician might say, an approximation of post-Milibandism. In politics, approximation (as with triangulation) can be more dangerous than error. This is not an entirely new issue. Modern politics has long operated in a space where what is said is not always what is believed. As Keir Starmer’s much-discussed ‘Island of Strangers’ speech (May 2025) demonstrated, political language often functions as positioning and signposting rather than conviction. It is already a problem when the things politicians say aren’t necessarily what they believe. But when AI enters the equation, that gap widens. We are no longer dealing with strategic ambiguity. We are dealing with synthetic plausibility: systems that, standing in for an elected official, manage to sound right rather than be right.

It’s seemingly impossible to discuss AI in public life without invoking the TV series Black Mirror. One episode in particular, ‘The Waldo Moment’ (2013), played on the electorate’s yearning for anti-establishment novelty: a blue (Reform teal?) cartoon bear goes from amusing political attack dog (like a digital Count Binface) to elected candidate - a satire that feels less like fiction given the continued rise of ‘anti-professional politics’ populism. I, for one, would welcome our AI overlords, since the only jobs left would be toiling to fix the air conditioning in their data-centre caves, but would the electorate at large feel the same?

What struck both of us was how quickly technology is changing our expectations. A decade ago, the idea of an AI MP would have been sci-fi fantasy. Today, it’s a pilot project - sorry, ‘experiment’. Tomorrow, it may be a policy proposal. Could we do away with human politicians altogether and be run by chatbots hard-coded with our views and preferences? Given Parliament’s resistance to adopting digital voting, this would not happen without a serious fight from the system. And as much as people may dislike or distrust politicians, the public would need confidence that such a system worked and wasn’t a massive cybersecurity risk waiting to be hacked.

Technological ambition is admirable, but ‘AI Mark’ felt like an engineer’s solution to a human problem. People do not just want answers; they want to feel listened to. A chatbot, however efficient, risks feeling like a voicemail service, a polite deflection, a digital fob-off. If the public already feels unheard by politicians (the Hansard Society has data showing that trust in politicians has been low since the 2009 expenses scandal), replacing human contact with semi-verifiable automated responses risks widening that gap, not closing it.

Even if a chatbot provides technically correct information, it may fail at something more important: reassurance and agency - the sense that the higher powers have been alerted to some grievous injustice and that something will actually be done. Dr Oman’s work focuses on how data operates in real life, particularly in areas like loneliness, inequality and wellbeing. These are not easily quantifiable domains. They resist neat categorisation. They depend on context, nuance and lived experience. What happens when wellbeing becomes data, experience becomes metrics, and representation becomes automation? You get corporate-style metrics like Net Promoter Score and Brand Consideration Score. As public bodies digitalise and adopt AI into their processes, they will need to consider what is lost when qualitative experience is reduced to binary data.

Susan spoke about some of the findings of the ‘Public Voices on AI’ study she works on. Public attitudes towards AI are often more cautious than policymakers assume. Studies on public voices in AI consistently show a desire for greater regulation, concern about transparency, and scepticism about unchecked deployment. People are not anti-technology, but they are wary of systems that operate without clear accountability. In short, AI needs to take the form of a social and practical good, not merely a technical one.

There is also a more practical concern. Failed experiments have consequences. When innovations, however well-meaning, are poorly executed - with unclear purpose, weak transparency, or questionable outputs - they do not just fail; they make future innovation harder. No one wants to try again after a bad first experience. Trust, once lost, is difficult to rebuild.

Still, I might question whether ‘AI Mark’ was the great retrenchment in democracy that some may think. Is an AI chatbot really so different from how many offices, or even political systems, already operate? Layered communication, scripted responses written on behalf of someone else, distance between decision-makers and the public. Perhaps AI is not transforming governance so much as revealing its existing structure.

It is also worth noting that Sewards was not the first to attempt this. In the 2024 General Election, businessman Steve Endacott ran in the Brighton Pavilion constituency under the pseudonym “AI Steve”, a campaign built around AI-driven representation. The result? 174 votes. Fewer than the Monster Raving Loony Party. A reminder that while the idea of AI governance fascinates, it has yet to inspire genuine public enthusiasm. Perhaps we can now ask AI how we should revolutionise the First Past the Post voting system in an era of ‘5 Party Politics’ in England.

Talking to Dr Oman made one thing clear to me. The future of democracy will not be determined by the sophistication of our technologies, but by the quality of our relationships: with each other, with our institutions, and with the systems we increasingly rely on to mediate both. AI can improve access, reduce friction and process information, but it cannot do the most important thing. It cannot care; it can only give the impression of caring. And until it can more convincingly simulate that, or we decide simply to suspend our disbelief and anthropomorphise the bots, the human element of politics will remain not just relevant, but essential.
