We are becoming enslaved to AI

This is an edited version of an article that appeared in The Yorkshire Post on 7 April 2026 in which Matt Bromley cautions against becoming enslaved to AI…

I long ago left Twitter – partly to protest Elon Musk’s attempts to derail UK democracy and partly to protect my ears from the echo chamber of right-wing culture war conspiracies, charlatanism, and chimerical concoction. I sought solace in LinkedIn instead, which seemed to offer a professional space for nuanced debate. But I may soon have to climb out of that social media cesspit, too, because I fear the robots are taking over. To wit: half the posts on my timeline seem to have been written by AI. Even people who can write well are using AI to polish their prose. And yet the polish is proving toxic. 

How can I tell? Well… 

Firstly, AI prose is typically characterised by a high degree of grammatical correctness, clarity, and structural consistency. However, the text often demonstrates an absence of informal nuance, emotional variability, or stylistic imperfections that would otherwise signal authentic human authorship. Furthermore, the tone is frequently neutral, objective, and informational—prioritising coherence and readability over spontaneity or expressive individuality. Consequently, the writing may appear polished yet impersonal, optimised for clarity rather than personality. 

Secondly, AI-generated content frequently demonstrates an organised, systematic presentation of ideas that emphasises readability and logical flow. For example: predictable sentence construction, consistent paragraph formatting, repeated use of transition terms such as moreover, furthermore, and however, extensive reliance on bullet points, standardised rhetorical progression from introduction to elaboration to conclusion, and frequent deployment of em dashes—often used to introduce clarifications or extensions of thought. 

Thirdly, AI-generated text often demonstrates a tendency toward generalised observations, offering broad explanatory statements that are logically valid but operationally non-specific. Furthermore, the narrative may prioritise comprehensiveness over depth—using numerous words to communicate relatively straightforward concepts. In addition, similar ideas may be restated in multiple ways to reinforce conceptual clarity; however, this can produce redundancy. 

I apologise for the last three paragraphs. They were written by AI. Did you spot it? 

There’s something robotic about those passages, isn’t there? Something hollow that suggests the absence of intellectual thought and, more tellingly, the absence of human heart. 

The other problem with AI is that, because it scrapes the internet for content and can only reproduce what already exists, it obeys the law of diminishing returns, trapped in a loop of recycled thoughts, repackaged ideas, and predictable outcomes. The sources shaping its “thoughts” are also drawn from the same well – and so the well’s drying up. That means AI tools produce inaccurate, fabricated, or “hallucinated” facts. Because AI models are trained on existing human work, there’s also the risk of unintentional plagiarism and copyright infringement. Plus, AI writing often reflects and amplifies societal biases and harmful stereotypes. Further, an over-reliance on AI for writing, planning, or critical thinking can lead to de-skilling, where human creativity, research skills, and cognitive engagement decline. 

Look, I’m no Luddite; I love tech. But we must be cautious of our over-reliance on robots like AI to outsource our thinking and communication. The word ‘robot’ – first used by the Czech writer Karel Čapek in his 1920 play R.U.R. – derives from the Old Church Slavonic root ‘rabota’, meaning ‘servitude’. The problem is, AI is not so much a slave to us as we are to it. So, we must resist the rise of the robots before it’s too late. 
