We were all raised on stories of obvious tyranny. We were taught to look for the book burnings and the public shamings. We were told to listen for the sound of the cage door slamming shut. But what happens when the cage has no bars? What happens when the prison isn’t a place, but a state of mind, meticulously constructed to feel like freedom?
This is the world of informational autocracy. It’s a far slicker, more sophisticated beast than the clumsy dictatorships of the last century. It doesn’t need to rule by fear when it can rule by manufactured consent. This new model of power doesn’t abolish elections; it mimics them, ensuring the outcome is a foregone conclusion while maintaining a veneer of legitimacy. It doesn’t ban the free press; it buys it, starves it of advertising, or floods the zone with so much state-sponsored noise that the truth is simply drowned out. Look at Putin’s Russia, Orbán’s Hungary, or Erdoğan’s Turkey. The playbook is the same: project an image of competence and stability, paint all opposition as chaotic or treacherous, and ensure the majority of the public never gets a clear enough signal to know the difference. The primary goal is not to terrorise the population, but to convince them. And the engine room of this entire operation is the device in your pocket.
Enter the social media platform: the greatest accelerator of informational autocracy ever invented. These systems are not neutral tools; they are battlegrounds designed for a very specific kind of warfare. Their algorithms, built not for truth but for traffic, are perfectly tuned to reward the divisive, the sensational, and the outrageous. It’s no accident that, on platforms like X, false political stories are proven to spread 70% faster than the truth. Outrage is profitable. Division drives engagement. In this environment, an autocrat’s propaganda isn’t just another post—it’s premium fuel for a machine designed to run on it. We are not just the audience; we are the unwitting foot soldiers, sharing and amplifying narratives that fracture our own societies. But this battle isn’t just for the hearts and minds of the masses. There’s a more specific, more strategic target in its sights.
Every society has an “informed elite”—that small but crucial group of journalists, academics, professionals, and artists who have the access and the training to see through the noise. In the old world, an autocrat had to arrest or exile them. In the new world, the strategy is far more subtle. Social media allows the regime to monitor them, identifying dissenters for a quiet campaign of shadow-banning, legal threats, or professional exclusion. Even more effectively, it allows them to be co-opted. A slice of the elite is turned into well-paid influencers, their credibility used to launder regime propaganda. The very tool that could expand the ranks of the informed by democratizing information also shatters their authority, turning public discourse into a chaotic free-for-all where a verified expert has the same algorithmic weight as a state-funded troll farm.
It leaves us in the crossfire of a silent, borderless war. The tactics perfected in Moscow and Beijing are now exported globally, seeping into the bedrock of democracies. This is the slow poison: the erosion of public trust, the exhaustion of civic life, and the creeping sense that objective truth no longer exists. This is the ultimate goal. The aim isn’t just to win an argument; it’s to create an environment where the very idea of a shared reality seems naive. It is to foster a deep, weary cynicism that leads to democratic fatigue, where we disengage not because we are forced to, but because we are too tired to continue.
So, what is the way out? It is not to find a mythical, uncompromised platform or to wait for a single heroic leader. The resistance begins with a conscious and deliberate act of what can only be called informational hygiene. It starts with us. We must become fierce curators of our own information, deliberately seeking out and paying for quality, independent journalism. We must take our conversations offline and into the real world, rebuilding the connective tissue of society in our own communities. And above all, we must build our own resilience as if it were armour. They are counting on our burnout. An exhausted, cynical public is their ideal political landscape.
This is the work. It is not glamorous. It is not easy. But it is real. The most radical act in an age of quiet persuasion is a loud and curious mind. Keep yours sharp. Keep it open. And never, ever let them convince you to close it.
Whilst all eyes are on Trump at Windsor the UK Government announced the “Tech Prosperity Deal,” a picture is emerging not of a partnership, but of a wholesale outsourcing of Britain’s digital future to a handful of American tech behemoths. The government’s announcement, dripping with talk of a “golden age” and “generational step change,” paints a utopian vision of jobs and innovation. But peel back the layers of PR, and the £31 billion deal begins to look less like an investment in Britain and more like a leveraged buyout of its critical infrastructure.
At the heart of this cosy relationship lies a bespoke new framework: the “AI Growth Zone.” The first of its kind, established in the North East, is the blueprint for this new model of governance. It isn’t just a tax break; it’s a red-carpet-lined, red-tape-free corridor designed explicitly for the benefit of companies like Microsoft, NVIDIA, and OpenAI. The government’s role has shifted from regulation to facilitation, promising to “clear the path” by offering streamlined planning and, crucially, priority access to the national power grid—a resource already under strain.
While ministers celebrate the headline figure of £31 billion in private capital, the true cost to the public is being quietly written off in the footnotes. This isn’t free money. The British public is footing the bill indirectly through a cascade of financial incentives baked into the UK’s Freeport and Investment Zone strategy. These “special tax sites” offer corporations up to 100% relief on business rates for five years, exemptions from Stamp Duty, and massive allowances on capital investment. For every pound of tax relief handed to Microsoft for its £22 billion supercomputer or Blackstone for its £10 billion data centre campus, that is a pound less for schools, hospitals, and public services.
Conspicuously absent from this grand bargain is any meaningful protection for the very people whose data will fuel this new digital economy. The deafening silence from Downing Street on the need for a Citizens’ Bill of Digital Rights is telling. Such a bill would enshrine fundamental protections: the right to own and control one’s personal data, the right to transparency in algorithmic decision-making, and the right to privacy from pervasive state and corporate surveillance. Instead, the British public is left to navigate this new era with a patchwork of outdated data protection laws, utterly ill-equipped for the age of sovereign AI and quantum computing. Without these enshrined rights, citizens are not participants in this revolution; they are the raw material, their health records and digital footprints the currency in a deal struck far above their heads.
What is perhaps most revealing is the blurring of lines between the state and the boardroom. The government’s own press release celebrating the deal reads like a corporate shareholder report, quoting the CEOs of NVIDIA, OpenAI, and Microsoft at length. Their voices are not presented as external partners but as integral players in a shared national project. When Sam Altman, CEO of OpenAI, declares that “Stargate UK builds on this foundation,” it raises the fundamental question: who is building what, and for whom?
This unprecedented integration of Big Tech into the fabric of national infrastructure raises profound questions about sovereignty and control. These data centres and supercomputers are not just buildings; they are the “factories of the future,” processing everything from sensitive healthcare data from the UK Biobank to research that will define our national security. By handing the keys to this infrastructure to foreign entities, the UK risks becoming a digital vassal state, reliant on the goodwill and strategic interests of corporations whose primary allegiance is to their shareholders, not to the British public.
The “Tech Prosperity Deal” has been sold as a triumph of post-Brexit, “Global Britain.” But the reality is far more sobering. It is a deal that sees the government leveraging its power to reduce tax income, strain the national grid, and cede control of critical infrastructure, all to entice a handful of the world’s most powerful companies. The handshake may have been for £31 billion, but the price of Britain’s digital independence could prove to be far higher.
Further Reading and Sources
Official Government Announcements and Policies:
The US-UK Tech Prosperity Deal Announcement (16 September 2025): The full press release from the Department for Science, Innovation and Technology and 10 Downing Street.
I understand your concerns about the recent deal between the UK government and Big Tech, especially regarding data privacy and national security. It’s a complex issue with various potential implications.
On one hand, proponents of the deal might argue that collaborations with leading technology companies can bring significant benefits, such as:
Economic Growth: Attracting investment and creating jobs in the tech sector.
Technological Advancement: Gaining access to cutting-edge research, development, and infrastructure that could boost the UK’s scientific and technological capabilities.
Improved Public Services: Utilizing advanced data analytics and cloud computing to enhance efficiency and effectiveness in areas like healthcare (e.g., accelerating medical research through projects like the UK Biobank) and national security.
Global Competitiveness: Ensuring the UK remains a key player in the global digital economy.
Concerns about data sovereignty and the potential risks are valid. Here are some of the “what ifs” that often arise in discussions like this:
Data Security and Privacy:
Breaches: The risk of data breaches increases when sensitive information is stored and processed by third-party corporations, regardless of their size or security measures.
Jurisdiction: In cases of international corporations, questions arise about which country’s laws apply to data privacy and protection, potentially complicating legal recourse for UK citizens.
Usage of Data: Concerns about how this data might be used beyond the stated purpose, including for commercial gain or even by foreign governments, especially if those corporations are subject to laws in their home countries that compel data access.
National Security Implications:
Foreign Influence: The potential for foreign governments to gain indirect access to sensitive national security research or infrastructure through these corporations.
Critical Infrastructure: If essential services become overly reliant on foreign-owned tech infrastructure, it could pose a risk during international disputes or cyber-attacks.
Loss of Control and Sovereignty:
Policy Influence: Big Tech companies wield significant influence, and collaborations could lead to their greater sway over UK policy and regulatory frameworks.
Dependence: Becoming overly dependent on specific foreign providers could limit the UK’s long-term strategic autonomy in the digital sphere.
Ethical Considerations:
Bias in Algorithms: If AI and data analytics tools provided by these companies are used in public services, there’s a risk of embedded biases affecting outcomes for citizens.
Transparency: The level of transparency around how data is managed, accessed, and secured by these corporations can be a major point of contention.
These are all important points for public discussion and robust oversight. Governments typically try to mitigate these risks through strict contractual agreements, data protection regulations (like GDPR in the UK), independent audits, and national security assessments. The effectiveness of these safeguards is often the subject of ongoing debate and scrutiny.
Ever had that strange feeling? You mention needing a new garden fork in a message, and for the next week, every corner of the internet is suddenly waving one in your face. It’s a small thing, a bit of a joke, but it’s a sign of something much bigger, a sign that the digital world—a place of incredible creativity and connection—doesn’t quite feel like your own anymore.
The truth is, and let’s be authentic about it, we’ve struck a strange bargain. We’re not really the customers of these huge tech companies; in a funny sort of way, we’re the product. We leave a trail of digital breadcrumbs with every click and share, not realising they’re being gathered for someone else’s feast. Our digital lives are being used to train algorithms that are learning to anticipate our every move. It’s all a bit like we’re living in a house with glass walls, and we’ve forgotten who’s looking in or why. We’ve drifted into a new kind of system, a techno-feudalism, where a handful of companies own the infrastructure, write the rules we blithely agree to, and profit from the very essence of us.
This isn’t some far-off problem; it’s happening right here on our doorstep. Take Palantir, a US spy-tech firm now managing a massive platform of our NHS patient data. They’re also working with UK police forces, using their tech to build surveillance networks that can track everything from our movements to our political views. Even local councils are getting in on the act, with Coventry reviewing a half-a-million-pound deal with the firm after people, quite rightly, got worried. This is our data, our health records, our lives.
When you see how engineered the whole system is, you can’t help but ask: why aren’t we doing more to protect ourselves? Why do we have more rights down at the DVLA than we do online? Here in the UK, we have laws like the GDPR and the new Data (Use and Access) Act 2025, which sound good on paper. But in practice, they’re riddled with loopholes, and recent changes have actually made it easier for our data to be used without clear consent. Meanwhile, data brokers are trading our information with little oversight, creating risks that the government itself has acknowledged are a threat to our privacy and security.
It feels less like a mistake and more like the intended design.
This isn’t just about annoying ads. Algorithms are making life-changing decisions. In some English councils, AI tools have been found to downplay women’s health issues, baking gender bias right into social care. Imagine your own mother or sister’s health concerns being dismissed not by a doctor, but by a dispassionate algorithm that was never taught to listen properly. Amnesty International revealed last year how nearly three-quarters of our police forces are using “predictive” tech that is “supercharging racism” by targeting people based on biased postcode data. At the same time, police are rolling out more live facial recognition vans, treating everyone on the street like a potential suspect—a practice we know discriminates against people of colour. Even Sainsbury’s is testing it to stop shoplifters. This isn’t the kind, fair, and empathetic society we want to be building.
So, when things feel this big and overwhelming, it’s easy to feel a bit lost. But this is where we need to find that bit of steely grit. This is where we say, “Right, what’s next?”
If awareness isn’t enough, what’s the one thing that could genuinely change the game? It’s a Digital Bill of Rights. Think of it not as some dry legal document, but as a firewall for our humanity. A clear, binding set of principles that puts people before profit.
So, if we were to sit down together and draft this charter, what would be our non-negotiables? What would we demand? It might look something like this:
The right to digital privacy. The right to exist online without being constantly tracked and profiled without our clear, ongoing, and revocable consent. Period.
The right to human judgment. If a machine makes a significant decision about you – such as your job or loan – you should always have the right to have a human review it. AI does not get the final say.
A ban on predictive policing. No more criminalising people based on their postcode or the colour of their skin. That’s not justice; it’s algorithmic segregation.
The right to anonymity and encryption. The freedom to be online without being unmasked. Encryption isn’t shady; in this world, it’s about survival.
The right to control and delete our data. To be able to see what’s held on us and get rid of it completely. No hidden menus, no 30-day waiting periods. Just gone.
Transparency for AI. If an algorithm is being used on you, its logic and the data it was trained on should be open to scrutiny. No more black boxes affecting our lives.
And we need to go further, making sure these rights protect everyone, especially those most often targeted. That means mandatory, public audits for bias in every major AI system. A ban on biometric surveillance in our public spaces. And the right for our communities to have a say in how their culture and data are used.
Once this becomes law, everything changes. Consent becomes real. Transparency becomes the norm. Power shifts.
Honestly, you can’t private-browse your way out of this. You can’t just tweak your settings and hope for the best. The only way forward is together. A Digital Bill of Rights isn’t just a policy document; it’s a collective statement. It’s a creative, hopeful project we can all be a part of. It’s us saying, with one voice: you don’t own us, and you don’t get to decide what our future looks like.
This is so much bigger than privacy. It’s about our sovereignty as human beings. The tech platforms have kept us isolated on purpose, distracted and fragmented. But when we stand together and demand consent, transparency, and the simple power to say no, that’s the moment everything shifts. That’s how real change begins – not with permission, but with a shared sense of purpose and a bit of good-humoured, resilient pressure. They built this techno-nightmare thinking no one would ever organise against it. Let’s show them they were wrong.
The time is now. With every new development, the window for action gets a little smaller. Let’s demand a Citizen’s Bill of Digital Rights and Protections from our MPs and support groups like Amnesty, Liberty, and the Open Rights Group. Let’s build a digital world that reflects the best of us: one that is creative, kind, and truly free.
The AI on your phone isn’t just a helper. It’s a tool for corporate and state control that puts our democracy at risk.
I was surprised when my Android phone suddenly updated itself, and Gemini AI appeared on the front screen, inviting me to join the AI revolution happening worldwide.
Google, Apple, and Meta are locked in a high-stakes race to put a powerful AI assistant in your pocket. The promise is a life of seamless convenience. The price, however, may be the keys to your entire digital life, and the fallout threatens to stretch far beyond your personal data.
This isn’t merely my middle-aged luddite paranoia; widespread public anxiety has cast a sharp light on the trade-offs we are being asked to accept. This investigation will demonstrate how the fundamental design of modern AI, with its reliance on vast datasets and susceptibility to manipulation, creates a perfect storm. It not only exposes individuals to new forms of hacking and surveillance but also provides the tools for unprecedented corporate and government control, undermining the foundations of democratic society while empowering authoritarian regimes.
A Hacker’s New Playground
Let’s be clear about the immediate technical risk. Many sophisticated AI tasks are too complex for a phone to handle alone and require data to be sent to corporate cloud servers. This process can bypass the end-to-end encryption we have come to rely on, exposing our supposedly private data.
Worse still is the documented vulnerability known as “prompt injection.” This is a new and alarmingly simple form of hacking where malicious commands are hidden in webpages or even video subtitles. These prompts can trick an AI assistant into carrying out harmful actions, such as sending your passwords to a scammer. This technique effectively democratises hacking, and there is no foolproof solution.
The Foundations of Democracy Under Threat
This combination of data exposure and vulnerability creates a perfect storm for democratic systems. A healthy democracy relies on an informed public and trust in its institutions, both of which are directly threatened.
When AI can generate floods of convincing but entirely fake news or deepfake videos, it pollutes the information ecosystem. A 2023 article in the Journal of Democracy warned that this erosion of social trust weakens democratic accountability. The threat is real, with a 2024 Carnegie Endowment report detailing how AI enables malicious actors to disrupt elections with sophisticated, tailored propaganda.
At the same time, the dominance of a few tech giants creates a new form of unaccountable power. As these corporations become the gatekeepers of AI-driven information, they risk becoming a “hyper-technocracy,” shaping public opinion without any democratic oversight.
A Toolkit for the Modern Authoritarian
If AI presents a challenge to democracies, it is a powerful asset for authoritarian regimes. The tools that cause concern in open societies are ideal for surveillance and control. A 2023 Freedom House report noted that AI dramatically amplifies digital repression, making censorship faster and cheaper.
Regimes in China and Russia are already leveraging AI to produce sophisticated propaganda and control their populations. From automated censorship that suppresses dissent to the creation of fake online personas that push state-sponsored narratives, AI provides the ultimate toolkit for modern authoritarianism.
How to Take Back Control
A slide into this future is not inevitable. Practical solutions are available for those willing to make a conscious choice to protect their digital autonomy.
For private communication, established apps like Signal offer robust encryption and have resisted AI integration. For email services, Tuta Mail provides an AI-free alternative. For those wanting to use AI on their own terms, open-source tools like Jan.ai allow you to run models locally on your own computer.
Perhaps the most powerful step is to reconsider your operating system. On a PC, Linux Mint is a privacy-respecting alternative. For smartphones, GrapheneOS, a hardened version of Android, provides a significant shield against corporate data gathering.
The code has been written, and the devices are in our hands. The next battle will be fought not in the cloud, but in parliaments and regulatory bodies, where the rules for this new era have yet to be decided. The time for us, and our government, to act is now.
There’s a growing sense that the whole capitalist project is running on fumes. For decades, it’s been a system built on one simple rule: endless growth. But what happens when it runs out of road? It has already consumed new lands, markets, and even the quiet personal spaces of our attention. Think of it like a shark that must constantly swim forward to breathe, and it has finally hit the wall of the aquarium. The frantic, desperate thrashing we’re seeing in our politics and society? That’s the crisis.
For the last forty-odd years, the dominant philosophy steering our world has been Neoliberalism. Stripped to its bare bones, it’s a simple creed: privatise anything that isn’t nailed down, deregulate in the name of ‘freedom’, and chase economic growth as if it were the only god worth worshipping. What has become chillingly clear is that the current lurch towards authoritarianism isn’t a strange detour or a bug in the system; it’s the next logical feature. Technofascism isn’t some bizarre alternative to neoliberalism; it is its terrifying, inevitable endgame. It is emerging as a ‘last-ditch effort’ to rescue a system in terminal crisis, and the price of that rescue is democracy itself.
Before you can build such a machine, you need a blueprint. The blueprint for this new form of control is a set of extreme ideas that’d be laughable if their proponents weren’t sitting on mountains of cash and power. At the heart of a gloomy-sounding gentlemen’s club of philosophies, which includes Neo-Reactionism (or NRx), the Dark Enlightenment, and Accelerationism, is a deep, abiding, and utterly sincere contempt for the very idea of liberal democracy. They see it as a messy, sentimental, and ‘incredibly inefficient’ relic, a ‘failed experiment’ that just gets in the way of what they consider real progress.
This isn’t just a passing grumble about politicians. It’s a root-and-branch rejection of the last few centuries of political thought. Their utopia is a society restructured as a hyper-efficient tech start-up, helmed by a god-like ‘CEO-autocrat’. This genius-leader, naturally drawn from their own ranks, would be free to enact his grand vision without being bothered by tedious things like elections or civil liberties. It’s an idea born of staggering arrogance, a belief that a handful of men from Silicon Valley are so uniquely brilliant that they alone should be calling the shots.
This thinking didn’t spring from nowhere. Its strange prophets include figures like Curtis Yarvin, a blogger who spins academic-sounding blather that tells billionaires their immense power is not just deserved but necessary. It’s a philosophy that offers a convenient, pseudo-intellectual justification for greed and bigotry, framing them as signs that one is ‘red-pilled’, an enlightened soul who can see through the progressive charade. This worldview leads directly to a crucial pillar of technofascism: the active rejection of history and expertise. This mindset is captured in the terrifying nonchalance of a Google executive who declared, ‘I don’t even know why we study history… what already happened doesn’t really matter.’ This isn’t just ignorance; it’s a strategic necessity. To build their imagined future, they must demolish the guardrails of historical lessons that warn us about fascism and teach us the value of human rights. They declare war on the ‘ivory tower’ and the ‘credentialed expert’ because a population that respects knowledge will see their project for the dangerous fantasy it is.
But an ideology, no matter how extreme, remains hot air until it is forged into something tangible. The next chapter of this story is about how that strange, anti-democratic philosophy was hammered into actual, working tools of control. A prime case study is the company Palantir. It is the perfect, chilling expression of its founder Peter Thiel’s desire to ‘unilaterally change the world without having to constantly convince people.’ This company did not accidentally fall into government work; it was built from its inception to serve the state. Its primary revenue streams are not ordinary consumers, but the most powerful and secretive parts of government: the CIA, the FBI, and the Department of Homeland Security. It embodies the new ‘public-private partnership’, where the lines between a corporation and the state’s security apparatus are erased entirely.
The product of this unholy union is a global software of oppression. At home, Palantir was awarded a contract to create a tool for ICE to ‘surveil, track, profile and ultimately deport undocumented migrants,’ turning high-minded talk of ‘inefficiency’ into the ugly reality of families being torn apart. This same machinery of control is then exported abroad, where the company becomes a key player in the new defence industrial base. Its systems are deployed by militaries around the globe, and nowhere is this more terrifyingly apparent than in conflicts like the one in Gaza. There, occupied territories have become a digital laboratory where AI-powered targeting systems, enabled by companies within this ecosystem, are battle-tested with brutal efficiency. The line between a software company and an arms dealer is not just blurred; it is erased. This is the ultimate expression of the public-private partnership: the privatisation of war itself, waged through algorithms and data streams, where conflict zones become the ultimate testing ground.
This architecture of control, however, is not just aimed outward at state-defined enemies; it is turned inward, against the foundational power of an organised populace: the rights of workers. Technofascism, like its historical predecessors, understands that to dominate a society, you must first break its collective spirit. There’s a chilling historical echo here; the very first groups targeted by the Nazis were communists, socialists, and trade unionists. They were targeted first because organised labour is a centre of collective power that stands in opposition to total authority. Today, this assault is cloaked in the language of ‘disruption’. The gig economy, championed by Silicon Valley, has systematically shattered stable employment in entire industries, replacing it with a precarious workforce of atomised individuals who are cheaper, more disposable, and crucially, harder to organise. This attack on present-day labour is just a prelude to their ultimate goal: the stated desire to ‘liberate capital from labor for good.’ The ‘mad rush’ to develop AI is, at its core, a rush to create a future where the vast majority of humanity is rendered economically irrelevant and therefore politically powerless.
The human cost of this vision is already being paid. A new global caste system is emerging, starkly illustrated by OpenAI. While AI researchers in California enjoy ‘million-dollar compensation packages,’ Kenyan data workers are paid a ‘few bucks an hour’ to be ‘deeply psychologically traumatised’ by the hateful content they must filter. This is not an oversight; it is a calculated feature of what can only be called the ‘logic of Empire’, a modern colonialism where the human cost is outsourced and rendered invisible. This calculated contempt for human dignity is mirrored in their treatment of the planet itself. The environmental price tag for the AI boom is staggering: data centres with the energy footprint of entire states, propped up by coal plants and methane turbines. A single Google facility in water-scarce Chile planned to use a thousand times more fresh water than the local community. This isn’t an unfortunate side effect; it’s the logical outcome of an ideology that sees the natural world as an obstacle to be conquered or a flawed planet to be escaped. The fantasy of colonising Mars is the ultimate expression of this: a lifeboat for billionaires, built on the premise that they have the right to destroy our only home in the name of their own ‘progress’.
Having built this formidable corporate engine, the final, crucial act is to seize the levers of political power itself. While it is tempting to see this as the work of one particular political tribe, embodied by a figure like Donald Trump acting as a ‘figurehead’ who normalises the unthinkable, the reality is now far more insidious. The ideology has become so pervasive that it has captured the entire political establishment.
Consider this: after years of opposing Tory-led Freeports, Keir Starmer’s Labour government announces the creation of ‘AI Growth Zones’—digital versions of the same deregulated havens, designed explicitly for Big Tech. The project has become bipartisan. The state’s role is no longer to regulate these powerful entities, but to actively carve out legal exceptions for them. This move is mirrored on the global stage, where both the UK and US refuse to sign an EU-led AI safety treaty. The reasoning offered is a masterclass in technofascist rhetoric. US Vice President JD Vance, a direct protégé of Peter Thiel, warns that regulation could “kill a transformative industry,” echoing the Silicon Valley line that democracy is a drag on innovation. Meanwhile, the UK spokesperson deflects, citing concerns over “national security,” the classic justification for bypassing democratic oversight to protect the interests of the state and its corporate security partners.
This quiet, administrative capture of the state is, in many ways, more dangerous than a loud revolution. It doesn’t require a strongman; it can be implemented by polished, ‘sensible’ leaders who present it as pragmatic and inevitable. The strategy for taking power is no longer just about a chaotic ‘flood the zone with shit’ campaign; it’s also about policy papers, bipartisan agreements, and the slow, methodical erosion of regulatory power.
This is where the abstract horror becomes horrifyingly, tangibly real. The tools built by Palantir are actively used to facilitate the ‘cruel deportations’ of real people, a process that is only set to accelerate now that governments are creating bespoke legal zones for such technology. The AI systems built on the backs of traumatised workers are poised to eliminate the jobs of artists and writers. The political chaos deliberately sown online spills out into real-world violence and division. This is the strategy in action, where the combination of extremist ideology, corporate power, and a captured political class results in devastating human consequences.
When you line it all up, the narrative is stark and clear. First, you have the strange, elitist philosophy, born of ego and a deep-seated contempt for ordinary people. This ideology then builds the corporate weapons to enforce its vision. And finally, these weapons are handed to a political class, across the spectrum, to dismantle democracy from the inside. This entire project is fuelled by a desperate attempt to keep the wheels on a capitalist system that has run out of options and is now cannibalising its own host society to survive.
And here’s the kicker, the final, bitter irony that we must sit with. An ideology that built its brand by screaming from the rooftops about ‘freedom’, individualism, and the power of the ‘free market’ has, in the end, produced the most sophisticated and all-encompassing tools of control and oppression humanity has ever seen.
It’s a grim picture, but there are no two ways about it. But this is precisely where our own values of resilience, empathy, and grounded and courageous optimism must come into play. The first, most crucial act of resistance is simply to see this process clearly, to understand it for what it is. to engage in what the ancient Greeks called an apocalypse, not an end-of-the-world event but a lifting of the veil, a revelation.
Seeing the game is the first step to refusing to play it, especially now that all the major political teams are on the same side. It’s the moment we can say, ‘No, thank you.’ It’s the moment we choose to slow down, to log off from their manufactured chaos, and to reconnect with the real, tangible world around us. It’s the choice to value the very things their ideology seeks to crush: kindness, community, creativity, and the simple, profound magic of human connection. Facing this reality takes courage, but doesn’t have to lead to despair. It can be the catalyst that reminds us what is truly worth fighting for. And that, in itself, in a world of bipartisan consensus, is the most powerful and hopeful place to start.