Tag Archives: politics

Palantir & Brit Card: The Final Piece of the Surveillance State.

To understand what’s coming with the mandatory “Brit Card,” you first have to understand who is already here. The scheme isn’t appearing out of thin air; it’s the logical capstone on an infrastructure that has been quietly and deliberately assembled over years by a single, dominant player: Palantir. Their involvement isn’t just possible—it’s the probable, planned outcome of a strategy that serves both their corporate interests and the UK government’s long-held ambitions.

Let’s be clear about the facts. Palantir isn’t some new bidder for a government contract; they are already embedded, their surveillance tentacles wrapped around the core functions of the British state. They have over two dozen contracts, including with the NHS to analyse patient data, the Ministry of Defence for military intelligence, and police forces for “predictive policing.” They are in the Cabinet Office, they are in local government. They are, in essence, the state’s private intelligence agency.

This is a company forged in the crucible of the CIA and the NSA, whose entire business model is to turn citizen data into surveillance gold. Their track record is one of mass surveillance, racial profiling algorithms, and profiting from border control and deportations. To believe that this company would be hired to build a simple, privacy-respecting ID system is to willfully ignore everything they are and everything they do. The “Brit Card” is not a separate project for them. It is the keystone—the final piece that will allow them to link all their disparate data streams into one terrifyingly complete surveillance engine, with every UK adult forced onto its database.

But to grasp the scale of the threat, you have to ask why this is happening here, in the UK, and not anywhere else in Europe. This isn’t a happy accident; it’s a deliberate strategy. Palantir has chosen the UK for its European Defence HQ for a very simple reason: post-Brexit Britain is actively marketing itself as a deregulated safe harbour.

The UK government is offering what the EU, with its precautionary principles and landmark AI Act, cannot: regulatory flexibility. For a company like Palantir, whose business thrives in the grey areas of ethics and law, the EU is a minefield of compliance. The UK, by contrast, is signalling that it’s willing to write the rules in collaboration with them. The government’s refusal to sign the Paris AI declaration over “national security” concerns was not a minor diplomatic snub; it was the smoking gun. It was a clear signal to Silicon Valley that Britain is open for a different kind of business, one where restrictive governance will not get in the way of profit or state power.

This brings us to the core of the arrangement: a deeply symbiotic relationship. The UK government offers a favourable legal environment and waves a giant chequebook, with an industrial policy explicitly geared towards making the country a hub for AI and defence tech. The MoD contracts and R&D funding are a direct financial lure for predatory American corporations like Palantir, Blackrock, and Blackstone, inviting them to make deep, strategic incursions into our critical public infrastructure.

This isn’t charity, of course. In return, Palantir offers the government the tools for mass surveillance under the plausible deniability of a private contract. By establishing its HQ here, Palantir satisfies all the sovereign risk and security concerns, making them the perfect “trusted” partner. It’s a perfect feedback loop: the government signals its deregulatory intent, the money flows into defence and AI, and a company like Palantir responds by embedding itself ever deeper into the fabric of the state.

This isn’t about controlling immigration. It’s about building the infrastructure to control citizens. We are sacrificing our regulatory sovereignty for a perceived edge in security and technology, and in doing so, we are rolling out the red carpet for the very companies that specialise in monitoring us. When the firm that helps the CIA track its targets is hired to build your national ID card, you’re not getting documentation. You’re getting monitored.

Flipping the Switch: The Digital Pound in the Wrong Hands

The Digital Pound: A Tyrant’s Dream Come True.

You’ve heard all the promises about the Digital Pound. That it’s safe. That your privacy is guaranteed. But you have to ask yourself one brutal question: what happens when the people making those promises are gone? Because in the hands of an authoritarian regime, the system they are building today becomes the perfect weapon for controlling you tomorrow. This isn’t an academic exercise; it’s a warning. The infrastructure of a digital cage is being assembled right now, and what matters isn’t the current design, but what it will become in the wrong hands.

This isn’t just an academic exercise. History is littered with democracies that faltered. To build this infrastructure without considering the worst-case scenario is not just naive; it is reckless. In the event of an authoritarian takeover, the digital pound, linked to a Digital ID, would not be a tool of convenience. It would be the most perfect instrument of state control ever conceived.

The first and most immediate change would be the weaponisation of surveillance. All the current safeguards—GDPR, promises of data privacy, the separation between the Bank and private wallet providers—would be swept away overnight. An authoritarian state would rewrite the laws, bypass regulations, or simply coerce private companies to hand over the data. The system is already designed for traceability; a new regime would just have to point it in the right direction. Every transaction, every donation, every purchase would become an open book to the state, revealing your networks, your beliefs, and your loyalties. Financial privacy would cease to exist.

This leads directly to the next implication: conditional access to your own life. Today, they promise it’s a choice. Under an authoritarian regime, that choice would vanish. The digital pound would become mandatory, and cash, the last bastion of anonymity, would be aggressively phased out. We’ve seen how quickly existing financial systems can be turned against citizens. During the 2016 coup attempt in Turkey, the government froze the bank accounts of thousands of suspected dissidents. A digital pound would make this process frictionless and absolute.

Your access to money, and therefore your ability to buy food, pay rent, or travel, would be tied directly to your compliance. A centralised Digital ID would become the linchpin of a social credit system, where your right to participate in the economy is granted or denied based on your loyalty to the regime. Step out of line, and you could be switched off. Not arrested, not charged, just silently and efficiently excluded.

With this power, our fundamental civil liberties would be dismantled. The right to protest, to assemble, and to speak freely would be neutered. An authoritarian state could reprogramme the digital pound in an instant. It could block donations to opposition groups, restrict travel to protest locations, or even limit what you are allowed to purchase. The “silent denial of a transaction” would become the state’s most effective tool for suppressing dissent, creating a chilling effect that would silence opposition far more effectively than any police force.

And in a final, devastating step, such a regime could use the digital pound to manipulate the economy for its own ends. It could issue “helicopter money” directly into citizens’ wallets to shore up loyalty, but with strings attached—programmable funds that can only be spent on state-approved goods. It could even revalue the currency overnight, forcing everyone into the new system and wiping out the savings of those who resist.

The democratic checks and balances we rely on today are fragile. They can be eroded or dismantled. The infrastructure we build, however, is permanent. To create a centralised system that fuses identity and money is to build a cage. We may be promised that the door will remain unlocked, but in the hands of an authoritarian ruler, that door would be slammed shut and bolted. The Digital Pound would become the ultimate enforcer, turning every citizen into a subject, their freedom contingent on the flick of a switch.

https://www.bankofengland.co.uk/the-digital-pound

Your New Digital ID Isn’t For Convenience. It’s For Control.


The Digital Back Door: Why a National ID is the End of a Free Society

Every breath you take
And every move you make
Every bond you break
Every step you take
I’ll be watching you

Lyric George Sumner – The Police

There’s a pitch being sold to the British public, dressed up in the language of convenience and national security. It’s the idea of a Digital ID for every adult, a neat, modern solution to complex problems like illegal migration.

I can tell you this isn’t progress. It’s the architecture of a control system, a Trojan horse that smuggles a surveillance state in under the guise of efficiency. It is the end of a free society, and we are sleepwalking towards it.

Let’s start by dismantling the primary justification: fixing the border. The claim that a Digital ID will stop the boats is, to put it plainly, bollocks. It will not stop trafficking gangs, nor will it fix a fundamentally broken system. Criminals and their networks are, by their very nature, experts at working around systems; they adapt faster than bureaucracies can legislate. The ones who will pay the price for this vast, expensive, and dangerous infrastructure will not be the criminals, but the honest, law-abiding citizens of this country.

The fundamental flaw lies in a concept I deal with daily: centralised risk. We spend hundreds of billions a year on cybersecurity, yet the volume and severity of data breaches are breaking records. The threat grows faster than the spend. From Jaguar Land Rover to major airports, no centralised system has proven impenetrable. Now, imagine that vulnerability scaled up to a national level, with a single database linking your identity to every checkpoint of daily life: where you go, what you buy, what you read, and who you speak to.

Here is the risk that ministers will not admit. A sophisticated ransomware attack, seeded quietly through a compromised supplier or a disgruntled insider, lies dormant for months. It slowly rolls through the backups, undetected. Then, on trigger day, the live registry and every recovery set are encrypted simultaneously. The country grinds to a halt. Payments fail. Health and benefits systems stall. Borders slow to a crawl. Citizens are frozen out of their own lives until a ransom is paid or the state is forced to rebuild the nation’s identity from scratch. To centralise identity is to centralise failure.

This, however, is only the technical risk. The greater political and social danger lies in the certainty of function creep. It will begin as an optional, convenient way to log in or prove your age. But it will not end there. It will inevitably become a mandatory prerequisite for accessing money, travel, employment, and essential public services. Our fundamental rights will be turned into permissions, granted or revoked by the state and its chosen corporate contractors.

This isn’t a theoretical dystopian future; it’s a documented reality. India’s Aadhaar system, initially for welfare, now underpins everything from banking to mobile phones and has been plagued by data leaks exposing millions to fraud. We are seeing the groundwork laid in the UK with the Digital Identity and Attributes Trust Framework (DIATF), a federated model reliant on a network of private suppliers like Yoti, Hippo Digital, and IDEMIA. This multi-vendor approach doesn’t eliminate risk; it multiplies the potential points of failure through a web of interconnected APIs, each a potential back door for attackers.

Furthermore, this system is built on a foundation of exclusion. The assumption of universal digital literacy is a dangerous fiction. With a significant percentage of UK adults lacking basic digital skills, a mandatory Digital ID will create a two-tier society. The elderly, the poor, and the vulnerable—those who cannot or will not comply—risk being locked out of the services they need most, deepening inequality and fuelling social unrest.

The gravest danger, however, emerges when this infrastructure is placed in the context of a crisis. Economic collapse, social unrest, or an environmental emergency often serves as the justification for an expansion of state power. A Digital ID system provides the ready-made tool for authoritarianism. In a crisis, it could be repurposed to monitor dissent, freeze the bank accounts of protesters, or restrict the movement of individuals deemed a threat. It builds, by stealth, the machinery for a social credit system.

And this brings us to the corporate engine waiting to power this machine: Palantir. The US data-mining firm is already deeply embedded within the UK state, with contracts spanning the NHS and the Ministry of Defence. Palantir doesn’t need a specific contract for the “Brit Card”; its platforms, Foundry and Gotham, are designed to do precisely what a Digital ID enables on a mass scale: fuse disparate datasets into a single, all-encompassing profile for every citizen.

The Digital ID would be the “golden record” that connects your health data, your financial transactions, your movements, and your communications. In a crisis, Palantir’s AI could be used for predictive surveillance—flagging individuals who enter a “protest zone” or transactions to “undesirable” organisations. This isn’t just a British system; with Palantir’s deep ties to US intelligence, it becomes a system subject to foreign demands under legislation like the CLOUD Act. We would be outsourcing our national sovereignty.

The entire premise is flawed. If the government were serious about the border, it would enforce current laws, properly resource patrols and processing, and close existing loopholes. You do not need to build a panopticon to do that. We scrapped ID cards in 2010 for a reason, recognising their threat to our fundamental liberties. Reintroducing them through the digital back door, outsourced to a network of private contractors and data-mining firms, is a monumental error.

There are better ways. Decentralised alternatives using cryptographic methods like zero-knowledge proofs can verify status or identity without creating a central honeypot of data. But these privacy-first solutions lack government traction because the true, unstated goal is not security or convenience. It is control. We must not fall for the pitch. This is a system that will centralise risk and outsource blame. It will punish the vulnerable while failing to stop the criminals it targets. It is the foundation for a future where our rights are contingent on our compliance. The choice is simple: yes to privacy-first proofs, no to a database state.

Beware the all-seeing eye!

New Look Fascism: Hiding In Plain Sight

We all know the old footage. The stark, monochrome marches, the rigid salutes, the frenzied crowds. It’s the ghost that haunts our modern world, and we’ve convinced ourselves we’d spot its return a mile off. We tell ourselves, “Never again,” with a quiet confidence that comes from knowing the enemy’s uniform. But what if the uniform has changed? What if the new fascism isn’t wearing jackboots, but a tailored suit, a tech bro’s hoodie, or the ironic grin of a meme?

That’s the unsettling truth we have to face. The aesthetics of authoritarianism have undergone a quiet but total redesign for the 21st century. It’s a friendlier, more insidious form that creeps in not with the bang of a dictator’s fist on a podium, but with the soft, persuasive glow of a smartphone screen. It’s less about stormtroopers and more about Silicon Valley’s vision of a tech-utopia, less about blood-and-soil rallies and more about the curated nostalgia of a “lost” masculinity. To my own mind, the most dangerous trick it’s pulled is making the whole thing feel like one big, bad taste joke.

Take a look around. The Italian Futurist Artists glorified war and speed; today’s tech oligarchs preach a gospel of progress, selling us a shiny, minimalist future where their corporations, not nations, are in charge. It’s a vision of power wrapped in the cool, unobjectionable aesthetics of a corporate keynote. And when that feels too cold, it offers Solar Punk—a beautiful, green-washed dream of harmony that can so easily be twisted to justify eco-fascist ideas of population control and exclusion. It’s utopia as a sales pitch, and it’s dangerously persuasive.

But the real shift, the one that leaves many of us feeling like we’re shouting into a void, is the weaponisation of irony. The symbols of hate have been replaced by cartoon frogs and anime girls. The dehumanising rhetoric is hidden behind layers of “just banter, mate.” It’s a shield of plausible deniability that allows cruel ideologies to spread through gaming chats and podcast bro culture, targeting young men who feel adrift. When you try to point out the nastiness lurking beneath the surface, you’re instantly labelled a humourless “snowflake.” It’s a brilliant, frustrating tactic: they make the world meaningless, so that caring about anything at all becomes a sign of weakness.

And now, we have AI. This, to me, feels like the final stage of this aesthetic hollowing-out. We’re being flooded with AI-generated slop—politicised art created without a shred of human conviction or creativity. It’s the ultimate tool for aestheticising politics, turning historical atrocities and genocidal fantasies into just another piece of content, stripped of all weight and horror. When everything can be faked and every image is just empty aesthetics, how do we hold on to truth?

So, how do we push back against something that’s designed to be slippery, ironic, and everywhere? I don’t claim to have all the answers, but I believe it starts with a kind of stubborn, clear-eyed authenticity.

First, we have to get better at reading the aesthetics. We need a new kind of literacy that looks past the what and questions the how. Why does that political ad look like a video game trailer? Why is that leader communicating entirely in memes? We have to name the tactics when we see them, pulling back the curtain on the irony and the aesthetic whitewashing.

More importantly, we have to offer a better, truer story. You can’t fight a sense of belonging built on hatred with a list of policy points. We need to build real, tangible communities—through unions, local projects, mutual aid—that give people a genuine stake and a connection that no online cult can match. And we need to champion art and narratives that are unafraid of complexity and rich with empathy, that offer a vision of a future worth fighting for, one that includes everyone.

Ultimately, it comes down to a simple, profound choice: we have to insist on meaning. In a world that’s being deliberately drained of it, we must value truth over fiction, complexity over simplistic lies, and the inherent dignity of every person over the fascist’s cruel hierarchy of worth.

It’s not about winning an online argument or a single election. It’s a long, persistent effort to build a world where people feel secure and respected enough to see the new fascism for what it is: a seductive, well-designed package with nothing but emptiness inside. And that requires us to be, above all else, true to ourselves.

The Keep Sane in Troubled Times Playbook
1. Develop Critical Aesthetic Literacy
The first step is to recognise the weaponisation of aesthetics. This means moving beyond analysing what is said to how it is presented.

  • Teach Media Literacy 2.0: Go beyond identifying fake news. Teach people to deconstruct visual rhetoric: Why does a political ad use a specific type of animation? Why does a leader’s social media feed look like a meme page? What emotions is a corporate “utopian” video trying to evoke, and what material realities does it hide?
  • Name the Tactics: Publicly label the strategies when you see them. Point out the irony-poisoning, the co-option of subcultures, the use of AI slop to flood the zone. By making the mechanics visible, you rob them of their power.

2. Rebuild Trust through Material Politics and Local Organising
Fascism feeds on alienation, despair, and the collapse of trust in institutions. The most powerful antidote is to demonstrate that collective, democratic action can improve people’s lives.

  • Focus on Material Conditions: Shift the conversation from the abstract culture war to concrete, material issues: affordable housing, healthcare, wages, unionisation, climate resilience. Fascism offers scapegoats; a real alternative must offer solutions that address the root causes of anxiety.
  • Strengthen Local Community: Support and participate in local unions, tenants’ associations, mutual aid networks, and community gardens. These organisations build real-world solidarity, trust, and collective power that is immune to online manipulation. They provide a sense of belonging that is not based on hatred of an “other”.

3. Create Competing, Hopeful Narratives and Aesthetics
You cannot defeat a powerful aesthetic with a dry policy paper. The left and centre must relearn the art of storytelling and vision-building.

  • Articulate a Positive, Inclusive Future: Solar Punk, as mentioned, has positive potential. We need compelling, artistically rendered visions of a future that is both technologically advanced and socially just, ecologically sustainable, and inclusive. This vision must be attractive enough to compete with the nostalgic fantasies of the far right.
  • Support Art and Culture that Builds Empathy: Fund, celebrate, and amplify art, films, music, and games that celebrate complexity, diversity, and human dignity. Counter the dehumanising caricatures with rich, humanising stories.

4. Strategic, Unified Opposition and Deplatforming
While open debate is ideal, the video correctly shows that these movements often argue in bad faith, using debate as a platform to spread conspiracies.

  • Do Not Normalise: Avoid treating fascist ideology as a legitimate point of view in political discourse. The goal is not to “debate” whether some people are inferior, but to isolate and discredit those ideas. Media outlets have a responsibility not to platform figures who traffic in replacement theory or Holocaust denial for “balance”.
  • Strengthen Institutional Guardrails: Defend and strengthen independent journalism, an independent judiciary, free and fair elections, and the rule of law. Support projects that document hate speech and extremist networks. This is the unsexy, bureaucratic work that is essential for democracy’s survival.

5. Personal Responsibility and Courage

  • Interrupt Casual Bigotry: Do not let racist, homophobic, or antisemitic “jokes” slide in personal conversations. A calm, firm response like, “I don’t find that funny,” or “Why do you say that?” can disrupt the normalisation process.
  • Support Victims: Stand in solidarity with those targeted by hate. If you see someone being harassed, be a proactive bystander. This demonstrates that the community will not tolerate intimidation.
  • Protect Your Mental Space: The constant barrage of corrosive content is designed to exhaust and demoralise. It is essential to log off, engage in real-world communities, and protect your capacity for empathy and hope. You cannot fight a long-term battle while burned out.

The Core Challenge: Rejecting Meaninglessness
The video concludes that the ultimate goal of this aestheticisation is to make everything meaningless. Therefore, the most profound act of resistance is to insist on meaning.

This means:

  • Valuing Truth: Upholding the distinction between fact and fiction.
  • Valuing Complexity: Rejecting simplistic, us-vs-them narratives in favour of nuanced understanding.
  • Valuing Human Dignity: Constantly affirming the inherent and equal worth of every person, against the hierarchy of worth that fascism promotes.

Countering this new fascism is not about winning a single election or a viral online battle. It is a long-term, cultural, and political project to rebuild a society where people feel secure, respected, and hopeful enough to reject the seductive but deadly lies of fascism in any aesthetic guise.

How To Beat Reform

Core Strategic Principle: Diagnosis Before Prescription

Think of the 1970s and you think of flared trousers and Abba. You probably don’t think of Nazi salutes on British streets.

But for a time, the far-right National Front (NF) was a terrifying force in UK politics. Its skinhead gangs terrorised immigrant communities. Its leaders were open Hitler admirers. And in the 1977 elections, over 200,000 people voted for them.

Then, they were crushed. Not in a war, but by a brilliant, gritty campaign that united punk rockers, grandparents, trade unions and communities. Today, as a new wave of populism gains traction, the lessons from that victory are not just history – they’re a handbook.

Here’s how it was done, and how it applies now.

Lesson 1: Stop Debating, Start Disrupting

The anti-fascists of the ’70s knew a crucial truth: you can’t reason someone out of a position they weren’t reasoned into. So they didn’t try. Instead, their strategy was simple: make it impossible for the NF to function.

They physically blocked their marches. They packed their meetings and shouted them down. The goal wasn’t to win an argument; it was to create such a logistical nightmare that the authorities were forced to ban events and the Nazis were too ashamed to show their faces.

The Modern Application: Today, the town hall meeting has been replaced by the social media algorithm. The tactic of disruption isn’t just about physical blocking—which can backfire against a legal party—but about a more sophisticated, multi-pronged assault. This means flooding the digital space with compelling counter-content, using ‘pre-bunking’ techniques to inoculate the public against predictable manipulation, and actively ‘de-branding’ their language by refusing to parrot loaded terms. Instead of “stop the boats,” the debate becomes about “fixing the asylum system.” The goal remains the same: to deny their narrative the clean air it needs to breathe.

Lesson 2: Expose the Core, Not Just the Policies

The NF tried to hide its Nazi core behind a veneer of ‘respectable’ racism. Anti-fascists ripped this mask off. They circulated photos of leader John Tyndall in his not-at-all-a-Nazi-uniform and highlighted his speeches praising Hitler. The result? The more moderate followers fled, and the party splintered. The label ‘Nazi’ stuck because the evidence was overwhelming.

The Modern Application: This isn’t about slapping the ‘fascist’ label on every opponent. It’s about rigorous exposure. Who endorses this party? What do their policies logically lead to? When a candidate is found to have made extremist statements, the question to the leadership is simple: “Do you condone this? If not, what are you doing about it?” Force them to either repudiate their fringe or be defined by it. The battle is to expose the underlying narrative of national humiliation and purging, no matter how sanitised the language.

Lesson 3: Apply Institutional and Economic Friction

Beyond the battle of ideas lies the less visible but equally critical war of institutional accountability. The 1970s activists understood that pressure had to be applied at every level. When the Hackney Gazette ran an NF advert, its staff went on strike.

The Modern Application: The contemporary equivalent is wielding strategic economic and legal pressure. This means holding corporate donors publicly accountable, supporting rigorous challenges to potential campaign spending breaches, and demanding that media platforms couple any coverage with immediate, contextual fact-checking. The objective is to create friction—to make supporting or enabling populism a professionally and reputationally costly endeavour. This isn’t about silencing opposition, but about enforcing the rules and standards that populists seek to erode, ensuring demagoguery carries a tangible price.

Lesson 4: Out-Create Them. Make Hope Go Viral.

This was the masterstroke. While some groups fought in the streets, the Anti-Nazi League and Rock Against Racism (RAR) fought for the culture. They realised that to win over a generation, you couldn’t just be against something; you had to be for something better.

RAR staged legendary gigs that paired white punk bands like The Clash with Black reggae acts. Their 1978 carnival in London attracted 100,000 people—a joyful, defiant celebration that made the NF look like the miserable, hate-fuelled sect they were.

“This ain’t no fucking Woodstock. This is the Carnival against the Nazis!” – Red Saunders, RAR co-founder

The Modern Application: This is the most critical lesson. Populism feeds on pessimism and cultural despair. The antidote is to build a more compelling, positive, and inclusive vision. Where is the modern equivalent of RAR? It’s about supporting creators, artists, and community initiatives that showcase a confident, modern Britain. It’s about telling stories of successful integration and shared future, making ‘hope’ more viral than ‘fear’.

Lesson 5: Protect Your Own. Community is Armour.

When the state failed to protect them, targeted communities organised their own defence. The Southall Youth Movement and others made their neighbourhoods ‘no-go zones’ for racists, patrolling streets and confronting threats directly. This wasn’t just about physical safety; it was about building unbreakable social and political resilience.

“What did we  share with the white left? We learned from them   as well. We shared the vision of a new world,  our world, a world in which we were all equal,   a fairer world.” – Tariq Mahmood, activist

The Modern Application: The threats today are often more digital and psychological than physical, but the principle is the same. This means strengthening local community bonds, supporting organisations that monitor and combat hate crime, and building robust support networks. Critically, this work must be underpinned by a ‘marathon, not a sprint’ mentality. The defeat of the National Front was not the work of a single election cycle but a sustained, multi-year effort. The modern challenge is to build resilient, long-term infrastructure—’the bakery’—that can withstand populist waves by addressing the underlying grievances of isolation and economic despair they exploit.

The Uncomfortable Truth for Today

The crucial difference is that Reform UK is not the National Front. It is a populist party, not a fascist paramilitary one. Applying the 1970s playbook isn’t about mindlessly copying tactics; it’s about intelligently adapting the principles.

The battle against the NF was won by a coalition that understood this was a war fought on multiple fronts simultaneously. It required the raw energy of street-level disruption, the sharp wit of cultural creation, the shrewdness of political exposure, and the patient, grinding work of institutional and legal challenge.

To effectively challenge modern populism demands the same holistic courage. It is not enough to out-create them online if their economic enablers face no consequences. It is not enough to win a legal battle if the cultural narrative of grievance remains unchallenged. The lesson of the 1970s is that victory comes not from a single masterstroke, but from the relentless, coordinated application of pressure everywhere it counts. The question is whether we can build a movement with the strategic depth to fight on all those fronts at once.

The UK Didn’t Just Sign a Tech Deal – It Handed Over the Keys.

Whilst all eyes are on Trump at Windsor the UK Government announced the “Tech Prosperity Deal,” a picture is emerging not of a partnership, but of a wholesale outsourcing of Britain’s digital future to a handful of American tech behemoths. The government’s announcement, dripping with talk of a “golden age” and “generational step change,” paints a utopian vision of jobs and innovation. But peel back the layers of PR, and the £31 billion deal begins to look less like an investment in Britain and more like a leveraged buyout of its critical infrastructure.

At the heart of this cosy relationship lies a bespoke new framework: the “AI Growth Zone.” The first of its kind, established in the North East, is the blueprint for this new model of governance. It isn’t just a tax break; it’s a red-carpet-lined, red-tape-free corridor designed explicitly for the benefit of companies like Microsoft, NVIDIA, and OpenAI. The government’s role has shifted from regulation to facilitation, promising to “clear the path” by offering streamlined planning and, crucially, priority access to the national power grid—a resource already under strain.

While ministers celebrate the headline figure of £31 billion in private capital, the true cost to the public is being quietly written off in the footnotes. This isn’t free money. The British public is footing the bill indirectly through a cascade of financial incentives baked into the UK’s Freeport and Investment Zone strategy. These “special tax sites” offer corporations up to 100% relief on business rates for five years, exemptions from Stamp Duty, and massive allowances on capital investment. For every pound of tax relief handed to Microsoft for its £22 billion supercomputer or Blackstone for its £10 billion data centre campus, that is a pound less for schools, hospitals, and public services.

Conspicuously absent from this grand bargain is any meaningful protection for the very people whose data will fuel this new digital economy. The deafening silence from Downing Street on the need for a Citizens’ Bill of Digital Rights is telling. Such a bill would enshrine fundamental protections: the right to own and control one’s personal data, the right to transparency in algorithmic decision-making, and the right to privacy from pervasive state and corporate surveillance. Instead, the British public is left to navigate this new era with a patchwork of outdated data protection laws, utterly ill-equipped for the age of sovereign AI and quantum computing. Without these enshrined rights, citizens are not participants in this revolution; they are the raw material, their health records and digital footprints the currency in a deal struck far above their heads.

What is perhaps most revealing is the blurring of lines between the state and the boardroom. The government’s own press release celebrating the deal reads like a corporate shareholder report, quoting the CEOs of NVIDIA, OpenAI, and Microsoft at length. Their voices are not presented as external partners but as integral players in a shared national project. When Sam Altman, CEO of OpenAI, declares that “Stargate UK builds on this foundation,” it raises the fundamental question: who is building what, and for whom?

This unprecedented integration of Big Tech into the fabric of national infrastructure raises profound questions about sovereignty and control. These data centres and supercomputers are not just buildings; they are the “factories of the future,” processing everything from sensitive healthcare data from the UK Biobank to research that will define our national security. By handing the keys to this infrastructure to foreign entities, the UK risks becoming a digital vassal state, reliant on the goodwill and strategic interests of corporations whose primary allegiance is to their shareholders, not to the British public.

The “Tech Prosperity Deal” has been sold as a triumph of post-Brexit, “Global Britain.” But the reality is far more sobering. It is a deal that sees the government leveraging its power to reduce tax income, strain the national grid, and cede control of critical infrastructure, all to entice a handful of the world’s most powerful companies. The handshake may have been for £31 billion, but the price of Britain’s digital independence could prove to be far higher.


Further Reading and Sources

Official Government Announcements and Policies:

Digital Rights and Privacy Advocacy:

  • Open Rights Group: A leading UK organisation campaigning for digital rights and privacy, with analysis on AI and data protection.
  • Big Brother Watch: Investigates and challenges threats to civil liberties, including state surveillance and the use of private data.

Data and Infrastructure Context:

I understand your concerns about the recent deal between the UK government and Big Tech, especially regarding data privacy and national security. It’s a complex issue with various potential implications.

On one hand, proponents of the deal might argue that collaborations with leading technology companies can bring significant benefits, such as:

  • Economic Growth: Attracting investment and creating jobs in the tech sector.
  • Technological Advancement: Gaining access to cutting-edge research, development, and infrastructure that could boost the UK’s scientific and technological capabilities.
  • Improved Public Services: Utilizing advanced data analytics and cloud computing to enhance efficiency and effectiveness in areas like healthcare (e.g., accelerating medical research through projects like the UK Biobank) and national security.
  • Global Competitiveness: Ensuring the UK remains a key player in the global digital economy.

Concerns about data sovereignty and the potential risks are valid. Here are some of the “what ifs” that often arise in discussions like this:

  • Data Security and Privacy:
    • Breaches: The risk of data breaches increases when sensitive information is stored and processed by third-party corporations, regardless of their size or security measures.
    • Jurisdiction: In cases of international corporations, questions arise about which country’s laws apply to data privacy and protection, potentially complicating legal recourse for UK citizens.
    • Usage of Data: Concerns about how this data might be used beyond the stated purpose, including for commercial gain or even by foreign governments, especially if those corporations are subject to laws in their home countries that compel data access.
  • National Security Implications:
    • Foreign Influence: The potential for foreign governments to gain indirect access to sensitive national security research or infrastructure through these corporations.
    • Critical Infrastructure: If essential services become overly reliant on foreign-owned tech infrastructure, it could pose a risk during international disputes or cyber-attacks.
  • Loss of Control and Sovereignty:
    • Policy Influence: Big Tech companies wield significant influence, and collaborations could lead to their greater sway over UK policy and regulatory frameworks.
    • Dependence: Becoming overly dependent on specific foreign providers could limit the UK’s long-term strategic autonomy in the digital sphere.
  • Ethical Considerations:
    • Bias in Algorithms: If AI and data analytics tools provided by these companies are used in public services, there’s a risk of embedded biases affecting outcomes for citizens.
    • Transparency: The level of transparency around how data is managed, accessed, and secured by these corporations can be a major point of contention.

These are all important points for public discussion and robust oversight. Governments typically try to mitigate these risks through strict contractual agreements, data protection regulations (like GDPR in the UK), independent audits, and national security assessments. The effectiveness of these safeguards is often the subject of ongoing debate and scrutiny.

The Nihilism Factory: Why Far-Right Memes Aren’t a Joke

In the US, online political clashes are often better understood as a battle of internet subcultures. Two major groups on the far-right, while frequently lumped together, are worlds apart: the traditional Christian nationalists and the nihilistic ‘black-pilled’ wing of the ‘groyper’ scene.

The simplest way to frame it is as the ‘builders’ versus the ‘burners’.

The builders—the Christian nationalists—are still trying to construct something. They have a vision for an explicitly Christian nation, founded on order, hierarchy, and a return to what they see as ‘proper’ social roles. Their strategy is institutional: win elections, pass laws, stack the courts, and capture the school boards. Their language centres on ‘restoration’ and ‘revival’. Even when their rhetoric gets apocalyptic, the end goal is to use state power to enforce a particular moral order.

The burners, however, are orbiting a completely different sun. This is a much younger, more terminally online crowd, full of streamers and internet personalities. Their worldview is steeped in the cynicism of incel forums, gamer culture, and a deeply ironic, ‘edgelord’ sense of humour.

The crucial distinction is their profound loss of faith in reform. The black pilled wing is utterly convinced that our institutions, our culture, and even people themselves are beyond saving. The ‘black pill’ is a metaphor for accepting a brutal ‘truth’: that decline is irreversible, making despair the only rational response. If nothing can be redeemed, the only creative act left is to tear it all down. This accelerationism operates less like a political programme and more like a social physics, deliberately pressing on every social fault line—from race to gender—just to see what breaks. It is, essentially, the worship of things falling apart.

The bizarre, cryptic memes are central because, for them, the style is the substance. The meme factory serves several functions at once.

It’s a fiercely effective recruitment tool. A darkly funny, high-contrast image travels much faster and wider than a dense policy document. It’s also wrapped in the Kevlar vest of irony, which offers plausible deniability; if you’re offended, they were ‘just joking’. Finally, it works to desensitise its audience. Shock is used like a muscle. The first time you see something awful, you flinch. By the hundredth time, an idea that was once unthinkable feels perfectly normal within the group. This is why their aesthetic is such a chaotic mash-up of cartoon frogs and nihilistic jokes. The underlying message is that nothing matters.

You can start to see the appeal for those who feel exiled from the traditional games of status—dating, university, a good career. It offers a cheap and easy form of belonging where attention is the only currency.

This helps explain why real-world incidents are often followed by posts loaded with strange symbols. The act itself is a performance for an online audience, where the primary aim is gaining in-group status by turning reality into a toxic, private joke.

This doesn’t make it harmless, not for a second. A politics that only wants to break things can still inspire catastrophe, because its only measure of success is destruction.

The antidote requires us to refuse the seductive pull of nihilism and call the black pill what it is: a permission slip for cruelty hiding behind a mask of sophistication. After that, it’s about doing the quiet, unglamorous work of building real meaning and belonging in our lives—in places where empty spectacle can’t compete.

When you get right down to it, Christian nationalism is a plan to rule; black pilled accelerationism is a plan to ruin. Once you grasp that polarity, the memes stop looking like mysterious runes and start looking like what they are: billboards for a politics of nothing.

If you are interested in the world of memes here’ a great place to start https://knowyourmeme.com/memes/

We all need a ‘Digital Bill of Rights’

Ever had that strange feeling? You mention needing a new garden fork in a message, and for the next week, every corner of the internet is suddenly waving one in your face. It’s a small thing, a bit of a joke, but it’s a sign of something much bigger, a sign that the digital world—a place of incredible creativity and connection—doesn’t quite feel like your own anymore.

The truth is, and let’s be authentic about it, we’ve struck a strange bargain. We’re not really the customers of these huge tech companies; in a funny sort of way, we’re the product. We leave a trail of digital breadcrumbs with every click and share, not realising they’re being gathered for someone else’s feast. Our digital lives are being used to train algorithms that are learning to anticipate our every move. It’s all a bit like we’re living in a house with glass walls, and we’ve forgotten who’s looking in or why. We’ve drifted into a new kind of system, a techno-feudalism, where a handful of companies own the infrastructure, write the rules we blithely agree to, and profit from the very essence of us.

This isn’t some far-off problem; it’s happening right here on our doorstep. Take Palantir, a US spy-tech firm now managing a massive platform of our NHS patient data. They’re also working with UK police forces, using their tech to build surveillance networks that can track everything from our movements to our political views. Even local councils are getting in on the act, with Coventry reviewing a half-a-million-pound deal with the firm after people, quite rightly, got worried. This is our data, our health records, our lives.

When you see how engineered the whole system is, you can’t help but ask: why aren’t we doing more to protect ourselves? Why do we have more rights down at the DVLA than we do online? Here in the UK, we have laws like the GDPR and the new Data (Use and Access) Act 2025, which sound good on paper. But in practice, they’re riddled with loopholes, and recent changes have actually made it easier for our data to be used without clear consent. Meanwhile, data brokers are trading our information with little oversight, creating risks that the government itself has acknowledged are a threat to our privacy and security.

It feels less like a mistake and more like the intended design.

This isn’t just about annoying ads. Algorithms are making life-changing decisions. In some English councils, AI tools have been found to downplay women’s health issues, baking gender bias right into social care. Imagine your own mother or sister’s health concerns being dismissed not by a doctor, but by a dispassionate algorithm that was never taught to listen properly. Amnesty International revealed last year how nearly three-quarters of our police forces are using “predictive” tech that is “supercharging racism” by targeting people based on biased postcode data. At the same time, police are rolling out more live facial recognition vans, treating everyone on the street like a potential suspect—a practice we know discriminates against people of colour. Even Sainsbury’s is testing it to stop shoplifters. This isn’t the kind, fair, and empathetic society we want to be building.

So, when things feel this big and overwhelming, it’s easy to feel a bit lost. But this is where we need to find that bit of steely grit. This is where we say, “Right, what’s next?”

If awareness isn’t enough, what’s the one thing that could genuinely change the game? It’s a Digital Bill of Rights. Think of it not as some dry legal document, but as a firewall for our humanity. A clear, binding set of principles that puts people before profit.

So, if we were to sit down together and draft this charter, what would be our non-negotiables? What would we demand? It might look something like this:

  • The right to digital privacy. The right to exist online without being constantly tracked and profiled without our clear, ongoing, and revocable consent. Period.
  • The right to human judgment. If a machine makes a significant decision about you – such as your job or loan – you should always have the right to have a human review it. AI does not get the final say.
  • A ban on predictive policing. No more criminalising people based on their postcode or the colour of their skin. That’s not justice; it’s algorithmic segregation.
  • The right to anonymity and encryption. The freedom to be online without being unmasked. Encryption isn’t shady; in this world, it’s about survival.
  • The right to control and delete our data. To be able to see what’s held on us and get rid of it completely. No hidden menus, no 30-day waiting periods. Just gone.
  • Transparency for AI. If an algorithm is being used on you, its logic and the data it was trained on should be open to scrutiny. No more black boxes affecting our lives.

And we need to go further, making sure these rights protect everyone, especially those most often targeted. That means mandatory, public audits for bias in every major AI system. A ban on biometric surveillance in our public spaces. And the right for our communities to have a say in how their culture and data are used.

Once this becomes law, everything changes. Consent becomes real. Transparency becomes the norm. Power shifts.

Honestly, you can’t private-browse your way out of this. You can’t just tweak your settings and hope for the best. The only way forward is together. A Digital Bill of Rights isn’t just a policy document; it’s a collective statement. It’s a creative, hopeful project we can all be a part of. It’s us saying, with one voice: you don’t own us, and you don’t get to decide what our future looks like.

This is so much bigger than privacy. It’s about our sovereignty as human beings. The tech platforms have kept us isolated on purpose, distracted and fragmented. But when we stand together and demand consent, transparency, and the simple power to say no, that’s the moment everything shifts. That’s how real change begins – not with permission, but with a shared sense of purpose and a bit of good-humoured, resilient pressure. They built this techno-nightmare thinking no one would ever organise against it. Let’s show them they were wrong.

The time is now. With every new development, the window for action gets a little smaller. Let’s demand a Citizen’s Bill of Digital Rights and Protections from our MPs and support groups like Amnesty, Liberty, and the Open Rights Group. Let’s build a digital world that reflects the best of us: one that is creative, kind, and truly free.

Say no to digital IDs here https://petition.parliament.uk/petitions/730194

Sources

  1. Patient privacy fears as US spy tech firm Palantir wins £330m NHS …
  2. UK police forces dodge questions on Palantir – Good Law Project
  3. Coventry City Council contract with AI firm Palantir under review – BBC
  4. Data (Use and Access) Act 2025: data protection and privacy changes
  5. UK Data (Access and Use) Act 2025: Key Changes Seek to …
  6. Online tracking | ICO
  7. protection compliance in the direct marketing data broking sector
  8. Data brokers and national security – GOV.UK
  9. Online advertising and eating disorders – Beat
  10. Investment in British AI companies hits record levels as Tech Sec …
  11. The Data Use and Access Act 2025: what this means for employers …
  12. AI tools used by English councils downplay women’s health issues …
  13. Automated Racism Report – Amnesty International UK – 2025
  14. Automated Racism – Amnesty International UK
  15. UK use of predictive policing is racist and should be banned, says …
  16. Government announced unprecedented facial recognition expansion
  17. Government expands police use of live facial recognition vans – BBC
  18. Sainsbury’s tests facial recognition technology in effort to tackle …
  19. ICO Publishes Report on Compliance in Direct Marketing Data …
  20. Data brokers and national security – GOV.UK
  21. International AI Safety Report 2025 – GOV.UK
  22. Revealed: bias found in AI system used to detect UK benefits fraud
  23. UK: Police forces ‘supercharging racism’ with crime predicting tech
  24. AI tools risk downplaying women’s health needs in social care – LSE
  25. AI and the Far-Right Riots in the UK – LSE
  26. Unprecedented Expansion of Facial Recognition Is “Worrying for …
  27. The ethics behind facial recognition vans and policing – The Week
  28. Sainsbury’s to trial facial recognition to catch shoplifters – BBC
  29. No Palantir in the NHS and Corporate Watch Reveal the Real Story
  30. UK Data Reform 2025: What the DUAA Means for Compliance
  31. Advancing Digital Rights in 2025: Trends – Oxford Martin School
  32. Declaration on Digital Rights and Principles – Support study 2025
  33. Advancing Digital Rights in 2025: Trends, Challenges and … – Demos

The Trojan Horse in Your Pocket

The AI on your phone isn’t just a helper. It’s a tool for corporate and state control that puts our democracy at risk.

I was surprised when my Android phone suddenly updated itself, and Gemini AI appeared on the front screen, inviting me to join the AI revolution happening worldwide.

Google, Apple, and Meta are locked in a high-stakes race to put a powerful AI assistant in your pocket. The promise is a life of seamless convenience. The price, however, may be the keys to your entire digital life, and the fallout threatens to stretch far beyond your personal data.

This isn’t merely my middle-aged luddite paranoia; widespread public anxiety has cast a sharp light on the trade-offs we are being asked to accept. This investigation will demonstrate how the fundamental design of modern AI, with its reliance on vast datasets and susceptibility to manipulation, creates a perfect storm. It not only exposes individuals to new forms of hacking and surveillance but also provides the tools for unprecedented corporate and government control, undermining the foundations of democratic society while empowering authoritarian regimes.

A Hacker’s New Playground

Let’s be clear about the immediate technical risk. Many sophisticated AI tasks are too complex for a phone to handle alone and require data to be sent to corporate cloud servers. This process can bypass the end-to-end encryption we have come to rely on, exposing our supposedly private data.

Worse still is the documented vulnerability known as “prompt injection.” This is a new and alarmingly simple form of hacking where malicious commands are hidden in webpages or even video subtitles. These prompts can trick an AI assistant into carrying out harmful actions, such as sending your passwords to a scammer. This technique effectively democratises hacking, and there is no foolproof solution.

The Foundations of Democracy Under Threat

This combination of data exposure and vulnerability creates a perfect storm for democratic systems. A healthy democracy relies on an informed public and trust in its institutions, both of which are directly threatened.

When AI can generate floods of convincing but entirely fake news or deepfake videos, it pollutes the information ecosystem. A 2023 article in the Journal of Democracy warned that this erosion of social trust weakens democratic accountability. The threat is real, with a 2024 Carnegie Endowment report detailing how AI enables malicious actors to disrupt elections with sophisticated, tailored propaganda.

At the same time, the dominance of a few tech giants creates a new form of unaccountable power. As these corporations become the gatekeepers of AI-driven information, they risk becoming a “hyper-technocracy,” shaping public opinion without any democratic oversight.

A Toolkit for the Modern Authoritarian

If AI presents a challenge to democracies, it is a powerful asset for authoritarian regimes. The tools that cause concern in open societies are ideal for surveillance and control. A 2023 Freedom House report noted that AI dramatically amplifies digital repression, making censorship faster and cheaper.

Regimes in China and Russia are already leveraging AI to produce sophisticated propaganda and control their populations. From automated censorship that suppresses dissent to the creation of fake online personas that push state-sponsored narratives, AI provides the ultimate toolkit for modern authoritarianism.

How to Take Back Control

A slide into this future is not inevitable. Practical solutions are available for those willing to make a conscious choice to protect their digital autonomy.

For private communication, established apps like Signal offer robust encryption and have resisted AI integration. For email services, Tuta Mail provides an AI-free alternative. For those wanting to use AI on their own terms, open-source tools like Jan.ai allow you to run models locally on your own computer.

Perhaps the most powerful step is to reconsider your operating system. On a PC, Linux Mint is a privacy-respecting alternative. For smartphones, GrapheneOS, a hardened version of Android, provides a significant shield against corporate data gathering.

The code has been written, and the devices are in our hands. The next battle will be fought not in the cloud, but in parliaments and regulatory bodies, where the rules for this new era have yet to be decided. The time for us, and our government, to act is now.