We all need a ‘Digital Bill of Rights’

Ever had that strange feeling? You mention needing a new garden fork in a message, and for the next week, every corner of the internet is suddenly waving one in your face. It’s a small thing, a bit of a joke, but it’s a sign of something much bigger, a sign that the digital world—a place of incredible creativity and connection—doesn’t quite feel like your own anymore.

The truth is, and let’s be honest about it, we’ve struck a strange bargain. We’re not really the customers of these huge tech companies; in a funny sort of way, we’re the product. We leave a trail of digital breadcrumbs with every click and share, not realising they’re being gathered for someone else’s feast. Our digital lives are being used to train algorithms that are learning to anticipate our every move. It’s all a bit like we’re living in a house with glass walls, and we’ve forgotten who’s looking in or why. We’ve drifted into a new kind of system, a techno-feudalism, where a handful of companies own the infrastructure, write the rules we blithely agree to, and profit from the very essence of us.

This isn’t some far-off problem; it’s happening right here on our doorstep. Take Palantir, a US spy-tech firm now managing a massive platform of our NHS patient data. They’re also working with UK police forces, using their tech to build surveillance networks that can track everything from our movements to our political views. Even local councils are getting in on the act, with Coventry reviewing a half-a-million-pound deal with the firm after people, quite rightly, got worried. This is our data, our health records, our lives.

When you see how engineered the whole system is, you can’t help but ask: why aren’t we doing more to protect ourselves? Why do we have more rights down at the DVLA than we do online? Here in the UK, we have laws like the GDPR and the new Data (Use and Access) Act 2025, which sound good on paper. But in practice, they’re riddled with loopholes, and recent changes have actually made it easier for our data to be used without clear consent. Meanwhile, data brokers are trading our information with little oversight, creating risks that the government itself has acknowledged are a threat to our privacy and security.

It feels less like a mistake and more like the intended design.

This isn’t just about annoying ads. Algorithms are making life-changing decisions. In some English councils, AI tools have been found to downplay women’s health issues, baking gender bias right into social care. Imagine your own mother or sister’s health concerns being dismissed not by a doctor, but by a dispassionate algorithm that was never taught to listen properly. Amnesty International revealed last year how nearly three-quarters of our police forces are using “predictive” tech that is “supercharging racism” by targeting people based on biased postcode data. At the same time, police are rolling out more live facial recognition vans, treating everyone on the street like a potential suspect—a practice we know discriminates against people of colour. Even Sainsbury’s is testing it to stop shoplifters. This isn’t the kind, fair, and empathetic society we want to be building.

So, when things feel this big and overwhelming, it’s easy to feel a bit lost. But this is where we need to find that bit of steely grit. This is where we say, “Right, what’s next?”

If awareness isn’t enough, what’s the one thing that could genuinely change the game? It’s a Digital Bill of Rights. Think of it not as some dry legal document, but as a firewall for our humanity. A clear, binding set of principles that puts people before profit.

So, if we were to sit down together and draft this charter, what would be our non-negotiables? What would we demand? It might look something like this:

  • The right to digital privacy. The right to exist online without being constantly tracked and profiled, unless we give clear, ongoing, and revocable consent. Full stop.
  • The right to human judgment. If a machine makes a significant decision about you – such as a job application or a loan – you should always have the right to have a human review it. AI does not get the final say.
  • A ban on predictive policing. No more criminalising people based on their postcode or the colour of their skin. That’s not justice; it’s algorithmic segregation.
  • The right to anonymity and encryption. The freedom to be online without being unmasked. Encryption isn’t shady; in this world, it’s about survival.
  • The right to control and delete our data. To be able to see what’s held on us and get rid of it completely. No hidden menus, no 30-day waiting periods. Just gone.
  • Transparency for AI. If an algorithm is being used on you, its logic and the data it was trained on should be open to scrutiny. No more black boxes affecting our lives.

And we need to go further, making sure these rights protect everyone, especially those most often targeted. That means mandatory, public audits for bias in every major AI system. A ban on biometric surveillance in our public spaces. And the right for our communities to have a say in how their culture and data are used.

Once this becomes law, everything changes. Consent becomes real. Transparency becomes the norm. Power shifts.

Honestly, you can’t private-browse your way out of this. You can’t just tweak your settings and hope for the best. The only way forward is together. A Digital Bill of Rights isn’t just a policy document; it’s a collective statement. It’s a creative, hopeful project we can all be a part of. It’s us saying, with one voice: you don’t own us, and you don’t get to decide what our future looks like.

This is so much bigger than privacy. It’s about our sovereignty as human beings. The tech platforms have kept us isolated on purpose, distracted and fragmented. But when we stand together and demand consent, transparency, and the simple power to say no, that’s the moment everything shifts. That’s how real change begins – not with permission, but with a shared sense of purpose and a bit of good-humoured, resilient pressure. They built this techno-nightmare thinking no one would ever organise against it. Let’s show them they were wrong.

The time is now. With every new development, the window for action gets a little smaller. Let’s demand a Citizen’s Bill of Digital Rights and Protections from our MPs and support groups like Amnesty, Liberty, and the Open Rights Group. Let’s build a digital world that reflects the best of us: one that is creative, kind, and truly free.

Say no to digital IDs here: https://petition.parliament.uk/petitions/730194

Sources

  1. Patient privacy fears as US spy tech firm Palantir wins £330m NHS …
  2. UK police forces dodge questions on Palantir – Good Law Project
  3. Coventry City Council contract with AI firm Palantir under review – BBC
  4. Data (Use and Access) Act 2025: data protection and privacy changes
  5. UK Data (Access and Use) Act 2025: Key Changes Seek to …
  6. Online tracking | ICO
  7. protection compliance in the direct marketing data broking sector
  8. Data brokers and national security – GOV.UK
  9. Online advertising and eating disorders – Beat
  10. Investment in British AI companies hits record levels as Tech Sec …
  11. The Data Use and Access Act 2025: what this means for employers …
  12. AI tools used by English councils downplay women’s health issues …
  13. Automated Racism Report – Amnesty International UK – 2025
  14. Automated Racism – Amnesty International UK
  15. UK use of predictive policing is racist and should be banned, says …
  16. Government announced unprecedented facial recognition expansion
  17. Government expands police use of live facial recognition vans – BBC
  18. Sainsbury’s tests facial recognition technology in effort to tackle …
  19. ICO Publishes Report on Compliance in Direct Marketing Data …
  20. International AI Safety Report 2025 – GOV.UK
  21. Revealed: bias found in AI system used to detect UK benefits fraud
  22. UK: Police forces ‘supercharging racism’ with crime predicting tech
  23. AI tools risk downplaying women’s health needs in social care – LSE
  24. AI and the Far-Right Riots in the UK – LSE
  25. Unprecedented Expansion of Facial Recognition Is “Worrying for …
  26. The ethics behind facial recognition vans and policing – The Week
  27. Sainsbury’s to trial facial recognition to catch shoplifters – BBC
  28. No Palantir in the NHS and Corporate Watch Reveal the Real Story
  29. UK Data Reform 2025: What the DUAA Means for Compliance
  30. Advancing Digital Rights in 2025: Trends – Oxford Martin School
  31. Declaration on Digital Rights and Principles – Support study 2025
  32. Advancing Digital Rights in 2025: Trends, Challenges and … – Demos

The End Game: From Free Markets to Technofascism

There’s a growing sense that the whole capitalist project is running on fumes. For decades, it’s been a system built on one simple rule: endless growth. But what happens when it runs out of road? It has already consumed new lands, markets, and even the quiet personal spaces of our attention. Think of it like a shark that must constantly swim forward to breathe, one that has finally hit the wall of the aquarium. The frantic, desperate thrashing we’re seeing in our politics and society? That’s the crisis.

For the last forty-odd years, the dominant philosophy steering our world has been neoliberalism. Stripped to its bare bones, it’s a simple creed: privatise anything that isn’t nailed down, deregulate in the name of ‘freedom’, and chase economic growth as if it were the only god worth worshipping. What has become chillingly clear is that the current lurch towards authoritarianism isn’t a strange detour or a bug in the system; it’s the next logical feature. Technofascism isn’t some bizarre alternative to neoliberalism; it is its terrifying, inevitable endgame. It is emerging as a ‘last-ditch effort’ to rescue a system in terminal crisis, and the price of that rescue is democracy itself.

Before you can build such a machine, you need a blueprint. The blueprint for this new form of control is a set of extreme ideas that’d be laughable if their proponents weren’t sitting on mountains of cash and power. At the heart of a gloomy-sounding gentlemen’s club of philosophies, which includes Neoreaction (or NRx), the Dark Enlightenment, and Accelerationism, is a deep, abiding, and utterly sincere contempt for the very idea of liberal democracy. They see it as a messy, sentimental, and ‘incredibly inefficient’ relic, a ‘failed experiment’ that just gets in the way of what they consider real progress.

This isn’t just a passing grumble about politicians. It’s a root-and-branch rejection of the last few centuries of political thought. Their utopia is a society restructured as a hyper-efficient tech start-up, helmed by a god-like ‘CEO-autocrat’. This genius-leader, naturally drawn from their own ranks, would be free to enact his grand vision without being bothered by tedious things like elections or civil liberties. It’s an idea born of staggering arrogance, a belief that a handful of men from Silicon Valley are so uniquely brilliant that they alone should be calling the shots.

This thinking didn’t spring from nowhere. Its strange prophets include figures like Curtis Yarvin, a blogger who spins academic-sounding blather that tells billionaires their immense power is not just deserved but necessary. It’s a philosophy that offers a convenient, pseudo-intellectual justification for greed and bigotry, framing them as signs that one is ‘red-pilled’, an enlightened soul who can see through the progressive charade. This worldview leads directly to a crucial pillar of technofascism: the active rejection of history and expertise. This mindset is captured in the terrifying nonchalance of a Google executive who declared, ‘I don’t even know why we study history… what already happened doesn’t really matter.’ This isn’t just ignorance; it’s a strategic necessity. To build their imagined future, they must demolish the guardrails of historical lessons that warn us about fascism and teach us the value of human rights. They declare war on the ‘ivory tower’ and the ‘credentialed expert’ because a population that respects knowledge will see their project for the dangerous fantasy it is.

But an ideology, no matter how extreme, remains hot air until it is forged into something tangible. The next chapter of this story is about how that strange, anti-democratic philosophy was hammered into actual, working tools of control. A prime case study is the company Palantir. It is the perfect, chilling expression of its founder Peter Thiel’s desire to ‘unilaterally change the world without having to constantly convince people.’ This company did not accidentally fall into government work; it was built from its inception to serve the state. Its primary revenue streams are not ordinary consumers, but the most powerful and secretive parts of government: the CIA, the FBI, and the Department of Homeland Security. It embodies the new ‘public-private partnership’, where the lines between a corporation and the state’s security apparatus are erased entirely.

The product of this unholy union is a global software of oppression. At home, Palantir was awarded a contract to create a tool for ICE to ‘surveil, track, profile and ultimately deport undocumented migrants,’ turning high-minded talk of ‘inefficiency’ into the ugly reality of families being torn apart. This same machinery of control is then exported abroad, where the company becomes a key player in the new defence industrial base. Its systems are deployed by militaries around the globe, and nowhere is this more terrifyingly apparent than in conflicts like the one in Gaza. There, occupied territories have become a digital laboratory where AI-powered targeting systems, enabled by companies within this ecosystem, are battle-tested with brutal efficiency. The line between a software company and an arms dealer is not just blurred; it is erased. This is the ultimate expression of the public-private partnership: the privatisation of war itself, waged through algorithms and data streams, where conflict zones become the ultimate testing ground.

This architecture of control, however, is not just aimed outward at state-defined enemies; it is turned inward, against the foundational power of an organised populace: the rights of workers. Technofascism, like its historical predecessors, understands that to dominate a society, you must first break its collective spirit. There’s a chilling historical echo here; the very first groups targeted by the Nazis were communists, socialists, and trade unionists. They were targeted first because organised labour is a centre of collective power that stands in opposition to total authority. Today, this assault is cloaked in the language of ‘disruption’. The gig economy, championed by Silicon Valley, has systematically shattered stable employment in entire industries, replacing it with a precarious workforce of atomised individuals who are cheaper, more disposable, and crucially, harder to organise. This attack on present-day labour is just a prelude to their ultimate goal: the stated desire to ‘liberate capital from labor for good.’ The ‘mad rush’ to develop AI is, at its core, a rush to create a future where the vast majority of humanity is rendered economically irrelevant and therefore politically powerless.

The human cost of this vision is already being paid. A new global caste system is emerging, starkly illustrated by OpenAI. While AI researchers in California enjoy ‘million-dollar compensation packages,’ Kenyan data workers are paid a ‘few bucks an hour’ to be ‘deeply psychologically traumatised’ by the hateful content they must filter. This is not an oversight; it is a calculated feature of what can only be called the ‘logic of Empire’, a modern colonialism where the human cost is outsourced and rendered invisible. This calculated contempt for human dignity is mirrored in their treatment of the planet itself. The environmental price tag for the AI boom is staggering: data centres with the energy footprint of entire states, propped up by coal plants and methane turbines. A single Google facility in water-scarce Chile planned to use a thousand times more fresh water than the local community. This isn’t an unfortunate side effect; it’s the logical outcome of an ideology that sees the natural world as an obstacle to be conquered or a flawed planet to be escaped. The fantasy of colonising Mars is the ultimate expression of this: a lifeboat for billionaires, built on the premise that they have the right to destroy our only home in the name of their own ‘progress’.

Having built this formidable corporate engine, the final, crucial act is to seize the levers of political power itself. While it is tempting to see this as the work of one particular political tribe, embodied by a figure like Donald Trump acting as a ‘figurehead’ who normalises the unthinkable, the reality is now far more insidious. The ideology has become so pervasive that it has captured the entire political establishment.

Consider this: after years of opposing Tory-led Freeports, Keir Starmer’s Labour government announces the creation of ‘AI Growth Zones’—digital versions of the same deregulated havens, designed explicitly for Big Tech. The project has become bipartisan. The state’s role is no longer to regulate these powerful entities, but to actively carve out legal exceptions for them. This move is mirrored on the global stage, where both the UK and US refuse to sign an EU-led AI safety treaty. The reasoning offered is a masterclass in technofascist rhetoric. US Vice President JD Vance, a direct protégé of Peter Thiel, warns that regulation could “kill a transformative industry,” echoing the Silicon Valley line that democracy is a drag on innovation. Meanwhile, the UK spokesperson deflects, citing concerns over “national security,” the classic justification for bypassing democratic oversight to protect the interests of the state and its corporate security partners.

This quiet, administrative capture of the state is, in many ways, more dangerous than a loud revolution. It doesn’t require a strongman; it can be implemented by polished, ‘sensible’ leaders who present it as pragmatic and inevitable. The strategy for taking power is no longer just about a chaotic ‘flood the zone with shit’ campaign; it’s also about policy papers, bipartisan agreements, and the slow, methodical erosion of regulatory power.

This is where the abstract horror becomes horrifyingly, tangibly real. The tools built by Palantir are actively used to facilitate the ‘cruel deportations’ of real people, a process that is only set to accelerate now that governments are creating bespoke legal zones for such technology. The AI systems built on the backs of traumatised workers are poised to eliminate the jobs of artists and writers. The political chaos deliberately sown online spills out into real-world violence and division. This is the strategy in action, where the combination of extremist ideology, corporate power, and a captured political class results in devastating human consequences.

When you line it all up, the narrative is stark and clear. First, you have the strange, elitist philosophy, born of ego and a deep-seated contempt for ordinary people. This ideology then builds the corporate weapons to enforce its vision. And finally, these weapons are handed to a political class, across the spectrum, to dismantle democracy from the inside. This entire project is fuelled by a desperate attempt to keep the wheels on a capitalist system that has run out of options and is now cannibalising its own host society to survive.

And here’s the kicker, the final, bitter irony that we must sit with. An ideology that built its brand by screaming from the rooftops about ‘freedom’, individualism, and the power of the ‘free market’ has, in the end, produced the most sophisticated and all-encompassing tools of control and oppression humanity has ever seen.

It’s a grim picture, no two ways about it. But this is precisely where our own values of resilience, empathy, and grounded and courageous optimism must come into play. The first, most crucial act of resistance is simply to see this process clearly, to understand it for what it is: to engage in what the ancient Greeks called an apocalypse, not an end-of-the-world event but a lifting of the veil, a revelation.

Seeing the game is the first step to refusing to play it, especially now that all the major political teams are on the same side. It’s the moment we can say, ‘No, thank you.’ It’s the moment we choose to slow down, to log off from their manufactured chaos, and to reconnect with the real, tangible world around us. It’s the choice to value the very things their ideology seeks to crush: kindness, community, creativity, and the simple, profound magic of human connection. Facing this reality takes courage, but it doesn’t have to lead to despair. It can be the catalyst that reminds us what is truly worth fighting for. And that, in itself, in a world of bipartisan consensus, is the most powerful and hopeful place to start.