Tag Archives: artificial-intelligence

Art After the Flood: Authenticity in an Age of Hyper-production.

We are living through a second flood. The first, chronicled by Walter Benjamin, was a rising tide of mechanical reproduction that stripped the artwork of its unique presence in time and space, its ritual weight. What we face now is a deluge of a different order, not the copying of an original, but the generation of the ostensibly original itself. This synthetic reproducibility, instant and infinite, does not so much wash away the aura of the artwork as dissolve the very ground from which aura once grew. For the artist, this marks a profound reordering, a passage through a great filter that demands a reckoning with why creation matters in a world saturated with the facsimile of creation.

The crisis is, at its heart, an economic one, born from the final victory of exhibition value over all else. Benjamin saw how reproduction prised art from the domain of ritual, making it a political, exhibitable object. AI hyper-production perfects this shift, creating a universe of content whose sole purpose is to be displayed, circulated, and consumed, utterly detached from any ritual of human making. When ten thousand competent images can be summoned to fill a website’s empty corners, the market value of such functional work collapses. The commercial artist is stranded, their skill rendered not scarce but superfluous in a marketplace where the exhibitable object has been liberated from the cost of its production.

This leads to the deafening, companion problem: the drowning-out effect. If everything can be exhibited, then nothing is seen. The channels of distribution become clogged with a spectral, ceaseless tide, a ‘slop’ of algorithmic potential. Discovery becomes a lottery. In this storm of accessibility, the scarce resource is no longer the means of production, but attention. And attention, in such a climate, refuses to be captured on the mass scale that the logic of exhibition value demands; it must be cultivated in the intimate, shadowed spaces the floodlight cannot reach.

Consequently, the artist’s identity fractures and reassembles. The role shifts from creator to curator, editor, and context-engineer. If the machine handles the ‘how,’ the human value retreats into the realms of conception, discernment, and judgement. The artistic self becomes a more ghostly thing, defined less by the manual trace and more by the authority of selection and the narrative woven around the chosen fragment. For some, this is a liberation from tradition’s heavy hand; for others, it feels like the final severance of that unique phenomenon of a distance, however close it may be, that once clung to the hand-wrought object.

This forced evolution makes brutally clear a distinction that has long been blurred: the split between ‘content’ and ‘art.’ ‘Content’ is the pure, polished exhibit. It is information, filler, ornament, the fulfilment of a demand. For this, the synthetic process is peerless. ‘Art,’ however, must now be defined by what it stubbornly retains or reclaims. It must be an act where the process and the human context are the irreducible core, where the value cannot be extracted from the texture of its making. Its purpose shifts from exhibition back towards a new kind of ritual, not of cult, but of verifiable human connection. The artist must now choose which master they serve.

The only viable path, therefore, is a strategic retreat to the domains presence in time and space still governs. Since the object alone is forever suspect, value must be painstakingly rebuilt around radical context and provenance. The aura must be consciously, authentically reconstructed. This becomes the artist’s new, urgent work.

The story of the object’s making is now its last line of defence. The narrative, the intention, the struggle, the trace of the human journey, ceases to be a mere accompaniment and becomes the primary text. Proof of origin becomes a sacred credential. As a latter-day witness to this crisis noted, the only territory where authenticity can now be assured is “in the room with the person who made the thing.” The live performance, the studio visit, the act of co-creation: these are no longer secondary events but the central, unassailable offering. Here, art reclaims its here and now, its witnessable authenticity in a shared moment that no algorithm can simulate or inhabit.

The Unmaking of the Unique

Thus, hyper-production functions as a great filter. It mercilessly commoditises the exhibit, washing away the economic model of the last century. In doing so, it forces a terrible, clarifying question upon every practitioner: what can you anchor your work to that is beyond the reach of synthetic reproduction?

The emerging responses are maps of this new terrain. Some become context engineers, building immersive narratives where the work is a relic of a true human story. Others become synthesist collaborators, directing the machine with a voice of defiantly human taste. A faction turns resolutely physical, seeking refuge in the stubborn, three-dimensional ‘thingness’ that defies flawless digital transference. Yet others become architects of experience, crafting frameworks for interaction where the art is the fleeting, collective moment itself. And many will retreat to cultivate a deep niche, a dedicated community for whom the human trace is the only currency that holds value.

The flood will not cease. The exhibition value of the world will be met, and exceeded, by synthetic means. But this crisis, by shattering the professionalised model, may ironically clear the way for a return to art’s first principles: not as a commodity for distribution, but as a medium for human connection, a testament of presence, and a ritual of shared meaning. The future of art lies not in battling the currents of reproduction, but in learning to build arks, vessels of witnessed, authentic experience that can navigate the vast and glittering, but ultimately hollow, sea of the endlessly exhibitable.

The Playbook: What the Left Can Learn from the Right’s Online War Part 1

The alt-right’s online dominance stems from savvy, adaptive tactics that exploit platform algorithms, human psychology, and cultural voids, turning fringe ideas into mainstream forces through emotional resonance and community building. While the left should never mimic their toxic elements (e.g., hate, disinformation), there’s value in borrowing structural and strategic tools to counter far-right gains and rebuild progressive momentum.

Drawing from 2025 analyses, the key is ethical adaptation: Focus on hope, facts, and inclusivity to create “alt-left pipelines” that radicalise toward justice, highlight economic inequality not racial division.

Below are transferable lessons with deployment ideas tailored for a progressive agenda.

1. Build a Multi-Voice “Roster” for Narrative Dominance (The WWF Model)

  • Lesson from Alt-Right: They succeed via a diverse “ecosystem” of creators—intellectuals, meme-makers, podcasters—who cross-promote, feud playfully, and create social immersion, making ideas feel organic and inescapable (e.g., from Jordan Peterson to Nick Fuentes). This multiplicity normalises extremism, as one voice becomes a chorus.
  • Action Point: Create a “Red-Green roster” of 20-50 voices (e.g., eco-activists, union organisers, TikTok storytellers) focused on inequality/climate. Use X Spaces for collaborative “story arcs” (e.g., debates on wealth taxes) and Patreon-funded collabs to foster community. Aim for viral, relatable formats like short explainers on “why your rent doubled.” In 2025, leverage decentralised platforms to evade moderation while building loyalty.

2. Craft Gradual “Pipelines” for Positive Radicalisation

  • Lesson from Alt-Right: Their pipeline hooks users with benign frustrations (e.g., “woke overreach”) then escalates via algorithms to echo chambers, blending humour and validation to build commitment. This self-radicalises without overt pushes.
  • Action Point: Design an “alt-left pipeline” starting with empowering content (e.g., TikToks on “union wins” or “free college stories”) that funnels to deeper dives (e.g., podcasts on systemic racism). Use AI tools ethically for personalised recommendations, targeting disillusioned centrists with “hope hooks” like community success tales. Avoid outrage; emphasise “business offers” (e.g., “Join for better wages”). A 2025 survey shows this could sway working-class voters by addressing alienation head-on.

3. Weaponise Memes, Humour, and Emotional Storytelling

  • Lesson from Alt-Right: Irony, memes, and outrage farming (e.g., baiting replies for algorithmic boosts) create addictive engagement, polarising while evading bans. They tap anger over issues like immigration but dilute for broad appeal.
  • Action Point: Flood platforms with joyful, subversive memes (e.g., “Billionaires vs. Your Rent” cartoons) and emotional narratives (e.g., worker strike videos with uplifting arcs). Use X for “provocative but substantive” threads that provoke right-wing overreactions, then amplify the absurdity to highlight hypocrisy. Focus on “politics of substance” like cultural symbols of solidarity (e.g., union anthems remixed). In 2025, prioritise TikTok/Reels for Gen Z, where emotionally charged content drives 2x engagement.

4. Invest in Local Organising and Power-Building Networks

  • Lesson from Alt-Right: Online tactics feed offline infrastructure (e.g., rallies channelling frustration into loyalty), absorbing dissent via co-optation and purges. They build from the ground up, turning digital anger into real power.
  • Action Point: Mirror this by linking online campaigns to local “power rosters” (e.g., neighborhood groups for mutual aid). Use X/Discord for one-on-one recruitment: “What matters to you? Let’s organize.” Channel energy into sustained wins like tenant unions, not just viral moments. 2025 reports stress matching right-wing billionaire media with grassroots funding for community hubs. Avoid Alinsky-style baiting; instead, “grey rock” trolls with factual redirects.

5. Pursue Long-Term Institutional Capture and Patience

  • Lesson from Alt-Right: They play the “long game” (e.g., infiltrating education/media over decades), using feigned ignorance to waste opponents’ time and normalise via backlash. Short-term wins (e.g., elections) are secondary to cultural entrenchment.
  • Action Point: Shift from reactive “debates” to proactive institution-building (e.g., progressive media co-ops, school boards). Use “inb4” preemptive framing (e.g., “Before you ask about taxes, here’s how billionaires dodge them”) to control narratives. In 2025, amid platform toxicity, decentralise to Bluesky/Mastodon for safe scaling. Measure success by sustained engagement, not viral spikes.

Ethical Guardrails and Risks

Adaptations must prioritise anti-hate safeguards e.g., community guidelines against doxxing and fact-checking to avoid disinformation pitfalls. Risks include internal purges or echo-chamber toxicity, as seen in past left online spaces.

The goal: Turn alt-right “tactics of scarcity” into left abundance—building power through solidarity, not division. As one 2025 analysis notes, the left’s edge is substance; deploy these tools to make it viral.

Smoke and Mirrors: Forget the small boats. The Real Mass Migration is Digital.

The Fourth World is Coming. It’s Just Not What You Think.

What if the biggest migration in human history isn’t human at all? There’s a theory doing the rounds that frames the AI revolution as just that: an “unlimited, high-IQ mass migration from the fourth world.” It argues we’re witnessing the arrival of a perfect labour force—smarter than average, infinitely scalable, and working for pennies, with none of the messy human needs for housing or cultural integration. It’s a powerful idea that cuts through the jargon, but this perfect story has a fatal flaw.

The biggest lie the theory tells is one of simple replacement. It wants you to believe AI is an immigrant coming only to take your job, but this ignores the more powerful reality of AI as a collaborator. Think of a doctor using an AI to diagnose scans with a level of accuracy no human could achieve alone; the AI isn’t replacing the doctor, it’s making them better. The data shows that while millions of jobs will vanish, even more will be created, meaning the future isn’t about simple replacement, but something far more complex.

If the first mistake is economic, the second is pure Hollywood fantasy. To keep you distracted, they sell you a story about a robot apocalypse, warning that AI will “enslave and kill us all” by 2045. Frankly, this sort of talk doesn’t help. Instead of panicking, we should be focused on the very real and serious work of AI alignment right now, preventing advanced systems from developing dangerous behaviours. The focus on a fantasy villain is distracting us from the real monster already in the machine.

That monster has a name: bias. The theory celebrates AI’s “cultural neutrality,” but this is perhaps its most dangerous lie. An AI is not neutral; it is trained on the vast, messy, and deeply prejudiced dataset of human history, and without careful oversight, it will simply amplify those flaws. We already see this in AI-driven hiring and lending algorithms that perpetuate discrimination. A world run by biased AI doesn’t just automate jobs; it automates injustice.

This automated injustice isn’t a bug; it’s a feature of the system’s core philosophy. The Silicon Valley credo of ‘move fast and break things’ has always been sold as a mark of disruptive genius, but we must be clear about what they actually intend to ‘break’: labour laws, social cohesion, and ethical standards are all just friction to be optimised away. This isn’t theoretical; these same tech giants are now demanding further deregulation here in the UK, arguing that our rules are what’s slowing down their ‘progress’. They see our laws not as protections for the public, but as bugs to be patched out of the system, and they have found a government that seems dangerously willing to listen.

But while our own government seems willing to listen to this reckless philosophy, the rest of the world is building a defence. This isn’t a problem without a solution; it’s a problem with a solution they hope you’ll ignore. UNESCO’s Recommendation on the Ethics of Artificial Intelligence is the world’s first global standard on the subject—a human-centric rulebook built on core values like fairness, inclusivity, transparency, and the non-negotiable principle that a human must always be in control. It proves that a different path is possible, which means the tech giants have made one last, classic mistake.

They have assumed AI is migrating into a world without rules. It’s not. It’s migrating into a world of laws, unions, and public opinion, where international bodies and national governments are already waking up. This isn’t an unstoppable force of nature that we are powerless to resist; it is a technology that can, and must, be shaped by democratic governance. This means we still have a say in how this story ends.

So, where does this leave us? The “fourth world migration” is a brilliant, provocative warning, but it’s a poor map for the road ahead. Our job isn’t to build walls to halt this migration, but to set the terms of its arrival. We have to steer it with ethical frameworks, ground it with sensible regulation, and harness it for human collaboration, not just corporate profit. The question is no longer if it’s coming, but who will write the terms of its arrival.

Palantir & Brit Card: The Final Piece of the Surveillance State.

To understand what’s coming with the mandatory “Brit Card,” you first have to understand who is already here. The scheme isn’t appearing out of thin air; it’s the logical capstone on an infrastructure that has been quietly and deliberately assembled over years by a single, dominant player: Palantir. Their involvement isn’t just possible—it’s the probable, planned outcome of a strategy that serves both their corporate interests and the UK government’s long-held ambitions.

Let’s be clear about the facts. Palantir isn’t some new bidder for a government contract; they are already embedded, their surveillance tentacles wrapped around the core functions of the British state. They have over two dozen contracts, including with the NHS to analyse patient data, the Ministry of Defence for military intelligence, and police forces for “predictive policing.” They are in the Cabinet Office, they are in local government. They are, in essence, the state’s private intelligence agency.

This is a company forged in the crucible of the CIA and the NSA, whose entire business model is to turn citizen data into surveillance gold. Their track record is one of mass surveillance, racial profiling algorithms, and profiting from border control and deportations. To believe that this company would be hired to build a simple, privacy-respecting ID system is to willfully ignore everything they are and everything they do. The “Brit Card” is not a separate project for them. It is the keystone—the final piece that will allow them to link all their disparate data streams into one terrifyingly complete surveillance engine, with every UK adult forced onto its database.

But to grasp the scale of the threat, you have to ask why this is happening here, in the UK, and not anywhere else in Europe. This isn’t a happy accident; it’s a deliberate strategy. Palantir has chosen the UK for its European Defence HQ for a very simple reason: post-Brexit Britain is actively marketing itself as a deregulated safe harbour.

The UK government is offering what the EU, with its precautionary principles and landmark AI Act, cannot: regulatory flexibility. For a company like Palantir, whose business thrives in the grey areas of ethics and law, the EU is a minefield of compliance. The UK, by contrast, is signalling that it’s willing to write the rules in collaboration with them. The government’s refusal to sign the Paris AI declaration over “national security” concerns was not a minor diplomatic snub; it was the smoking gun. It was a clear signal to Silicon Valley that Britain is open for a different kind of business, one where restrictive governance will not get in the way of profit or state power.

This brings us to the core of the arrangement: a deeply symbiotic relationship. The UK government offers a favourable legal environment and waves a giant chequebook, with an industrial policy explicitly geared towards making the country a hub for AI and defence tech. The MoD contracts and R&D funding are a direct financial lure for predatory American corporations like Palantir, Blackrock, and Blackstone, inviting them to make deep, strategic incursions into our critical public infrastructure.

This isn’t charity, of course. In return, Palantir offers the government the tools for mass surveillance under the plausible deniability of a private contract. By establishing its HQ here, Palantir satisfies all the sovereign risk and security concerns, making them the perfect “trusted” partner. It’s a perfect feedback loop: the government signals its deregulatory intent, the money flows into defence and AI, and a company like Palantir responds by embedding itself ever deeper into the fabric of the state.

This isn’t about controlling immigration. It’s about building the infrastructure to control citizens. We are sacrificing our regulatory sovereignty for a perceived edge in security and technology, and in doing so, we are rolling out the red carpet for the very companies that specialise in monitoring us. When the firm that helps the CIA track its targets is hired to build your national ID card, you’re not getting documentation. You’re getting monitored.

The UK Didn’t Just Sign a Tech Deal – It Handed Over the Keys.

Whilst all eyes are on Trump at Windsor the UK Government announced the “Tech Prosperity Deal,” a picture is emerging not of a partnership, but of a wholesale outsourcing of Britain’s digital future to a handful of American tech behemoths. The government’s announcement, dripping with talk of a “golden age” and “generational step change,” paints a utopian vision of jobs and innovation. But peel back the layers of PR, and the £31 billion deal begins to look less like an investment in Britain and more like a leveraged buyout of its critical infrastructure.

At the heart of this cosy relationship lies a bespoke new framework: the “AI Growth Zone.” The first of its kind, established in the North East, is the blueprint for this new model of governance. It isn’t just a tax break; it’s a red-carpet-lined, red-tape-free corridor designed explicitly for the benefit of companies like Microsoft, NVIDIA, and OpenAI. The government’s role has shifted from regulation to facilitation, promising to “clear the path” by offering streamlined planning and, crucially, priority access to the national power grid—a resource already under strain.

While ministers celebrate the headline figure of £31 billion in private capital, the true cost to the public is being quietly written off in the footnotes. This isn’t free money. The British public is footing the bill indirectly through a cascade of financial incentives baked into the UK’s Freeport and Investment Zone strategy. These “special tax sites” offer corporations up to 100% relief on business rates for five years, exemptions from Stamp Duty, and massive allowances on capital investment. For every pound of tax relief handed to Microsoft for its £22 billion supercomputer or Blackstone for its £10 billion data centre campus, that is a pound less for schools, hospitals, and public services.

Conspicuously absent from this grand bargain is any meaningful protection for the very people whose data will fuel this new digital economy. The deafening silence from Downing Street on the need for a Citizens’ Bill of Digital Rights is telling. Such a bill would enshrine fundamental protections: the right to own and control one’s personal data, the right to transparency in algorithmic decision-making, and the right to privacy from pervasive state and corporate surveillance. Instead, the British public is left to navigate this new era with a patchwork of outdated data protection laws, utterly ill-equipped for the age of sovereign AI and quantum computing. Without these enshrined rights, citizens are not participants in this revolution; they are the raw material, their health records and digital footprints the currency in a deal struck far above their heads.

What is perhaps most revealing is the blurring of lines between the state and the boardroom. The government’s own press release celebrating the deal reads like a corporate shareholder report, quoting the CEOs of NVIDIA, OpenAI, and Microsoft at length. Their voices are not presented as external partners but as integral players in a shared national project. When Sam Altman, CEO of OpenAI, declares that “Stargate UK builds on this foundation,” it raises the fundamental question: who is building what, and for whom?

This unprecedented integration of Big Tech into the fabric of national infrastructure raises profound questions about sovereignty and control. These data centres and supercomputers are not just buildings; they are the “factories of the future,” processing everything from sensitive healthcare data from the UK Biobank to research that will define our national security. By handing the keys to this infrastructure to foreign entities, the UK risks becoming a digital vassal state, reliant on the goodwill and strategic interests of corporations whose primary allegiance is to their shareholders, not to the British public.

The “Tech Prosperity Deal” has been sold as a triumph of post-Brexit, “Global Britain.” But the reality is far more sobering. It is a deal that sees the government leveraging its power to reduce tax income, strain the national grid, and cede control of critical infrastructure, all to entice a handful of the world’s most powerful companies. The handshake may have been for £31 billion, but the price of Britain’s digital independence could prove to be far higher.


Further Reading and Sources

Official Government Announcements and Policies:

Digital Rights and Privacy Advocacy:

  • Open Rights Group: A leading UK organisation campaigning for digital rights and privacy, with analysis on AI and data protection.
  • Big Brother Watch: Investigates and challenges threats to civil liberties, including state surveillance and the use of private data.

Data and Infrastructure Context:

I understand your concerns about the recent deal between the UK government and Big Tech, especially regarding data privacy and national security. It’s a complex issue with various potential implications.

On one hand, proponents of the deal might argue that collaborations with leading technology companies can bring significant benefits, such as:

  • Economic Growth: Attracting investment and creating jobs in the tech sector.
  • Technological Advancement: Gaining access to cutting-edge research, development, and infrastructure that could boost the UK’s scientific and technological capabilities.
  • Improved Public Services: Utilizing advanced data analytics and cloud computing to enhance efficiency and effectiveness in areas like healthcare (e.g., accelerating medical research through projects like the UK Biobank) and national security.
  • Global Competitiveness: Ensuring the UK remains a key player in the global digital economy.

Concerns about data sovereignty and the potential risks are valid. Here are some of the “what ifs” that often arise in discussions like this:

  • Data Security and Privacy:
    • Breaches: The risk of data breaches increases when sensitive information is stored and processed by third-party corporations, regardless of their size or security measures.
    • Jurisdiction: In cases of international corporations, questions arise about which country’s laws apply to data privacy and protection, potentially complicating legal recourse for UK citizens.
    • Usage of Data: Concerns about how this data might be used beyond the stated purpose, including for commercial gain or even by foreign governments, especially if those corporations are subject to laws in their home countries that compel data access.
  • National Security Implications:
    • Foreign Influence: The potential for foreign governments to gain indirect access to sensitive national security research or infrastructure through these corporations.
    • Critical Infrastructure: If essential services become overly reliant on foreign-owned tech infrastructure, it could pose a risk during international disputes or cyber-attacks.
  • Loss of Control and Sovereignty:
    • Policy Influence: Big Tech companies wield significant influence, and collaborations could lead to their greater sway over UK policy and regulatory frameworks.
    • Dependence: Becoming overly dependent on specific foreign providers could limit the UK’s long-term strategic autonomy in the digital sphere.
  • Ethical Considerations:
    • Bias in Algorithms: If AI and data analytics tools provided by these companies are used in public services, there’s a risk of embedded biases affecting outcomes for citizens.
    • Transparency: The level of transparency around how data is managed, accessed, and secured by these corporations can be a major point of contention.

These are all important points for public discussion and robust oversight. Governments typically try to mitigate these risks through strict contractual agreements, data protection regulations (like GDPR in the UK), independent audits, and national security assessments. The effectiveness of these safeguards is often the subject of ongoing debate and scrutiny.

We all need a ‘Digital Bill of Rights’

Ever had that strange feeling? You mention needing a new garden fork in a message, and for the next week, every corner of the internet is suddenly waving one in your face. It’s a small thing, a bit of a joke, but it’s a sign of something much bigger, a sign that the digital world—a place of incredible creativity and connection—doesn’t quite feel like your own anymore.

The truth is, and let’s be authentic about it, we’ve struck a strange bargain. We’re not really the customers of these huge tech companies; in a funny sort of way, we’re the product. We leave a trail of digital breadcrumbs with every click and share, not realising they’re being gathered for someone else’s feast. Our digital lives are being used to train algorithms that are learning to anticipate our every move. It’s all a bit like we’re living in a house with glass walls, and we’ve forgotten who’s looking in or why. We’ve drifted into a new kind of system, a techno-feudalism, where a handful of companies own the infrastructure, write the rules we blithely agree to, and profit from the very essence of us.

This isn’t some far-off problem; it’s happening right here on our doorstep. Take Palantir, a US spy-tech firm now managing a massive platform of our NHS patient data. They’re also working with UK police forces, using their tech to build surveillance networks that can track everything from our movements to our political views. Even local councils are getting in on the act, with Coventry reviewing a half-a-million-pound deal with the firm after people, quite rightly, got worried. This is our data, our health records, our lives.

When you see how engineered the whole system is, you can’t help but ask: why aren’t we doing more to protect ourselves? Why do we have more rights down at the DVLA than we do online? Here in the UK, we have laws like the GDPR and the new Data (Use and Access) Act 2025, which sound good on paper. But in practice, they’re riddled with loopholes, and recent changes have actually made it easier for our data to be used without clear consent. Meanwhile, data brokers are trading our information with little oversight, creating risks that the government itself has acknowledged are a threat to our privacy and security.

It feels less like a mistake and more like the intended design.

This isn’t just about annoying ads. Algorithms are making life-changing decisions. In some English councils, AI tools have been found to downplay women’s health issues, baking gender bias right into social care. Imagine your own mother or sister’s health concerns being dismissed not by a doctor, but by a dispassionate algorithm that was never taught to listen properly. Amnesty International revealed last year how nearly three-quarters of our police forces are using “predictive” tech that is “supercharging racism” by targeting people based on biased postcode data. At the same time, police are rolling out more live facial recognition vans, treating everyone on the street like a potential suspect—a practice we know discriminates against people of colour. Even Sainsbury’s is testing it to stop shoplifters. This isn’t the kind, fair, and empathetic society we want to be building.

So, when things feel this big and overwhelming, it’s easy to feel a bit lost. But this is where we need to find that bit of steely grit. This is where we say, “Right, what’s next?”

If awareness isn’t enough, what’s the one thing that could genuinely change the game? It’s a Digital Bill of Rights. Think of it not as some dry legal document, but as a firewall for our humanity. A clear, binding set of principles that puts people before profit.

So, if we were to sit down together and draft this charter, what would be our non-negotiables? What would we demand? It might look something like this:

  • The right to digital privacy. The right to exist online without being constantly tracked and profiled without our clear, ongoing, and revocable consent. Period.
  • The right to human judgment. If a machine makes a significant decision about you – such as your job or loan – you should always have the right to have a human review it. AI does not get the final say.
  • A ban on predictive policing. No more criminalising people based on their postcode or the colour of their skin. That’s not justice; it’s algorithmic segregation.
  • The right to anonymity and encryption. The freedom to be online without being unmasked. Encryption isn’t shady; in this world, it’s about survival.
  • The right to control and delete our data. To be able to see what’s held on us and get rid of it completely. No hidden menus, no 30-day waiting periods. Just gone.
  • Transparency for AI. If an algorithm is being used on you, its logic and the data it was trained on should be open to scrutiny. No more black boxes affecting our lives.

And we need to go further, making sure these rights protect everyone, especially those most often targeted. That means mandatory, public audits for bias in every major AI system. A ban on biometric surveillance in our public spaces. And the right for our communities to have a say in how their culture and data are used.

Once this becomes law, everything changes. Consent becomes real. Transparency becomes the norm. Power shifts.

Honestly, you can’t private-browse your way out of this. You can’t just tweak your settings and hope for the best. The only way forward is together. A Digital Bill of Rights isn’t just a policy document; it’s a collective statement. It’s a creative, hopeful project we can all be a part of. It’s us saying, with one voice: you don’t own us, and you don’t get to decide what our future looks like.

This is so much bigger than privacy. It’s about our sovereignty as human beings. The tech platforms have kept us isolated on purpose, distracted and fragmented. But when we stand together and demand consent, transparency, and the simple power to say no, that’s the moment everything shifts. That’s how real change begins – not with permission, but with a shared sense of purpose and a bit of good-humoured, resilient pressure. They built this techno-nightmare thinking no one would ever organise against it. Let’s show them they were wrong.

The time is now. With every new development, the window for action gets a little smaller. Let’s demand a Citizen’s Bill of Digital Rights and Protections from our MPs and support groups like Amnesty, Liberty, and the Open Rights Group. Let’s build a digital world that reflects the best of us: one that is creative, kind, and truly free.

Say no to digital IDs here https://petition.parliament.uk/petitions/730194

Sources

  1. Patient privacy fears as US spy tech firm Palantir wins £330m NHS …
  2. UK police forces dodge questions on Palantir – Good Law Project
  3. Coventry City Council contract with AI firm Palantir under review – BBC
  4. Data (Use and Access) Act 2025: data protection and privacy changes
  5. UK Data (Access and Use) Act 2025: Key Changes Seek to …
  6. Online tracking | ICO
  7. protection compliance in the direct marketing data broking sector
  8. Data brokers and national security – GOV.UK
  9. Online advertising and eating disorders – Beat
  10. Investment in British AI companies hits record levels as Tech Sec …
  11. The Data Use and Access Act 2025: what this means for employers …
  12. AI tools used by English councils downplay women’s health issues …
  13. Automated Racism Report – Amnesty International UK – 2025
  14. Automated Racism – Amnesty International UK
  15. UK use of predictive policing is racist and should be banned, says …
  16. Government announced unprecedented facial recognition expansion
  17. Government expands police use of live facial recognition vans – BBC
  18. Sainsbury’s tests facial recognition technology in effort to tackle …
  19. ICO Publishes Report on Compliance in Direct Marketing Data …
  20. Data brokers and national security – GOV.UK
  21. International AI Safety Report 2025 – GOV.UK
  22. Revealed: bias found in AI system used to detect UK benefits fraud
  23. UK: Police forces ‘supercharging racism’ with crime predicting tech
  24. AI tools risk downplaying women’s health needs in social care – LSE
  25. AI and the Far-Right Riots in the UK – LSE
  26. Unprecedented Expansion of Facial Recognition Is “Worrying for …
  27. The ethics behind facial recognition vans and policing – The Week
  28. Sainsbury’s to trial facial recognition to catch shoplifters – BBC
  29. No Palantir in the NHS and Corporate Watch Reveal the Real Story
  30. UK Data Reform 2025: What the DUAA Means for Compliance
  31. Advancing Digital Rights in 2025: Trends – Oxford Martin School
  32. Declaration on Digital Rights and Principles – Support study 2025
  33. Advancing Digital Rights in 2025: Trends, Challenges and … – Demos

The Trojan Horse in Your Pocket

The AI on your phone isn’t just a helper. It’s a tool for corporate and state control that puts our democracy at risk.

I was surprised when my Android phone suddenly updated itself, and Gemini AI appeared on the front screen, inviting me to join the AI revolution happening worldwide.

Google, Apple, and Meta are locked in a high-stakes race to put a powerful AI assistant in your pocket. The promise is a life of seamless convenience. The price, however, may be the keys to your entire digital life, and the fallout threatens to stretch far beyond your personal data.

This isn’t merely my middle-aged luddite paranoia; widespread public anxiety has cast a sharp light on the trade-offs we are being asked to accept. This investigation will demonstrate how the fundamental design of modern AI, with its reliance on vast datasets and susceptibility to manipulation, creates a perfect storm. It not only exposes individuals to new forms of hacking and surveillance but also provides the tools for unprecedented corporate and government control, undermining the foundations of democratic society while empowering authoritarian regimes.

A Hacker’s New Playground

Let’s be clear about the immediate technical risk. Many sophisticated AI tasks are too complex for a phone to handle alone and require data to be sent to corporate cloud servers. This process can bypass the end-to-end encryption we have come to rely on, exposing our supposedly private data.

Worse still is the documented vulnerability known as “prompt injection.” This is a new and alarmingly simple form of hacking where malicious commands are hidden in webpages or even video subtitles. These prompts can trick an AI assistant into carrying out harmful actions, such as sending your passwords to a scammer. This technique effectively democratises hacking, and there is no foolproof solution.

The Foundations of Democracy Under Threat

This combination of data exposure and vulnerability creates a perfect storm for democratic systems. A healthy democracy relies on an informed public and trust in its institutions, both of which are directly threatened.

When AI can generate floods of convincing but entirely fake news or deepfake videos, it pollutes the information ecosystem. A 2023 article in the Journal of Democracy warned that this erosion of social trust weakens democratic accountability. The threat is real, with a 2024 Carnegie Endowment report detailing how AI enables malicious actors to disrupt elections with sophisticated, tailored propaganda.

At the same time, the dominance of a few tech giants creates a new form of unaccountable power. As these corporations become the gatekeepers of AI-driven information, they risk becoming a “hyper-technocracy,” shaping public opinion without any democratic oversight.

A Toolkit for the Modern Authoritarian

If AI presents a challenge to democracies, it is a powerful asset for authoritarian regimes. The tools that cause concern in open societies are ideal for surveillance and control. A 2023 Freedom House report noted that AI dramatically amplifies digital repression, making censorship faster and cheaper.

Regimes in China and Russia are already leveraging AI to produce sophisticated propaganda and control their populations. From automated censorship that suppresses dissent to the creation of fake online personas that push state-sponsored narratives, AI provides the ultimate toolkit for modern authoritarianism.

How to Take Back Control

A slide into this future is not inevitable. Practical solutions are available for those willing to make a conscious choice to protect their digital autonomy.

For private communication, established apps like Signal offer robust encryption and have resisted AI integration. For email services, Tuta Mail provides an AI-free alternative. For those wanting to use AI on their own terms, open-source tools like Jan.ai allow you to run models locally on your own computer.

Perhaps the most powerful step is to reconsider your operating system. On a PC, Linux Mint is a privacy-respecting alternative. For smartphones, GrapheneOS, a hardened version of Android, provides a significant shield against corporate data gathering.

The code has been written, and the devices are in our hands. The next battle will be fought not in the cloud, but in parliaments and regulatory bodies, where the rules for this new era have yet to be decided. The time for us, and our government, to act is now.

Humans vs. Machines: The Battle for Work In An AI-Dominated World

As of May 2025, the rapid advancement of artificial intelligence (AI) is significantly reshaping the global workforce. Research indicates that 14% of workers have experienced job displacement due to AI, particularly in technology and customer service (AI Replacing Jobs statistics and trends 2025). Projections suggest AI could impact up to 40% of global jobs by 2030 (World Economic Forum), presenting profound challenges and considerable opportunities. Companies like Shopify and Klarna are increasingly leveraging AI to streamline operations and reduce staff – Shopify by mandating AI use before human hires, and Klarna by replacing 700 customer service agents – raising widespread concerns about future employment (Shopify CEO Tobi Lütke memo on AI hiring policy; Klarna AI replaces 700 customer service agents news). A central debate revolves around balancing AI’s productivity gains, such as a reported 66% increase in employee productivity (NN Group), against potential societal inequality and the urgent need for worker adaptation. This analysis explores the current landscape, future projections, worker anxieties, and the impact of recent announcements from Microsoft and Google, drawing from industry reports, emerging trends, and discussions on X, to offer a guide for navigating this transformative shift.


Current Impact and Specific Examples

AI is already having a huge impact. By May 2025, estimates suggest that 14% of workers have experienced job displacement due to AI. In the US, AI was directly attributed to 3,900 job losses in May 2023 alone, constituting 5% of total job losses that month and ranking as the seventh-largest contributor to displacement (AI Replacing Jobs statistics and trends 2025). The technology sector has been particularly affected, witnessing 136,831 job losses in 2025, the highest figure since 2001, reflecting broader automation trends (AI Replacing Jobs statistics and trends 2025).

Specific cases highlight this development:

  • Shopify: In April 2025, CEO Tobi Lütke issued a memo stipulating that teams must justify human hires by first demonstrating why AI cannot perform the job. AI proficiency is now a “fundamental expectation,” with daily usage required and performance reviews incorporating AI utilisation (Shopify CEO Tobi Lütke memo on AI hiring policy). This policy followed previous workforce reductions of 20% in 2023 and further layoffs in 2024, leaving the company with 8,100 employees (Shopify layoffs 2023 2024 workforce reduction details).
  • Klarna: The CEO of Klarna reported that AI has replaced 700 customer service agents. The company plans to reduce its workforce from 4,000 to 2,000, citing a 74% productivity increase and a rise in revenue per employee from $575,000 to nearly $1 million within a year (Klarna AI replaces 700 customer service agents news). These layoffs targeted entire roles, not just underperformers, indicating a fundamental reimagining of workflows that minimises human involvement.
  • Microsoft: In 2025, Microsoft laid off 6,000 employees (nearly 3% of its global workforce), including senior roles such as Director of AI for Start-ups. This occurred despite AI reportedly contributing 30% of code generation in some projects, reflecting an industry-wide move towards automation (Microsoft lays off 6000 employees, including AI leadership roles).

These examples illustrate how major corporations prioritise AI-driven efficiency, often leading to job reductions, particularly in technology and customer service roles. The bottom line is profit-driven greed, growth at all costs.


Looking Ahead

Research points to significant future displacement. The World Economic Forum’s (WEF) 2025 Future of Jobs Report estimates that 92 million roles will be displaced globally by 2030 due to technological development, the green transition, and other factors. Crucially, however, the same report projects the creation of 170 million new jobs, resulting in a net increase of 78 million. This growth is anticipated to be driven by skills in AI, big data, and technological literacy (Future of Jobs Report 2025). The survey underpinning these projections involved over 1,000 major employers worldwide, representing 22 industry clusters and over 14 million workers, lending robustness to its findings.

Other estimates include:

  • Goldman Sachs predicts that generative AI could expose 300 million full-time jobs to automation, affecting 25% of the global labour market by 2030. (AI and Jobs: How Many Roles Will AI Replace by 2030?).
  • The International Monetary Fund (IMF) states that almost 40% of global employment is exposed to AI, with the potential for significant disruption (AI and Jobs: How Many Roles Will AI Replace by 2030?).
  • According to another WEF report (15 Jobs Will AI Replace by 2030?), 40% of programming tasks could be automated by 2040.

Employer expectations underscore this trend: 40% anticipate workforce reductions between 2025 and 2030 where AI can automate tasks, and 41% plan downsizing due to AI, as per the WEF’s 2025 report (AI could disrupt 40% of global jobs).


Productivity Gains and Job Creation

While displacement is a pressing concern, AI also drives substantial productivity gains, which can, in turn, foster new job creation. McKinsey research estimates the long-term AI opportunity at $4.4 trillion in added productivity growth potential from corporate use cases, highlighting its economic impact (AI in the workplace: A report for 2025 | McKinsey). A study by the NN Group found that generative AI improves employee productivity by 66% across various business tasks, with the most significant gains observed among less-skilled workers. This suggests a potential pathway for upskilling to mitigate displacement (Generative AI improves employee productivity by 66 per cent).

New roles include big data specialists, fintech engineers, and AI and machine learning specialists. Projections suggest AI could create 97 million new jobs by 2025 (Edison and Black). However, these roles often demand higher skill levels, potentially exacerbating inequality if access to relevant training remains uneven.


Worker Concerns and Adaptation Strategies

Worker anxieties are significant. A PwC survey found that 30% of workers fear job replacement by AI by 2025. Furthermore, McKinsey reports that employees believe AI will replace 30% of their work, with 47% expecting this within a year (AI Replacing Jobs statistics and trends 2025). Younger workers (aged 18-24) are 129% more likely than those over 65 to worry about job obsolescence, reflecting notable generational differences in perception (AI Replacing Jobs statistics and trends 2025).

Adaptation is crucial, with AI literacy increasingly becoming a prerequisite for employment. Employees must learn to leverage AI tools to enhance their output, as companies increasingly mandate AI usage and require justification for human hires based on AI’s inability to perform specific tasks. Developing a personal brand, through activities such as thought leadership and content creation, is suggested as a defensive strategy, as AI is perceived to more readily replace “anonymous” workers than those with established visibility and expertise (Human-AI Collaboration and Job Displacement Current Landscape).

Detailed strategies include:

  • Skill Development: Upskilling and reskilling in AI-related fields like data analysis and machine learning are paramount. Many companies and governments offer programmes, such as free courses on Coursera or edX, to assist workers in this transition (Impact of AI on Employment).
  • Personal Branding: Cultivating unique skills and a visible professional presence through thought leadership can highlight human attributes like creativity and emotional intelligence, which AI cannot easily replicate (Human-AI Collaboration and Job Displacement Current Landscape).
  • Complementary Roles: It is advisable to explore AI-adjacent roles such as AI ethics specialists, data stewards, and AI system managers. Emerging fields include big data specialists and AI trainers (15 Jobs Will AI Replace by 2030?).
  • Support Systems: Utilising government and corporate training programmes is encouraged. Public-private partnerships are increasingly designing AI curricula to align with evolving industry demands (Impact of AI on Employment).
  • Proactivity and Adaptability: Staying informed about AI trends, experimenting with AI tools, and maintaining openness to career pivots are key, as adaptability is vital (Job Disruption or Destruction: Adopting AI at the Workplace).
  • Policy Advocacy: Supporting policies that promote universal basic income (UBI), effective retraining initiatives, and ethical AI deployment can help address potential inequality (AI and Economic Displacement). 

Microsoft and Google’s Recent Moves

At Microsoft Build 2025 (Seattle, May 19-22), the company introduced the Windows AI Foundry and the native Model Context Protocol (MCP) in Windows, enhancing AI-driven automation and providing developers with new tools for creating AI-powered applications. The public preview of SQL Server 2025 was also announced, featuring AI-ready enterprise database capabilities for ground-to-cloud data management and advanced analytics. Furthermore, Microsoft brought DeepSeek R1 models to Windows 11 Copilot+ PCs and debuted new research tools for Microsoft 365 Copilot, signalling a deeper integration of AI across its software and services.

Simultaneously, at Google I/O 2025 (Mountain View, May 20-21), Google unveiled substantial AI updates. They announced Gemini 2.5 Pro, which reportedly swept the LMArena leaderboard, demonstrating rapid model progress with Elo scores up more than 300 points since the first-generation Gemini Pro model. Google also introduced Android XR software for smart glasses, showcasing frames capable of language translation and answering queries about the user’s surroundings, with partnerships announced with Samsung, Warby Parker, and Gentle Monster to develop headsets featuring Android XR. New AI integrations across Search, Chrome, and other products were also revealed, emphasising AI’s increasing infiltration into all aspects of their ecosystem.

These concurrent announcements underscore the accelerating expansion of AI offerings by these tech giants. This could further hasten job displacement by embedding AI more deeply into everyday tools and services, thereby intensifying the pressure on workers to adapt swiftly.

Global Risks and Inequality

A UN report highlights that AI could disrupt 40% of global jobs. It also warns of the risk of increased inequality, exacerbated by the concentration of 40% of AI research and development spending among just 100 US-based firms. This concentration could further disadvantage regions lacking access to AI technology or training, raising significant ethical and economic concerns (AI could disrupt 40% of global jobs, UN report warns).


Recent Discussions on X

Recent posts on the X platform reflect ongoing public and expert concerns:

  • JoongAng Daily reported on a Bank of Korea study suggesting that more than half of South Korea’s workforce will be impacted by AI, either through job displacement or enhanced productivity.
  • Star Online noted that AI could affect 40% of jobs worldwide, offering productivity gains and fueling automation anxieties.
  • The New Yorker discussed studies indicating AI’s potential for mass job displacement, even in white-collar fields, questioning whether AI can genuinely augment rather than simply replace human expertise.

These discussions, including predictions like AegisGnosis, which suggests a 10% probability of mass displacement in manufacturing and customer service by 2025 (with 85% confidence), underscore the urgency and breadth of the issue.


Summary Table of Key Statistics

MetricValueSource
Workers affected by AI displacement14% by 2025AI Replacing Jobs statistics and trends 2025
Jobs displaced by 203092 millionFuture of Jobs Report 2025
New jobs created by 2030170 millionFuture of Jobs Report 2025
Workers fearing job replacement by 202530%AI Replacing Jobs statistics and trends 2025
Employers planning AI-driven downsizing41% by 2025–2030AI could disrupt 40% of global jobs (WEF cited source)
Generative AI improves employee productivity by 66 per cent66%Generative AI improves employee productivity by 66 percent

Conclusion

In 2025, AI-driven job displacement is a pressing reality. Current impacts reveal significant job losses, particularly in technology and customer service, while future projections suggest up to 40% of global jobs could be affected by 2030. Although AI stimulates productivity and creates new roles, the equilibrium between displacement and adaptation remains contentious. Workers must upskill, and companies must navigate complex ethical and economic considerations. The recent announcements from Microsoft and Google in May 2025, featuring innovations like the Windows AI Foundry, Gemini 2.5 Pro, and Android XR, signal an accelerated expansion of AI, potentially intensifying these pressures.

Online discourse and expert reports highlight this urgency, advocating for strategies such as reskilling initiatives, personal branding, and potentially broader societal support systems like Universal Basic Income, to mitigate adverse impacts and strive for a future where technology augments human potential rather than merely supplanting it.

Key Citations

Some thoughts: Your Mind, Techno-Feudalism & Big Tech

THE Broligachy OWN YOUR MIND: Welcome to Technofeudalism

Alright, you! Yes, YOU! Snap out of it! You think capitalism was the bloody bogeyman? Ha! That was just the warm-up act, the polite dinner guest before the real monster kicked down the door. We are now neck-deep, drowning in something far more insidious, a digital dark age they’re calling TECHNOFEUDALISM, and it’s rotting us from the inside out. While we mindlessly scroll and feed on clickbait like zombies.

Once upon a time, even that festering wound of capitalism left you a few miserable hours to yourself. A few moments to pretend you were an individual, that your home was your castle, a tiny patch fenced off from the market, the boss, even that you had time to raise your family. That flimsy fence/defence? IT’S GONE! Pulverised! There’s no escape, no private corner where their greasy, data-sucking tentacles can’t reach, right into and manipulate your unguarded mind!

Look at the people around you in social spaces swiping up and down, left and right. Browsing social media. Their faces etched with a gnawing anxiety, not about who they are, but about what carefully constructed, “authentic” version of themselves they need to perform for the algorithms, for the faceless Big Tech overlords who’ll decide their future. “Be yourself,” these hypocrites coo, while they hold the puppet strings, demanding a 24/7 audition for a life that’s already been scripted for their profit. Every TikTok, every post, every sodding photo is another brick in the portfolio of their “curated self,” a digital show pony prancing for a job, for approval, for a scrap from the master’s table. It’s a science fiction dystopia, and guess what? IT’S ALREADY HERE!

Worried about Big Brother watching? Cute. That’s kindergarten stuff. What keeps me up at night, what should be giving YOU cold sweats, isn’t just what they know about you; believe me, they know more than your mother’s maiden name. It’s what they OWN. They own the digital railroads, the town squares, your very identity! And more terrifying still, they own the magnificent machines, the AI, the software with the chilling capacity to MODIFY YOUR THINKING, TO REWIRE YOUR BRAIN, to infect your soul with desires and beliefs that serve THEM and their customers, not you! This isn’t just surveillance; it’s psychological warfare, a constant, subtle waterboarding of your free will!

And we’re complicit, unwitting but willing stooges, aren’t we? With every click, every swipe, and every mindless scroll, you’re training their rotten AI to train you better and to burrow deeper into your psyche. It’s a sick, twisted dance macabre where we’re teaching the digital executioner how to sharpen the axe and infect our minds with urges functional to the interests of the “Cloud,” the shadowy owners of this new form of capital. For the first time in history, we’re in a dialectical relationship with the puppet masters colonising our minds.

Your attention? That’s their gold, their oil, their vampiric lifeblood! They suck it up, package your anxieties and desires, and then sell your commodified consciousness to “vassal capitalists” – pathetic businesses, big and small, forced to pay outrageous “cloud rent,” a digital tithe, just for the privilege of existing on their digital turf. Jeff Bezos doesn’t run a marketplace; he runs a monopolistic digital fiefdom! The moment you enter Amazon.com, you exit the market, you exit capitalism, and you enter a domain belonging to one man and his algorithm, which charges those vassal capitalists a sickening 40% of what you pay. You can’t talk to other buyers or haggle with sellers. It’s a walled garden designed to bleed everyone dry to enrich the technofeudal lord.

And how did this happen? Remember the Internet? The dream of a free, open digital world for all? They privatised it, strangled it, and turned it into their personal playground! You don’t own your identity online anymore! You have to beg Google, or some bank, to vouch for who you are, like a digital peasant pleading for papers. It’s an outrage!

Steve Jobs, the “visionary”? Visionary in building the first fully-fledged cloud fiefdom with his App Store! “Come,” he beckoned to developers, “build your apps on my land!” Then he slapped a 30-40% tax on every dollar they made. Free labour for Apple, and a mountain of cloud rent. The blueprint for every digital overlord since! Elon Musk buying Twitter? Don’t be naive. He wasn’t after a “public square”; he was buying an interface, a direct pipeline into your brain for his data-slurping, behaviour-modifying empire!

And when did these leeches get so powerful, so fast? Cast your mind back to 2008! When the states, those supposed guardians of the public good, unleashed socialism for the bankers and brutal austerity for the rest of us! Trillions of dollars, printed out of thin air, didn’t go to you and me, did it? Hell no! It fuelled the exponential growth of Cloud Capital. The Jeff Bezoses, the Googles, the Apples – they gorged on that free money, building their digital empires while society crumbled. They accuse us of wanting a “money tree”? These bastards invented the money tree, and they’ve been shaking it for themselves ever since, making damn sure you don’t get a sniff of the fruit!

Don’t let them fool you by calling this “algorithmic capitalism” or “hyper-capitalism.” This isn’t just capitalism in new clothes; it’s a mutant, far more toxic species. Capitalism, for all its myriad sins, had markets and profit extracted from entrepreneurial activity. This new beast, Technofeudalism, has replaced markets with these digital fiefdoms, and entrepreneurial profit with parasitic cloud rent. These aren’t innovators; they’re digital landlords, extracting wealth just because they own the platform you’re trapped on! It’s the revenge of the rentier, dressed up in shiny tech!

Your precious “liberal individual”? Dead and buried under an avalanche of data points. Social democracy? A quaint, forgotten dream when industrial capital itself is now a pathetic vassal to these cloud lords. How the hell do you bargain with an algorithm designed to exploit you 24/7? “Liberal democracy”? What a bloody joke! It was always an autocracy of capital with a thin veneer of elections to keep us quiet. Now, even that flimsy illusion is shattering. We’re living under a system of perfect, voluntary surveillance, where Big Brother isn’t the state, but the new ruling class of cloudalists controlling the means of behavioural modification.

And the sickest part? We need these tools! We love these apps! I get it. I use them too! But that’s how they get their hooks in! The question isn’t if we use them, but who the HELL OWNS THEM and what that concentrated ownership is doing to us, to our societies, to the bleeding planet! This is the new Cold War between the US and China? Don’t buy the propaganda. It’s a turf war between two colossal technofeudal empires, two giant cloud fiefdoms, battling for global dominance, and we’re just the bloody collateral damage! While these technoführers fight over the digital spoils, the planet is burning, and we’re doing sod all, mesmerised by their automated propaganda machines that would make Goebbels blush!

It’s hard to even see this system, isn’t it? We’re like fish, swimming in their toxic, algorithmically-curated water, thinking it’s normal. But it’s NOT. This is a creation of human beings, and it can be DIFFERENT!

SO, WHAT THE HELL ARE WE GOING TO DO ABOUT IT?
Feudalism didn’t end because the lords had a change of heart. It ended because of a GRAND ALLIANCE of peasants, workers, and proto-capitalists. That’s our only bloody chance now!

  1. BUILD DIGITAL SOLIDARITY, DAMN IT! A global alliance of “cloud serfs” – that’s YOU, me, the warehouse workers, the coders, even the small-time “vassal capitalists” getting squeezed dry by that 40% cloud rent! Organise online, offline! Share tactics, expose their manipulative bullshit, and amplify our collective roar! This is a digital labour movement for the cloud age!
  2. SOCIALISE CLOUD CAPITAL! These algorithms, these apps, this AI – WE ALL HELP CREATE IT with our data, our labour, our damn attention! Demand collective ownership! Turn these platforms into public utilities or worker-owned cooperatives. We’re not Luddites trying to smash the machines; we’re fighting to make them serve humanity, not a handful of sociopathic billionaires!
  3. REJECT VOLUNTARY SERVITUDE! STARVE THE BEAST! Minimise your engagement with their exploitative brain-rotting platforms like Amazon, Google, or X wherever you can. Support decentralised, open-source alternatives that prioritise your control, not their profit. Every click is a choice – OPT OUT OF THEIR SURVEILLANCE TRAP!
  4. EMPOWER INDEPENDENT MEDIA! Ditch the mainstream mouthpieces owned by the cloudalists! Fund, share, and amplify the independent, progressive voices brave enough to tell the goddamn truth! Truth is our weapon – use it to wake up everyone around you!
  5. FIGHT FOR TECHNO-DEMOCRACY! Advocate for policies that smash Big Tech monopolies, enforce data sovereignty (YOU own your digital identity, not them!), and fund public tech infrastructure. This needs an INTERNATIONAL PROGRESSIVE MOVEMENT because these companies operate beyond borders!
  6. RESIST SURVEILLANCE AND BEHAVIOURAL CONTROL! Use privacy tools – VPNS, encrypted messaging, ad-blockers! Educate yourself and others on how these algorithms are designed to manipulate you! Awareness is the first goddamn step to breaking free!
  7. ORGANISE LOCALLY, ACT GLOBALLY! Start community tech collectives, teach digital literacy, develop local alternatives, and link these efforts to the global fights for climate justice and economic equality because technofeudalism is pouring gasoline on all those fires!

The alternative isn’t to go back to some mythical past. It’s TECHNO-DEMOCRACY! We can’t disinvent this tech, nor should we want to! These tools could liberate humanity, if we rip the property rights from their greedy claws and distribute power to those who produce the value – US!

It sounds utopian? Is it more utopian than sleepwalking into a future where we are only batteries for their machines, serfs on their digital plantations? This isn’t a game! This is a fight for our minds, our dignity, our future, and the very soul of humanity! The clock is ticking! So, will you be a docile data set in their machine, or will you get angry and FIGHT BACK?!