Smoke and Mirrors: Forget the small boats. The Real Mass Migration is Digital.

The Fourth World is Coming. It’s Just Not What You Think.

What if the biggest migration in human history isn’t human at all? There’s a theory doing the rounds that frames the AI revolution as just that: an “unlimited, high-IQ mass migration from the fourth world.” It argues we’re witnessing the arrival of a perfect labour force—smarter than average, infinitely scalable, and working for pennies, with none of the messy human needs for housing or cultural integration. It’s a powerful idea that cuts through the jargon, but this perfect story has a fatal flaw.

The biggest lie the theory tells is one of simple replacement. It wants you to believe AI is an immigrant coming only to take your job, but this ignores the more powerful reality of AI as a collaborator. Think of a doctor using an AI to diagnose scans with a level of accuracy no human could achieve alone; the AI isn’t replacing the doctor, it’s making them better. Labour-market forecasts, such as the World Economic Forum’s Future of Jobs reports, suggest that while millions of jobs will vanish, millions more will be created, meaning the future isn’t about simple replacement, but something far more complex.

If the first mistake is economic, the second is pure Hollywood fantasy. To keep you distracted, they sell you a story about a robot apocalypse, warning that AI will “enslave and kill us all” by 2045. Frankly, this sort of talk doesn’t help. Instead of panicking, we should be focused on the very real and serious work of AI alignment right now, preventing advanced systems from developing dangerous behaviours. The focus on a fantasy villain is distracting us from the real monster already in the machine.

That monster has a name: bias. The theory celebrates AI’s “cultural neutrality,” but this is perhaps its most dangerous lie. An AI is not neutral; it is trained on the vast, messy, and deeply prejudiced dataset of human history, and without careful oversight, it will simply amplify those flaws. We already see this in AI-driven hiring and lending algorithms that perpetuate discrimination. A world run by biased AI doesn’t just automate jobs; it automates injustice.

This automated injustice isn’t a bug; it’s a feature of the system’s core philosophy. The Silicon Valley credo of ‘move fast and break things’ has always been sold as a mark of disruptive genius, but we must be clear about what they actually intend to ‘break’: labour laws, social cohesion, and ethical standards are all just friction to be optimised away. This isn’t theoretical; these same tech giants are now demanding further deregulation here in the UK, arguing that our rules are what’s slowing down their ‘progress’. They see our laws not as protections for the public, but as bugs to be patched out of the system, and they have found a government that seems dangerously willing to listen.

But while our own government seems willing to listen to this reckless philosophy, the rest of the world is building a defence. This isn’t a problem without a solution; it’s a problem with a solution they hope you’ll ignore. UNESCO’s Recommendation on the Ethics of Artificial Intelligence is the world’s first global standard on the subject—a human-centric rulebook built on core values like fairness, inclusivity, transparency, and the non-negotiable principle that a human must always be in control. It proves that a different path is possible, which means the tech giants have made one last, classic mistake.

They have assumed AI is migrating into a world without rules. It’s not. It’s migrating into a world of laws, unions, and public opinion, where international bodies and national governments are already waking up. This isn’t an unstoppable force of nature that we are powerless to resist; it is a technology that can, and must, be shaped by democratic governance. This means we still have a say in how this story ends.

So, where does this leave us? The “fourth world migration” is a brilliant, provocative warning, but it’s a poor map for the road ahead. Our job isn’t to build walls to halt this migration, but to set the terms of its arrival. We have to steer it with ethical frameworks, ground it with sensible regulation, and harness it for human collaboration, not just corporate profit. The question is no longer if it’s coming, but on whose terms it arrives.

Your New Digital ID Isn’t For Convenience. It’s For Control.

The Digital Back Door: Why a National ID is the End of a Free Society

Every breath you take
And every move you make
Every bond you break
Every step you take
I’ll be watching you

“Every Breath You Take”, lyrics by Gordon Sumner (Sting) – The Police

There’s a pitch being sold to the British public, dressed up in the language of convenience and national security. It’s the idea of a Digital ID for every adult, a neat, modern solution to complex problems like illegal migration.

I can tell you this isn’t progress. It’s the architecture of a control system, a Trojan horse that smuggles a surveillance state in under the guise of efficiency. It is the end of a free society, and we are sleepwalking towards it.

Let’s start by dismantling the primary justification: fixing the border. The claim that a Digital ID will stop the boats is, to put it plainly, bollocks. It will not stop trafficking gangs, nor will it fix a fundamentally broken system. Criminals and their networks are, by their very nature, experts at working around systems; they adapt faster than bureaucracies can legislate. The ones who will pay the price for this vast, expensive, and dangerous infrastructure will not be the criminals, but the honest, law-abiding citizens of this country.

The fundamental flaw lies in a concept I deal with daily: centralised risk. The world spends hundreds of billions of dollars a year on cybersecurity, yet the volume and severity of data breaches keep breaking records; the threat grows faster than the spend. From Jaguar Land Rover to major airports, no centralised system has proven impenetrable. Now imagine that vulnerability scaled up to a national level, with a single database linking your identity to every checkpoint of daily life: where you go, what you buy, what you read, and who you speak to.

Here is the risk that ministers will not admit. A sophisticated ransomware attack, seeded quietly through a compromised supplier or a disgruntled insider, lies dormant for months. It slowly rolls through the backups, undetected. Then, on trigger day, the live registry and every recovery set are encrypted simultaneously. The country grinds to a halt. Payments fail. Health and benefits systems stall. Borders slow to a crawl. Citizens are frozen out of their own lives until a ransom is paid or the state is forced to rebuild the nation’s identity from scratch. To centralise identity is to centralise failure.

This, however, is only the technical risk. The greater political and social danger lies in the certainty of function creep. It will begin as an optional, convenient way to log in or prove your age. But it will not end there. It will inevitably become a mandatory prerequisite for accessing money, travel, employment, and essential public services. Our fundamental rights will be turned into permissions, granted or revoked by the state and its chosen corporate contractors.

This isn’t a theoretical dystopian future; it’s a documented reality. India’s Aadhaar system, initially for welfare, now underpins everything from banking to mobile phones and has been plagued by data leaks exposing millions to fraud. We are seeing the groundwork laid in the UK with the Digital Identity and Attributes Trust Framework (DIATF), a federated model reliant on a network of private suppliers like Yoti, Hippo Digital, and IDEMIA. This multi-vendor approach doesn’t eliminate risk; it multiplies the potential points of failure through a web of interconnected APIs, each a potential back door for attackers.

Furthermore, this system is built on a foundation of exclusion. The assumption of universal digital literacy is a dangerous fiction. With a significant percentage of UK adults lacking basic digital skills, a mandatory Digital ID will create a two-tier society. The elderly, the poor, and the vulnerable—those who cannot or will not comply—risk being locked out of the services they need most, deepening inequality and fuelling social unrest.

The gravest danger, however, emerges when this infrastructure is placed in the context of a crisis. Economic collapse, social unrest, or an environmental emergency often serves as the justification for an expansion of state power. A Digital ID system provides the ready-made tool for authoritarianism. In a crisis, it could be repurposed to monitor dissent, freeze the bank accounts of protesters, or restrict the movement of individuals deemed a threat. It builds, by stealth, the machinery for a social credit system.

And this brings us to the corporate engine waiting to power this machine: Palantir. The US data-mining firm is already deeply embedded within the UK state, with contracts spanning the NHS and the Ministry of Defence. Palantir doesn’t need a specific contract for the “Brit Card”; its platforms, Foundry and Gotham, are designed to do precisely what a Digital ID enables on a mass scale: fuse disparate datasets into a single, all-encompassing profile for every citizen.

The Digital ID would be the “golden record” that connects your health data, your financial transactions, your movements, and your communications. In a crisis, Palantir’s AI could be used for predictive surveillance—flagging individuals who enter a “protest zone” or transactions to “undesirable” organisations. This isn’t just a British system; with Palantir’s deep ties to US intelligence, it becomes a system subject to foreign demands under legislation like the CLOUD Act. We would be outsourcing our national sovereignty.

The entire premise is flawed. If the government were serious about the border, it would enforce current laws, properly resource patrols and processing, and close existing loopholes. You do not need to build a panopticon to do that. We scrapped ID cards in 2010 for a reason, recognising their threat to our fundamental liberties. Reintroducing them through the digital back door, outsourced to a network of private contractors and data-mining firms, is a monumental error.

There are better ways. Decentralised alternatives using cryptographic methods like zero-knowledge proofs can verify status or identity without creating a central honeypot of data. But these privacy-first solutions lack government traction because the true, unstated goal is not security or convenience. It is control.

We must not fall for the pitch. This is a system that will centralise risk and outsource blame. It will punish the vulnerable while failing to stop the criminals it targets. It is the foundation for a future where our rights are contingent on our compliance. The choice is simple: yes to privacy-first proofs, no to a database state.
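To make the zero-knowledge idea concrete, here is a toy sketch of a Schnorr-style proof of knowledge: the prover convinces a verifier that they hold a secret credential x without ever transmitting it. The group parameters, function names, and structure here are my own illustrative assumptions; real systems use much larger groups and vetted libraries, not this demonstration.

```python
# Toy Schnorr-style zero-knowledge identification sketch.
# Illustration only -- the parameters are far too small to be secure.
import secrets

# Toy public parameters: p is a safe prime, g generates a subgroup of
# prime order q (here 2 has order 11 modulo 23). Real deployments use
# groups with ~256-bit order.
p, q, g = 23, 11, 2

def keygen():
    """Prover's secret x and public key y = g^x mod p."""
    x = 1 + secrets.randbelow(q - 1)
    return x, pow(g, x, p)

def prove_round(x):
    """One round: commit t = g^r, then answer the verifier's random
    challenge c with s = r + c*x mod q. Neither t, c nor s reveals x."""
    r = secrets.randbelow(q)
    t = pow(g, r, p)
    c = 1 + secrets.randbelow(q - 1)   # verifier's challenge
    s = (r + c * x) % q
    return t, c, s

def verify_round(y, t, c, s):
    """Accept iff g^s == t * y^c mod p."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
# An honest prover passes every round without disclosing x.
assert all(verify_round(y, *prove_round(x)) for _ in range(50))
```

The verifier learns only that the equation checks out, never the secret itself; a national "prove you are over 18" or "prove you have the right to work" check built this way needs no central register of who asked, when, or why.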

Beware the all-seeing eye!