
The Sleep of Reason: Why Goya’s Monsters are Winning in 2026

In Francisco Goya’s 1799 etching, The Sleep of Reason Produces Monsters, the artist is not merely napping. He has collapsed. His tools, the pens and paper of the Enlightenment, lie abandoned on the desk. Behind him, a swarm of owls and bats emerges from the blackness.

Goya’s Los Caprichos served as a warning to a Spanish society blinded by superstition and corruption. But today, the etching feels like a live stream of the 2026 news cycle.

When reason sleeps, we don’t just dream of monsters. We build them.

The New Bestiary: Algorithms and Echoes

In the post-truth era, the “monsters” are digital. They are the algorithms that prioritise cortisol over comprehension.

According to the 2025 Digital News Report, we have reached a tipping point: 47% of respondents worldwide now identify national politicians and “influencers” as the primary architects of disinformation. Reason hasn’t just faltered; it has been outsourced to partisan actors who benefit from its absence.

The Arendtian Nightmare

The political philosopher Hannah Arendt understood that the goal of total deception is not to make people believe a lie. It is to ensure that they can no longer distinguish between truth and falsehood.

In her 1967 essay Truth and Politics, Arendt warned that factual truth is “manoeuvred out of the world” by those in power. We see this today in the “defactualisation” of our economy. Despite rising consumer prices and growing unemployment, a barrage of “official” narratives in 2025 and 2026 has attempted to frame the economy as flawless. As Arendt predicted, when the public is subjected to constant, conflicting falsehoods, they don’t become informed—they become cynical and paralysed.

The Outrage Addiction

Why do we let the monsters in? Because they feel good.

Neuroscience tells us that outrage is a biological reward. A landmark study by Dominique de Quervain showed that the act of “punishing” a perceived villain activates the dorsal striatum, a core node of the brain’s reward circuitry.

Social media is essentially a delivery system for this chemical hit. We are trapped in a cycle in which we conflate “online fury” with “social change.” This outrage functions as a smokescreen: while we argue over individual “villains” on our feeds, the structural monsters (inequality, surveillance, capture) continue their work undisturbed.

The Architecture of the 1%

While the public is distracted by the digital swarm, wealth has been consolidated into a fortress. In 2026, the global wealth gap is no longer a gap; it is a chasm.

  • The Fortune: Billionaire wealth hit $18.3 trillion this year, an 81% increase since 2020.
  • The Control: The top 1% now own 37% of global assets, holding eighteen times the wealth of the bottom 50% combined.

This concentration of capital is the ultimate “monster.” It allows a tiny elite—who are 4,000 times more likely to hold political office than the average person—to dictate the boundaries of reality.

Cognitive Atrophy: The AI Trap

Our most vital tool for resistance, the human mind, is being blunted. A 2025 MIT study found that heavy reliance on Large Language Models (LLMs) for critical-thinking tasks correlates with weakened neural connectivity and a “doom loop” of cognitive dependency.

As the Brookings Institution warned in early 2026, we are witnessing a “cognitive atrophy.” If we offload our judgment to machines owned by the 1%, we lose the very faculty required to recognise the monsters in the first place.

Case Study: The Epstein Files and Systemic Silence

The release of 3 million pages of Epstein documents in January 2026 should have been a moment of total reckoning. With 300 “politically exposed persons” implicated, from British peers to European heads of state, the scale of the rot is undeniable.

Yet, the reaction has been a repeat of Goya’s etching. We focus on the “monsters” (the names in the files) while ignoring the “sleep” (the legal impunity and wealth-purchased silence) that enabled their existence. Epstein was not a glitch in the system; he was a feature of it.

Waking the Artist

Goya’s etching ends with a caption: “Imagination abandoned by reason produces impossible monsters; united with her, she is the mother of the arts and the source of their wonders.”

To wake up in 2026 requires more than “fact-checking.” It requires a reclamation of our tools:

  1. Cognitive Sovereignty: Limit the AI-driven “doom loop” and reclaim the capacity for independent analysis.
  2. Structural Sight: Stop chasing the “bats and owls” of individual outrage and look at the “desk”—the economic and political structures that house them.
  3. Institutional Integrity: Support the few remaining impartial bodies capable of holding power to account.

The monsters only vanish when the artist wakes up. It is time to pick up the tools.



Creating Art In The Age Of AI.

Here’s a series of actionable instructions to guide yourself as an artist in the AI era. Treat these as reminders to refocus on your intrinsic motivations and leverage your unique human strengths.

  1. Reflect on Your Core Motivation: Ask yourself why you’re making art in the first place. Write down your reasons – is it for personal expression, joy, or something else? If it’s primarily for external validation like social media likes, challenge that by creating one piece this week purely for yourself, without sharing it.
  2. Define Your Audience and Goals: Clarify who your art is for – yourself, a small circle, or the public? If public, define what success means to you (e.g., meaningful feedback vs. viral hits). Set a personal success metric, like “complete one project that sparks a conversation,” and track progress toward it monthly.
  3. Test Your Commitment: Imagine your entire creative setup is destroyed. Would you rebuild it? If yes, affirm your passion by dedicating time each day to creating without excuses. If not, explore other fulfilling activities to redirect your energy.
  4. Embrace Human Uniqueness: Remember that AI lacks intent and personal experience. Translate your abstract ideas or emotions into art deliberately – start by journaling one lived experience per session and turning it into a musical element or artwork.
  5. Avoid Genre Traps: If working in a structured genre, don’t just replicate patterns (which AI excels at). Intentionally break the rules in your next piece: add an unexpected yet distinctly human element (e.g., fusing bal folk tunes with highland pipes and smallpipes) to infuse originality from your own mind.
  6. Prioritise Novelty Over Perfection: Chase ideas that intrigue you personally, not flawless output. Experiment with “weird gremlin thoughts” – set aside time weekly for accidental or random creations, then refine them into intentional work.
  7. Differentiate Hearing vs. Listening: Aim to make art that invites active engagement and conversation, not passive background filler. Review your recent work: Does it provoke introspection or critique? Revise one piece to emphasise emotional depth or uniqueness.
  8. Leverage Limitations as Strengths: Use your imperfections (e.g., skill gaps) as opportunities for innovation. Identify a weakness in your technique, then create a workaround in your next project – document how it adds a personal “U” (unique touch) to your art.
  9. Rise Above the AI Noise: In a sea of average AI content, make your work stand out by highlighting overlooked ideas. Observe something unnoticed in your daily life and “shove it in faces” through your art – produce one such piece this month.
  10. View AI as a Bar-Raiser: See AI as making this an exciting time for artists. Commit to creating something unique or weird regularly, knowing it gains value against infinite “slop.” Track how this mindset shifts your output over the next few projects.
  11. Final Affirmation: Remind yourself daily: Your art is a reflection of your human condition – feelings, experiences, and soul. AI is just output; yours is an expression of self. If doubt creeps in, revisit this list and create anyway.

Lesser-Known Truths About AI’s Inability to Question or Redefine Art.

The statement “The machine doesn’t care. It knows exactly what art is. It has no reason and no ability to question what it could be” highlights a core philosophical critique of AI in creativity: it operates within predefined patterns without genuine intent, emotion, or the capacity for existential questioning. While mainstream narratives often celebrate AI as a democratising force in art (e.g., tools like DALL-E or Midjourney making creation more accessible), lesser-known truths challenge this by exposing AI’s fundamental limitations and their broader implications for humanity.

These ideas are rarely shared publicly because they undermine the trillion-dollar AI industry, which thrives on hype, investment, and the promise of infinite productivity. Tech giants like Adobe and Meta promote AI as “creative” to sell subscriptions and data-training models, suppressing critiques that could slow adoption or invite regulation. Below, I outline key under-the-surface truths, supported by historical context, real-world examples, and explanations for their obscurity.

1. AI Art is a “Philosophical Zombie” – It Mimics Without Consciousness. AI can replicate artistic styles flawlessly but lacks inner experience or intent, resulting in hollow imitations rather than true expressions. This challenges the mainstream view that AI’s technical prowess equates to creativity, revealing it as mere pattern-matching.

  • Historical Context: This draws from René Descartes’ 1637 dictum “I think, therefore I am,” which tied existence to self-aware thought. More directly, it echoes John Searle’s 1980 “Chinese Room” argument: a person following rules to simulate understanding Chinese doesn’t truly comprehend it. Similarly, AI “understands” art only through data, not lived reality. Early algorithmic art in the 1960s (e.g., AARON by Harold Cohen) was celebrated, but philosophers like Searle warned it lacked soul, a critique buried as AI evolved.
  • Real-World Examples: In 2022, an AI-generated piece won the Colorado State Fair’s fine art competition, sparking backlash from artists who argued it lacked emotional depth. Midjourney’s early versions struggled with human hands, symbolising its detachment from embodied experience—AI doesn’t “feel” anatomy like a human artist does.
  • Why It Remains Hidden: Acknowledging this would deflate AI hype, as companies frame tools as “co-creators” to attract users. Investors and media focus on output quality to avoid philosophical debates that could lead to ethical restrictions, such as EU AI regulations that emphasise transparency.

2. AI Erodes Human Creative Capacity Through Atrophy and Over-Reliance. By handling the “hard” parts of creation, AI causes human skills to wither, turning art into a commodified process rather than a form of personal growth. This counters the mainstream claim that AI “lowers barriers” to creativity, showing it instead homogenises output and stifles innovation.

  • Historical Context: As with the 15th-century printing press, which displaced scribes but forced writers to innovate (e.g., leading to the rise of the novel), photography in the 1830s threatened painters until they embraced abstraction (e.g., Impressionism). Critics like Walter Benjamin in 1935 warned of art’s “aura” being lost in mechanical reproduction; today, AI amplifies this by automating not just reproduction but also ideation.
  • Real-World Examples: Artists using AI prompts often iterate endlessly to approximate their vision, losing direct agency—e.g., a digital artist settling for AI’s “approximation” rather than honing their skills. In music, tools like Suno generate tracks, but users report diminished satisfaction from not “struggling” through composition, echoing how auto-tune reduced vocal training in pop.
  • Why It Remains Hidden: The AI industry markets efficiency to creative professionals (e.g., Adobe’s Firefly), downplaying the long-term erosion of skills to maintain market growth. Public discourse prioritises short-term gains like “democratisation,” as admitting to atrophy could spark backlash from educators and unions concerned about job devaluation.

3. AI Exposes the Illusion of Human Originality, Revealing Most “Creativity” as Formulaic. AI’s ability to produce “art” faster than humans uncovers that much human work is pattern-based remix, not true novelty—challenging the romanticised view of artists as innate geniuses and forcing a reevaluation of what “creative” means.

  • Historical Context: The Renaissance idealised the “divine” artist (e.g., Michelangelo), but 20th-century postmodernism (e.g., Warhol’s factory art) questioned originality. AI builds on this; Alan Turing’s 1950 “imitation game” test foreshadowed machines mimicking creativity without possessing it, but his warnings about over-attribution were overshadowed by computational optimism.
  • Real-World Examples: A Reddit discussion notes AI “revealing how little we ever had” by outperforming formulaic genres like lo-fi beats or stock photos, where humans were already “echoing” patterns. In 2023, AI-generated books flooded Amazon, exposing how much publishing relies on tropes—authors admitted their “unique” stories were easily replicated.
  • Why It Remains Hidden: This truth wounds egos in creative industries, where “originality” justifies high valuations (e.g., NFTs). Tech firms and media avoid it to prevent demotivation, as it could reduce user engagement with AI tools—why prompt if it highlights your own mediocrity?

4. AI Art Detaches Us from Authentic Human Connection and Imperfection. AI’s frictionless perfection creates idealised content that erodes empathy and growth, as art traditionally thrives on flaws and shared vulnerability—undermining the idea that AI enhances human expression.

  • Historical Context: Existentialists like Jean-Paul Sartre (1943) emphasised authentic self-expression through struggle; AI bypasses this. In the 1960s, Marshall McLuhan’s “medium is the message” critiqued how technology alters perception—AI extends this by simulating emotions without feeling them, akin to early CGI’s “uncanny valley” debates.
  • Real-World Examples: Social media filters and AI portraits promote flawless selves, linked to rising mental health issues; a podcaster notes AI “detaches you from the reality of growth.” In visual art, AI’s inability to “risk” (i.e., its avoidance of bold failures) results in bland aggregates, as seen in critiques of DALL-E outputs that lack “visceral” passion.
  • Why It Remains Hidden: Platforms like Instagram benefit from idealised content for engagement metrics. Revealing this could invite scrutiny of AI’s role in societal disconnection, clash with Silicon Valley’s narrative of “connecting the world,” and risk lawsuits or boycotts from mental health advocates.

5. AI Cannot Transcend Its Training Data, Limiting True Innovation. Locked into the logic of its training datasets, AI reinforces averages rather than questioning norms—contradicting claims of AI as a boundless innovator.

  • Historical Context: Gottfried Leibniz’s 17th-century dream of a “universal calculus” for all knowledge prefigured AI, but critics like Hubert Dreyfus (1972) argued computers lack intuitive “being-in-the-world” (Heideggerian philosophy). This “frame problem” persists: AI can’t question its assumptions without human intervention.
  • Real-World Examples: AI art tools replicate biases from training data (e.g., stereotypical depictions), failing to “leap” as Picasso’s Cubism did. Research suggests that AI “lacks the sensual/philosophical depth” for originality. In writing, ChatGPT produces coherent but uninspired prose, unable to write in the paradoxical style of Kafka.
  • Why It Remains Hidden: Data dependencies expose ethical issues like IP theft during training (e.g., lawsuits against Stability AI), which companies obscure through NDAs and lobbying. Publicising it could halt progress, as it questions AI’s hype around scalability.

These truths, while supported by philosophers and artists, stay underground due to economic pressures: AI’s market is projected at $1.8 trillion by 2030, incentivising positive spin. However, voices in academia and indie communities (e.g., Reddit, blogs) keep them alive, suggesting a potential shift if regulations evolve.

AI Ethics in Creativity: Navigating the Moral Landscape.

AI’s integration into creative fields like art, music, writing, and design has sparked intense debate. While it promises to democratise creation and amplify human potential, it raises profound ethical questions about authorship, exploitation, and the essence of human expression. As of January 2026, ongoing lawsuits, regulatory pushes (e.g., EU AI Act updates), and public backlash highlight these tensions. Below, I break down key ethical concerns, drawing from diverse perspectives—including tech optimists, artists, ethicists, and critics—to provide a balanced view. This includes pro-AI arguments for augmentation and critiques of systemic harm, substantiated by recent developments.

Core Ethical Concerns: AI in creativity isn’t just a tool; it intersects with human identity, labour, and society. Here’s a summary of the major issues, with examples and counterpoints:

Intellectual Property (IP) Infringement and Data Theft
  • Description: AI models are often trained on vast datasets scraped from the internet without creators’ consent or compensation, effectively “laundering” human work into commercial outputs. This violates the social contract under which artists share work expecting legal protections against market dilution.
  • Real-World Examples: The Danish rights society Koda sued Suno in 2025 for using copyrighted music without permission; Activision Blizzard laid off artists in 2024 amid AI adoption, using models trained on unlicensed content; U.S. lawsuits against Midjourney and Stability AI for training on artists’ works are ongoing.
  • Why It Challenges Mainstream Thinking: It undermines the AI hype of “innovation for all” by exposing it as profit-driven exploitation, a framing suppressed to avoid lawsuits and investor backlash.
  • Counterarguments: The pro-AI view holds that training is “fair use,” like human learning; ethical models (e.g., Fairly Trained) seek consent, and most companies argue training accelerates creativity without direct copying.

Job Displacement and Labour Exploitation
  • Description: AI automates creative tasks, leading to layoffs and devaluing human skills. It shifts income from creators to tech firms, exacerbating inequality.
  • Real-World Examples: Larian Studios (Baldur’s Gate 3) banned non-internal AI in 2025 to prioritise ethics and quality; Universal Music Group’s 2026 NVIDIA partnership aims to protect artists while expanding creativity; freelancers report AI “infesting” markets, making livelihoods harder.
  • Why It Challenges Mainstream Thinking: It reveals capitalism’s prioritisation of efficiency over human flourishing, suppressed by tech lobbying to maintain growth narratives.
  • Counterarguments: AI augments humans (e.g., Adobe’s ethical tools); job shifts are inevitable, much as photography displaced painters in the 19th century.

Loss of Authenticity and Human Essence
  • Description: AI outputs lack genuine intent, emotion, or originality, potentially atrophying human creativity and turning art into commodified “slop.” It questions what makes art “human.”
  • Real-World Examples: Polls show more than 90% of artists object to AI training on their work; AI art fuelled deepfakes and misinformation (e.g., viral fakes in the 2025 elections); xAI’s Grok faced UK probes in 2026 over non-consensual images.
  • Why It Challenges Mainstream Thinking: It challenges romanticised views of progress and critiques AI’s “limitless” potential, risking demotivation.
  • Counterarguments: AI inspires novelty; e.g., human-AI collaborations in music (NVIDIA-UMG) foster new expressions.

Bias, Misuse, and Societal Harm
  • Description: Datasets inherit human biases, perpetuating stereotypes. AI enables deepfakes, misinformation, and environmental costs (e.g., high carbon emissions from training).

Art After the Flood: Authenticity in an Age of Hyper-production.

We are living through a second flood. The first, chronicled by Walter Benjamin, was a rising tide of mechanical reproduction that stripped the artwork of its unique presence in time and space, its ritual weight. What we face now is a deluge of a different order, not the copying of an original, but the generation of the ostensibly original itself. This synthetic reproducibility, instant and infinite, does not so much wash away the aura of the artwork as dissolve the very ground from which aura once grew. For the artist, this marks a profound reordering, a passage through a great filter that demands a reckoning with why creation matters in a world saturated with the facsimile of creation.

The crisis is, at its heart, an economic one, born from the final victory of exhibition value over all else. Benjamin saw how reproduction prised art from the domain of ritual, making it a political, exhibitable object. AI hyper-production perfects this shift, creating a universe of content whose sole purpose is to be displayed, circulated, and consumed, utterly detached from any ritual of human making. When ten thousand competent images can be summoned to fill a website’s empty corners, the market value of such functional work collapses. The commercial artist is stranded, their skill rendered not scarce but superfluous in a marketplace where the exhibitable object has been liberated from the cost of its production.

This leads to the deafening, companion problem: the drowning-out effect. If everything can be exhibited, then nothing is seen. The channels of distribution become clogged with a spectral, ceaseless tide, a ‘slop’ of algorithmic potential. Discovery becomes a lottery. In this storm of accessibility, the scarce resource is no longer the means of production, but attention. And attention, in such a climate, refuses to be captured on the mass scale that the logic of exhibition value demands; it must be cultivated in the intimate, shadowed spaces the floodlight cannot reach.

Consequently, the artist’s identity fractures and reassembles. The role shifts from creator to curator, editor, and context-engineer. If the machine handles the ‘how,’ the human value retreats into the realms of conception, discernment, and judgement. The artistic self becomes a more ghostly thing, defined less by the manual trace and more by the authority of selection and the narrative woven around the chosen fragment. For some, this is a liberation from tradition’s heavy hand; for others, it feels like the final severance of that ‘unique phenomenon of a distance, however close it may be’ that once clung to the hand-wrought object.

This forced evolution makes brutally clear a distinction that has long been blurred: the split between ‘content’ and ‘art.’ ‘Content’ is the pure, polished exhibit. It is information, filler, ornament, the fulfilment of a demand. For this, the synthetic process is peerless. ‘Art,’ however, must now be defined by what it stubbornly retains or reclaims. It must be an act where the process and the human context are the irreducible core, where the value cannot be extracted from the texture of its making. Its purpose shifts from exhibition back towards a new kind of ritual, not of cult, but of verifiable human connection. The artist must now choose which master they serve.

The only viable path, therefore, is a strategic retreat to the domains that presence in time and space still governs. Since the object alone is forever suspect, value must be painstakingly rebuilt around radical context and provenance. The aura must be consciously, authentically reconstructed. This becomes the artist’s new, urgent work.

The story of the object’s making is now its last line of defence. The narrative, the intention, the struggle, the trace of the human journey, ceases to be a mere accompaniment and becomes the primary text. Proof of origin becomes a sacred credential. As a latter-day witness to this crisis noted, the only territory where authenticity can now be assured is “in the room with the person who made the thing.” The live performance, the studio visit, the act of co-creation: these are no longer secondary events but the central, unassailable offering. Here, art reclaims its here and now, its witnessable authenticity in a shared moment that no algorithm can simulate or inhabit.

The Unmaking of the Unique

Thus, hyper-production functions as a great filter. It mercilessly commoditises the exhibit, washing away the economic model of the last century. In doing so, it forces a terrible, clarifying question upon every practitioner: what can you anchor your work to that is beyond the reach of synthetic reproduction?

The emerging responses are maps of this new terrain. Some become context engineers, building immersive narratives where the work is a relic of a true human story. Others become synthesist collaborators, directing the machine with a voice of defiantly human taste. A faction turns resolutely physical, seeking refuge in the stubborn, three-dimensional ‘thingness’ that defies flawless digital transference. Yet others become architects of experience, crafting frameworks for interaction where the art is the fleeting, collective moment itself. And many will retreat to cultivate a deep niche, a dedicated community for whom the human trace is the only currency that holds value.

The flood will not cease. The exhibition value of the world will be met, and exceeded, by synthetic means. But this crisis, by shattering the professionalised model, may ironically clear the way for a return to art’s first principles: not as a commodity for distribution, but as a medium for human connection, a testament of presence, and a ritual of shared meaning. The future of art lies not in battling the currents of reproduction, but in learning to build arks, vessels of witnessed, authentic experience that can navigate the vast and glittering, but ultimately hollow, sea of the endlessly exhibitable.

The Playbook: What the Left Can Learn from the Right’s Online War Part 2

The far-right’s online dominance is not an accident. It stems from savvy, adaptive tactics that exploit platform algorithms, human psychology, and cultural voids, turning fringe ideas into mainstream forces. While the left should never mimic their toxic elements like hate and disinformation, there is immense value in borrowing their structural and strategic tools to counter far-right gains.

Drawing from recent analyses, the key is ethical adaptation: using their methods to focus on hope, facts, and inclusivity, creating “alt-left pipelines” that radicalise people toward justice, not division.

Here are five transferable lessons for a progressive counter-strategy.

1. Build a Multi-Voice “Roster” for Narrative Dominance (The WWF Model)

The Right’s Method: They succeed with a diverse “ecosystem of creators”—intellectuals, meme-makers, and podcasters—who cross-promote and create social immersion. This “multiplicity of voices” normalises extremism, turning a single opinion into a perceived chorus.

The Left’s Deployment: Create a “Red-Green roster” of 20-50 voices (eco-activists, union organisers, TikTok storytellers) focused on core issues like inequality and climate. Use X Spaces for collaborative “story arcs” and fund collaborations through platforms like Patreon to foster community. The goal is viral, relatable formats that explain complex issues simply, like “why your rent doubled.”

2. Craft Gradual “Pipelines” for Positive Radicalisation

The Right’s Method: Their infamous “alt-right pipeline” hooks users with benign frustrations (e.g., “woke overreach”) then uses algorithms to pull them into echo chambers. This process of self-radicalisation happens without overt pushes.

The Left’s Deployment: Design an “alt-left pipeline” that starts with empowering content, like TikToks on “union wins” or stories of community success. This can funnel users to deeper dives on podcasts or documentaries about systemic issues. Ethically used AI tools could even offer personalised recommendations that target disillusioned centrists with messages of hope, addressing alienation head-on.

3. Weaponise Memes, Humour, and Emotional Storytelling

The Right’s Method: Irony, memes, and “outrage farming” create addictive engagement that polarises audiences and evades content moderation. They tap into real anger but channel it with simplistic, divisive narratives.

The Left’s Deployment: Flood platforms with joyful, subversive memes (“Billionaires vs. Your Rent” cartoons) and powerful, emotional stories, like videos of successful worker strikes. Use social media for provocative but substantive threads that expose hypocrisy. Focus on a “politics of substance” by creating new cultural symbols of solidarity, like remixing old union anthems for a modern audience.

4. Invest in Local Organising and Power-Building Networks

The Right’s Method: Online tactics are merely the recruitment arm for their offline infrastructure. They channel digital anger into real-world rallies and loyalty, building power from the ground up.

The Left’s Deployment: Mirror this by linking online campaigns directly to local action. Use platforms like Discord for one-on-one recruitment based on what matters to people in their communities. Channel energy into sustained wins, like establishing tenants’ unions or mutual aid groups, rather than chasing fleeting viral moments.

5. Play the Long Game of Institutional Capture and Patience

The Right’s Method: They understand that short-term wins like elections are secondary to long-term cultural entrenchment. They play the “long game,” infiltrating institutions like local school boards and media outlets over decades.

The Left’s Deployment: Shift from reactive online debates to proactive, institution-building. This means creating progressive media co-ops, getting involved in local governance, and controlling the narrative with preemptive framing (e.g., “Before you ask about taxes, here’s how billionaires dodge them”). As mainstream platforms become more toxic, this also means scaling safely on decentralised alternatives like Bluesky or Mastodon.

Ethical Guardrails and Risks

Any adaptation of these methods must prioritise anti-hate safeguards and robust fact-checking to avoid the pitfalls of disinformation. The goal is to turn the right’s tactics of scarcity and division into a new strategy of abundance and solidarity. The left’s greatest advantage is substance; these tools can help make that substance go viral.

The Playbook: What the Left Can Learn from the Right’s Online War Part 1

The alt-right’s online dominance stems from savvy, adaptive tactics that exploit platform algorithms, human psychology, and cultural voids, turning fringe ideas into mainstream forces through emotional resonance and community building. While the left should never mimic their toxic elements (e.g., hate, disinformation), there’s value in borrowing structural and strategic tools to counter far-right gains and rebuild progressive momentum.

Drawing from 2025 analyses, the key is ethical adaptation: focus on hope, facts, and inclusivity to create “alt-left pipelines” that radicalise toward justice, highlighting economic inequality rather than racial division.

Below are transferable lessons with deployment ideas tailored for a progressive agenda.

1. Build a Multi-Voice “Roster” for Narrative Dominance (The WWF Model)

  • Lesson from Alt-Right: They succeed via a diverse “ecosystem” of creators—intellectuals, meme-makers, podcasters—who cross-promote, feud playfully, and create social immersion, making ideas feel organic and inescapable (e.g., from Jordan Peterson to Nick Fuentes). This multiplicity normalises extremism, as one voice becomes a chorus.
  • Action Point: Create a “Red-Green roster” of 20-50 voices (e.g., eco-activists, union organisers, TikTok storytellers) focused on inequality/climate. Use X Spaces for collaborative “story arcs” (e.g., debates on wealth taxes) and Patreon-funded collabs to foster community. Aim for viral, relatable formats like short explainers on “why your rent doubled.” In 2025, leverage decentralised platforms to evade moderation while building loyalty.

2. Craft Gradual “Pipelines” for Positive Radicalisation

  • Lesson from Alt-Right: Their pipeline hooks users with benign frustrations (e.g., “woke overreach”) then escalates via algorithms to echo chambers, blending humour and validation to build commitment. This self-radicalises without overt pushes.
  • Action Point: Design an “alt-left pipeline” starting with empowering content (e.g., TikToks on “union wins” or “free college stories”) that funnels to deeper dives (e.g., podcasts on systemic racism). Use AI tools ethically for personalised recommendations, targeting disillusioned centrists with “hope hooks” like community success tales. Avoid outrage; emphasise “business offers” (e.g., “Join for better wages”). A 2025 survey shows this could sway working-class voters by addressing alienation head-on.

3. Weaponise Memes, Humour, and Emotional Storytelling

  • Lesson from Alt-Right: Irony, memes, and outrage farming (e.g., baiting replies for algorithmic boosts) create addictive engagement, polarising while evading bans. They tap anger over issues like immigration but dilute for broad appeal.
  • Action Point: Flood platforms with joyful, subversive memes (e.g., “Billionaires vs. Your Rent” cartoons) and emotional narratives (e.g., worker strike videos with uplifting arcs). Use X for “provocative but substantive” threads that provoke right-wing overreactions, then amplify the absurdity to highlight hypocrisy. Focus on “politics of substance” like cultural symbols of solidarity (e.g., union anthems remixed). In 2025, prioritise TikTok/Reels for Gen Z, where emotionally charged content drives 2x engagement.

4. Invest in Local Organising and Power-Building Networks

  • Lesson from Alt-Right: Online tactics feed offline infrastructure (e.g., rallies channelling frustration into loyalty), absorbing dissent via co-optation and purges. They build from the ground up, turning digital anger into real power.
  • Action Point: Mirror this by linking online campaigns to local “power rosters” (e.g., neighbourhood groups for mutual aid). Use X/Discord for one-on-one recruitment: “What matters to you? Let’s organise.” Channel energy into sustained wins like tenant unions, not just viral moments. 2025 reports stress matching right-wing billionaire media with grassroots funding for community hubs. Avoid Alinsky-style baiting; instead, “grey rock” trolls with factual redirects.

5. Pursue Long-Term Institutional Capture and Patience

  • Lesson from Alt-Right: They play the “long game” (e.g., infiltrating education/media over decades), using feigned ignorance to waste opponents’ time and normalise via backlash. Short-term wins (e.g., elections) are secondary to cultural entrenchment.
  • Action Point: Shift from reactive “debates” to proactive institution-building (e.g., progressive media co-ops, school boards). Use “inb4” preemptive framing (e.g., “Before you ask about taxes, here’s how billionaires dodge them”) to control narratives. In 2025, amid platform toxicity, decentralise to Bluesky/Mastodon for safe scaling. Measure success by sustained engagement, not viral spikes.

Ethical Guardrails and Risks

Adaptations must prioritise anti-hate safeguards (e.g., community guidelines against doxxing) and robust fact-checking to avoid disinformation pitfalls. Risks include internal purges and echo-chamber toxicity, as seen in past left online spaces.

The goal: Turn alt-right “tactics of scarcity” into left abundance—building power through solidarity, not division. As one 2025 analysis notes, the left’s edge is substance; deploy these tools to make it viral.

Smoke and Mirrors: Forget the small boats. The Real Mass Migration is Digital.

The Fourth World is Coming. It’s Just Not What You Think.

What if the biggest migration in human history isn’t human at all? There’s a theory doing the rounds that frames the AI revolution as just that: an “unlimited, high-IQ mass migration from the fourth world.” It argues we’re witnessing the arrival of a perfect labour force—smarter than average, infinitely scalable, and working for pennies, with none of the messy human needs for housing or cultural integration. It’s a powerful idea that cuts through the jargon, but this perfect story has a fatal flaw.

The biggest lie the theory tells is one of simple replacement. It wants you to believe AI is an immigrant coming only to take your job, but this ignores the more powerful reality of AI as a collaborator. Think of a doctor using an AI to diagnose scans with a level of accuracy no human could achieve alone; the AI isn’t replacing the doctor, it’s making them better. Mainstream labour-market forecasts suggest that while millions of jobs will vanish, even more will be created, meaning the future isn’t about simple replacement, but something far more complex.

If the first mistake is economic, the second is pure Hollywood fantasy. To keep you distracted, they sell you a story about a robot apocalypse, warning that AI will “enslave and kill us all” by 2045. Frankly, this sort of talk doesn’t help. Instead of panicking, we should be focused on the very real and serious work of AI alignment right now, preventing advanced systems from developing dangerous behaviours. The focus on a fantasy villain is distracting us from the real monster already in the machine.

That monster has a name: bias. The theory celebrates AI’s “cultural neutrality,” but this is perhaps its most dangerous lie. An AI is not neutral; it is trained on the vast, messy, and deeply prejudiced dataset of human history, and without careful oversight, it will simply amplify those flaws. We already see this in AI-driven hiring and lending algorithms that perpetuate discrimination. A world run by biased AI doesn’t just automate jobs; it automates injustice.

This automated injustice isn’t a bug; it’s a feature of the system’s core philosophy. The Silicon Valley credo of ‘move fast and break things’ has always been sold as a mark of disruptive genius, but we must be clear about what they actually intend to ‘break’: labour laws, social cohesion, and ethical standards are all just friction to be optimised away. This isn’t theoretical; these same tech giants are now demanding further deregulation here in the UK, arguing that our rules are what’s slowing down their ‘progress’. They see our laws not as protections for the public, but as bugs to be patched out of the system, and they have found a government that seems dangerously willing to listen.

But while our own government seems willing to listen to this reckless philosophy, the rest of the world is building a defence. This isn’t a problem without a solution; it’s a problem with a solution they hope you’ll ignore. UNESCO’s Recommendation on the Ethics of Artificial Intelligence is the world’s first global standard on the subject—a human-centric rulebook built on core values like fairness, inclusivity, transparency, and the non-negotiable principle that a human must always be in control. It proves that a different path is possible, which means the tech giants have made one last, classic mistake.

They have assumed AI is migrating into a world without rules. It’s not. It’s migrating into a world of laws, unions, and public opinion, where international bodies and national governments are already waking up. This isn’t an unstoppable force of nature that we are powerless to resist; it is a technology that can, and must, be shaped by democratic governance. This means we still have a say in how this story ends.

So, where does this leave us? The “fourth world migration” is a brilliant, provocative warning, but it’s a poor map for the road ahead. Our job isn’t to build walls to halt this migration, but to set the terms of its arrival. We have to steer it with ethical frameworks, ground it with sensible regulation, and harness it for human collaboration, not just corporate profit. The question is no longer if it’s coming, but who will write those terms.

This Isn’t a Drill. This is Your Guide to Resisting the Brit Card.

Feeling powerless is part of the plan. They want you to believe this is all too big, too technical, and too inevitable to fight. They are counting on your resignation as they assemble the cage around you, piece by piece, hoping you’ll be too tired or distracted to notice. But their entire, multi-billion-pound system has a fatal flaw, a single point of failure. That single point of failure is you.

We have options. They require effort, courage, and a refusal to be intimidated. Here’s a breakdown of the response options we have as citizens, from the simple to the deeply committed.

1. The Information War: Know Your Enemy and Spread the Word

First, don’t be a passive consumer of this. The primary battleground right now is awareness.

  • Educate Yourself and Others: Read everything you can. Understand the technology (Foundry, Gotham), the key players (Palantir, Peter Thiel), and the political machinations. When you talk about it, be informed. Use the facts.
  • Share Intelligently: Don’t just scream into the social media void. Share the articles and the evidence with people in your life who might listen. Send it to your family WhatsApp group. Talk about it with friends. The aim is to break this story out of the ‘conspiracy’ box and into the mainstream conversation.
  • Frame the Debate Correctly: When you talk about it, don’t let them frame it as “convenience vs. privacy.” Frame it correctly: Freedom vs. Control. It’s not about faster logins; it’s about the state’s ability to switch you off.

2. Political Pressure: Rattle the Cage

The system might feel rigged, but it’s not soundproof. They still need a veneer of public consent.

  • Your MP is Your Employee: Write to your MP. Don’t send a generic email; send a pointed one with specific questions. “Have you read Palantir’s contracts with the NHS?” “What are your specific concerns about linking a Digital ID to their software?” “Will you publicly pledge to vote against any mandatory Digital ID scheme?” Go to their local surgery and ask them face-to-face. Record their answer.
  • Support Advocacy Groups: Organisations like Big Brother Watch, the Open Rights Group, and others are fighting this at a policy level. Support them. Amplify their work. They have the resources to launch legal challenges and lobby Parliament effectively.
  • Sign and Share Petitions: While they can sometimes feel like shouting into the wind, official parliamentary petitions that reach a certain threshold must be debated. It forces the issue onto the official record.

3. Economic Resistance: Starve the Beast

This is a big one, and it’s where we have more power than we think.

  • Use Cash: This is the single most powerful act of passive resistance. Every note you spend is a small vote for privacy, for anonymity, and against a fully traceable digital currency. When shops ask you to pay by card, politely refuse where you can. Make cash a visible, normal part of daily life.
  • Scrutinise Your Services: Look at the companies you do business with. Is your bank a partner in the new identity frameworks? Does your tech provider have a record of collaboration with state surveillance? Where possible, move your money and your data away from those who are building the cage.
  • Support Privacy-First Technology: Use encrypted messaging apps like Signal. Use privacy-respecting search engines. Ditch services that harvest your data as their business model. The more of us who do this, the more we normalise privacy.

4. The Final Line of Defence: Non-Compliance

This is the sharp end of it, and it requires real resolve.

  • Refuse to Volunteer: When the Digital ID is first rolled out, it will be “optional.” Do not opt-in. Do not download the app. Do not be a guinea pig for your own cage. The lower the initial uptake, the harder it is for them to claim it has public support and the more difficult it becomes to make it mandatory.
  • Public Protest: If and when the time comes, be prepared to take to the streets. Peaceful, mass protest is a fundamental British right and a powerful part of our history. It shows the government that public anger is real and cannot be ignored.
  • Build Local Resilience: The more we rely on centralised state and corporate systems, the more power they have over us. Support local businesses. Start community skill-sharing networks. Build relationships with your neighbours. The more resilient and self-sufficient our communities are, the less we need their systems.

None of these is a magic bullet. But they are not mutually exclusive. We can do all of them. It’s about creating a multi-fronted resistance: informational, political, economic, and social.

They are counting on us to be too tired, too distracted, and too divided to fight back. Let’s disappoint them.

The easiest thing to do is sign the petition:
“Do not introduce Digital ID cards”
https://petition.parliament.uk/petitions/730194

If you are an investor, you could move your holdings from the following funds to more ethical ones:

Top 10 Largest Institutional Holders of shares in Palantir. The following table lists the top holders by shares outstanding, including shares held, percentage of total shares, and approximate value (based on recent market prices of around $177–$180 per share).

| Rank | Institution / Fund Name | Shares Held | % of Shares Outstanding | Value (USD) |
|------|-------------------------|-------------|-------------------------|-------------|
| 1 | Vanguard Total Stock Market Index Fund | 69.13M | 3.17% | $12.28B |
| 2 | Vanguard 500 Index Fund | 60.38M | 2.77% | $10.72B |
| 3 | Invesco QQQ Trust, Series 1 | 46.48M | 2.13% | $8.25B |
| 4 | Fidelity 500 Index Fund | 26.96M | 1.24% | $4.79B |
| 5 | SPDR S&P 500 ETF Trust | 26.02M | 1.19% | $4.62B |
| 6 | iShares Core S&P 500 ETF | 25.41M | 1.17% | $4.51B |
| 7 | Vanguard Growth Index Fund | 22.38M | 1.03% | $3.97B |
| 8 | The Technology Select Sector SPDR Fund | 17.13M | 0.79% | $3.04B |
| 9 | Vanguard Information Technology Index Fund | 13.37M | 0.61% | $2.37B |
| 10 | Vanguard Institutional Index Fund | 13.04M | 0.60% | $2.32B |
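As a quick sanity check on the table, dividing each holding’s approximate value by its shares held should give a per-share price inside the stated $177–$180 range. A minimal sketch, using the first five rows of the table above:

```python
# Sanity-check the holdings table: value / shares should imply a
# per-share price inside the stated $177-$180 range for every row.
holdings = {
    "Vanguard Total Stock Market Index Fund": (69.13e6, 12.28e9),
    "Vanguard 500 Index Fund": (60.38e6, 10.72e9),
    "Invesco QQQ Trust, Series 1": (46.48e6, 8.25e9),
    "Fidelity 500 Index Fund": (26.96e6, 4.79e9),
    "SPDR S&P 500 ETF Trust": (26.02e6, 4.62e9),
}

implied_prices = {name: value / shares for name, (shares, value) in holdings.items()}

for name, price in implied_prices.items():
    assert 176 <= price <= 180, f"{name}: implied ${price:.2f} is outside the range"
    print(f"{name}: ~${price:.2f}/share")
```

Every row works out to roughly $177.5 per share, so the shares, percentages, and values are internally consistent.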

Palantir & Brit Card: The Final Piece of the Surveillance State.

To understand what’s coming with the mandatory “Brit Card,” you first have to understand who is already here. The scheme isn’t appearing out of thin air; it’s the logical capstone on an infrastructure that has been quietly and deliberately assembled over years by a single, dominant player: Palantir. Their involvement isn’t just possible—it’s the probable, planned outcome of a strategy that serves both their corporate interests and the UK government’s long-held ambitions.

Let’s be clear about the facts. Palantir isn’t some new bidder for a government contract; they are already embedded, their surveillance tentacles wrapped around the core functions of the British state. They have over two dozen contracts, including with the NHS to analyse patient data, the Ministry of Defence for military intelligence, and police forces for “predictive policing.” They are in the Cabinet Office, they are in local government. They are, in essence, the state’s private intelligence agency.

This is a company forged in the crucible of the CIA and the NSA, whose entire business model is to turn citizen data into surveillance gold. Their track record is one of mass surveillance, racial profiling algorithms, and profiting from border control and deportations. To believe that this company would be hired to build a simple, privacy-respecting ID system is to wilfully ignore everything they are and everything they do. The “Brit Card” is not a separate project for them. It is the keystone—the final piece that will allow them to link all their disparate data streams into one terrifyingly complete surveillance engine, with every UK adult forced onto its database.

But to grasp the scale of the threat, you have to ask why this is happening here, in the UK, and not anywhere else in Europe. This isn’t a happy accident; it’s a deliberate strategy. Palantir has chosen the UK for its European Defence HQ for a very simple reason: post-Brexit Britain is actively marketing itself as a deregulated safe harbour.

The UK government is offering what the EU, with its precautionary principles and landmark AI Act, cannot: regulatory flexibility. For a company like Palantir, whose business thrives in the grey areas of ethics and law, the EU is a minefield of compliance. The UK, by contrast, is signalling that it’s willing to write the rules in collaboration with them. The government’s refusal to sign the Paris AI declaration over “national security” concerns was not a minor diplomatic snub; it was the smoking gun. It was a clear signal to Silicon Valley that Britain is open for a different kind of business, one where restrictive governance will not get in the way of profit or state power.

This brings us to the core of the arrangement: a deeply symbiotic relationship. The UK government offers a favourable legal environment and waves a giant chequebook, with an industrial policy explicitly geared towards making the country a hub for AI and defence tech. The MoD contracts and R&D funding are a direct financial lure for predatory American corporations like Palantir, BlackRock, and Blackstone, inviting them to make deep, strategic incursions into our critical public infrastructure.

This isn’t charity, of course. In return, Palantir offers the government the tools for mass surveillance under the plausible deniability of a private contract. By establishing its HQ here, Palantir satisfies all the sovereign risk and security concerns, making them the perfect “trusted” partner. It’s a perfect feedback loop: the government signals its deregulatory intent, the money flows into defence and AI, and a company like Palantir responds by embedding itself ever deeper into the fabric of the state.

This isn’t about controlling immigration. It’s about building the infrastructure to control citizens. We are sacrificing our regulatory sovereignty for a perceived edge in security and technology, and in doing so, we are rolling out the red carpet for the very companies that specialise in monitoring us. When the firm that helps the CIA track its targets is hired to build your national ID card, you’re not getting documentation. You’re getting monitored.

Your New Digital ID Isn’t For Convenience. It’s For Control.


The Digital Back Door: Why a National ID is the End of a Free Society

Every breath you take
And every move you make
Every bond you break
Every step you take
I’ll be watching you

Lyrics: “Every Breath You Take” – Gordon Sumner (Sting), The Police

There’s a pitch being sold to the British public, dressed up in the language of convenience and national security. It’s the idea of a Digital ID for every adult, a neat, modern solution to complex problems like illegal migration.

I can tell you this isn’t progress. It’s the architecture of a control system, a Trojan horse that smuggles a surveillance state in under the guise of efficiency. It is the end of a free society, and we are sleepwalking towards it.

Let’s start by dismantling the primary justification: fixing the border. The claim that a Digital ID will stop the boats is, to put it plainly, bollocks. It will not stop trafficking gangs, nor will it fix a fundamentally broken system. Criminals and their networks are, by their very nature, experts at working around systems; they adapt faster than bureaucracies can legislate. The ones who will pay the price for this vast, expensive, and dangerous infrastructure will not be the criminals, but the honest, law-abiding citizens of this country.

The fundamental flaw lies in a concept I deal with daily: centralised risk. We spend hundreds of billions a year on cybersecurity, yet the volume and severity of data breaches are breaking records. The threat grows faster than the spend. From Jaguar Land Rover to major airports, no centralised system has proven impenetrable. Now, imagine that vulnerability scaled up to a national level, with a single database linking your identity to every checkpoint of daily life: where you go, what you buy, what you read, and who you speak to.

Here is the risk that ministers will not admit. A sophisticated ransomware attack, seeded quietly through a compromised supplier or a disgruntled insider, lies dormant for months. It slowly rolls through the backups, undetected. Then, on trigger day, the live registry and every recovery set are encrypted simultaneously. The country grinds to a halt. Payments fail. Health and benefits systems stall. Borders slow to a crawl. Citizens are frozen out of their own lives until a ransom is paid or the state is forced to rebuild the nation’s identity from scratch. To centralise identity is to centralise failure.

This, however, is only the technical risk. The greater political and social danger lies in the certainty of function creep. It will begin as an optional, convenient way to log in or prove your age. But it will not end there. It will inevitably become a mandatory prerequisite for accessing money, travel, employment, and essential public services. Our fundamental rights will be turned into permissions, granted or revoked by the state and its chosen corporate contractors.

This isn’t a theoretical dystopian future; it’s a documented reality. India’s Aadhaar system, initially for welfare, now underpins everything from banking to mobile phones and has been plagued by data leaks exposing millions to fraud. We are seeing the groundwork laid in the UK with the Digital Identity and Attributes Trust Framework (DIATF), a federated model reliant on a network of private suppliers like Yoti, Hippo Digital, and IDEMIA. This multi-vendor approach doesn’t eliminate risk; it multiplies the potential points of failure through a web of interconnected APIs, each a potential back door for attackers.
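The claim that a multi-vendor approach multiplies points of failure can be made concrete with basic probability: if each supplier API can be breached independently with some probability p in a given year, the chance that at least one is breached grows as 1 − (1 − p)^n. A toy sketch, with a purely illustrative per-supplier figure:

```python
# Toy model of the "multiplied points of failure" claim: assume each
# supplier API in a federated identity scheme is breached independently
# with probability p per year (p is purely illustrative, not a real figure).
def chance_of_any_breach(p: float, n_suppliers: int) -> float:
    """Probability that at least one of n independent APIs is breached."""
    return 1 - (1 - p) ** n_suppliers

p = 0.05  # hypothetical 5% annual breach probability per supplier
for n in (1, 5, 20, 50):
    print(f"{n:>2} suppliers -> {chance_of_any_breach(p, n):.1%} chance of at least one breach")
```

Under these assumed numbers, one supplier means a 5% annual risk, but twenty suppliers push the chance of at least one breach past 60%. Adding vendors shrinks no one’s attack surface; it enlarges everyone’s.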

Furthermore, this system is built on a foundation of exclusion. The assumption of universal digital literacy is a dangerous fiction. With a significant percentage of UK adults lacking basic digital skills, a mandatory Digital ID will create a two-tier society. The elderly, the poor, and the vulnerable—those who cannot or will not comply—risk being locked out of the services they need most, deepening inequality and fuelling social unrest.

The gravest danger, however, emerges when this infrastructure is placed in the context of a crisis. Economic collapse, social unrest, or an environmental emergency often serves as the justification for an expansion of state power. A Digital ID system provides the ready-made tool for authoritarianism. In a crisis, it could be repurposed to monitor dissent, freeze the bank accounts of protesters, or restrict the movement of individuals deemed a threat. It builds, by stealth, the machinery for a social credit system.

And this brings us to the corporate engine waiting to power this machine: Palantir. The US data-mining firm is already deeply embedded within the UK state, with contracts spanning the NHS and the Ministry of Defence. Palantir doesn’t need a specific contract for the “Brit Card”; its platforms, Foundry and Gotham, are designed to do precisely what a Digital ID enables on a mass scale: fuse disparate datasets into a single, all-encompassing profile for every citizen.

The Digital ID would be the “golden record” that connects your health data, your financial transactions, your movements, and your communications. In a crisis, Palantir’s AI could be used for predictive surveillance—flagging individuals who enter a “protest zone” or whose transactions go to “undesirable” organisations. This isn’t just a British system; with Palantir’s deep ties to US intelligence, it becomes a system subject to foreign demands under legislation like the CLOUD Act. We would be outsourcing our national sovereignty.

The entire premise is flawed. If the government were serious about the border, it would enforce current laws, properly resource patrols and processing, and close existing loopholes. You do not need to build a panopticon to do that. We scrapped ID cards in 2010 for a reason, recognising their threat to our fundamental liberties. Reintroducing them through the digital back door, outsourced to a network of private contractors and data-mining firms, is a monumental error.

There are better ways. Decentralised alternatives using cryptographic methods like zero-knowledge proofs can verify status or identity without creating a central honeypot of data. But these privacy-first solutions lack government traction because the true, unstated goal is not security or convenience. It is control. We must not fall for the pitch. This is a system that will centralise risk and outsource blame. It will punish the vulnerable while failing to stop the criminals it targets. It is the foundation for a future where our rights are contingent on our compliance. The choice is simple: yes to privacy-first proofs, no to a database state.
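The principle behind those decentralised alternatives can be illustrated with a toy attestation scheme. This is not a real zero-knowledge proof, just a minimal sketch of the idea: an issuer attests to a single predicate (e.g., “over 18”) rather than your full identity record, and a verifier checks that token locally, with no central database lookup and no extra attributes revealed. The HMAC here stands in for a proper digital signature; in practice the verifier would check a public-key signature rather than share the issuer’s secret key.

```python
import hashlib
import hmac
import secrets

# Toy sketch of privacy-first verification (NOT an actual ZKP): the issuer
# signs one predicate about the holder, and verification happens offline,
# so there is no central honeypot of identity data to breach.
ISSUER_KEY = secrets.token_bytes(32)  # held by the issuing authority only

def issue_token(predicate: str) -> bytes:
    """Issuer attests to a single predicate (e.g. 'over_18')."""
    return hmac.new(ISSUER_KEY, predicate.encode(), hashlib.sha256).digest()

def verify_token(predicate: str, token: bytes) -> bool:
    """Verifier checks the attestation without learning anything else."""
    expected = hmac.new(ISSUER_KEY, predicate.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

token = issue_token("over_18")
assert verify_token("over_18", token)               # the predicate holds
assert not verify_token("licensed_driver", token)   # nothing else is disclosed
```

Even this crude sketch shows the architectural difference: verification is a local check of an attestation, not a query against a national registry.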

Beware the all-seeing eye!

Polycrisis? What Polycrisis? Metacrisis? What Metacrisis?

1. Polycrisis

Core Idea: A Polycrisis is a situation in which multiple, distinct crises interact in such a way that their overall impact is far greater than the sum of each crisis’s individual effects. The crises are interconnected and exacerbate one another, creating cascading failures across systems.

Key Characteristics:

  • Multiple, Separate Crises: It begins with several identifiable crises (e.g., an energy crisis, a food crisis, a geopolitical crisis).
  • Synergistic Interaction: These crises are not happening in isolation. They are interconnected, so that one crisis worsens another.
  • Cascading Effects: A shock in one system (like finance) triggers failures in another (like supply chains), which then impacts a third (like political stability).
  • Systemic Nature: The problem is not the individual crises themselves, but the dysfunctional connections between the systems they inhabit.
  • Manageable (in theory): The individual component crises can, in principle, be addressed with existing tools and frameworks, though the interaction makes it extremely difficult.

Classic Example: The 1970s Oil Shock

  1. Geopolitical Crisis: The OPEC oil embargo.
  2. Energy Crisis: A sharp rise in oil prices, causing fuel shortages.
  3. Economic Crisis: Stagflation (high inflation + high unemployment + slow growth).
    These three crises fed into each other, creating a global polycrisis that was more severe than any one of them alone.

Recent Example: The COVID-19 Polycrisis
The pandemic interacted with and amplified pre-existing crises:

  • Health Crisis: The virus itself.
  • Supply Chain Crisis: Lockdowns disrupted global logistics.
  • Economic Crisis: Massive stimulus, leading to inflation.
  • Geopolitical Crisis: Increased tensions between major powers.
    The interaction of these elements created a global situation far more complex and damaging than the pandemic alone.

Analogy: An orchestra where several sections (strings, brass, woodwinds) all start playing the wrong notes at the same time. The result is a cacophony that is much worse than a single musician being out of tune. The problem is the combination of failures.
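The “greater than the sum of its parts” claim can be sketched numerically. In this toy model (all figures illustrative), each crisis has a standalone severity, and every pair of interacting crises contributes an extra coupling term, so the combined total exceeds the simple sum:

```python
from itertools import combinations

# Toy polycrisis model: standalone severities plus a pairwise interaction
# term for each coupled pair of crises. All numbers are illustrative.
severity = {"energy": 3.0, "food": 2.0, "geopolitical": 4.0}
coupling = 0.5  # extra severity contributed per unit of each interacting pair

standalone_total = sum(severity.values())
interaction_total = sum(coupling * severity[a] * severity[b]
                        for a, b in combinations(severity, 2))
polycrisis_total = standalone_total + interaction_total

print(f"sum of parts:      {standalone_total}")
print(f"interaction terms: {interaction_total}")
print(f"polycrisis total:  {polycrisis_total}")
```

With these numbers the standalone crises sum to 9.0, but the interactions add another 13.0: the coupling, not the individual shocks, dominates the outcome, which is exactly the polycrisis argument.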


2. Metacrisis

Core Idea: The Metacrisis (or The Meta-Crisis) is a broader, deeper concept. It refers not to a set of interacting crises, but to the underlying, shared root system that generates these polycrises and individual crises in the first place. It’s the “crisis of crises.”

Key Characteristics:

  • A Single, Meta-Problem: The Metacrisis is itself a singular, overarching phenomenon—a failure at the level of our operating system for civilisation.
  • Root Cause Focus: It points to the deep, often invisible, assumptions, values, and structures that make our systems prone to crisis. These include:
    • Short-termism in economics and politics.
    • Hyper-extractive relationship with the planet.
    • Reductionist worldview that ignores complexity and interconnectedness.
    • Outdated narratives about progress, growth, and human nature.
  • Generative: The Metacrisis doesn’t just describe current problems; it explains why we keep creating new ones. It’s the “crisis-generating system.”
  • Paradigm-Level: Solving the Metacrisis requires a fundamental shift in our consciousness, values, and paradigms—not just technical fixes or policy reforms.

Example: The Limits to Growth & Value Systems
The Metacrisis can be seen in the collision between our infinite-growth economic model and the finite boundaries of the planet (climate change, biodiversity loss). The polycrises that result are food shortages, extreme weather events, and migration crises. The Metacrisis is the underlying flaw: an economic and cultural system that is fundamentally misaligned with the biophysical reality of the Earth.

Analogy: If a computer keeps crashing due to different software errors (polycrises), the Metacrisis is the deeply flawed and outdated operating system that is the common source of all these errors. Fixing one software bug (solving one crisis) won’t help for long; the entire operating system needs an upgrade.


Comparison Table: Polycrisis vs. Metacrisis

| Feature | Polycrisis | Metacrisis |
|---------|------------|------------|
| Nature | An event or situation of interacting crises. | The underlying context or root system that generates crises. |
| Scope | Multiple, separate crises interacting. | A single, overarching meta-problem. |
| Focus | The symptoms and their synergistic effects. | The root causes and the “source code” of our systems. |
| Temporal View | Primarily looks at the present convergence of crises. | Looks at the long-term patterns that lead to recurring crises. |
| Solution Approach | System management: better coordination, resilience, and managing interconnections. | System transformation: a fundamental shift in paradigms, values, and goals. |
| Analogy | Multiple organ failures in a patient, each making the others worse. | The underlying chronic disease or unhealthy lifestyle that made the patient vulnerable. |

In short: A Polycrisis is the terrifying storm you are trying to navigate. The Metacrisis is the broken navigation system, the faulty weather models, and the reason you built a ship unfit for the ocean in the first place. You need to manage the storm (polycrisis) to survive, but you must fix the underlying flaws (metacrisis) to avoid the next one.