
A Field Guide to Reason: Human Logic, Cognitive Bias, and the AI Mirage
In 2026, the pursuit of truth is no longer a simple matter of “common sense.” We are navigating a world where human biological biases, ancient logical errors, and the “alien” irrationality of Artificial Intelligence have collided.
Many people have “farmed out” their thinking to machines, but those machines have their own systemic flaws—and the strategies used to “fix” them are often just as broken. To maintain your intellectual sovereignty, you must master the five dimensions of modern reason.
Part I: The Field Guide to Logical Fallacies (30 Common “Dirty Tricks”)
Logical fallacies are errors in reasoning that destroy the quality of an argument. Use this list to spot when a conversation is being derailed.
1. The Personal & Origin Attacks
- Ad Hominem: Attacking the person’s character rather than their message.
- Tu Quoque: Avoiding criticism by pointing out the critic’s own flaws.
- Genetic Fallacy: Judging an idea based solely on its source or origin.
- The Straw Man: Distorting an argument into a weaker version to easily tear it down.
- No True Scotsman: Redefining a group to exclude counter-examples (moving the goalposts).
2. The Emotional Appeals
- Appeal to Emotion: Using fear, pity, or anger instead of facts to win.
- Appeal to Pity: Invoking sympathy for a hardship to support an unrelated claim.
- Appeal to Fear: Scaring the audience into agreement by exaggerating threats.
3. Authority & Tradition
- Appeal to Authority: Using an expert’s opinion as proof without supporting evidence.
- Bandwagon Fallacy: Assuming something is true because it is popular.
- Appeal to Tradition: Claiming something is right because it’s “how we’ve always done it.”
- Appeal to Novelty: Arguing that something is superior simply because it is new.
- Personal Incredulity: Rejecting an idea because you find it hard to understand.
4. Data, Cause & Probability
- Hasty Generalization: Drawing a sweeping conclusion from a tiny, anecdotal sample.
- Post Hoc Ergo Propter Hoc: Assuming that because B followed A, A must have caused B.
- The Texas Sharpshooter: Cherry-picking data to fit a story while ignoring the rest.
- Gambler’s Fallacy: Believing that past independent events affect future probability.
- Shifting the Burden of Proof: Demanding that others disprove your claim instead of proving it yourself.
- False Analogy: Comparing two things that aren’t truly alike.
5. Diversion & Balance
- Red Herring: A distraction masquerading as a relevant point to shift the topic.
- False Dilemma: Presenting two extreme options as the only possibilities.
- Slippery Slope: Insisting that one small step will inevitably lead to a catastrophe.
- Loaded Question: A “trap” question that contains a built-in presumption of guilt.
- Argument from Ignorance: Claiming truth because something hasn’t been proven otherwise.
- Argument to Moderation: Assuming the truth lies exactly in the middle of two extremes.
6. Linguistic & Circular Games
- Begging the Question: A circular argument where the conclusion is assumed in the premise.
- Equivocation: Using the same word in two different ways to mislead.
- Non-Sequitur: A conclusion that simply does not logically follow from the premise.
- Sunk Cost Fallacy: Arguing to continue a path simply because of past investment.
- The Fallacy Fallacy: Assuming a claim is false simply because it was argued poorly.
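The Gambler's Fallacy is one of the few entries above you can disprove empirically. A minimal simulation sketch (fair coin, fixed seed, hypothetical helper name) shows that the flip immediately after a streak of heads is still roughly 50/50:

```python
import random

def heads_rate_after_streak(streak_len=5, flips=1_000_000, seed=42):
    """Estimate P(heads) on the flip immediately after `streak_len` straight heads."""
    rng = random.Random(seed)
    run = 0               # current run of consecutive heads
    after = total = 0
    for _ in range(flips):
        flip = rng.random() < 0.5   # True = heads
        if run >= streak_len:       # this flip follows a full streak
            total += 1
            after += flip
        run = run + 1 if flip else 0
    return after / total

rate = heads_rate_after_streak()
print(f"P(heads | 5 heads in a row) ~ {rate:.3f}")  # hovers near 0.500
```

The streak changes nothing: each flip is independent, so the conditional rate stays at chance no matter how long the run.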
Part II: The Engine Room (15 Mappings of Bias to Fallacy)
Cognitive biases are the biological “bugs” in our brain’s software. They predispose us to commit the fallacies listed above.
| Cognitive Bias | Linked Fallacy | The Connection |
| --- | --- | --- |
| Confirmation Bias | Cherry-Picking | We seek only the info that confirms our existing “pattern.” |
| Anchoring Bias | Part/Whole Fallacy | Our judgment is “stuck” to the first piece of info we encounter. |
| Hindsight Bias | Historian’s Fallacy | We retroactively assume the past was more predictable than it was. |
| Availability Heuristic | Hasty Generalization | We think something is common just because it’s “vivid” in our memory. |
| Sunk Cost Bias | Sunk Cost Fallacy | We irrationally weigh past effort over future utility. |
| Bandwagon Effect | Ad Populum | We equate the “majority view” with the “correct view.” |
| Authority Bias | Ad Verecundiam | We overvalue titles and credentials over raw evidence. |
| In-Group Bias | No True Scotsman | We protect our “tribe” by moving the goalposts for outsiders. |
| Belief Bias | Fallacy Fallacy | We accept a bad argument if we like the conclusion it reaches. |
| Projection Bias | Psychologist’s Fallacy | We assume everyone else shares our specific mental state. |
| Stereotyping | Genetic Fallacy | We judge an idea based on the “group” it belongs to. |
| Outcome Bias | Post Hoc | We judge the quality of a decision based solely on the result. |
| Dunning-Kruger | Personal Incredulity | Our lack of skill in an area makes us unable to see our own errors. |
| False Consensus | Ad Populum | We overestimate how much people agree with us. |
| Halo Effect | Non-Sequitur | We let one positive trait (like beauty) color our entire judgment. |
Part III: The Sophisticated Nuance (6 Truths of Logic)
Mastery of reason means knowing when the “rules” of logic are actually flexible.
- Context is King: Many “fallacies” are actually valid in certain contexts. Deferring to a scientific consensus (Appeal to Authority) is a sound way to handle uncertainty.
- The Power of the Enthymeme: Humans naturally omit “obvious” premises for efficiency. If you attack every incomplete argument as a “Non-Sequitur,” you aren’t being logical—you’re being pedantic.
- Fallacy-Hunting as a Weapon: Over-naming fallacies is often a form of poor reasoning used to stifle debate and avoid engaging with real-world inductive evidence.
- Bad Arguments ≠ Wrong Conclusions: A person can argue for the truth using a fallacy. Don’t dismiss a true fact just because the person speaking it is a poor advocate.
- Taxonomy is Arbitrary: Logical “rules” are cultural artefacts. What the West calls an “Appeal to Tradition,” other cultures call “Cultural Continuity.”
- Fallacies are Adaptive: We aren’t “bad at logic”; we are “good at survival.” Our biases were designed to help us make split-second decisions in a dangerous world.
Part IV: The Silicon Mirror (6 Truths of AI Reasoning Bias)
AI does not think like a human. It has “alien” biases rooted in its code and architecture.
- Position Bias: LLMs overemphasise information at the start and end of a prompt. Important evidence “buried in the middle” is often ignored by AI reasoners.
- Alien Irrationality: AI doesn’t have “emotions,” but it has “probabilistic bias.” It gives inconsistent answers to logic puzzles because it predicts tokens rather than understanding concepts.
- Linguistic Neocolonialism: AI models are biased toward English- and Western-language data. Reasoning in non-Western languages or cultural contexts is significantly less accurate.
- AI-AI Bias: Models have been found to favour machine-produced text over human-produced text, risking a self-reinforcing loop that disadvantages human creativity.
- The Interpretability Paradox: “Fixing” an AI bias often introduces new ones. Debiasing a model for social fairness often degrades its performance in math and technical logic.
- Intrinsic Stereotypes: Social biases are baked into the “embeddings” of AI architecture. Fine-tuning offers surface-level fixes, but the deep stereotypes persist under the hood.
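Position bias is the most directly measurable of these: plant a known fact (“needle”) at varying depths in a long context and score retrieval per depth. The sketch below is a toy harness; `stub_model` is a hypothetical stand-in for a real LLM client, hard-coded only so the harness runs end to end and mimics the reported start/end bias.

```python
def build_context(needle: str, filler: list[str], depth: float) -> str:
    """Insert `needle` at fractional `depth` (0.0 = start, 1.0 = end) of the filler."""
    i = round(depth * len(filler))
    return "\n".join(filler[:i] + [needle] + filler[i:])

def stub_model(context: str, question: str) -> str:
    # Hypothetical stand-in for a real model call. Hard-coded to mimic
    # position bias: it only "sees" the first and last 20% of the lines.
    lines = context.split("\n")
    k = max(1, len(lines) // 5)
    visible = lines[:k] + lines[-k:]
    return "42" if any("secret number is 42" in ln for ln in visible) else "unknown"

filler = [f"Background sentence {i}." for i in range(50)]
needle = "The secret number is 42."
for depth in (0.0, 0.5, 1.0):
    answer = stub_model(build_context(needle, filler, depth), "What is the secret number?")
    print(f"depth={depth:.1f} -> {answer}")
```

Swap the stub for a real model client and the same loop becomes a basic “lost in the middle” evaluation: accuracy at depth 0.5 dropping below the ends is the signature of the bias.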
Part V: The Mitigation Mirage (6 Truths of “Fixing” AI)
When tech companies claim they have “debiased” AI, the reality is far more complicated.
- The Impossibility of Total Fairness: You cannot satisfy all fairness metrics at once. Fixing one group’s bias often accidentally increases rejections for another group.
- Internal vs. External Fixes: Telling an AI “don’t be biased” (prompting) is fragile. Editing the model’s “brain” (Concept Editing) is better, but it often makes the model less accurate overall.
- Ethical Imperialism: AI mitigation tools export Western values. A “fair” model in New York may be deeply biased and harmful when deployed in an African healthcare setting.
- Bias Washing: Companies often use cheap audits to claim their AI is “fair” while avoiding the expensive work of fixing the underlying data or architecture.
- The RLHF Trap: “Human-in-the-loop” governance often just entrenches the specific subjective biases of the human curators who are training the AI.
- Model Drift: AI logic is not “set-it-and-forget-it.” As models ingest new data, old biases resurface, requiring a constant (and often ignored) cycle of expensive monitoring.
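The “impossibility of total fairness” claim reflects a real mathematical result: when two groups have different base rates, no useful classifier can equalize selection rates and error rates simultaneously. A toy illustration with made-up numbers (hypothetical groups and a deliberately perfect classifier):

```python
# Hypothetical toy data: group A has a 50% base rate of qualification,
# group B has 20%. Labels: 1 = qualified, 0 = not qualified.
group_a = [1] * 50 + [0] * 50
group_b = [1] * 20 + [0] * 80

def rates(labels, preds):
    """Return (selection rate, false-positive rate) for a set of predictions."""
    selection = sum(preds) / len(preds)
    false_pos = sum(1 for y, p in zip(labels, preds) if p and not y)
    return selection, false_pos / labels.count(0)

# A perfect classifier (predict = label) equalizes error rates...
sel_a, fpr_a = rates(group_a, group_a)
sel_b, fpr_b = rates(group_b, group_b)
print(sel_a, sel_b)   # 0.5 vs 0.2 -> demographic parity violated
print(fpr_a, fpr_b)   # 0.0 vs 0.0 -> false-positive rates equal

# ...while forcing equal selection rates (select 50% of each group)
# must accept unqualified members of group B:
forced_b = [1] * 50 + [0] * 50   # first 50 members of group B selected
sel_b2, fpr_b2 = rates(group_b, forced_b)
print(sel_b2, fpr_b2)  # selection now 0.5, but FPR jumps to 0.375
```

Whichever metric you equalize, the other breaks, so every “debiased” model is a choice about which unfairness to tolerate, not the removal of unfairness.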
Conclusion: The Reasoning Architect
In 2026, the goal of learning logic isn’t to “win” every argument. It’s to avoid the “mud-wrestling” pits of misinformation altogether.
By understanding these 30 fallacies, the 15 biases that fuel them, the 6 philosophical nuances, and the 12 flaws of AI and its mitigation, you move from being a consumer of information to being a Reasoning Architect.
Bookmark this guide and value nuance. Remember: the goal of logic isn’t to win, it’s to see the world as it truly is.
“Clowns to the left of me
Jokers to the right
Here I am, stuck in the middle with you”
Lyric from: “Stuck in the Middle with You”
Written by: Joe Egan, Gerald Rafferty
