Content provided by Phil McKinney. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Phil McKinney or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

How To Improve Your Logical Reasoning Skills


You see a headline: “Study Shows Coffee Drinkers Live Longer.” You share it in 3 seconds flat. But here's what just happened—you confused correlation with causation, inductive observation with deductive proof, and you just became a vector for misinformation. Right now, millions of people are doing the exact same thing, spreading beliefs they think are facts, making decisions based on patterns that don't exist, all while feeling absolutely certain they're thinking clearly.

We live in a world drowning in information—but starving for truth. Every day, you're presented with hundreds of claims, arguments, and patterns. Some are solid. Most are not. And the difference between knowing which is which and just guessing? That's the difference between making good decisions and stumbling through life confused about why things keep going wrong.

Most of us have never been taught the difference between deductive and inductive reasoning. We stumble through life applying deductive certainty to inductive guesses, treating observations as proven facts, and wondering why our conclusions keep failing us. But once we understand which type of reasoning a situation demands, we gain something powerful—the ability to calibrate our confidence appropriately, recognize manipulation, and build every other thinking skill on a foundation that actually works.

By the end of this episode, you'll possess a practical toolkit for improving your logical reasoning—four core strategies, one quick-win technique, and a practice exercise you can start today.

This is Episode 2 of Thinking 101, a new 8-part series on essential thinking skills most of us never learned in school. Links to all episodes are in the description below.

What is Logical Reasoning?

What does logical reasoning actually entail? At its core, there are two fundamental ways humans draw conclusions, and you're using both right now without consciously choosing between them.

Deductive reasoning moves from general principles to specific conclusions with absolute certainty. If the premises are true, the conclusion must be true. “All mammals have hearts. Dogs are mammals. Therefore, dogs have hearts.” There's no wiggle room—if those first two statements are true, the conclusion is guaranteed. This is the realm of mathematics, formal logic, and established law.

Inductive reasoning works in reverse, building from specific observations toward general principles with varying degrees of probability. You observe patterns and infer likely explanations. “I've seen 1,000 swans and they were all white, therefore all swans are probably white.” This feels certain, but it's actually just highly probable based on limited evidence. History proved this reasoning wrong when black swans were discovered in Australia.
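
The swan example can be made concrete. One classical way to put a number on inductive confidence is Laplace's rule of succession: after n confirming observations and zero exceptions, estimate the probability that the next observation also confirms as (n+1)/(n+2). The estimate climbs toward certainty but never reaches it — a minimal sketch:

```python
# Laplace's rule of succession: after observing `confirming` successes
# out of `total` observations, estimate P(next observation confirms).
# The value approaches 1 as observations pile up but never reaches it -
# which is exactly the point about inductive conclusions.
def rule_of_succession(confirming: int, total: int) -> float:
    return (confirming + 1) / (total + 2)

for n in (10, 100, 1000):
    p = rule_of_succession(n, n)  # n white swans, no black ones yet
    print(f"After {n:>4} white swans: P(next is white) ~ {p:.4f}")
```

Even after 1,000 all-white observations, the estimate is about 0.999, not 1 — which is why a single Australian black swan could still overturn the rule.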

Both are tools. Neither is “better.” The question is which tool fits the job—and whether you're using it correctly.

Loss of Logical Reasoning Skills

Why does this matter? Because across every domain of life, this reasoning confusion is costing us.

In our social media consumption, we're drowning in inductive reasoning disguised as deductive proof. Researchers at MIT found that false news spreads significantly farther and faster than accurate reporting—true stories took roughly six times as long to reach the same number of people. Why? Because misleading content exploits this confusion. You see a viral post claiming “New study proves smartphones cause depression in teenagers,” with graphs and official-looking citations. What you're actually seeing is inductive correlation presented as deductive causation—researchers observed that depressed teenagers often use smartphones more, but that doesn't prove smartphones caused the depression.

And this is where it gets truly terrifying—I need you to hear this carefully:

In 2015, researchers tried to replicate 100 psychology studies published in top scientific journals. Only 36% held up. Read that again: Nearly two-thirds of peer-reviewed, published research couldn't be reproduced. And those false studies? Still being cited. Still shaping policy. Still being shared as “science proves.” You're building your worldview on a foundation where 64% of the bricks are made of air.

In our personal relationships, we constantly make inductive inferences about people's intentions and treat them as deductive facts. Your partner forgets to text back three times this week. You observe the pattern, inductively infer “they're losing interest,” then act with deductive certainty—becoming distant, accusatory, or defensive. But what if those three instances had three different explanations? What if the pattern we detected isn't actually a pattern at all? We say “you always” or “you never” based on three data points. We end relationships over patterns that never existed.

So why didn't anyone teach us this? Traditional schooling focuses on teaching us what to think—facts, formulas, established knowledge. Deductive reasoning gets attention in math class as a mechanical process for solving equations. Inductive reasoning gets buried in science class, completely disconnected from actual decision-making. We graduated with facts crammed into our heads but no framework for evaluating new claims.

But that changes now.

How To Improve Your Logical Reasoning

You now understand the two reasoning systems and why mixing them up is costing you. Let's fix that. These five strategies will give you immediate control over your logical reasoning—starting with the most foundational skill and building to a technique you can use in your next conversation.

Label Your Reasoning Type

The first step to improving your logical reasoning is becoming aware of which system you're using—and we rarely stop to check.

We flip between deductive and inductive thinking dozens of times per day without realizing it. You see your colleague get promoted after working late, and you instantly conclude that working late leads to promotion—that's inductive. But you're treating it like a deductive rule: “If I work late, I WILL get promoted.” The moment you label which type you're using, you regain control.

  1. Start with a daily reasoning journal. At the end of each day, write down three conclusions you made—about people, work, news, anything.
  2. For each conclusion, ask: “What evidence led me here?” If it's general rules applied to specifics (all mammals have hearts, dogs are mammals), you used deduction. If it's patterns from observations (I've seen this three times), you used induction.
  3. Label each one: “D” for deductive, “I” for inductive. This creates conscious awareness. You'll likely find 80-90% of your daily reasoning is inductive—but you've been treating it as deductive certainty.
  4. When you catch yourself saying “always,” “never,” “definitely,” stop and ask: “Is this deductive certainty or inductive probability?” That single pause changes everything.
  5. Practice in real-time during conversations. When someone makes a claim, silently label it: deductive or inductive? Weak reasoning becomes obvious instantly.
  6. After one week of journaling, review your entries. Patterns emerge in your reasoning errors—specific topics where you consistently overstate certainty, or people you make assumptions about. This awareness is the foundation for improvement.
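
The journaling steps above can be sketched as a tiny script. The entries and labels here are hypothetical, but the tally is the point: most daily conclusions turn out to be inductive.

```python
# A hypothetical day's reasoning journal: each conclusion labeled
# "D" (general rule applied to a specific case) or "I" (pattern
# inferred from observations), per steps 1-3 above.
journal = [
    ("All contractors invoice monthly; Sam is a contractor, so Sam invoices monthly", "D"),
    ("The 8am train was late three times this week, so it's unreliable", "I"),
    ("My last two posts got few likes, so the algorithm buried me", "I"),
]

inductive = sum(1 for _, label in journal if label == "I")
share = inductive / len(journal)
print(f"{inductive}/{len(journal)} conclusions were inductive ({share:.0%})")
# A high inductive share is normal - the error is treating those
# entries as certainties rather than probabilities.
```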

Calibrate Your Confidence

Once you've labeled your reasoning type, the next step is matching your certainty level to the strength of your evidence.

Here's where most people fail: they feel 100% certain about conclusions built on three observations. Your brain doesn't naturally calibrate—it defaults to “this feels true, therefore it IS true.” But when you explicitly assign probability levels to inductive conclusions, you stop making the most common reasoning error: treating patterns as proven facts.

  1. For every inductive conclusion, assign a percentage. “Given these five observations, I'm 60% confident this pattern is real.” Never use 100% for inductive reasoning—by definition, inductive conclusions are probabilistic, not certain.
  2. Use this language shift in conversations: Replace “You always ignore my suggestions” with “I've brought up ideas in the last two meetings and haven't heard feedback, which makes me about 40% confident there's a communication pattern worth discussing.” Replace “This definitely works” with “From what I've seen, I'm 70% confident this approach is effective.”
  3. Create a certainty threshold for action. Decide: “I need 70% confidence before I make a major decision based on inductive reasoning.” This prevents impulsive moves based on weak patterns. Below 50%? Keep observing. Above 80%? Worth acting on.
  4. Keep a confidence log for one week. Write your predictions with probability levels (“80% confident it will rain tomorrow,” “60% confident this project will succeed”). Then check if you were right. This trains your calibration. You'll discover whether you're overstating or understating your certainty—and you can adjust.
  5. When someone presents “definitive” claims based on inductive evidence, ask: “What certainty level would you assign that? 60%? 90%?” Watch them realize they've been overstating their case. This question immediately disrupts manipulation.
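
The confidence log from step 4 can be scored once outcomes are known. The Brier score — the mean squared gap between your stated probability and what actually happened — is a standard calibration measure. The predictions below are hypothetical:

```python
# Hypothetical week of logged predictions: (stated confidence, outcome),
# where outcome 1 means it happened and 0 means it didn't.
log = [
    (0.80, 1),  # "80% confident it will rain tomorrow" - it rained
    (0.60, 0),  # "60% confident this project will succeed" - it slipped
    (0.90, 1),
    (0.70, 1),
    (0.50, 0),
]

# Brier score: 0.0 is perfect; always guessing 50% scores 0.25.
# Lower is better - review weekly and adjust your stated confidence.
brier = sum((p - outcome) ** 2 for p, outcome in log) / len(log)
print(f"Brier score over {len(log)} predictions: {brier:.3f}")
```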

Hunt for Contradictions

Your brain naturally seeks confirming evidence and ignores contradictions—this strategy forces you to do the opposite.

Confirmation bias is the enemy of good inductive reasoning. Once you believe something, your brain becomes a heat-seeking missile for evidence that supports it. The only antidote? Actively hunt for evidence that contradicts your conclusion. It's uncomfortable, yes, but it's the difference between being right and feeling right.

  1. For every inductive conclusion you reach, set a 24-hour “contradiction hunt.” Your job is to find at least two pieces of evidence that contradict your conclusion. If you believe “remote work increases productivity,” you must find credible sources claiming the opposite.
  2. Use search terms designed to find opposites. Search for “remote work decreases productivity study” or “evidence against intermittent fasting.” Force-feed yourself the other side. Google's algorithm wants to confirm your beliefs—you have to actively fight it.
  3. Create a contradiction column in your reasoning journal. For each conclusion (left column), list contradicting evidence (right column). If you can't find any contradictions, you haven't looked hard enough—or you're in an echo chamber.
  4. In debates or discussions, argue the opposite position for 5 minutes. Seriously. If you believe X, spend 5 minutes making the best possible case for NOT X. This breaks confirmation bias and reveals holes in your reasoning you couldn't see before.
  5. Before sharing anything on social media, spend 2 minutes actively searching for contradicting evidence. Search “[claim] debunked” or “[claim] false” or look for the opposite perspective. If you find credible contradictions, pause. The claim is disputed. Either don't share it, or share it with context like “Interesting claim, though [credible source] disputes this because…” This habit trains you to think critically before becoming a misinformation vector.

Question the Sample

Most bad inductive reasoning fails the sample size test—and almost no one thinks to ask.

Here's the manipulation technique you need to spot: Someone shows you three examples and declares a universal truth. “I know three people who got rich with crypto, therefore crypto makes everyone rich.” Three examples. Eight billion people. Your brain treats this as evidence—until you ask about the total number. This question alone dismantles 90% of weak arguments.

  1. Every time someone makes an inductive claim, ask out loud: “How many observations is that based on?” Three? Thirty? Three thousand? The number matters enormously. One person's experience is an anecdote. Ten similar experiences start to suggest a pattern. A hundred becomes meaningful. A thousand builds real confidence.
  2. Learn the rough sample sizes for different certainty levels. For casual patterns: 10-20 observations. For moderate confidence: 100-500. For high confidence: 1,000+. For scientific certainty: 10,000+. Five examples claiming certainty? That's weak, and now you know it.
  3. Always check the total number—whether it's called sample size, denominator, or population. When someone shows examples or cites a study, ask: “Out of how many total?” Three testimonials mean nothing without knowing if it's 3 out of 10 (30% success rate) or 3 out of 10,000 (0.03%). When reading headlines like “Study shows X,” click through and find the sample size. “Study of 12 people” is not the same as “Study of 12,000 people.” The total number is usually hidden because it reveals how weak the claim really is.
  4. In your own reasoning, track your sample. Before concluding “this restaurant is always slow,” count: how many times have you been there? Three? That's not “always”—that's barely data. You need at least 10 visits across different times and days before you can claim a pattern.
  5. Challenge yourself: Can you find a larger sample that contradicts your small sample? If your three experiences clash with 3,000 online reviews saying the opposite, which should you trust? The larger sample wins unless you have specific reasons to believe it's biased.
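
The “out of how many total?” question from step 3 is simple arithmetic, and the uncertainty around a rate shrinks as the sample grows. A rough sketch using the normal-approximation margin of error for a proportion (numbers hypothetical, and the approximation is crude for tiny samples, but it makes the point):

```python
import math

# Same 3 "successes", wildly different denominators (step 3 above).
for successes, total in [(3, 10), (3, 10_000)]:
    rate = successes / total
    # Approximate 95% margin of error for a proportion:
    # 1.96 * sqrt(p * (1 - p) / n). Small n means huge uncertainty.
    moe = 1.96 * math.sqrt(rate * (1 - rate) / total)
    print(f"{successes}/{total}: rate {rate:.2%}, 95% margin ~ +/-{moe:.2%}")
```

Three out of ten is a 30% rate with a margin of error of roughly ±28 points — barely better than no information — while three out of ten thousand pins the rate down tightly near zero.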

The One-Word Test (Quick Win)

Here's a technique you can implement in the next 30 seconds that will immediately improve your logical reasoning: stop using absolute language.

Every time you're about to say “always” or “never,” catch yourself and replace it with “usually” or “rarely.” Every time you're about to say “definitely” or “certainly,” use “probably” or “likely” instead.

This single word swap trains your brain to think probabilistically. It acknowledges that most of your reasoning is inductive—based on patterns, not guarantees. And here's the bonus: people will perceive you as more credible because you're not overstating your case.

Try it right now in your next conversation. Watch how often you reach for absolute language—and how much clearer your thinking becomes when you don't use it.
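
The one-word swap can even be automated as a rough writing check. This sketch flags absolute language and substitutes hedged alternatives; the word list is illustrative, not exhaustive:

```python
import re

# Illustrative absolute -> probabilistic swaps from the technique above.
SWAPS = {
    "always": "usually",
    "never": "rarely",
    "definitely": "probably",
    "certainly": "likely",
}

def flag_absolutes(text: str) -> str:
    """Replace absolute words with hedged ones, preserving capitalization."""
    def hedge(match: re.Match) -> str:
        word = match.group(0)
        repl = SWAPS[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, hedge, text, flags=re.IGNORECASE)

print(flag_absolutes("You always ignore my suggestions and it never helps."))
# -> "You usually ignore my suggestions and it rarely helps."
```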

Practice

The most effective way to internalize these strategies is through practice with real-world scenarios.

The Pattern Detective Challenge

  1. Find three claims from your social media feed today—anything that declares a pattern, trend, or “truth” (health advice, political claims, life advice, product recommendations).
  2. For each claim, identify: Is this deductive or inductive reasoning? Write it down. Most will be inductive disguised as deductive. “This supplement WILL boost your energy” sounds deductive, but it's based on inductive observations.
  3. If inductive, assess the sample size. How many observations is this based on? One person's testimonial? A study? How many participants? Is the sample representative of the broader population?
  4. Assign a certainty level. Given the sample size and quality of evidence, what probability would you assign this claim? 30%? 60%? 90%? Be honest—most will be below 70%.
  5. Hunt for contradictions. Spend 5 minutes finding evidence that contradicts the claim. Can you find it? How credible is it? Does it have a larger sample size than the original claim?
  6. Rewrite the claim with calibrated language. Change “Intermittent fasting WILL make you healthier” to “From studies of X people, intermittent fasting appears to improve some health markers for some people, though individual results vary—confidence level: 65%.”
  7. Share your analysis with someone. Explain your reasoning process. Teaching others reinforces your own learning and reveals gaps you didn't notice.
  8. Repeat this exercise 3 times per week for one month. By the end, automatic evaluation becomes second nature. You won't need to think about it—it just happens.

The Rewards

The journey of improving your logical reasoning is ongoing, but the rewards compound quickly.

You become nearly impossible to manipulate. When you can spot the difference between inductive observation and deductive proof, 90% of manipulation tactics stop working. The car salesman's pitch falls flat. The political ad looks transparent. The social media rage-bait loses its power.

Your relationships improve dramatically. When you stop saying “you always” and start saying “I've noticed this three times,” you create space for understanding instead of defensiveness. Conflicts become conversations. Assumptions become questions.

Your professional credibility skyrockets. Leaders who can distinguish between strong deductive arguments and weak inductive patterns make better strategic decisions. When you speak with calibrated confidence—saying “I'm 70% confident” instead of “I'm absolutely certain”—people trust your judgment more, not less.

You build a foundation for every other thinking skill. Spotting logical fallacies, evaluating evidence, resisting cognitive biases, asking better questions—all of these depend on understanding which type of reasoning you're using and which type the situation demands.

You're not just learning a thinking skill—you're installing psychological armor that most people don't even know exists. And in a world where manipulation is the norm, that makes you dangerous to anyone trying to control you.

Every week on Substack, I go deeper—sharing personal examples, failed experiments, and lessons I couldn't fit in the video. It's like the director's cut.

This week's Substack deep dive into a logical reasoning failure can be found at: https://philmckinney.substack.com/p/kroger-copied-hps-innovation-playbook

Your Thinking 101 Journey

This is Episode 2 of Thinking 101: The Essential Skills They Never Taught You—an 8-part foundation series where each episode unlocks the next.

If you missed Episode 1, “Why Thinking Skills Matter Now More Than Ever,” start there. It explains why this entire skillset has become essential.

Up next: Episode 3, “Causal Thinking: Beyond Correlation.” You'll learn how to distinguish between things that simply happen together and things that actually cause each other—transforming how you evaluate health claims, business strategies, and relationship patterns.

Hit that subscribe button so you don't miss any future episodes. Also, hit the like button and the notification bell. It helps with the algorithm so others see our content. Why not share this video with a coworker or family member who you think would benefit from it? …

Because right now, while you've been watching this, someone just shared a lie that felt like truth. The only question is: will you be able to tell the difference?

To learn more about improving your logical thinking skills, listen to this week's show: How To Improve Your Logical Reasoning Skills.

Get the tools to fuel your innovation journey → Innovation.Tools https://innovation.tools

RELATED: Subscribe To The Newsletter and Killer Innovations Podcast


SOURCES CITED IN THIS EPISODE

  1. MIT Media Lab – Misinformation Spread Rate
    Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.
    https://doi.org/10.1126/science.aap9559
  2. Indiana University – Misinformation Superspreaders
    DeVerna, M. R., Aiyappa, R., Pacheco, D., Bryden, J., & Menczer, F. (2024). Identifying and characterizing superspreaders of low-credibility content on Twitter. PLOS ONE, 19(5), e0302201.
    https://doi.org/10.1371/journal.pone.0302201
  3. Open Science Collaboration – The Replication Crisis
    Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
    https://doi.org/10.1126/science.aac4716

ADDITIONAL READING

On Inductive Reasoning and Uncertainty
Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. Random House.

On Cognitive Biases and Decision-Making
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

On Confirmation Bias
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175-220.
https://doi.org/10.1037/1089-2680.2.2.175

On Scientific Reproducibility
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLOS Medicine, 2(8), e124.
https://doi.org/10.1371/journal.pmed.0020124

Note: All sources cited in this episode have been accessed and verified as of October 2025. The studies referenced are peer-reviewed academic research published in reputable scientific journals, including Science and PLOS ONE.

  continue reading

275 episodes

Artwork
iconShare
 
Manage episode 513564379 series 2400655
Content provided by Phil McKinney. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Phil McKinney or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

You see a headline: “Study Shows Coffee Drinkers Live Longer.” You share it in 3 seconds flat. But here's what just happened—you confused correlation with causation, inductive observation with deductive proof, and you just became a vector for misinformation. Right now, millions of people are doing the exact same thing, spreading beliefs they think are facts, making decisions based on patterns that don't exist, all while feeling absolutely certain they're thinking clearly.

We live in a world drowning in information—but starving for truth. Every day, you're presented with hundreds of claims, arguments, and patterns. Some are solid. Most are not. And the difference between knowing which is which and just guessing? That's the difference between making good decisions and stumbling through life confused about why things keep going wrong.

Most of us have never been taught the difference between deductive and inductive reasoning. We stumble through life applying deductive certainty to inductive guesses, treating observations as proven facts, and wondering why our conclusions keep failing us. But once we understand which type of reasoning a situation demands, we gain something powerful—the ability to calibrate our confidence appropriately, recognize manipulation, and build every other thinking skill on a foundation that actually works.

By the end of this episode, you'll possess a practical toolkit for improving your logical reasoning—four core strategies, one quick-win technique, and a practice exercise you can start today.

This is Episode 2 of Thinking 101, a new 8-part series on essential thinking skills most of us never learned in school. Links to all episodes are in the description below.

What is Logical Reasoning?

But what does logical reasoning entail? At its core, there are two fundamental ways humans draw conclusions, and you're using both right now without consciously choosing between them.

Deductive reasoning moves from general principles to specific conclusions with absolute certainty. If the premises are true, the conclusion must be true. “All mammals have hearts. Dogs are mammals. Therefore, dogs have hearts.” There's no wiggle room—if those first two statements are true, the conclusion is guaranteed. This is the realm of mathematics, formal logic, and established law.

Inductive reasoning works in reverse, building from specific observations toward general principles with varying degrees of probability. You observe patterns and infer likely explanations. “I've seen 1,000 swans and they were all white, therefore all swans are probably white.” This feels certain, but it's actually just highly probable based on limited evidence. History proved this reasoning wrong when black swans were discovered in Australia.

Both are tools. Neither is “better.” The question is which tool fits the job—and whether you're using it correctly.

Loss of Logical Reasoning Skills

Why does this matter? Because across every domain of life, this reasoning confusion is costing us.

In our social media consumption, we're drowning in inductive reasoning disguised as deductive proof. Researchers at MIT found that fake news spreads ten times faster than accurate reporting. Why? Because misleading content exploits this confusion. You see a viral post claiming “New study proves smartphones cause depression in teenagers,” with graphs and official-looking citations. What you're actually seeing is inductive correlation presented as deductive causation—researchers observed that depressed teenagers often use smartphones more, but that doesn't prove smartphones caused the depression.

And this is where it gets truly terrifying—I need you to hear this carefully:

In 2015, researchers tried to replicate 100 psychology studies published in top scientific journals. Only 36% held up. Read that again: Nearly two-thirds of peer-reviewed, published research couldn't be reproduced. And those false studies? Still being cited. Still shaping policy. Still being shared as “science proves.” You're building your worldview on a foundation where 64% of the bricks are made of air.

In our personal relationships, we constantly make inductive inferences about people's intentions and treat them as deductive facts. Your partner forgets to text back three times this week. You observe the pattern, inductively infer “they're losing interest,” then act with deductive certainty—becoming distant, accusatory, or defensive. But what if those three instances had three different explanations? What if the pattern we detected isn't actually a pattern at all? We say “you always” or “you never” based on three data points. We end relationships over patterns that never existed.

So why didn't anyone teach us this? Traditional schooling focuses on teaching us what to think—facts, formulas, established knowledge. Deductive reasoning gets attention in math class as a mechanical process for solving equations. Inductive reasoning gets buried in science class, completely disconnected from actual decision-making. We graduated with facts crammed into our heads but no framework for evaluating new claims.

But that changes now.

How To Improve Your Logical Reasoning

You now understand the two reasoning systems and why mixing them up is costing you. Let's fix that. These five strategies will give you immediate control over your logical reasoning—starting with the most foundational skill and building to a technique you can use in your next conversation.

Label Your Reasoning Type

The first step to improving your logical reasoning is becoming aware of which system you're using—and we rarely stop to check.

We flip between deductive and inductive thinking dozens of times per day without realizing it. You see your colleague get promoted after working late, and you instantly conclude that working late leads to promotion—that's inductive. But you're treating it like a deductive rule: “If I work late, I WILL get promoted.” The moment you label which type you're using, you regain control.

  1. Start with a daily reasoning journal. At the end of each day, write down three conclusions you made—about people, work, news, anything.
  2. For each conclusion, ask: “What evidence led me here?” If it's general rules applied to specifics (all mammals have hearts, dogs are mammals), you used deduction. If it's patterns from observations (I've seen this three times), you used induction.
  3. Label each one: “D” for deductive, “I” for inductive. This creates conscious awareness. You'll likely find 80-90% of your daily reasoning is inductive—but you've been treating it as deductive certainty.
  4. When you catch yourself saying “always,” “never,” “definitely,” stop and ask: “Is this deductive certainty or inductive probability?” That single pause changes everything.
  5. Practice in real-time during conversations. When someone makes a claim, silently label it: deductive or inductive? Weak reasoning becomes obvious instantly.
  6. After one week of journaling, review your entries. Patterns emerge in your reasoning errors—specific topics where you consistently overstate certainty, or people you make assumptions about. This awareness is the foundation for improvement.

Calibrate Your Confidence

Once you've labeled your reasoning type, the next step is matching your certainty level to the strength of your evidence.

Here's where most people fail: they feel 100% certain about conclusions built on three observations. Your brain doesn't naturally calibrate—it defaults to “this feels true, therefore it IS true.” But when you explicitly assign probability levels to inductive conclusions, you stop making the most common reasoning error: treating patterns as proven facts.

  1. For every inductive conclusion, assign a percentage. “Given these five observations, I'm 60% confident this pattern is real.” Never use 100% for inductive reasoning—by definition, inductive conclusions are probabilistic, not certain.
  2. Use this language shift in conversations: Replace “You always ignore my suggestions” with “I've brought up ideas in the last two meetings and haven't heard feedback, which makes me about 40% confident there's a communication pattern worth discussing.” Replace “This definitely works” with “From what I've seen, I'm 70% confident this approach is effective.”
  3. Create a certainty threshold for action. Decide: “I need 70% confidence before I make a major decision based on inductive reasoning.” This prevents impulsive moves based on weak patterns. Below 50%? Keep observing. Above 80%? Worth acting on.
  4. Keep a confidence log for one week. Write your predictions with probability levels (“80% confident it will rain tomorrow,” “60% confident this project will succeed”). Then check if you were right. This trains your calibration. You'll discover whether you're overstating or understating your certainty—and you can adjust.
  5. When someone presents “definitive” claims based on inductive evidence, ask: “What certainty level would you assign that? 60%? 90%?” Watch them realize they've been overstating their case. This question immediately disrupts manipulation.

Hunt for Contradictions

Your brain naturally seeks confirming evidence and ignores contradictions—this strategy forces you to do the opposite.

Confirmation bias is the enemy of good inductive reasoning. Once you believe something, your brain becomes a heat-seeking missile for evidence that supports it. The only antidote? Actively hunt for evidence that contradicts your conclusion. It's uncomfortable, yes, but it's the difference between being right and feeling right.

  1. For every inductive conclusion you reach, set a 24-hour “contradiction hunt.” Your job is to find at least two pieces of evidence that contradict your conclusion. If you believe “remote work increases productivity,” you must find credible sources claiming the opposite.
  2. Use search terms designed to find opposites. Search for “remote work decreases productivity study” or “evidence against intermittent fasting.” Force-feed yourself the other side. Google's algorithm wants to confirm your beliefs—you have to actively fight it.
  3. Create a contradiction column in your reasoning journal. For each conclusion (left column), list contradicting evidence (right column). If you can't find any contradictions, you haven't looked hard enough—or you're in an echo chamber.
  4. In debates or discussions, argue the opposite position for 5 minutes. Seriously. If you believe X, spend 5 minutes making the best possible case for NOT X. This breaks confirmation bias and reveals holes in your reasoning you couldn't see before.
  5. Before sharing anything on social media, spend 2 minutes actively searching for contradicting evidence. Search “[claim] debunked” or “[claim] false” or look for the opposite perspective. If you find credible contradictions, pause. The claim is disputed. Either don't share it, or share it with context like “Interesting claim, though [credible source] disputes this because…” This habit trains you to think critically before becoming a misinformation vector.
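The contradiction searches in steps 2 and 5 can be generated mechanically. A minimal sketch; the function name is my own, and the query patterns simply mirror the ones suggested above.

```python
# Build search strings deliberately biased toward disconfirming evidence,
# per the "hunt for contradictions" habit. Function name is illustrative.

def contradiction_queries(claim: str) -> list[str]:
    """Return search queries designed to surface the other side of a claim."""
    claim = claim.strip().rstrip(".")
    return [
        f"{claim} debunked",
        f"{claim} false",
        f"evidence against {claim}",
        f"{claim} criticism",
    ]

for query in contradiction_queries("remote work increases productivity"):
    print(query)
```

Running each query before you share is the two-minute habit from step 5, made explicit.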

Question the Sample

Most bad inductive reasoning fails the sample size test—and almost no one thinks to ask.

Here's the manipulation technique you need to spot: Someone shows you three examples and declares a universal truth. “I know three people who got rich with crypto, therefore crypto makes everyone rich.” Three examples. Eight billion people. Your brain treats this as evidence—until you ask about the total number. This question alone dismantles 90% of weak arguments.

  1. Every time someone makes an inductive claim, ask out loud: “How many observations is that based on?” Three? Thirty? Three thousand? The number matters enormously. One person's experience is an anecdote. Ten similar experiences start to suggest a pattern. A hundred becomes meaningful. A thousand builds real confidence.
  2. Learn the rough sample sizes for different certainty levels. For casual patterns: 10-20 observations. For moderate confidence: 100-500. For high confidence: 1,000+. For scientific certainty: 10,000+. Five examples claiming certainty? That's weak, and now you know it.
  3. Always check the total number—whether it's called sample size, denominator, or population. When someone shows examples or cites a study, ask: “Out of how many total?” Three testimonials mean nothing without knowing if it's 3 out of 10 (30% success rate) or 3 out of 10,000 (0.03%). When reading headlines like “Study shows X,” click through and find the sample size. “Study of 12 people” is not the same as “Study of 12,000 people.” The total number is usually hidden because it reveals how weak the claim really is.
  4. In your own reasoning, track your sample. Before concluding “this restaurant is always slow,” count: how many times have you been there? Three? That's not “always”—that's barely data. You need at least 10 visits across different times and days before you can claim a pattern.
  5. Challenge yourself: Can you find a larger sample that contradicts your small sample? If your three experiences clash with 3,000 online reviews saying the opposite, which should you trust? The larger sample wins unless you have specific reasons to believe it's biased.
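The denominator check in step 3 is plain arithmetic, sketched here with the article's own 3-out-of-10 versus 3-out-of-10,000 example; the function name is illustrative.

```python
# Same three testimonials, wildly different meaning depending on the total.

def success_rate(successes: int, total: int) -> float:
    """Fraction of successes out of the total population asked."""
    if total <= 0:
        raise ValueError("total must be positive")
    return successes / total

# 3 testimonials out of 10 customers vs. 3 out of 10,000:
print(f"3 / 10     -> {success_rate(3, 10):.2%}")      # 30.00%
print(f"3 / 10,000 -> {success_rate(3, 10_000):.2%}")  # 0.03%
```

The testimonial count never changed; only the hidden denominator did, and it moved the evidence from strong to negligible.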

The One-Word Test (Quick Win)

Here's a technique you can implement in the next 30 seconds that will immediately improve your logical reasoning: stop using absolute language.

Every time you're about to say “always” or “never,” catch yourself and replace it with “usually” or “rarely.” Every time you're about to say “definitely” or “certainly,” use “probably” or “likely” instead.

This single word swap trains your brain to think probabilistically. It acknowledges that most of your reasoning is inductive—based on patterns, not guarantees. And here's the bonus: people will perceive you as more credible because you're not overstating your case.

Try it in your next conversation. Watch how often you reach for absolute language—and how much clearer your thinking becomes when you don't use it.
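As a sketch, the one-word swap can even be mechanized. The substitutions are the ones suggested above; the function name and the case handling are my own assumptions.

```python
# Swap absolute words for calibrated ones before a sentence leaves
# your keyboard. The mapping is the article's suggested substitutions.
import re

HEDGES = {
    "always": "usually",
    "never": "rarely",
    "definitely": "probably",
    "certainly": "likely",
}

def hedge(text: str) -> str:
    """Replace absolute language with probabilistic language, preserving case."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        replacement = HEDGES[word.lower()]
        return replacement.capitalize() if word[0].isupper() else replacement

    pattern = r"\b(" + "|".join(HEDGES) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

print(hedge("You always ignore my suggestions."))
# -> "You usually ignore my suggestions."
```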

Practice

The most effective way to internalize these strategies is through practice with real-world scenarios.

The Pattern Detective Challenge

  1. Find three claims from your social media feed today—anything that declares a pattern, trend, or “truth” (health advice, political claims, life advice, product recommendations).
  2. For each claim, identify: Is this deductive or inductive reasoning? Write it down. Most will be inductive disguised as deductive. “This supplement WILL boost your energy” sounds deductive, but it's based on inductive observations.
  3. If inductive, assess the sample size. How many observations is this based on? One person's testimonial? A study? How many participants? Is the sample representative of the broader population?
  4. Assign a certainty level. Given the sample size and quality of evidence, what probability would you assign this claim? 30%? 60%? 90%? Be honest—most will be below 70%.
  5. Hunt for contradictions. Spend 5 minutes finding evidence that contradicts the claim. Can you find it? How credible is it? Does it have a larger sample size than the original claim?
  6. Rewrite the claim with calibrated language. Change “Intermittent fasting WILL make you healthier” to “From studies of X people, intermittent fasting appears to improve some health markers for some people, though individual results vary—confidence level: 65%.”
  7. Share your analysis with someone. Explain your reasoning process. Teaching others reinforces your own learning and reveals gaps you didn't notice.
  8. Repeat this exercise 3 times per week for one month. By the end, automatic evaluation becomes second nature. You won't need to think about it—it just happens.

The Rewards

The journey of improving your logical reasoning is ongoing, but the rewards compound quickly.

You become nearly impossible to manipulate. When you can spot the difference between inductive observation and deductive proof, most manipulation tactics stop working. The car salesman's pitch falls flat. The political ad looks transparent. The social media rage-bait loses its power.

Your relationships improve dramatically. When you stop saying “you always” and start saying “I've noticed this three times,” you create space for understanding instead of defensiveness. Conflicts become conversations. Assumptions become questions.

Your professional credibility skyrockets. Leaders who can distinguish between strong deductive arguments and weak inductive patterns make better strategic decisions. When you speak with calibrated confidence—saying “I'm 70% confident” instead of “I'm absolutely certain”—people trust your judgment more, not less.

You build a foundation for every other thinking skill. Spotting logical fallacies, evaluating evidence, resisting cognitive biases, asking better questions—all of these depend on understanding which type of reasoning you're using and which type the situation demands.

You're not just learning a thinking skill—you're installing psychological armor that most people don't even know exists. And in a world where manipulation is the norm, that makes you dangerous to anyone trying to control you.

Every week on Substack, I go deeper—sharing personal examples, failed experiments, and lessons I couldn't fit in the video. It's like the director's cut.

This week's Substack deep dive into a logical reasoning failure can be found at: https://philmckinney.substack.com/p/kroger-copied-hps-innovation-playbook

Your Thinking 101 Journey

This is Episode 2 of Thinking 101: The Essential Skills They Never Taught You—an 8-part foundation series where each episode unlocks the next.

If you missed Episode 1, “Why Thinking Skills Matter Now More Than Ever,” start there. It explains why this entire skillset has become essential.

Up next: Episode 3, “Causal Thinking: Beyond Correlation.” You'll learn how to distinguish between things that simply happen together and things that actually cause each other—transforming how you evaluate health claims, business strategies, and relationship patterns.

Hit the subscribe button so you don’t miss any future episodes. Also, hit the like and notification bell; it helps with the algorithm so others see our content. And why not share this video with a coworker or family member who you think would benefit from it?

Because right now, while you've been watching this, someone just shared a lie that felt like truth. The only question is: will you be able to tell the difference?

To learn more about improving your logical thinking skills, listen to this week's show: How To Improve Your Logical Reasoning Skills.

Get the tools to fuel your innovation journey → Innovation.Tools https://innovation.tools

RELATED: Subscribe To The Newsletter and Killer Innovations Podcast


SOURCES CITED IN THIS EPISODE

  1. MIT Media Lab – Misinformation Spread Rate
    Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.
    https://doi.org/10.1126/science.aap9559
  2. Indiana University – Misinformation Superspreaders
    DeVerna, M. R., Aiyappa, R., Pacheco, D., Bryden, J., & Menczer, F. (2024). Identifying and characterizing superspreaders of low-credibility content on Twitter. PLOS ONE, 19(5), e0302201.
    https://doi.org/10.1371/journal.pone.0302201
  3. Open Science Collaboration – The Replication Crisis
    Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
    https://doi.org/10.1126/science.aac4716

ADDITIONAL READING

On Inductive Reasoning and Uncertainty
Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. Random House.

On Cognitive Biases and Decision-Making
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

On Confirmation Bias
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175-220.
https://doi.org/10.1037/1089-2680.2.2.175

On Scientific Reproducibility
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLOS Medicine, 2(8), e124.
https://doi.org/10.1371/journal.pmed.0020124

Note: All sources cited in this episode have been accessed and verified as of October 2025. The studies referenced are peer-reviewed academic research published in reputable scientific journals, including Science and PLOS ONE.
