E163: Why AI Still Loses to Humans: Renowned Psychologist Explains - Dr. Gerd Gigerenzer

Duration: 1:03:34
 
A candid conversation with psychologist Gerd Gigerenzer on why human judgment outperforms AI, the “stable world” limits of machine intelligence, and how surveillance capitalism reshapes society.

Guest bio: Dr. Gerd Gigerenzer is a German psychologist, director emeritus at the Max Planck Institute for Human Development, a leading scholar on decision-making and heuristics, and an intellectual interlocutor of B. F. Skinner and Herbert Simon.

Topics discussed:

  • Why large language models rely on correlations, not understanding
  • The “stable world principle” and where AI actually works (chess, translation)
  • Uncertainty, human behavior, and why prediction doesn’t improve much
  • Surveillance capitalism, privacy erosion, and “tech paternalism”
  • Level-4 vs. level-5 autonomy and city redesign for robo-taxis
  • Education, attention, and social media’s effects on cognition and mental health
  • Dynamic pricing, right-to-repair, and value extraction vs. true innovation
  • Simple heuristics beating big data (elections, flu prediction)
  • Optimism vs. pessimism about democratic pushback
  • Books to read: How to Stay Smart in a Smart World; The Intelligence of Intuition; AI Snake Oil

Main points:

  • Human intelligence is categorically different from machine pattern-matching; LLMs don’t “understand.”
  • AI excels in stable, rule-bound domains; it struggles under real-world uncertainty and shifting conditions.
  • Claims of imminent AGI and fully general self-driving are marketing hype; progress is gated by world instability, not just compute.
  • The business model of personalized advertising drives surveillance, addiction loops, and attention erosion.
  • Complex models can underperform simple, well-chosen rules in uncertain domains.
  • Europe is pushing regulation; tech lobbying and consumer convenience still tilt the field toward surveillance.
  • The deeper risk isn’t “AI takeover” but the dumbing-down of people and loss of autonomy.
  • Careers: follow what you love—humans remain essential for oversight, judgment, and creativity.
  • Likely mobility future is constrained autonomy (level-4) plus infrastructure changes, not human-free level-5 everywhere.
  • To “stay smart,” individuals must reclaim attention, understand how systems work, and demand alternatives (including paid, non-ad models).

Top quotes:

  • “Large language models work by correlations between words; that’s not understanding.”
  • “AI works well where tomorrow is like yesterday; under uncertainty, it falters.”
  • “The problem isn’t AI—it’s the dumbing-down of people.”
  • “We should become customers again, not the product.”

🎙 The Pod is hosted by Jesse Wright
💬 For guest suggestions, questions, or media inquiries, reach out at https://elpodcast.media/
📬 Never miss an episode – subscribe and follow wherever you get your podcasts.
⭐️ If you enjoyed this episode, please rate and review the show. It helps others find us.

Thanks for listening!
