The Idiocracy Trap: Why Smart Machines Are Making Humans Dumb & Dumber

Jacob Ward warned us. Back in January 2022, the Oakland-based tech journalist published The Loop, a book about how AI is creating a world without choices. He even came on this show to warn about AI's threat to humanity. Three years later, we've all caught up with Ward. So where is he now on AI? Moderately vindicated, but more pessimistic. His original thesis has proven disturbingly accurate: we're outsourcing decisions to AI at an accelerating pace. But he admits his book's weakest section was "how to fight back," and he still lacks concrete solutions. His fear has also evolved: he is less worried about robot overlords and more concerned about an "Idiocracy" of humans reduced to AI serfs. It's a dystopian scenario in which humans become so stupid that they won't even be able to appreciate Gore Vidal's quip that "I told you so" are the four most beautiful words in the English language.

I couldn’t resist asking Anthropic’s Claude about Ward’s conclusions (not, of course, that I rely on it for anything). “Anecdotal,” it countered with characteristic coolness. Well, Claude would say that, wouldn’t it?

1. The “Idiocracy” threat is more immediate than AGI concerns. Ward argues we should fear humans becoming cognitively dependent rather than superintelligent machines taking over. He’s seeing this now: his Berkeley students can’t distinguish between reading books and reading AI summaries.

2. AI follows market incentives, not ethical principles. Despite early rhetoric about responsible development, Ward observes the industry prioritizing profit over principles. Companies are openly betting on when single-person billion-dollar businesses will emerge, signaling massive job displacement.

3. The resistance strategy remains unclear. Ward admits his book’s weakness was the “how to fight back” section, and he still lacks concrete solutions. The few examples of resistance he cites, like Signal’s president protecting user data from training algorithms, require significant financial sacrifice.

4. Economic concentration creates systemic risk. Massive capital investments, such as Nvidia’s $100 billion investment in OpenAI, create dangerous loops in which AI companies essentially invest in themselves. Ward warns this resembles classic bubble dynamics that could crash the broader economy.

5. “Weak perfection” is necessary for human development. Ward argues we need friction and inefficiency in our systems to maintain critical thinking skills. AI’s promise to eliminate all cognitive work may eliminate the mental exercise that keeps humans intellectually capable.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
