Scott Aaronson x Zvi Mowshowitz | Why the AI Revolution Won’t Look Like You Expect—And Why That’s More Dangerous

1:40:01
 
Content provided by Accelerator Media. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Accelerator Media or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

This podcast is produced by volunteers at Accelerator Media, a nonprofit educational media organization. Our work is supported by listeners and viewers like you. If you’d like to help us ignite curiosity and inspire long-term thinking about our shared future, please consider making a donation: https://acceleratormedia.org/donate/

In this episode of Curiosity Entangled, theoretical computer scientist Scott Aaronson and writer Zvi Mowshowitz confront one of the biggest questions of our time: what happens when humanity builds tools that can outthink us? In a wide-ranging and unsparing conversation, they explore the realities of AI risk, gradual human disempowerment, the complexities of steering technological progress, and why alignment efforts may fall short when it matters most.

Scott and Zvi examine the unique nature of the AI revolution—how it’s different from past technological shifts—and why traditional assumptions about progress and control may no longer apply. They tackle the pitfalls of today’s AI safety approaches, the psychological challenge of thinking clearly about diffuse, slow-moving risks, and the educational, societal, and epistemic shifts that the AI era demands. This is a conversation for anyone grappling with the future of intelligence, agency, and civilization itself.

5 Questions This Episode Might Leave You With

1. How could AI lead to humanity’s gradual loss of agency without an obvious “takeover” moment?

2. Why is it so difficult to steer or slow down transformative technologies once they are unleashed?

3. What makes today’s AI fundamentally different from previous technological revolutions?

4. Are current AI safety and interpretability efforts enough—or are we fooling ourselves?

5. How can we cultivate deeper skepticism, clearer thinking, and better education in the age of AI?

Learn more about the guests

Scott Aaronson – Professor of Computer Science at the University of Texas at Austin, expert in quantum computing and theoretical foundations of AI alignment. https://scottaaronson.blog/

Zvi Mowshowitz – Writer and strategic thinker focusing on decision theory, AI forecasting, and the societal impact of emerging technologies.

https://thezvi.substack.com/

https://x.com/TheZvi

https://www.balsaresearch.com/

Timestamps

00:00:50 – Why this technological revolution leaves no obvious human niche

00:04:00 – How Zvi’s writing method mirrors real-time information processing

00:09:54 – Rethinking AI risk: gradual disempowerment vs. sudden takeover

00:14:10 – Why AI disruption is uniquely hard to govern—and harder to discuss

00:17:00 – GPT-4o, AI as research assistant, and the shifting cognitive landscape

00:21:15 – Why steering is harder than halting in technological revolutions

00:26:05 – Verifying claims and detecting “crank” proofs with AI

00:34:50 – Concrete examples vs. abstract theorizing about AI risk

00:37:10 – Strategic deception: when AIs learn to lie convincingly

00:43:50 – Lessons from past technological disruptions—and why AI is different

00:50:00 – The future of AI alignment: Scott’s new center at UT Austin

00:55:00 – Why pouring cold water on false hope matters for alignment

01:00:25 – Out-of-distribution reasoning: what models guess when data is scarce

01:11:00 – Education in an AI-saturated world: challenges and possibilities

01:17:00 – Learning, motivation, and the loss of intellectual environments

01:23:20 – Oscillating extremism, cultural breakdown, and the AI era

01:30:00 – Keeping focus: resisting distractions in a world of manufactured outrage

Follow Accelerator Media:

https://x.com/xceleratormedia

https://instagram.com/xcelerator.media/

https://linkedin.com/company/accelerator-media-org
