Content provided by the Electronic Frontier Foundation (EFF). All podcast content including episodes, graphics, and podcast descriptions is uploaded and provided directly by the Electronic Frontier Foundation (EFF) or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://player.fm/legal.
Separating AI Hope from AI Hype

Duration: 39:01

If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and leaves a lot of people very confused about what exactly AI can and can’t do. In this episode, we’ll help you sort that out: For example, we’ll talk about why even superintelligent AI cannot simply replace humans for most of what we do, nor can it perfect or ruin our world unless we let it.

Arvind Narayanan studies the societal impact of digital technologies with a focus on how AI does and doesn’t work, and what it can and can’t do. He believes that if we set aside all the hype, and set the right guardrails around AI’s training and use, it has the potential to be a profoundly empowering and liberating technology. Narayanan joins EFF’s Cindy Cohn and Jason Kelley to discuss how we get to a world in which AI can improve aspects of our lives from education to transportation—if we make some system improvements first—and how AI will likely work in ways that we barely notice but that help us grow and thrive.

In this episode you’ll learn about:

  • What it means to be a “techno-optimist” (and NOT the venture capitalist kind)
  • Why we can’t rely on predictive algorithms to make decisions in criminal justice, hiring, lending, and other crucial aspects of people’s lives
  • How large-scale, long-term, controlled studies are needed to determine whether a specific AI application actually lives up to its accuracy promises
  • Why “cheapfakes” tend to be as effective as, or even more effective than, deepfakes in shoring up political support
  • How AI is and isn’t akin to the Industrial Revolution, the advent of electricity, and the development of the assembly line

Arvind Narayanan is a professor of computer science and director of the Center for Information Technology Policy at Princeton University. Along with Sayash Kapoor, he publishes the AI Snake Oil newsletter, followed by tens of thousands of researchers, policy makers, journalists, and AI enthusiasts; together they authored “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference” (2024, Princeton University Press). He has studied algorithmic amplification on social media as a visiting senior researcher at Columbia University's Knight First Amendment Institute; co-authored an online textbook on fairness and machine learning; and led Princeton's Web Transparency and Accountability Project, uncovering how companies collect and use our personal information.


65 episodes

How to Fix the Internet

12,367 subscribers
