Content provided by Quartz Media. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Quartz Media or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://podcastplayer.com/legal.
AI hallucinations: Turn on, tune in, beep boop

27:13
 
ChatGPT isn’t always right. In fact, it’s often very wrong, giving faulty biographical information about a person or whiffing on the answers to simple questions. But instead of saying it doesn’t know, ChatGPT often makes stuff up. Chatbots can’t actually lie, but researchers sometimes call these untruthful performances “hallucinations”—not quite a lie, but a vision of something that isn’t there. So, what’s really happening here and what does it tell us about the way that AI systems err?

Presented by Deloitte

Episode art by Vicky Leta


75 episodes

