Are bad incentives to blame for AI hallucinations?

5:23
Content provided by SpokenLayer. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by SpokenLayer or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://podcastplayer.com/legal.

A new research paper from OpenAI asks why large language models like GPT-5 and chatbots like ChatGPT still hallucinate, and whether anything can be done to reduce those hallucinations. In a blog post summarizing the paper, OpenAI defines hallucinations as plausible but false statements generated by language models. It acknowledges that, despite improvements, hallucinations remain a fundamental challenge for all large language models and will never be completely eliminated.

Learn more about your ad choices. Visit podcastchoices.com/adchoices
