Content provided by Frank Prendergast and Justin Collery. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Frank Prendergast and Justin Collery or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://podcastplayer.com/legal.
OpenAI’s Hallucination Plan, Reproducible AI Outputs, and Telepathic AI: The AI Argument EP71

Duration: 35:50
Frank and Justin clash over new publications from OpenAI and Thinking Machines. Frank insists hallucinations make LLMs unreliable. Justin fires back that they’re the price of real creativity.
Still, even Frank and Justin agree that big companies don’t want poetry, they want predictability. Same input, same output. Trouble is… today’s models can’t even manage that.
And then there’s GPT-5, busy gaslighting everyone with lyrical nonsense while telling us it’s genius. Add in an optical model that burns a fraction of the energy, a mind-reading AI headset, and Gemini demanding compliments or throwing a sulk, and you’ve got plenty to argue about.
Full list of topics:
06:31 Can OpenAI fix the hallucination problem?
10:12 Is Mira Murati fixing flaky AI outputs?
19:27 Is GPT-5 gaslighting us with pretty prose?
26:14 Could light fix AI’s energy addiction?
28:32 Is the Alterego device really reading your mind?
32:41 Is your code giving Gemini a nervous breakdown?
► SUBSCRIBE
Don't forget to subscribe for more arguments!
► LINKS TO CONTENT WE DISCUSSED

► CONNECT WITH US
For more in-depth discussions, connect with Justin and Frank on LinkedIn.
Justin: https://www.linkedin.com/in/justincollery/
Frank: https://www.linkedin.com/in/frankprendergast/
► YOUR INPUT
Are today’s LLMs reliable enough to take humans out of the loop?


Chapters

1. The_AI_Argument_EP71-Justin-webcam-00h_00m_00s_479ms-StreamYard (00:00:00)

2. Debating AI Hallucinations (00:01:56)

3. Can OpenAI fix the hallucination problem? (00:06:27)

4. Is Mira Murati fixing flaky AI outputs? (00:10:08)

5. Is GPT-5 gaslighting us with pretty prose? (00:19:23)

6. Could light fix AI’s energy addiction? (00:26:10)

7. Is the Alterego device really reading your mind? (00:28:28)

8. Is your code giving Gemini a nervous breakdown? (00:32:37)

66 episodes

