Content provided by Dr. Andrew Clark & Sid Mangalik, Dr. Andrew Clark, and Sid Mangalik. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Dr. Andrew Clark & Sid Mangalik, Dr. Andrew Clark, and Sid Mangalik or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
LLM scaling: Is GPT-5 near the end of exponential growth?

22:42
 

The release of OpenAI GPT-5 marks a significant turning point in AI development, but maybe not the one most enthusiasts had envisioned. The latest version seems to reveal the natural ceiling of current language model capabilities with incremental rather than revolutionary improvements over GPT-4.

Sid and Andrew call back to some of the model-building basics that have led to this point to give their assessment of the early days of the GPT-5 release.
  • AI's version of Moore's Law is slowing down dramatically with GPT-5
  • OpenAI appears to be experiencing an identity crisis, uncertain whether to target consumers or enterprises
  • Running out of human-written data is a fundamental barrier to continued exponential improvement
  • Synthetic data cannot provide the same quality as original human content
  • Health-related usage of LLMs presents particularly dangerous applications
  • Users developing dependencies on specific model behaviors face disruption when models change
  • Model outputs are now being verified rather than just inputs, representing a small improvement in safety
  • The next phase of AI development may involve revisiting reinforcement learning and expert systems
  • Review the GPT-5 system card for further information

Follow The AI Fundamentalists on your favorite podcast app for more discussions on the direction of generative AI and building better AI systems.

This summary was AI-generated from the original transcript of the podcast that is linked to this episode.

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

Chapters

1. Introduction to GPT-5 discussion (00:00:00)

2. Moore's Law and AI (00:02:15)

3. OpenAI's market identity crisis (00:04:48)

4. LLMs in healthcare: Worth the risk? (00:07:34)

5. The synthetic data problem (00:10:20)

6. The high watermark of LLM hype (00:15:10)

7. The future of AI beyond LLMs (00:20:54)

36 episodes
