Content provided by Arize AI. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Arize AI or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://podcastplayer.com/legal.
The Illusion of Thinking: What the Apple AI Paper Says About LLM Reasoning

30:35
 
Manage episode 489949389 series 3448051
This week we discuss The Illusion of Thinking, a new paper from researchers at Apple that challenges today’s evaluation methods and introduces a new benchmark: synthetic puzzles with controllable complexity and clean logic.

Their findings? Large Reasoning Models (LRMs) show surprising failure modes, including a complete collapse on high-complexity tasks and a decline in reasoning effort as problems get harder.

Dylan and Parth dive into the paper's findings as well as the debate around it, including a response paper aptly titled "The Illusion of the Illusion of Thinking."
Read the paper: The Illusion of Thinking
Read the response: The Illusion of the Illusion of Thinking
Explore more AI research and sign up for future readings

Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
