Reasoning Models Are Bad at Thinking: Benchmarking LLMs for Medical and Physical World Understanding
Today we explore how artificial intelligence may be rushing to conclusions instead of thinking deeply, as researchers discover that language models often jump between thoughts too quickly to solve complex problems. Scientists are developing new techniques to make AI pause and ponder, while a challenging new medical exam reveals just how far these systems still need to go to match human doctors' careful reasoning. These stories raise important questions about balancing AI's speed with the methodical thinking needed for critical tasks in healthcare and beyond.

Links to all the papers we discussed:

- Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs
- Streaming DiLoCo with Overlapping Communication: Towards a Distributed Free Lunch
- MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding
- PhysBench: Benchmarking and Enhancing Vision-Language Models for Physical World Understanding
- WILDCHAT-50M: A Deep Dive Into the Role of Synthetic Data in Post-Training
- Large Language Models Think Too Fast To Explore Effectively