Content provided by PocketPod. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by PocketPod or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
AI Models Get Smaller But Mightier, Language Models Learn Social Skills, and Memory Upgrades Promise Smarter AI

10:20
 
Manage episode 466317411 series 3568650
In a surprising turn of events, researchers discover that smaller AI models can outperform their massive counterparts when given the right tools, challenging the "bigger is better" assumption in artificial intelligence. Meanwhile, AI systems are learning to navigate complex social situations and engage in natural conversations, and new memory-enhanced models show dramatic improvements in reasoning abilities, developments that could reshape how we think about machine intelligence and its role in society.

Links to all the papers we discussed:

SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data Annotators
Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling
Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning
Training Language Models for Social Deduction with Multi-Agent Reinforcement Learning
CODESIM: Multi-Agent Code Generation and Problem Solving through Simulation-Driven Planning and Debugging
LM2: Large Memory Models

145 episodes

