General-Purpose Reinforcement Learning, Speech Tech Gets More Human, and LLMs on Mobile Devices
Today's tech landscape sees major breakthroughs as researchers unveil new AI models that can process million-token contexts while making speech generation more natural than ever. As these advances reshape how machines understand and communicate with humans, a parallel push in mobile computing shows how complex AI systems are being streamlined for the devices in our pockets, potentially transforming how we interact with technology in our daily lives.

Links to all the papers we discussed: Baichuan-Omni-1.5 Technical Report; Qwen2.5-1M Technical Report; Towards General-Purpose Model-Free Reinforcement Learning; ARWKV: Pretrain is not what we need, an RNN-Attention-Based Language Model Born from Transformer; Emilia: A Large-Scale, Extensive, Multilingual, and Diverse Dataset for Speech Generation; iFormer: Integrating ConvNet and Transformer for Mobile Application