Mixture of Experts Unpacked: The Sparse Engine Behind Today's Giant AI Models

6:04

Content provided by Mike Breault. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Mike Breault or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

A deep dive into Mixture of Experts (MoE): how sparse routing selects a tiny subset of experts for each input, enabling trillion-parameter models to run efficiently. We trace the idea from early Meta-Pi networks to modern neural sparsity, explore load-balancing tricks, and see how MoE powers NLP, vision, and diffusion models. A practical guide to why selective computation is reshaping scalable AI.
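
The episode itself contains no code, but the core mechanism it describes, top-k sparse routing plus a load-balancing objective, fits in a short sketch. The PyTorch snippet below is a minimal illustration under assumed details (a softmax router, k-of-E gating, and a Switch-Transformer-style auxiliary loss); every name and hyperparameter in it is hypothetical, not taken from the episode.

```python
# Illustrative sketch only: a top-k sparse MoE layer in PyTorch, assuming
# softmax routing and a Switch-Transformer-style load-balancing loss.
# None of these names or hyperparameters come from the episode.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, num_experts=8, k=2):
        super().__init__()
        self.num_experts, self.k = num_experts, k
        # Router: scores every expert for every token.
        self.router = nn.Linear(d_model, num_experts)
        # Experts: independent feed-forward nets; parameters scale with
        # num_experts, but each token only pays compute for k of them.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts))

    def forward(self, x):                                # x: (tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)        # (tokens, E)
        gate, idx = probs.topk(self.k, dim=-1)           # (tokens, k)
        gate = gate / gate.sum(-1, keepdim=True)         # renormalize top-k gates

        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            hit = (idx == e)                             # (tokens, k) bool
            rows = hit.any(-1)                           # tokens routed to e
            if rows.any():
                w = (gate * hit).sum(-1)[rows].unsqueeze(-1)
                out[rows] += w * expert(x[rows])         # only these rows run

        # Load-balancing auxiliary loss: fraction of tokens dispatched to each
        # expert times its mean router probability. It is minimized by uniform
        # routing, which discourages collapse onto a few popular experts.
        frac = F.one_hot(idx, self.num_experts).float().sum(1).mean(0)  # (E,)
        aux = self.num_experts * (frac * probs.mean(0)).sum()
        return out, aux

moe = SparseMoE()
y, aux_loss = moe(torch.randn(32, 64))  # 32 tokens; add aux_loss to the task loss
```

Because only k experts run per token, doubling num_experts roughly doubles the parameter count while leaving per-token compute almost unchanged, which is the property that makes trillion-parameter MoE models practical.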

Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.

Sponsored by Embersilk LLC


1380 episodes

