Content provided by Intel Corporation. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Intel Corporation or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

From Cloud Dependency to Local Intelligence: The Future of Accessible AI

24:55
 
Manage episode 516349916 series 3570727
As AI models grow more powerful, the question of where they run is becoming just as important as what they do. In this episode, Brandon Weng, Co-Founder and CEO of Fluid Inference, unpacks what it takes to move AI from massive data centers to everyday devices—and why that shift matters.

Brandon shares the story behind Fluid Inference, a company focused on making it easier for developers to deploy large AI models like transformers on consumer hardware. From pivoting away from his previous project, Slipbox, to the technical and philosophical choices that shaped Fluid's direction, he walks us through the thinking behind local-first AI. We explore the tradeoffs between cloud-based and on-device inference—touching on privacy, cost, control, and performance—and the hardware breakthroughs that are making edge AI more viable, including integrated NPUs in devices like Intel's AI PCs.

#EdgeAI #OnDeviceInference #AIOptimization #PrivacyFirst #OpenSourceAI #LocalAI

133 episodes


