Content provided by Craig S. Smith. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Craig S. Smith or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
#275 Nandan Nayampally: How Baya Systems is Fixing the Biggest Bottleneck in AI Chips (Data Flow)

46:33
 
What if the biggest challenge in AI isn't how fast chips can compute, but how quickly data can move? In this episode of Eye on AI, Nandan Nayampally, Chief Commercial Officer at Baya Systems, shares how the next era of computing is being shaped by smarter architecture, not just raw processing power. With experience leading teams at ARM, Amazon Alexa, and BrainChip, Nandan brings a rare perspective on how modern chip design is evolving. We dive into the world of chiplets, network-on-chip (NoC) technology, silicon photonics, and neuromorphic computing. Nandan explains why the traditional path of scaling transistors is no longer enough, and how Baya Systems is solving the real bottlenecks in AI hardware through efficient data movement and modular design. From punch cards to AGI, this conversation maps the full arc of computing innovation. If you want to understand how to build hardware for the future of AI, this episode is a must-listen.

Subscribe to Eye on AI for more conversations on the future of artificial intelligence and system design.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI

(00:00) Why AI’s Bottleneck Is Data Movement
(01:26) Nandan’s Background and Semiconductor Career
(03:06) What Baya Systems Does: Network-on-Chip + Software
(08:40) A Brief History of Computing: From Punch Cards to AGI
(11:47) Silicon Photonics and the Evolution of Data Transfer
(20:04) How Baya Is Solving Real AI Hardware Challenges
(22:13) Understanding CPUs, GPUs, and NPUs in AI Workloads
(24:09) Building Efficient Chips: Cost, Speed, and Customization
(27:17) Performance, Power, and Area (PPA) in Chip Design
(30:55) Partnering to Build Next-Gen Photonic and Copper Systems
(32:29) Why Moore’s Law Has Slowed and What Comes Next
(34:49) Wafer-Scale vs Traditional Die: Where Baya Fits In
(36:10) Chiplet Stacking and Composability Explained
(39:44) The Future of On-Chip Networking
(41:10) Neuromorphic Computing: Energy-Efficient AI
(43:02) Edge AI, Small Models, and Structured State Spaces


279 episodes
