Episode 405 - NVIDIA SuperPod Updates - Fall 2025

46:36
 
Content provided by Tech ONTAP Podcast. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Tech ONTAP Podcast or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
We’re diving deep into AI at scale with NetApp experts Bobby Oomen and David Arnette. From NVIDIA SuperPod (think AI factories powering massive LLM training) to FlexPod solutions that bring inference into everyday enterprise workloads, we unpack what’s happening at the cutting edge of AI infrastructure. You’ll hear how NetApp and NVIDIA are collaborating to solve one of AI’s biggest challenges—data management—with tools like SnapMirror, FlexCache, and FlexClone. We also explore why inference is becoming just as important as training (if not more so), and what that shift means for enterprises looking to integrate AI into their operations. Whether you’re curious about NVIDIA Cloud Partner (NCP) offerings, KV cache innovations, or how Cisco and NetApp are pushing FlexPod into the AI era, this episode is packed with insights you won’t want to miss. Tune in to learn how enterprises can scale AI securely, efficiently, and flexibly—with NetApp at the core.

430 episodes
