Hyperscale for Video | Stop Asking GPUs to Be Everything at Once

18:42

What if video finally got its own processor, and your streaming costs dropped while quality and features went up?

In this episode, we dig into the rise of the Video Processing Unit (VPU) - silicon built entirely for video - and explore how it’s transforming everything from edge contribution to multi-view sports. Instead of paying for general-purpose compute and GPU graphics overhead, VPUs put every square millimeter of the die to work on encoding, scaling, and compositing. The result is surprising gains in density, power efficiency, and cost.

We look at where GPUs fall short for large-scale streaming and why CPUs hit a wall on cost per channel. Then we follow encoding as it moves into the network, building ABR ladders directly at venues, pushing streams straight to the CDN, and cutting both latency and egress costs. You’ll hear real numbers from cost-normalized tests, including a VPU-powered instance delivering six HEVC ladders for about the cost of one CPU ladder, plus a side-by-side look at AWS VT1/U30 and current VPU options.
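
To make the "cost-normalized" framing concrete, here is a minimal sketch of the arithmetic, assuming placeholder hourly prices that are not figures from the episode; the only number carried over is the roughly six-to-one ratio of HEVC ladders per instance described above.

def cost_per_ladder(hourly_price_usd: float, ladders_per_instance: int) -> float:
    """Spread the instance's hourly price across the ABR ladders it sustains."""
    return hourly_price_usd / ladders_per_instance

# Placeholder prices for illustration only; the 6-vs-1 ladder counts mirror
# the ratio described in the episode.
cpu_cost = cost_per_ladder(hourly_price_usd=1.00, ladders_per_instance=1)
vpu_cost = cost_per_ladder(hourly_price_usd=1.00, ladders_per_instance=6)

print(f"CPU instance: ${cpu_cost:.2f} per HEVC ladder-hour")
print(f"VPU instance: ${vpu_cost:.2f} per HEVC ladder-hour")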

The discussion also covers multi-layer AV1 for dynamic overlays and interactive ad units, and how compact edge servers with SDI capture bring premium live workflows into portable, power-efficient form factors.

We break down practical deployment choices such as U.2 form factors that slide into NVMe bays, mini servers designed for the edge, and PCIe cards for dense racks. Integration remains familiar, with FFmpeg and GStreamer plugins, robust APIs, and a simple application layer for large-scale configuration.
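
As a rough illustration of the FFmpeg-based workflow mentioned above, here is a minimal Python sketch that builds a three-rung HEVC ABR ladder from one input in a single FFmpeg invocation. It uses the stock libx265 software encoder as a stand-in, and the input path, rung resolutions, and bitrates are hypothetical; on a VPU-equipped host you would swap in the vendor's hardware encoder and its plugin options, which are not detailed here.

import subprocess

# Hypothetical contribution feed and ladder rungs (width, height, target bitrate).
INPUT = "venue_feed.mp4"
RUNGS = [(1920, 1080, "6M"), (1280, 720, "3M"), (854, 480, "1.2M")]

cmd = ["ffmpeg", "-y", "-i", INPUT]

# Decode once, split the frames, and scale each branch to its rung.
labels = "".join(f"[v{i}]" for i in range(len(RUNGS)))
graph = f"[0:v]split={len(RUNGS)}{labels};" + ";".join(
    f"[v{i}]scale={w}:{h}[out{i}]" for i, (w, h, _) in enumerate(RUNGS)
)
cmd += ["-filter_complex", graph]

# One HEVC output per rung; libx265 stands in for the hardware encoder.
for i, (_, h, bitrate) in enumerate(RUNGS):
    cmd += ["-map", f"[out{i}]", "-c:v", "libx265", "-b:v", bitrate, f"rung_{h}p.mp4"]

subprocess.run(cmd, check=True)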

The message is clear: when video runs on purpose-built silicon, you unlock hyperscale streaming capabilities - multi-view, AV1 interactivity, UHD ladders - at a cost that finally makes business sense. If you’re rethinking your pipeline or planning your next live event, this is your field guide to the new streaming stack.

If this episode gives you new ideas for your workflow, follow the show, share it with your team, and leave a quick review so others can find it.

Key topics
• GPUs, CPUs, and VPUs - why video needs purpose-built silicon
• What 100% video-dedicated silicon enables for density and power
• Encoding inside the network to cut latency and egress
• Multi-layer AV1 for interactive ads and overlays
• Multi-view sports made affordable and reliable
• Edge contribution from venues using compact servers
• Product lineup: U.2, mini, and PCIe form factors
• Benchmarks comparing CPU, VPU, and AWS VT1/U30
• Cloud options with Akamai and i3D, including egress math
• Integration with FFmpeg, GStreamer, SDKs, and Bitstreams

Download presentation: https://info.netint.com/hubfs/downloads/IBC25-VPU-Introduction.pdf

Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.

Chapters

1. Setting The Stakes: Hyperscale Streaming (00:00:00)

2. Why GPUs Fall Short For Video (00:00:26)

3. Defining The VPU Category (00:01:56)

4. Who Else Builds VPUs? (00:03:14)

5. Real-World Use Cases Overview (00:04:13)

6. Encoding In-Network: IaaS Shift (00:04:52)

7. Pricing Example: 32 Live Streams (00:06:00)

8. Multi‑Layer AV1 & Interactive Ads (00:07:05)

9. Edge Live Contribution From Venues (00:08:18)

10. Multi‑View Sports At Scale (00:09:33)

11. Product Line: T1U/T1M/T1A/T2A (00:10:57)

12. Performance & Power Efficiency (00:12:28)

13. Servers, NVMe, And Easy Integration (00:13:30)

14. Cloud Options: Akamai And i3D (00:14:50)

15. Cost-Normalized Benchmarks Vs CPU (00:16:20)

16. Comparing AWS VT1/U30 (00:18:05)
