How Can We Solve Observability's Data Capture and Spending Problem?

22:21
 
DevOps practitioners — whether developers, operators, SREs or business stakeholders — increasingly rely on telemetry to guide decisions, yet face growing complexity, siloed teams and rising observability costs. In a conversation at KubeCon + CloudNativeCon North America, IBM’s Jacob Yackenovich emphasized the importance of collecting high-granularity, full-capture data to avoid missing critical performance signals across hybrid application stacks that blend legacy and cloud-native components. He argued that observability must evolve to serve both technical and nontechnical users, enabling teams to focus on issues based on real business impact rather than subjective judgment.
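
The episode does not prescribe a specific toolchain, but in an OpenTelemetry-based stack, "full capture" roughly translates to running an always-on sampler instead of head sampling, so no trace is dropped before it can be tied to a business-impacting incident. A minimal sketch, with the service name and collector endpoint as illustrative placeholders:

```python
# Minimal sketch: full-capture tracing with the OpenTelemetry Python SDK.
# ALWAYS_ON records every span rather than head-sampling a percentage, one way
# to realize the "high-granularity, full-capture" approach discussed here.
# The service name and endpoint below are placeholders, not values from the episode.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ALWAYS_ON
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(
    sampler=ALWAYS_ON,  # keep every trace; no head sampling
    resource=Resource.create({"service.name": "checkout-service"}),
)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317"))
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)
```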

AI’s rapid integration into applications introduces new observability challenges. Yackenovich described two patterns: add-on AI services, such as chatbots, whose failures don’t disrupt core workflows, and blocking-style AI components embedded in essential processes like fraud detection, where errors directly affect application function.
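
The episode draws this distinction at the architecture level rather than in code. As a hedged sketch of how the two patterns differ once instrumented, the fraud model, chatbot client and rejection error below are illustrative stubs: the blocking step lets its error fail the transaction, while the add-on step records the error on its span and lets the core workflow continue.

```python
# Hedged sketch of the two AI patterns described in the episode. The fraud
# scorer, product-suggestion call and RejectedOrder error are stand-ins, not
# a real API from the conversation.
import random
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer(__name__)

class RejectedOrder(Exception):
    """Raised when the blocking fraud check rejects a transaction."""

def score_fraud(order: dict) -> float:
    return random.random()  # stand-in for a real model call

def suggest_products(order: dict) -> list[str]:
    raise TimeoutError("chatbot backend unavailable")  # simulate an add-on failure

def handle_checkout(order: dict) -> list[str]:
    # Blocking-style AI: fraud scoring sits in the critical path, so an error
    # or rejection here is allowed to fail the whole transaction.
    with tracer.start_as_current_span("fraud_check") as span:
        score = score_fraud(order)
        span.set_attribute("fraud.score", score)
        if score > 0.9:
            raise RejectedOrder(f"order {order['id']} rejected")

    # Add-on AI: the chatbot-style recommendation is best effort; record the
    # failure on its own span, but keep the core workflow alive.
    with tracer.start_as_current_span("chat_suggestions") as span:
        try:
            return suggest_products(order)
        except Exception as exc:
            span.record_exception(exc)
            span.set_status(Status(StatusCode.ERROR))
            return []
```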

Rising cloud and ingestion costs further complicate telemetry strategies. Yackenovich cautioned against limiting visibility for budget reasons, advocating instead for predictable, fixed-price observability models that let organizations innovate without financial uncertainty.

Learn more from The New Stack about the latest in observability:

Introduction to Observability

Observability 2.0? Or Just Logs All Over Again?

Building an Observability Culture: Getting Everyone Onboard

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
