EP241 From Black Box to Building Blocks: More Modern Detection Engineering Lessons from Google

Duration: 31:33
 

Guest:

Topics:

  • On the 3rd anniversary of Curated Detections, you've grown from 70 rules to over 4700. Can you walk us through that journey? What were some of the key inflection points and what have been the biggest lessons learned in scaling a detection portfolio so massively?
  • Historically the SecOps Curated Detection content was opaque, which led to, understandably, a bit of customer friction. We've recently made nearly all of that content transparent and editable by users. What were the challenges in that transition?
  • You make a distinction between "Detection-as-Code" and a more mature "Software Engineering" paradigm. What gets better for a security team when they move beyond just version control and a CI/CD pipeline and start incorporating things like unit testing, readability reviews, and performance testing for their detections? (A minimal illustrative testing sketch follows this list.)
  • The idea of a "Goldilocks Zone" for detections is intriguing – not too many, not too few. How do you find that balance, and what are the metrics that matter when measuring the effectiveness of a detection program? You mentioned that customer feedback is important, but that a confusion matrix isn't possible. Why is that? (See the short note after this list.)
  • You talk about enabling customers to use your "building blocks" to create their own detections. Can you give us a practical example of how a customer might use a building block for something like detecting VPN and Tor traffic to augment their security? (A composition sketch follows this list.)
  • You have started using LLMs for reviewing the explainability of human-generated metadata. Can you expand on that? What have you found are the ripe areas for AI in detection engineering, and can you share any anecdotes of where AI has succeeded and where it has failed?
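
Since the testing question above may be abstract if you haven't seen it done, here is a minimal sketch of unit-testing a detection, with the rule modeled as a plain Python predicate over an event dict. This illustrates the general technique only, not the SecOps implementation discussed in the episode; all field names and values are invented.

```python
# Hypothetical illustration of unit-testing a detection rule.
# The rule is modeled as a plain Python predicate over a UDM-like
# event dict; real curated detections are written in a rule language
# (YARA-L in Google SecOps), but the testing idea is the same:
# replay known-good and known-bad events and assert on the verdicts.

def detects_suspicious_ssh(event: dict) -> bool:
    """Fires on SSH connections from an IP outside the (invented) corporate range."""
    return (
        event.get("target_port") == 22
        and not event.get("principal_ip", "").startswith("10.")
    )

def test_fires_on_external_ssh():
    event = {"target_port": 22, "principal_ip": "203.0.113.7"}
    assert detects_suspicious_ssh(event)

def test_silent_on_internal_ssh():
    event = {"target_port": 22, "principal_ip": "10.0.0.5"}
    assert not detects_suspicious_ssh(event)

def test_silent_on_other_ports():
    event = {"target_port": 443, "principal_ip": "203.0.113.7"}
    assert not detects_suspicious_ssh(event)
```

Run under pytest in CI, a rule change that alters expected behavior fails the pipeline; that regression safety net is the step beyond bare version control that the question alludes to.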
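
On the confusion-matrix point, one plausible reading (my framing, not necessarily the guest's) is that precision is measurable from analyst-triaged alerts, while recall is not, because false negatives are attacks that produced no alert and so are never observed:

```python
# Why a full confusion matrix is out of reach: true and false
# positives can be counted from triaged alerts, but false negatives
# (attacks that produced no alert) are, by definition, unobserved.
true_positives = 42   # illustrative: alerts confirmed malicious
false_positives = 8   # illustrative: alerts triaged as benign

precision = true_positives / (true_positives + false_positives)
print(f"precision = {precision:.2f}")  # measurable: 0.84

# recall = TP / (TP + FN) -- FN is unknowable without ground truth
# for every attack attempted against every customer.
```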
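
To make the building-block idea concrete, here is a hypothetical composition sketch: a vendor-maintained VPN/Tor classifier reused inside a customer-specific rule. All names and IPs are invented (the addresses come from documentation ranges); in Google SecOps this style of reuse would happen through shared rule content and reference lists rather than Python, so treat this purely as the shape of the pattern.

```python
# Hypothetical "building block": a reusable classifier maintained by
# the vendor, composed into a customer-specific detection.

TOR_EXIT_NODES = {"198.51.100.23", "198.51.100.77"}  # illustrative
KNOWN_VPN_RANGES = ("192.0.2.",)                     # illustrative

def is_tor_or_vpn(event: dict) -> bool:
    """Vendor-maintained block: flags anonymized egress traffic."""
    ip = event.get("target_ip", "")
    return ip in TOR_EXIT_NODES or ip.startswith(KNOWN_VPN_RANGES)

def finance_host_anonymized_egress(event: dict) -> bool:
    """Customer rule: reuse the block, add org-specific context."""
    return is_tor_or_vpn(event) and event.get("host_group") == "finance"
```

The point of the pattern: the customer never maintains the Tor/VPN intelligence themselves; they only express the organization-specific condition, which is the augmentation the question describes.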

Resources
