Content provided by Varun Kumar. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Varun Kumar or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
MITRE ATLAS Framework - Securing AI Systems

17:27
 

Welcome to a crucial episode where we delve into the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) Framework, an exhaustive knowledge base designed to secure our increasingly AI-dependent world.

As AI and machine learning become foundational across healthcare, finance, and cybersecurity, protecting these systems from unique threats is paramount.

Unlike MITRE ATT&CK, which focuses on traditional IT systems, MITRE ATLAS is specifically tailored for AI-specific risks, such as adversarial inputs and model theft. It provides a vital resource for understanding and defending against the unique vulnerabilities of AI systems.

In this episode, we'll break down the core components of MITRE ATLAS:

Tactics: These are the high-level objectives of attackers – the "why" behind their actions.

MITRE ATLAS outlines 14 distinct tactics that attackers use to compromise AI systems, including Reconnaissance (gathering information on the AI system), Initial Access (gaining entry into the AI environment), ML Model Access (gaining access to the machine learning model itself), Persistence (maintaining access over time), Privilege Escalation (gaining higher-level permissions), and Defense Evasion (bypassing security controls).

Other tactics include Credential Access, Discovery, Lateral Movement, Collection, Command and Control, Exfiltration, Impact, and ML Attack Staging.

Techniques: These are the specific methods and actions adversaries use to carry out their tactics – the "how". We'll explore critical techniques like Data Poisoning, where malicious data is introduced into training sets to alter model behavior; Prompt Injection, where crafted inputs manipulate language models into producing harmful outputs; and Model Inversion, which reconstructs sensitive training data from a model's outputs.
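Data Poisoning is easiest to see with a toy sketch. Everything below is illustrative and hypothetical, not taken from ATLAS: a nearest-centroid classifier is trained once on clean labels and once on labels an attacker has flipped, and the same input ends up classified differently.

```python
import numpy as np

# Toy data-poisoning sketch (illustrative only): a nearest-centroid
# classifier trained on clean labels vs. labels an attacker flipped.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def fit_centroids(X, y):
    """One centroid per class; prediction picks the nearest one."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

clean = fit_centroids(X, y)

# Attacker flips 40 of the class-0 labels to class 1, dragging the
# class-1 centroid toward class 0's region of the input space.
y_poisoned = y.copy()
y_poisoned[:40] = 1
poisoned = fit_centroids(X, y_poisoned)

probe = np.array([-0.5, -0.5])      # a point near the true boundary
print(predict(clean, probe))        # prediction with clean training data: 0
print(predict(poisoned, probe))     # same point after poisoning: 1
```

The attacker never touches the model code; corrupting the training labels alone is enough to move the decision boundary.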

Other key techniques to watch out for include Model Extraction (reverse-engineering or stealing proprietary AI models) and Adversarial Examples (subtly altered inputs that trick AI models into making errors).
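Adversarial Examples can be sketched in a few lines on a linear model (the weights and input below are made up for illustration): nudging the input a small amount in the direction that most increases the model's loss flips its prediction, even though the change is tiny.

```python
import numpy as np

# FGSM-style sketch on a toy linear classifier (hypothetical weights):
# perturb the input against the score gradient to flip the prediction.
w = np.array([1.0, -2.0])   # classifier weights
b = 0.1                     # bias

def predict(x):
    return 1 if x @ w + b > 0 else 0

x = np.array([0.5, 0.1])    # score = 0.5 - 0.2 + 0.1 = 0.4  ->  class 1
eps = 0.3                   # perturbation budget

# For a linear score w.x + b, moving each feature by -eps * sign(w)
# decreases the score as fast as possible within the budget.
x_adv = x - eps * np.sign(w)

print(predict(x))      # original prediction: 1
print(predict(x_adv))  # after a small, targeted perturbation: 0
```

The same idea, applied to the gradients of a deep network, is what lets visually indistinguishable images fool image classifiers.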

We'll also examine real-world case studies, such as the Evasion of a Machine Learning Malware Scanner (Cylance Bypass), where attackers used reconnaissance and adversarial input crafting to bypass detection by studying public documentation and model APIs.

Another notable example is the OpenAI vs. DeepSeek model distillation controversy, in which extensive querying of a target model allegedly enabled model extraction and intellectual property theft.
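The mechanics of query-based extraction can be shown with a deliberately simple sketch. The "proprietary" target below is a hypothetical linear model, nothing like the real systems in the case above: the attacker only sees input/output pairs, yet recovers the parameters exactly.

```python
import numpy as np

# Model-extraction sketch (hypothetical linear target, illustrative only):
# the attacker queries a black box and fits a surrogate that mimics it.
def target_model(X):
    """Black box: the attacker never sees these parameters."""
    secret_w = np.array([3.0, -1.5])
    return X @ secret_w + 0.25

rng = np.random.default_rng(1)
X_query = rng.normal(size=(200, 2))    # attacker-chosen queries
y_query = target_model(X_query)        # observed responses

# Least squares on [X | 1] recovers both the weights and the bias.
A = np.hstack([X_query, np.ones((200, 1))])
stolen, *_ = np.linalg.lstsq(A, y_query, rcond=None)
print(np.round(stolen, 3))   # ~[ 3.  -1.5  0.25] -- the "secret" parameters
```

Real models need far more queries and yield only approximate copies, but the principle is the same: every answered query leaks information about the model, which is why rate limiting and query monitoring appear among the mitigations below.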

To safeguard AI systems, MITRE ATLAS emphasizes robust security controls and best practices. Key mitigation strategies include:

Securing Training Pipelines to protect data integrity and restrict access to prevent poisoning or extraction attempts.

Continuously Monitoring Model Outputs for anomalies indicating adversarial manipulation or extraction attempts.

Validating Data Integrity through regular audits of datasets and model behavior to detect unexpected changes or suspicious activity.
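One way to make the output-monitoring idea concrete is a simple statistical check (the baseline figures and threshold below are illustrative assumptions, not values prescribed by ATLAS): flag a client whose batch of output scores drifts far from the model's normal distribution.

```python
import numpy as np

# Output-monitoring sketch (illustrative baseline and threshold):
# flag query batches whose mean score is an extreme outlier versus
# normal traffic -- a common signal of probing or extraction attempts.
mu, sigma = 0.0, 1.0    # assumed score distribution under normal traffic

def is_anomalous(scores, z_threshold=4.0):
    """Z-test of the batch mean against the baseline distribution."""
    z = abs(np.mean(scores) - mu) / (sigma / np.sqrt(len(scores)))
    return z > z_threshold

normal_batch = np.sin(np.arange(100))          # hovers around the baseline
probing_batch = np.sin(np.arange(100)) + 1.5   # systematically shifted queries

print(is_anomalous(normal_batch))    # False
print(is_anomalous(probing_batch))   # True
```

Production monitoring would track richer signals (per-client query rates, input similarity, confidence distributions), but even a crude drift check like this catches systematic boundary probing that individual queries would not reveal.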

Join us as we discuss how the MITRE ATLAS Framework transforms AI security, providing practical guidance to defend against the evolving threat landscape.

You'll learn why it's crucial for every organization to embrace this framework, contribute to threat intelligence, and engage with the wider AI security community to secure AI as a tool of innovation, not exploitation.

The Certified AI Security Professional Course comprehensively covers the MITRE ATLAS Framework, offering hands-on experience in implementing these defenses effectively.
