TLCTC Blog - 2025/03/14

Mapping Adversarial ML (AML) Techniques to the Top Level Cyber Threat Clusters

This blog post proposes a novel mapping of MITRE ATLAS™ adversarial machine learning techniques and controls to the Top Level Cyber Threat Clusters (TLCTC) framework. By integrating these frameworks, organizations can achieve a more structured, cause-oriented approach to AI/ML security, bridging the gap between technical vulnerabilities and strategic risk management.

Introduction

The rapidly evolving field of Artificial Intelligence and Machine Learning (AI/ML) introduces unique security challenges that must be understood within existing cybersecurity frameworks. This article maps the MITRE ATLAS™ Adversarial ML (AML) techniques and controls to the Top Level Cyber Threat Clusters (TLCTC) framework, providing security professionals with a structured approach to understand and address ML-specific threats within their broader security strategy.

The TLCTC framework organizes cyber threats into 10 distinct clusters based on the generic vulnerabilities they exploit, rather than by observed events or outcomes. This cause-oriented approach provides a more effective foundation for understanding adversarial ML techniques and implementing appropriate controls.

Mapping Overview

The mapping demonstrates how AI/ML-specific threats align with the fundamental threat categories defined in the TLCTC framework. Each AML technique is mapped to one or more threat clusters based on the generic vulnerability being exploited.

MITRE ATLAS ID (AML.T...) | Technique Name | TLCTC # | TLCTC Name | Justification (Based on Generic Vulnerability)
AML.T0000 Search for Victim's Publicly Available Research Materials Detection Indicator N/A Not a direct threat, but reconnaissance/preparation activity. Indicates a need for improved preventative and detective controls to reduce the likelihood and impact of future attacks. This activity provides intelligence to the adversary, increasing the risk of subsequent attacks.
AML.T0000.00 Search for Victim's Publicly Available Research Materials: Journals and Conference Proceedings Detection Indicator N/A Not a direct threat, but reconnaissance/preparation activity. Indicates a need for improved preventative and detective controls to reduce the likelihood and impact of future attacks. This activity provides intelligence to the adversary, increasing the risk of subsequent attacks.
AML.T0000.01 Search for Victim's Publicly Available Research Materials: Pre-Print Repositories Detection Indicator N/A Not a direct threat, but reconnaissance/preparation activity. Indicates a need for improved preventative and detective controls to reduce the likelihood and impact of future attacks. This activity provides intelligence to the adversary, increasing the risk of subsequent attacks.
AML.T0000.02 Search for Victim's Publicly Available Research Materials: Technical Blogs Detection Indicator N/A Not a direct threat, but reconnaissance/preparation activity. Indicates a need for improved preventative and detective controls to reduce the likelihood and impact of future attacks. This activity provides intelligence to the adversary, increasing the risk of subsequent attacks.
AML.T0001 Search for Publicly Available Adversarial Vulnerability Analysis Detection Indicator N/A Not a direct threat, but reconnaissance/preparation activity. Indicates a need for improved preventative and detective controls to reduce the likelihood and impact of future attacks. This activity provides intelligence to the adversary, increasing the risk of subsequent attacks.
AML.T0003 Search Victim-Owned Websites Detection Indicator N/A Not a direct threat, but reconnaissance/preparation activity. Indicates a need for improved preventative and detective controls to reduce the likelihood and impact of future attacks. This activity provides intelligence to the adversary, increasing the risk of subsequent attacks.
AML.T0004 Search Application Repositories Detection Indicator N/A Not a direct threat, but reconnaissance/preparation activity. Indicates a need for improved preventative and detective controls to reduce the likelihood and impact of future attacks. This activity provides intelligence to the adversary, increasing the risk of subsequent attacks.
AML.T0006 Active Scanning #1 Abuse of Functions Exploits the intended functionality of network scanning tools, but uses them for malicious purposes. The intent is malicious, even if the tool itself is legitimate.
AML.T0002 Acquire Public ML Artifacts Detection Indicator N/A Reconnaissance/preparation activity, not a direct threat. Indicates a lack of detection and prevention capabilities.
AML.T0002.00 Acquire Public ML Artifacts: Datasets Detection Indicator N/A Reconnaissance/preparation activity, not a direct threat. Indicates a lack of detection and prevention capabilities.
AML.T0002.01 Acquire Public ML Artifacts: Models Detection Indicator N/A Reconnaissance/preparation activity, not a direct threat. Indicates a lack of detection and prevention capabilities.
AML.T0016 Obtain Capabilities Detection Indicator N/A Reconnaissance/preparation activity, not a direct threat. Indicates a lack of detection and prevention capabilities.
AML.T0016.00 Obtain Capabilities: Adversarial ML Attack Implementations Detection Indicator N/A Reconnaissance/preparation activity, not a direct threat. Indicates a lack of detection and prevention capabilities.
AML.T0016.01 Obtain Capabilities: Software Tools Detection Indicator N/A Reconnaissance/preparation activity, not a direct threat. Indicates a lack of detection and prevention capabilities.
AML.T0016.02 Obtain Capabilities: Generative AI Detection Indicator N/A Reconnaissance/preparation activity, not a direct threat. Indicates a lack of detection and prevention capabilities.
AML.T0017 Develop Capabilities Detection Indicator N/A Preparation activity, not a direct threat. Indicates a lack of detection and prevention capabilities.
AML.T0017.00 Develop Capabilities: Adversarial ML Attacks Detection Indicator N/A Preparation activity, not a direct threat. Indicates a lack of detection and prevention capabilities.
AML.T0008 Acquire Infrastructure Detection Indicator N/A Preparation activity, not a direct threat. Indicates a lack of detection and prevention capabilities.
AML.T0008.00 Acquire Infrastructure: ML Development Workspaces Detection Indicator N/A Preparation activity, not a direct threat. Indicates a lack of detection and prevention capabilities.
AML.T0008.01 Acquire Infrastructure: Consumer Hardware #8 Physical Attack The adversary owns the environment by owning the hardware; this relates to physical access.
AML.T0008.02 Acquire Infrastructure: Domains Detection Indicator N/A Preparation activity, not a direct threat. Indicates a lack of detection and prevention capabilities.
AML.T0008.03 Acquire Infrastructure: Physical Countermeasures #8 Physical Attack The adversary uses physical hardware for attacks.
AML.T0019 Publish Poisoned Datasets #10 Supply Chain Attack The published poisoned data is part of the supply chain. The vulnerability is not in the data itself, but how it is trusted and used by the end user.
AML.T0010 ML Supply Chain Compromise #10 Supply Chain Attack This is the core definition of a supply chain attack: compromising a system by compromising a component it relies on.
AML.T0010.00 ML Supply Chain Compromise: Hardware #8 Physical Attack The attack targets the hardware.
AML.T0010.01 ML Supply Chain Compromise: ML Software #10 Supply Chain Attack The vulnerable framework is a part of the supply chain.
AML.T0010.02 ML Supply Chain Compromise: Data #10 Supply Chain Attack The poisoned data is part of the supply chain.
AML.T0010.03 ML Supply Chain Compromise: Model #10 Supply Chain Attack The poisoned model is part of the supply chain.
AML.T0040 AI Model Inference API Access #1 Abuse of Functions The attacker is using a legitimate access method (the API), but for malicious purposes. The API is designed to provide access; the abuse is in how that access is used.
AML.T0047 ML-Enabled Product or Service #1 Abuse of Functions The attacker leverages a product with hidden ML features.
AML.T0041 Physical Environment Access #8 Physical Attack The attacker gains access to the physical environment.
AML.T0044 Full ML Model Access #4, #1 Identity Theft, Abuse of Functions Primarily involves credential theft (#4) to gain full access, and then abuses the intended functionality (#1) of the model or its management interface.
AML.T0013 Discover ML Model Ontology #1 Abuse of Functions Exploits legitimate functionality by querying the model to reveal its output space (ontology). The vulnerability lies in the model's intended behavior of responding to queries, even if those queries are designed to extract information beyond typical use.
AML.T0014 Discover ML Model Family #1 Abuse of Functions The vulnerability lies in documenting and exposing too much information.
AML.T0020 Poison Training Data #10, #1 Supply Chain Attack, Abuse of Functions A poisoned dataset is by definition a supply chain attack. Poisoning within an obtained system can be classified as an abuse of function.
AML.T0021 Establish Accounts #4 Identity Theft The attacker establishes and uses accounts with various services.
AML.T0005 Create Proxy ML Model #1 Abuse of Functions The adversary is creating a model that mimics the target, not exploiting a flaw in the target itself. The vulnerability lies in the ability to create and train models, and the adversary abuses this ability. The "proxy" nature is important.
AML.T0005.00 Create Proxy ML Model: Train Proxy via Gathered ML Artifacts #1 Abuse of Functions The proxy model is built by abusing the legitimate model-training function, this time with data and information gathered via other abuses or exploits.
AML.T0005.01 Create Proxy ML Model: Train Proxy via Replication #1 Abuse of Functions The proxy model is built using replication.
AML.T0005.02 Create Proxy ML Model: Use Pre-Trained Model #1 Abuse of Functions The adversary leverages the off-the-shelf nature of the pre-trained model.
AML.T0007 Discover ML Artifacts Detection Indicator N/A The attacker abuses access to a private environment to gather more information on ML artifacts. Reconnaissance activity.
AML.T0011 User Execution #9, #7 Social Engineering, Malware Relies on user action (#9) to execute unsafe code, which may be malware (#7).
AML.T0011.00 User Execution: Unsafe ML Artifacts #7 Malware The adversary misuses "serialization" to execute code.
AML.T0011.01 User Execution: Malicious Package #7 Malware The adversary uses "serialization" to execute code via a malicious package.
AML.T0012 Valid Accounts #4 Identity Theft The adversary misuses stolen credentials.
AML.T0015 Evade ML Model #2, #3, #1 Exploiting Server, Exploiting Client, Abuse of Functions Primarily exploits vulnerabilities in how the model service processes inputs (#2). Can be #3 if targeting a client-side model. Secondarily can involve abusing intended model functionality (#1).
AML.T0018 Backdoor ML Model #7 Malware The introduction of a backdoor into a model. The backdoor functionality, once triggered, is a form of malware.
AML.T0018.00 Backdoor ML Model: Poison ML Model #7 Malware A backdoor (malware, #7) introduced by interfering with the model's training process.
AML.T0018.01 Backdoor ML Model: Inject Payload #7 Malware A backdoor (malware, #7) introduced by manipulating the model file.
AML.T0024 Exfiltration via ML Inference API #1 Abuse of Functions The attacker abuses the inference API, which is designed for querying.
AML.T0024.00 Exfiltration via ML Inference API: Infer Training Data Membership #1 Abuse of Functions The attacker is misusing the model’s intended functionality, but for the purpose of uncovering information about the training data, which is not the intended use of model output.
AML.T0024.01 Exfiltration via ML Inference API: Invert ML Model #1 Abuse of Functions The attacker uses the API (a legitimate function) to perform the reconstruction. The vulnerability isn't in the API itself, but in the information leakage that can occur through its normal operation.
AML.T0024.02 Exfiltration via ML Inference API: Extract ML Model #1 Abuse of Functions The adversary misuses the inference API, designed for querying, to extract the model itself. This is an abuse of the intended functionality.
AML.T0025 Exfiltration via Cyber Means See Notes See Notes This technique is too broad. The specific method of exfiltration would determine the primary TLCTC cluster. Any threat cluster (except #6) can be part of an attack that leads to data exfiltration.
AML.T0029 Denial of ML Service #6 Flooding Attack The adversary attacks by sending massive requests to the system.
AML.T0046 Spamming ML System with Chaff Data #6, #1 Flooding Attack, Abuse of Functions Spamming with chaff data both overwhelms system resources (#6) and abuses the system's intended functionality (#1).
AML.T0031 Erode ML Model Integrity #2, #3, #1 Exploiting Server, Exploiting Client, Abuse of Functions Primarily exploits vulnerabilities in how the model service processes inputs (#2). Can be #3 if targeting a client-side model. Secondarily can involve abusing legitimate functions for model updating (#1).
AML.T0034 Cost Harvesting #6 Flooding Attack The attacker floods the system with requests to drive up the victim's costs.
AML.T0035 ML Artifact Collection Detection Indicator N/A The adversary collects models and datasets. Reconnaissance activity.
AML.T0036 Data from Information Repositories #1 Abuse of Functions The use of information repositories is a normal system function, but it's being misused to find valuable, potentially sensitive, information.
AML.T0037 Data from Local System #1 Abuse of Functions The adversary uses normal system functions to collect data from the local system.
AML.T0042 Verify Attack #1 Abuse of Functions The act of verifying the attack is, in itself, an abuse of the system's normal operation, whether it's via an API or an offline copy. The system is being used in a way not intended by the victim.
AML.T0043 Craft Adversarial Data #2, #3, #1 Exploiting Server, Exploiting Client, Abuse of Functions Primarily exploits vulnerabilities in how ML model services process inputs (#2). Can be #3 if the model is running client-side. Can also be #1 if it involves abusing intended model functionality.
AML.T0043.00 Craft Adversarial Data: White-Box Optimization #2, #3, #1 Exploiting Server, Exploiting Client, Abuse of Functions Primarily exploits vulnerabilities in how ML model services process inputs (#2). Can be #3 if the model is running client-side. Can also be #1 if it involves abusing intended model functionality.
AML.T0043.01 Craft Adversarial Data: Black-Box Optimization #2, #3, #1 Exploiting Server, Exploiting Client, Abuse of Functions Primarily exploits vulnerabilities in how ML model services process inputs (#2). Can be #3 if the model is running client-side. Can also be #1 if it involves abusing intended model functionality.
AML.T0043.02 Craft Adversarial Data: Black-Box Transfer #2, #3, #1 Exploiting Server, Exploiting Client, Abuse of Functions Primarily exploits vulnerabilities in how ML model services process inputs (#2). Can be #3 if the model is running client-side. Can also be #1 if it involves abusing intended model functionality.
AML.T0043.03 Craft Adversarial Data: Manual Modification #2, #3, #1 Exploiting Server, Exploiting Client, Abuse of Functions Primarily exploits vulnerabilities in how ML model services process inputs (#2). Can be #3 if the model is running client-side. Can also be #1 if it involves abusing intended model functionality.
AML.T0043.04 Craft Adversarial Data: Insert Backdoor Trigger #7 Malware The attacker crafts data containing a backdoor trigger, which results in malware-like behavior when processed.
AML.T0048 External Harms - - Not a threat, but a consequence of a threat.
AML.T0048.00 External Harms: Financial Harm - - Not a threat, but a consequence of a threat.
AML.T0048.01 External Harms: Reputational Harm - - Not a threat, but a consequence of a threat.
AML.T0048.02 External Harms: Societal Harm - - Not a threat, but a consequence of a threat.
AML.T0048.03 External Harms: User Harm - - Not a threat, but a consequence of a threat.
AML.T0048.04 External Harms: ML Intellectual Property Theft - - Not a threat, but a consequence of a threat.
AML.T0049 Exploit Public-Facing Application #2 Exploiting Server Targeting vulnerabilities in server-side applications.
AML.T0050 Command and Scripting Interpreter #1, #2, #3 Abuse of Functions, Exploiting Server, Exploiting Client Adversaries abuse a legitimate function of the system. The Interpreter may run on client/server/cloud.
AML.T0051 LLM Prompt Injection #2 Exploiting Server Primarily exploits how the LLM service (server) processes inputs (#2). Secondarily can involve misuse of intended prompt functionality (#1).
AML.T0051.00 LLM Prompt Injection: Direct #2 Exploiting Server Directly exploiting LLM's prompt handling vulnerabilities.
AML.T0051.01 LLM Prompt Injection: Indirect #2 Exploiting Server Indirectly exploiting LLM's data ingestion vulnerabilities.
AML.T0052 Phishing #9 Social Engineering Manipulating humans through deception.
AML.T0052.00 Phishing: Spearphishing via Social Engineering LLM #9 Social Engineering Using LLM to manipulate humans through deception.
AML.T0053 LLM Plugin Compromise #10 Supply Chain Attack The compromise occurs through a component (the plugin) that the LLM relies on. This aligns with the definition of #10.
AML.T0054 LLM Jailbreak #2 Exploiting Server Exploiting a vulnerability in the LLM's implementation to bypass restrictions.
AML.T0055 Unsecured Credentials #4 Identity Theft Targeting insecurely stored credentials.
AML.T0056 Extract LLM System Prompt #1, #4 Abuse of Functions, Identity Theft The system prompt is information that can be abused (#1) and even stolen (#4).
AML.T0057 LLM Data Leakage #2 Exploiting Server Exploiting a vulnerability in the LLM's data handling.
AML.T0058 Publish Poisoned Models #7 Malware Publishing a poisoned model amounts to distributing malware.
AML.T0059 Erode Dataset Integrity #10 Supply Chain Attack Corrupting datasets used by models.
AML.T0060 Publish Hallucinated Entities #9 Social Engineering Creating entities to deceive users based on LLM hallucinations.
AML.T0061 LLM Prompt Self-Replication #7, #2 Malware, Exploiting Server Primarily causes the LLM to generate self-replicating prompts, analogous to traditional malware (#7). Secondarily exploits vulnerability in LLM's prompt handling (#2).
AML.T0062 Discover LLM Hallucinations #1 Abuse of Functions Abusing functionality to identify hallucinations.
AML.T0063 Discover AI Model Outputs #1 Abuse of Functions Abusing functionality to discover unintended outputs.
AML.T0064 Gather RAG-Indexed Targets #1 Abuse of Functions Abusing functionality to identify RAG data sources.
AML.T0065 LLM Prompt Crafting Detection Indicator N/A The adversary may use their acquired knowledge of the target generative AI system to craft prompts that bypass its defenses and allow malicious instructions to be executed. Preparation activity.
AML.T0066 Retrieval Content Crafting #9 Social Engineering Creating deceptive content to influence users via retrieval.
AML.T0067 LLM Trusted Output Components Manipulation #2 Exploiting Server Exploiting the LLM's output handling so that malicious output appears trustworthy.
AML.T0067.00 LLM Trusted Output Components Manipulation: Citations #2 Exploiting Server Exploiting the LLM's output handling so that malicious output appears trustworthy.
AML.T0068 LLM Prompt Obfuscation Detection Indicator N/A The adversary is manipulating output. Preparation Activity.
AML.T0069 Discover LLM System Information #1 Abuse of Functions Abusing functionality to discover system information.
AML.T0069.00 Discover LLM System Information: Special Character Sets #1 Abuse of Functions Abusing functionality to discover system information.
AML.T0069.01 Discover LLM System Information: System Instruction Keywords #1 Abuse of Functions Abusing functionality to discover system information.
AML.T0069.02 Discover LLM System Information: System Prompt #1 Abuse of Functions Abusing functionality to discover system information.
AML.T0070 RAG Poisoning #10 Supply Chain Attack Manipulating retrieval data sources.
AML.T0071 False RAG Entry Injection #10 Supply Chain Attack Injecting false entries into retrieval systems.
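
To make the mapping above easier to work with programmatically, it can be expressed as a simple lookup structure. The following is a minimal Python sketch, not part of either framework: the handful of entries is copied from the table above, and names such as ATLAS_TO_TLCTC and clusters_for are illustrative.

```python
# Minimal, illustrative lookup of MITRE ATLAS technique IDs to TLCTC clusters.
# Only a handful of rows from the table above are reproduced; None marks
# entries classified as detection indicators / preparation activity.

TLCTC_NAMES = {
    1: "Abuse of Functions", 2: "Exploiting Server", 3: "Exploiting Client",
    4: "Identity Theft", 5: "Man in the Middle", 6: "Flooding Attack",
    7: "Malware", 8: "Physical Attack", 9: "Social Engineering",
    10: "Supply Chain Attack",
}

ATLAS_TO_TLCTC = {
    "AML.T0010": [10],       # ML Supply Chain Compromise
    "AML.T0020": [10, 1],    # Poison Training Data
    "AML.T0029": [6],        # Denial of ML Service
    "AML.T0043": [2, 3, 1],  # Craft Adversarial Data
    "AML.T0051": [2],        # LLM Prompt Injection
    "AML.T0000": None,       # Reconnaissance: detection indicator only
}

def clusters_for(technique_id: str) -> list[str]:
    """Return the TLCTC cluster labels mapped to an ATLAS technique ID."""
    clusters = ATLAS_TO_TLCTC.get(technique_id)
    if not clusters:
        return []
    return [f"#{n} {TLCTC_NAMES[n]}" for n in clusters]

if __name__ == "__main__":
    # ['#2 Exploiting Server', '#3 Exploiting Client', '#1 Abuse of Functions']
    print(clusters_for("AML.T0043"))
```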

The Mitigation Mapping Maze: Why One Control Can't Do It All

A Critical Analysis of MITRE ATLAS Mitigations through the TLCTC Framework

Introduction

In cybersecurity's complex landscape, frameworks help us organize both threats and defenses. MITRE ATT&CK (and its AI-focused counterpart, MITRE ATLAS) provides an operational catalog of attacker tactics and techniques. Meanwhile, the Top Level Cyber Threat Clusters (TLCTC) framework offers a strategic, cause-oriented approach that categorizes threats by the generic vulnerabilities they exploit rather than by specific techniques.

But what happens at the intersection of these frameworks? When we map operational mitigations to strategic threat clusters, we often discover that seemingly unified mitigations actually address multiple distinct vulnerabilities. This analysis reveals why precise control objectives aligned with specific generic vulnerabilities are essential for effective cybersecurity.

The TLCTC Perspective: Generic Vulnerabilities as the Foundation

The TLCTC framework categorizes threats based on the generic vulnerabilities they exploit. According to its core axiom: for every generic vulnerability, there is ONE threat cluster. This clear separation allows for precise control objectives and more effective risk management.

The framework organizes cyber threats into 10 distinct clusters:

  1. Abuse of Functions: Attackers manipulating the intended functionality of software or systems for malicious purposes
  2. Exploiting Server: Targeting vulnerabilities in server-side software code
  3. Exploiting Client: Targeting vulnerabilities in client-side software that processes external data
  4. Identity Theft: Targeting weaknesses in identity and access management
  5. Man in the Middle: Intercepting and potentially altering communication between parties
  6. Flooding Attack: Overwhelming system resources and capacity limits
  7. Malware: Abusing the inherent ability of software to execute foreign code
  8. Physical Attack: Unauthorized physical interference with hardware or facilities
  9. Social Engineering: Manipulating people into compromising security
  10. Supply Chain Attack: Compromising systems through vulnerabilities in third-party components

Each cluster represents a distinct attack vector with a unique generic vulnerability as its root cause.

MITRE ATLAS Mitigations: The Challenge of "One Size Fits All"

MITRE ATLAS provides valuable insights into how attackers target AI systems, but its mitigations often address multiple techniques spanning different underlying vulnerabilities. Let's examine AML.M0007 "Sanitize Training Data," which MITRE describes as:

"Detect and remove or remediate poisoned training data. Training data should be sanitized prior to model training and recurrently for an active learning model. Implement a filter to limit ingested training data. Establish a content policy that would remove unwanted content such as certain explicit or offensive language from being used."

This mitigation addresses several ATLAS techniques:

  • AML.T0010.02 - ML Supply Chain Compromise: Data
  • AML.T0020 - Poison Training Data
  • AML.T0018.00 - Backdoor ML Model: Poison ML Model

From the TLCTC perspective, this single mitigation actually spans multiple threat clusters because it addresses different generic vulnerabilities:

Attack Path Analysis

Using TLCTC attack path notation, we can identify distinct attack sequences (a short code sketch follows the list below):

  1. #10 → #1: Supply Chain Attack leading to Abuse of Functions
    • Attacker compromises third-party data provider (Supply Chain #10)
    • Compromised data is used to manipulate ML model behavior (Abuse of Functions #1)
  2. #1: Direct Abuse of Functions (Multiple Vectors)
    • Internal vector: Attacker with legitimate access abuses the ML training process to poison data
    • External vector: Attacker publishes poisoned data on public websites knowing it will be harvested by web-crawling ML systems

    This distinction within the Abuse of Functions cluster highlights another shortcoming of MITRE's broad categorization, which doesn't differentiate between these materially different attack vectors requiring different controls.

  3. #2 → #1: Exploiting Server leading to Abuse of Functions
    • Attacker exploits a vulnerability in the data ingestion server (Exploiting Server #2)
    • This allows injection of poisoned data to manipulate the ML system (Abuse of Functions #1)
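
These attack paths can also be expressed in code. The sketch below is illustrative only: it encodes the three sequences listed above as ordered lists of cluster numbers and prints them in the "#X -> #Y" notation.

```python
# Encode TLCTC attack paths as ordered lists of cluster numbers and render
# them in the "#X -> #Y" notation used in the list above.

TLCTC_NAMES = {1: "Abuse of Functions", 2: "Exploiting Server", 10: "Supply Chain Attack"}

ATTACK_PATHS = [
    [10, 1],  # Supply Chain Attack leading to Abuse of Functions
    [1],      # Direct Abuse of Functions (internal or external vector)
    [2, 1],   # Exploiting Server leading to Abuse of Functions
]

def render(path: list[int]) -> str:
    """Render an attack path in TLCTC notation, e.g. '#10 -> #1'."""
    return " -> ".join(f"#{n}" for n in path)

for path in ATTACK_PATHS:
    steps = ", ".join(TLCTC_NAMES[n] for n in path)
    print(f"{render(path)}: {steps}")
```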

The Bow-Tie Model Perspective

The TLCTC framework leverages the Bow-Tie risk model, which distinguishes between causes (threats), events (system compromise), and consequences (impacts). For ML data poisoning:

Cause Side (Threats):

  • Supply Chain Attack (#10): Compromised data source
  • Abuse of Functions (#1): Manipulation of training data process
  • Exploiting Server (#2): Vulnerability in data ingestion system

Central Event:

  • Loss of control over ML model training integrity

Consequence Side:

  • Loss of Integrity: ML model produces manipulated results
  • Loss of Confidentiality: Model may leak sensitive training data
  • Loss of Availability: Model becomes unreliable for its intended purpose

This bow-tie analysis reveals why a single mitigation is insufficient - it attempts to address multiple distinct threat vectors that could lead to the same central event.
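
As a minimal sketch, the bow-tie for ML data poisoning could be recorded as a small data structure, with the cause side, central event, and consequence side copied from the lists above. The BowTie class is a hypothetical helper, not part of the TLCTC framework.

```python
from dataclasses import dataclass, field

@dataclass
class BowTie:
    """Minimal bow-tie record: threat clusters (causes), one central event,
    and consequences, following the structure described above."""
    central_event: str
    threats: list[str] = field(default_factory=list)       # cause side (TLCTC clusters)
    consequences: list[str] = field(default_factory=list)  # effect side

ml_poisoning = BowTie(
    central_event="Loss of control over ML model training integrity",
    threats=[
        "#10 Supply Chain Attack: compromised data source",
        "#1 Abuse of Functions: manipulation of the training data process",
        "#2 Exploiting Server: vulnerability in the data ingestion system",
    ],
    consequences=[
        "Loss of Integrity: model produces manipulated results",
        "Loss of Confidentiality: model may leak sensitive training data",
        "Loss of Availability: model becomes unreliable",
    ],
)

# Each threat on the cause side calls for its own control (see the controls below).
for threat in ml_poisoning.threats:
    print(f"{threat} -> {ml_poisoning.central_event}")
```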

The Solution: Strategic-Operational Alignment with Precise Controls

To align with TLCTC principles, AML.M0007 should be split into multiple controls, each with a single, clear objective addressing a specific generic vulnerability:

Control 1: Supply Chain Data Source Integrity (Addressing TLCTC #10)

Generic Vulnerability: The necessary reliance on third-party data components

Control Objective: Ensure the integrity and authenticity of training data sources

Primary NIST CSF Function: IDENTIFY

Implementation Examples:

  • Establish data provenance tracking
  • Implement cryptographic verification of data sources
  • Conduct third-party risk assessments of data providers
  • Maintain a Data Bill of Materials (DBOM)
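
To make the "cryptographic verification of data sources" item concrete, here is a minimal sketch that checks downloaded training files against a pinned manifest of SHA-256 digests. The file names and digests are hypothetical placeholders; a real deployment would source them from a signed DBOM.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned manifest: expected SHA-256 digests for third-party
# training data files, recorded when the data source was last vetted.
EXPECTED_DIGESTS = {
    "train_images.tar.gz": "9f2c...d41e",  # placeholder digest
    "labels.csv":          "7a1b...0c9f",  # placeholder digest
}

def sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(directory: Path) -> bool:
    """Return True only if every expected file is present and unmodified."""
    ok = True
    for name, expected in EXPECTED_DIGESTS.items():
        path = directory / name
        if not path.exists() or sha256(path) != expected:
            print(f"INTEGRITY FAILURE: {name}")
            ok = False
    return ok
```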

Control 2: ML Training Function Protection (Addressing TLCTC #1)

Generic Vulnerability: The scope of ML training functionality

Control Objective: Prevent manipulation of the model training process through legitimate features

Primary NIST CSF Function: PROTECT

Implementation Examples:

  • Implement principle of least privilege for training data access
  • Establish data validation rules and integrity checks
  • Create separation of duties for data preparation and model training
  • Deploy anomaly detection for training data distributions
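
As one possible realization of the last item, "anomaly detection for training data distributions", the sketch below flags incoming batches whose per-feature means drift far from a trusted baseline. The feature count, threshold, and synthetic data are illustrative assumptions.

```python
import numpy as np

def fit_baseline(trusted_data: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Record per-feature mean and standard deviation from vetted training data."""
    return trusted_data.mean(axis=0), trusted_data.std(axis=0) + 1e-8

def flag_batch(batch: np.ndarray, mean: np.ndarray, std: np.ndarray,
               z_threshold: float = 4.0) -> bool:
    """Return True if the batch mean deviates strongly from the baseline for
    any feature - a coarse signal of possible data poisoning."""
    z_scores = np.abs(batch.mean(axis=0) - mean) / std
    return bool((z_scores > z_threshold).any())

# Illustrative usage with synthetic data
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(10_000, 8))
mean, std = fit_baseline(baseline)

clean_batch = rng.normal(0.0, 1.0, size=(256, 8))
poisoned_batch = clean_batch.copy()
poisoned_batch[:, 3] += 5.0  # shift one feature, as a crude stand-in for poisoning

print(flag_batch(clean_batch, mean, std))     # expected: False
print(flag_batch(poisoned_batch, mean, std))  # expected: True
```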

Control 3: Data Ingestion Security (Addressing TLCTC #2)

Generic Vulnerability: Exploitable flaws in server-side data ingestion code

Control Objective: Prevent exploitation of vulnerabilities in data processing systems

Primary NIST CSF Function: PROTECT

Implementation Examples:

  • Secure coding practices for data ingestion pipelines
  • Input validation and sanitization
  • Regular security testing of data processing components
  • Security monitoring of data ingestion services
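
A minimal sketch of the "input validation and sanitization" item for a data ingestion pipeline; the field names, label set, and bounds are hypothetical and would need to match the actual schema.

```python
# Validate ingested records against an explicit schema before they can reach
# the training pipeline; reject anything that does not conform.

ALLOWED_LABELS = {"benign", "malicious"}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors (an empty list means the record is accepted)."""
    errors = []
    if not isinstance(record.get("id"), str) or not record["id"].isalnum():
        errors.append("id must be an alphanumeric string")
    if record.get("label") not in ALLOWED_LABELS:
        errors.append(f"label must be one of {sorted(ALLOWED_LABELS)}")
    features = record.get("features")
    if (not isinstance(features, list) or len(features) != 8
            or not all(isinstance(x, (int, float)) and -1e6 < x < 1e6 for x in features)):
        errors.append("features must be a list of 8 bounded numbers")
    return errors

good = {"id": "abc123", "label": "benign", "features": [0.1] * 8}
bad  = {"id": "abc123", "label": "benign", "features": ["NaN; DROP TABLE"] * 8}
print(validate_record(good))  # []
print(validate_record(bad))   # ['features must be a list of 8 bounded numbers']
```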

Benefits of Precision in Control Design

This decomposition of a single MITRE mitigation into multiple TLCTC-aligned controls offers significant practical benefits:

  1. Clear Accountability: Different teams can be responsible for distinct controls (e.g., procurement for data source verification, data science for training integrity)
  2. Targeted Risk Assessment: Organizations can evaluate the likelihood and impact of each threat cluster separately, enabling more precise risk prioritization
  3. Effective Measurement: KRIs, KCIs, and KPIs can be defined specific to each control objective rather than being diluted across multiple functions
  4. Strategic-Operational Bridge: Leadership can understand high-level threat categories while technical teams implement specific mitigations
  5. Comprehensive Coverage: Ensures all relevant generic vulnerabilities are addressed rather than focusing only on observed techniques

From Mitigation Mapping to Control Framework

Converting MITRE ATLAS mitigations to a TLCTC-aligned control framework requires a methodical approach:

  1. Identify Underlying Generic Vulnerabilities: What fundamental weaknesses does each mitigation address?
  2. Map to TLCTC Clusters: Which of the 10 threat clusters are relevant?
  3. Define Clear Control Objectives: What is the specific aim of each control?
  4. Align with NIST CSF Functions: Which function (IDENTIFY, PROTECT, DETECT, RESPOND, RECOVER) best reflects the control's primary purpose?
  5. Develop Vertical Implementation Guidance: How should controls be implemented across different protection rings and system levels?
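
These five steps lend themselves to a small, machine-readable control record in which each control carries exactly one TLCTC cluster, one generic vulnerability, and one primary NIST CSF function. The sketch below is illustrative (the Control class is a hypothetical helper); it encodes Control 1 from the previous section.

```python
from dataclasses import dataclass

NIST_CSF_FUNCTIONS = {"IDENTIFY", "PROTECT", "DETECT", "RESPOND", "RECOVER"}

@dataclass
class Control:
    """A TLCTC-aligned control: one objective, one cluster, one primary CSF function."""
    name: str
    tlctc_cluster: int            # 1..10
    generic_vulnerability: str
    objective: str
    csf_function: str

    def __post_init__(self):
        if not 1 <= self.tlctc_cluster <= 10:
            raise ValueError("tlctc_cluster must be between 1 and 10")
        if self.csf_function not in NIST_CSF_FUNCTIONS:
            raise ValueError(f"csf_function must be one of {sorted(NIST_CSF_FUNCTIONS)}")

control_1 = Control(
    name="Supply Chain Data Source Integrity",
    tlctc_cluster=10,
    generic_vulnerability="Necessary reliance on third-party data components",
    objective="Ensure the integrity and authenticity of training data sources",
    csf_function="IDENTIFY",
)
print(control_1)
```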

Conclusion: Precision in Security Design

The mapping of MITRE ATLAS mitigations to the TLCTC framework illustrates an important principle: effective cybersecurity requires precise control objectives aligned with specific generic vulnerabilities. By decomposing broad mitigations into focused controls, organizations can develop more robust security postures with clearer responsibilities, better measurement, and more accurate risk assessment.

This approach is particularly crucial for emerging technologies like AI, where threats are evolving rapidly and security practices are still maturing. The TLCTC framework doesn't add complexity—it adds clarity, enabling both strategic understanding and operational effectiveness.

By bridging the gap between detailed techniques and high-level threat categories, organizations can move from reactive security to strategic risk management, ensuring that controls are both comprehensive in coverage and precise in implementation.

Experiment: Mapping MITRE ML Mitigations to TLCTC and NIST CSF

Introduction

This mapping table demonstrates how MITRE's ML security mitigations align with both the Top Level Cyber Threat Clusters (TLCTC) framework and the NIST Cybersecurity Framework functions. Following the TLCTC framework's bow-tie model, this mapping focuses exclusively on controls that directly address threats (cause side) rather than those that primarily mitigate consequences (effect side).

Key Mapping Principles

  1. Threat-Focused Approach: Each mitigation is mapped only to threat clusters whose generic vulnerabilities it directly addresses.
  2. Clear Cause-Effect Distinction: Controls that primarily mitigate consequences (data risk events) rather than prevent threats are excluded from threat cluster mappings.
  3. Function Alignment: Each mitigation is placed in the NIST CSF function that best represents its primary purpose in addressing the specific threat.
  4. Mitigation-Centric View: This table shows how MITRE's mitigations map to the TLCTC framework, not a complete set of recommended controls for each cell.
  5. "&" Symbol Clarification: Mitigations marked with "&" in the original MITRE documentation represent general security practices that have been adapted for ML contexts, not ML-specific controls.

For a complete understanding of the TLCTC framework and its axioms, please refer to the full TLCTC whitepaper (version 1.6.1, March 2025).

Mapping Table

TLCTC Cluster IDENTIFY PROTECT DETECT RESPOND RECOVER
#1 Abuse of Functions AML.M0000: Limit Public Release of Information
AML.M0001: Limit Model Artifact Release
AML.M0023: AI Bill of Materials
AML.M0004: Restrict Number of ML Model Queries
AML.M0005: Control Access to ML Models and Data at Rest
AML.M0019: Control Access to ML Models and Data in Production
AML.M0020: Generative AI Guardrails
AML.M0021: Generative AI Guidelines
AML.M0015: Adversarial Input Detection
AML.M0024: AI Telemetry Logging
#2 Exploiting Server AML.M0016 (&): Vulnerability Scanning
AML.M0023: AI Bill of Materials
AML.M0003: Model Hardening
AML.M0006: Use Ensemble Methods
AML.M0011 (&): Restrict Library Loading
AML.M0013 (&): Code Signing
AML.M0018 (&): User Training
AML.M0015: Adversarial Input Detection
AML.M0024: AI Telemetry Logging
#3 Exploiting Client AML.M0016 (&): Vulnerability Scanning
AML.M0023: AI Bill of Materials
AML.M0003: Model Hardening
AML.M0006: Use Ensemble Methods
AML.M0010: Input Restoration
AML.M0017: Model Distribution Methods
AML.M0018 (&): User Training
AML.M0015: Adversarial Input Detection
AML.M0024: AI Telemetry Logging
#4 Identity Theft AML.M0005: Control Access to ML Models and Data at Rest
AML.M0019: Control Access to ML Models and Data in Production
AML.M0024: AI Telemetry Logging
#5 Man in the Middle AML.M0017: Model Distribution Methods AML.M0024: AI Telemetry Logging
#6 Flooding Attack AML.M0004: Restrict Number of ML Model Queries AML.M0024: AI Telemetry Logging
#7 Malware AML.M0016 (&): Vulnerability Scanning
AML.M0023: AI Bill of Materials
AML.M0011 (&): Restrict Library Loading
AML.M0013 (&): Code Signing
AML.M0014: Verify ML Artifacts
AML.M0024: AI Telemetry Logging
#8 Physical Attack AML.M0009: Use Multi-Modal Sensors AML.M0024: AI Telemetry Logging
#9 Social Engineering AML.M0018 (&): User Training
AML.M0020: Generative AI Guardrails
AML.M0021: Generative AI Guidelines
AML.M0022: Generative AI Model Alignment
AML.M0024: AI Telemetry Logging
#10 Supply Chain Attack AML.M0008: Validate ML Model
AML.M0023: AI Bill of Materials
AML.M0025: Maintain AI Dataset Provenance
AML.M0007: Sanitize Training Data
AML.M0013 (&): Code Signing
AML.M0014: Verify ML Artifacts
AML.M0024: AI Telemetry Logging

Justifications for Key Mappings

#1 Abuse of Functions

Generic Vulnerability: The scope of software and functions

Control Logic: Mitigations that restrict, monitor, or limit functions directly address this vulnerability

Example: AML.M0004 (Restrict Number of ML Model Queries) directly prevents abuse of the query functionality
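
As an illustration of how a mitigation like AML.M0004 might be enforced in front of an inference endpoint, the following is a minimal per-client sliding-window query limiter. The quota, window size, and class name are illustrative assumptions, not part of MITRE's mitigation description.

```python
import time
from collections import defaultdict, deque

class QueryRateLimiter:
    """Per-client sliding-window limit on ML model queries, curbing abuse of the
    (legitimate) inference function, e.g. model extraction via mass querying."""

    def __init__(self, max_queries: int = 100, window_seconds: float = 60.0):
        self.max_queries = max_queries
        self.window = window_seconds
        self._history: dict[str, deque] = defaultdict(deque)

    def allow(self, client_id: str, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self._history[client_id]
        while q and now - q[0] > self.window:  # drop timestamps outside the window
            q.popleft()
        if len(q) >= self.max_queries:
            return False                        # over quota: reject or throttle
        q.append(now)
        return True

limiter = QueryRateLimiter(max_queries=3, window_seconds=60.0)
print([limiter.allow("client-a", now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
```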

#2 Exploiting Server & #3 Exploiting Client

Generic Vulnerability: Exploitable flaws in server-side/client-side software code

Control Logic: Mitigations that find, fix, or harden against code flaws

Example: AML.M0003 (Model Hardening) addresses the vulnerability to adversarial examples in model code

#4 Identity Theft

Generic Vulnerability: Weak identity management processes and credential protection

Control Logic: Mitigations that strengthen identity verification and access controls

Example: AML.M0019 (Control Access to ML Models) directly prevents unauthorized access using stolen credentials

#5 Man in the Middle

Generic Vulnerability: Lack of control over communication flow/path

Control Logic: Mitigations that protect communication integrity

Example: AML.M0017 (Model Distribution Methods) specifically addresses controlling the model communication path

Additional Considerations: Consequence Mitigation and Defense in Depth

While the mapping table focuses on mitigations that directly prevent threats (left side of the bow-tie model), a comprehensive security strategy also includes measures to mitigate the consequences of successful attacks (right side of the bow-tie).

AML.M0012 (&) (Encrypt Sensitive Information) - Analysis

AML.M0012 (&) (Encrypt Sensitive Information) is deliberately excluded from the threat cluster mappings because:

  • Purpose: "Encrypt sensitive data such as ML models to protect against adversaries attempting to access sensitive data" - clearly positioned as a measure to limit the impact of a successful attack
  • Bow-Tie Position: In the TLCTC framework, it belongs on the right side (consequence mitigation) rather than the left side (threat prevention)
  • Effect vs. Cause: It addresses the consequence (Loss of Confidentiality) rather than preventing the initial compromise
  • Associated Techniques: The related ATLAS techniques (AML.T0035 ML Artifact Collection, AML.T0048.04 ML Intellectual Property Theft, AML.T0007 Discover ML Artifacts) all assume the attacker has already gained some level of access

While encryption doesn't prevent an attacker from gaining access to the system (e.g., through Identity Theft or Exploiting Server), it is a crucial control for:

  1. Mitigating Data Risk Events: Limiting the "Loss of Confidentiality" if a compromise occurs
  2. Defense in Depth: Providing additional layers of protection
  3. Regulatory Compliance: Meeting requirements in frameworks like GDPR, HIPAA, etc.
  4. Supporting Preventive Controls: Enhancing the effectiveness of other mitigations

This distinction highlights the TLCTC framework's value in creating clear separations between threat prevention and consequence mitigation, allowing for more precise security planning.
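
For completeness, a minimal sketch of what this consequence-side control could look like in practice: encrypting a serialized model artifact at rest using the third-party cryptography package. File names are hypothetical, and in a real deployment the key would be held in a KMS or HSM rather than generated alongside the file.

```python
from pathlib import Path
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def encrypt_model_at_rest(model_path: Path, key: bytes) -> Path:
    """Encrypt a serialized model artifact; the plaintext file would then be removed."""
    token = Fernet(key).encrypt(model_path.read_bytes())
    encrypted_path = model_path.with_suffix(model_path.suffix + ".enc")
    encrypted_path.write_bytes(token)
    return encrypted_path

def load_model_bytes(encrypted_path: Path, key: bytes) -> bytes:
    """Decrypt the artifact in memory at inference or training time."""
    return Fernet(key).decrypt(encrypted_path.read_bytes())

# Illustrative usage with a stand-in file and locally generated key.
key = Fernet.generate_key()
model_file = Path("model.pkl")  # hypothetical serialized model
model_file.write_bytes(b"fake model bytes")
enc = encrypt_model_at_rest(model_file, key)
assert load_model_bytes(enc, key) == b"fake model bytes"
```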

Observations and Insights

  1. Gap Analysis: The MITRE ML mitigations are heavily weighted toward Identify and Protect functions, with minimal coverage for Respond and Recover. This suggests a need for additional controls in these areas. A complete cybersecurity strategy must include robust response and recovery capabilities to minimize the impact of successful attacks, even if those attacks cannot be prevented.
  2. Threat Coverage: Some threat clusters (particularly #1, #2, #3, and #10) have robust mitigation coverage, while others (particularly #5 and #8) have fewer specific mitigations.
  3. Control Categories:
    • Preventive Controls: Dominated by access restrictions, input validation, and code integrity measures
    • Detective Controls: Heavily reliant on telemetry logging and adversarial input detection
    • Responsive Controls: Notable gap in the provided MITRE mitigations
  4. Strategic vs. Operational: The mapping demonstrates the two-tiered approach of the TLCTC framework:
    • Strategic Level: High-level threat clusters and their generic vulnerabilities
    • Operational Level: Specific mitigations targeting those vulnerabilities

Conclusion

This mapping demonstrates the value of the TLCTC framework in organizing security controls according to the fundamental vulnerabilities they address. By maintaining a clear distinction between threat prevention (cause side) and consequence mitigation (effect side), organizations can develop more targeted and effective security strategies.

The analysis also reveals potential gaps in the current set of MITRE ML mitigations, particularly in the areas of response and recovery. Organizations should supplement these mitigations with additional controls to ensure comprehensive coverage across the entire threat landscape and security lifecycle.

Call to Action

Organizations using MITRE ATLAS should use this mapping as a starting point to evaluate their own control frameworks and ensure they are addressing all relevant threat clusters with precise control objectives. Specifically:

  1. Assess current controls against each TLCTC cluster to identify potential gaps
  2. Ensure controls are properly categorized as either threat prevention or consequence mitigation
  3. Develop additional controls for the Respond and Recover functions
  4. Map attack sequences using TLCTC notation (#9 → #3 → #7) to better understand complex threats
  5. Integrate this threat-centric approach into your overall cyber risk management program

By applying the TLCTC framework to ML security, organizations can bridge the gap between strategic risk management and operational security implementation, creating a more resilient and effective defense against evolving AI threats.