TLCTC Blog - 2025/05/09
ATLAS to TLCTC Mapping: Aligning AI Threats with Standardized Cyber Threat Clusters
Introduction
This table maps techniques from MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) to the Top Level Cyber Threat Clusters (TLCTC) framework.
- The TLCTC column identifies the relevant threat cluster(s); → denotes an attack sequence and / denotes context-dependent alternatives
- The Mapping Argument column explains the reasoning behind each mapping
- N/A marks preparatory or supporting activities that do not directly exploit a vulnerability; IMP marks impact categories (consequences, i.e., data risk events) rather than threat vectors
Mapping Table
ATLAS ID | Technique Name | TLCTC | Mapping Argument
---|---|---|---
AML.T0000 | Search Open Technical Databases | N/A | Reconnaissance activity. Does not exploit a vulnerability directly, but gathers information for future attacks. |
AML.T0000.000 | Journals and Conference Proceedings | N/A | Sub-technique of reconnaissance. Information gathering from academic sources. |
AML.T0000.001 | Pre-Print Repositories | N/A | Sub-technique of reconnaissance. Information gathering from pre-print sources. |
AML.T0000.002 | Technical Blogs | N/A | Sub-technique of reconnaissance. Information gathering from blog sources. |
AML.T0001 | Search Open AI Vulnerability Analysis | N/A | Reconnaissance of known vulnerabilities. Preparation phase, not direct exploitation. |
AML.T0003 | Search Victim-Owned Websites | N/A | Reconnaissance of victim infrastructure. Information gathering phase. |
AML.T0004 | Search Application Repositories | N/A | Reconnaissance of application repositories. Information gathering for targeting. |
AML.T0006 | Active Scanning | N/A | Active reconnaissance. While involving direct interaction, it's probing rather than exploitation. |
AML.T0002 | Acquire Public AI Artifacts | N/A | Resource development activity. Acquiring artifacts for later use in attacks. |
AML.T0002.000 | Datasets | N/A | Resource development - acquiring datasets for attack staging. |
AML.T0002.001 | Models | N/A | Resource development - acquiring models for attack staging. |
AML.T0016 | Obtain Capabilities | N/A | Resource development - obtaining attack tools and capabilities. |
AML.T0016.000 | Adversarial AI Attack Implementations | N/A | Resource development - obtaining pre-built AI attack tools. |
AML.T0016.001 | Software Tools | N/A | Resource development - obtaining general software tools for attacks. |
AML.T0017 | Develop Capabilities | N/A | Resource development - creating custom attack capabilities. |
AML.T0017.000 | Adversarial AI Attacks | N/A | Resource development - developing custom adversarial AI attacks. |
AML.T0008 | Acquire Infrastructure | N/A | Resource development - acquiring infrastructure for attack operations. |
AML.T0008.000 | AI Development Workspaces | N/A | Resource development - acquiring compute resources for attack development. |
AML.T0008.001 | Consumer Hardware | N/A | Resource development - acquiring hardware for attack operations. |
AML.T0019 | Publish Poisoned Datasets | #10 | Supply Chain Attack - publishing compromised datasets for later integration into victim systems via trust in third-party components. |
AML.T0010 | AI Supply Chain Compromise | #10 | Direct mapping to Supply Chain Attack - compromising trusted third-party AI components, data, or services. |
AML.T0010.000 | Hardware | #10 | Supply Chain Attack via hardware compromise - targeting AI-specific hardware in the supply chain. |
AML.T0010.001 | AI Software | #10 | Supply Chain Attack via software frameworks and libraries used in AI systems. |
AML.T0010.002 | Data | #10 | Supply Chain Attack via data sources - compromising datasets and labeling services. |
AML.T0010.003 | Model | #10 | Supply Chain Attack via model repositories - compromising pre-trained models and fine-tuning sources. |
AML.T0040 | AI Model Inference API Access | N/A | Legitimate access to API. The access method itself isn't exploiting a vulnerability - subsequent actions determine TLCTC mapping. |
AML.T0047 | AI-Enabled Product or Service | N/A | Using legitimate AI services. The usage itself isn't exploiting a vulnerability. |
AML.T0041 | Physical Environment Access | #8 | Physical Attack - manipulating the physical environment where AI data collection occurs, exploiting physical accessibility vulnerabilities. |
AML.T0044 | Full AI Model Access | N/A | Access level description. The method of gaining this access would determine TLCTC mapping (could be #4, #10, etc.). |
AML.T0013 | Discover AI Model Ontology | N/A | Discovery phase after gaining access. Information gathering rather than initial exploitation. |
AML.T0014 | Discover AI Model Family | N/A | Discovery phase. Information gathering about model architecture and family. |
AML.T0020 | Poison Training Data | #10 / #1 | Context-dependent: External data sources = #10 Supply Chain Attack (compromising trusted data suppliers). Internal datasets = #1 Abuse of Functions (misusing data ingestion processes after system access). |
AML.T0021 | Establish Accounts | N/A | Resource development - creating legitimate accounts for attack operations. |
AML.T0005 | Create Proxy AI Model | N/A | Attack staging activity. Creating proxy models for attack development and testing. |
AML.T0005.000 | Train Proxy via Gathered AI Artifacts | N/A | Attack staging using acquired artifacts. |
AML.T0005.001 | Train Proxy via Replication | #1 | Abuse of Functions - extensively querying victim's inference API beyond normal use patterns to extract model intellectual property, subverting intended service scope. |
AML.T0005.002 | Use Pre-Trained Model | N/A | Attack staging using existing pre-trained models as proxies. |
AML.T0007 | Discover AI Artifacts | N/A | Discovery phase after gaining access. Information gathering about available AI artifacts. |
AML.T0011 | User Execution | #9→#7 | Attack sequence: Social Engineering to manipulate user into action, followed by Malware execution of foreign malicious code/scripts. |
AML.T0011.000 | Unsafe AI Artifacts | #7 | Malware - unsafe artifacts execute foreign malicious code when loaded, abusing the environment's designed capability to execute code. |
AML.T0012 | Valid Accounts | #4 | Identity Theft - using compromised or stolen credentials to gain unauthorized access as legitimate identities. |
AML.T0015 | Evade AI Model | #1 | Abuse of Functions - misusing the model's intended classification/detection functionality by providing crafted inputs that subvert expected behavior. |
AML.T0018 | Manipulate AI Model | #10 / #1 | Context-dependent: External/integrated models = #10 Supply Chain Attack (exploiting trust in third-party models). Internal/self-developed models = #1 Abuse of Functions (misusing designed model update capabilities). |
AML.T0018.000 | Poison AI Model | #10 / #1 | Context-dependent: External models = #10 Supply Chain Attack (poisoning models before victim integration). Internal models = #1 Abuse of Functions (misusing designed learning/updating mechanisms). |
AML.T0018.001 | Modify AI Model Architecture | #10 / #1 | Context-dependent: External models = #10 Supply Chain Attack (modifying models before distribution). Internal models = #1 Abuse of Functions (misusing architectural flexibility). |
AML.T0024 | Exfiltration via AI Inference API | #1 | Abuse of Functions - misusing the intended API functionality to extract information beyond designed scope. |
AML.T0024.000 | Infer Training Data Membership | #1 | Abuse of Functions - misusing inference API to determine training data membership. |
AML.T0024.001 | Invert AI Model | #1 | Abuse of Functions - misusing inference API to reconstruct training data. |
AML.T0024.002 | Extract AI Model | #1 | Abuse of Functions - misusing inference API to replicate model functionality. |
AML.T0025 | Exfiltration via Cyber Means | N/A | Generic exfiltration reference. Specific method would determine TLCTC mapping. |
AML.T0029 | Denial of AI Service | #6 | Flooding Attack - overwhelming AI services with requests to exhaust computational resources and cause denial of service. |
AML.T0046 | Spamming AI System with Chaff Data | #6 | Flooding Attack - overwhelming the system with excessive, useless data to degrade performance and waste analyst resources. |
AML.T0031 | Erode AI Model Integrity | #1 | Abuse of Functions - misusing the model's intended learning/adaptation capabilities to gradually degrade performance. |
AML.T0034 | Cost Harvesting | #6 | Flooding Attack - overwhelming services with computationally expensive queries to increase operational costs. |
AML.T0035 | AI Artifact Collection | N/A | Collection phase after gaining access. The collection method would determine TLCTC mapping. |
AML.T0036 | Data from Information Repositories | N/A | Collection technique. The access method to repositories would determine TLCTC mapping. |
AML.T0037 | Data from Local System | N/A | Collection technique. The method of gaining system access would determine TLCTC mapping. |
AML.T0042 | Verify Attack | N/A | Attack staging activity. Testing attack effectiveness before deployment. |
AML.T0043 | Craft Adversarial Data | N/A | Attack staging activity. Creating adversarial examples for later use in #1 Abuse of Functions attacks. |
AML.T0043.000 | White-Box Optimization | N/A | Attack staging method for crafting adversarial data. |
AML.T0043.001 | Black-Box Optimization | N/A | Attack staging method for crafting adversarial data. |
AML.T0043.002 | Black-Box Transfer | N/A | Attack staging method using proxy models for adversarial data generation. |
AML.T0043.003 | Manual Modification | N/A | Attack staging method for manually crafting adversarial inputs. |
AML.T0043.004 | Insert Backdoor Trigger | N/A | Attack staging for backdoor activation. Used in conjunction with poisoned models (#10) or function abuse (#1). |
AML.T0048 | External Harms | IMP | Impact category describing consequences rather than threat techniques. Represents data risk events or business consequences. |
AML.T0048.000 | Financial Harm | IMP | Business impact/consequence rather than a threat technique. |
AML.T0048.001 | Reputational Harm | IMP | Business impact/consequence rather than a threat technique. |
AML.T0048.002 | Societal Harm | IMP | Social impact/consequence rather than a threat technique. |
AML.T0048.003 | User Harm | IMP | Individual impact/consequence rather than a threat technique. |
AML.T0048.004 | AI Intellectual Property Theft | IMP | Describes the consequence of exfiltration rather than the threat technique used to achieve it. |
AML.T0049 | Exploit Public-Facing Application | #2 | Exploiting Server - targeting vulnerabilities in server-side applications and services to gain unauthorized access. |
AML.T0050 | Command and Scripting Interpreter | #1→#7 | Attack sequence: Abuse of Functions (misusing system's legitimate interpreter availability) followed by Malware (executing foreign malicious scripts via interpreters). |
AML.T0051 | LLM Prompt Injection | #1→[various] | Attack sequence: Abuse of Functions (misusing prompt processing functionality) leading to various outcomes like data leakage, privilege escalation, or system compromise. |
AML.T0051.000 | Direct | #1 | Abuse of Functions - direct manipulation of LLM through crafted prompts that misuse legitimate prompt interface. |
AML.T0051.001 | Indirect | #1 | Abuse of Functions - indirect prompt injection via data sources, misusing the LLM's designed data ingestion capabilities. |
AML.T0052 | Phishing | #9→[various] | Attack sequence: Social Engineering via deceptive communications, leading to various outcomes like #4 (credential theft), #7 (malware), or #3 (client exploits). |
AML.T0052.000 | Spearphishing via Social Engineering LLM | #9→[various] | Attack sequence: AI-assisted Social Engineering to manipulate targets more effectively, leading to credential theft, malware delivery, or other compromises. |
AML.T0053 | LLM Plugin Compromise | #1→[various] | Attack sequence: Abuse of Functions (misusing LLM's legitimate plugin capabilities) leading to various outcomes like code execution, data access, or privilege escalation. |
AML.T0054 | LLM Jailbreak | #1 | Abuse of Functions - misusing prompt processing to bypass intended restrictions and guardrails. |
AML.T0055 | Unsecured Credentials | #4 | Identity Theft - exploiting weak credential storage mechanisms to acquire authentication materials. |
AML.T0056 | Extract LLM System Prompt | #1 | Abuse of Functions - misusing the LLM's response generation to reveal system-level instructions. |
AML.T0057 | LLM Data Leakage | #1 | Abuse of Functions - misusing prompt processing to extract sensitive information beyond intended scope. |
AML.T0058 | Publish Poisoned Models | #10 | Supply Chain Attack - publishing compromised models to public repositories for integration into victim systems. |
AML.T0059 | Erode Dataset Integrity | #1 | Abuse of Functions - misusing data processing/ingestion mechanisms to degrade dataset quality. |
AML.T0011.001 | Malicious Package | #10 | Supply Chain Attack - distributing malicious packages through trusted software repositories. |
AML.T0060 | Publish Hallucinated Entities | #10→#9 | Attack sequence: Supply Chain Attack (creating fake entities matching AI hallucinations) followed by Social Engineering (exploiting user trust in AI recommendations). |
AML.T0061 | LLM Prompt Self-Replication | #1 | Abuse of Functions - misusing the LLM's text generation to create self-propagating prompts. |
AML.T0062 | Discover LLM Hallucinations | N/A | Discovery activity to identify hallucinated entities for potential exploitation. |
AML.T0008.002 | Domains | N/A | Resource development - acquiring domains for attack infrastructure. |
AML.T0008.003 | Physical Countermeasures | N/A | Resource development for physical attacks. The deployment would map to #8 Physical Attack. |
AML.T0063 | Discover AI Model Outputs | N/A | Discovery activity to identify unintended model outputs for potential exploitation. |
AML.T0016.002 | Generative AI | N/A | Resource development - acquiring generative AI tools for attack operations. |
AML.T0064 | Gather RAG-Indexed Targets | N/A | Reconnaissance of RAG system data sources for targeting purposes. |
AML.T0065 | LLM Prompt Crafting | N/A | Resource development - creating malicious prompts for later use in #1 attacks. |
AML.T0066 | Retrieval Content Crafting | #1 | Abuse of Functions - misusing RAG system's content ingestion mechanisms to inject malicious content. |
AML.T0067 | LLM Trusted Output Components Manipulation | #1 | Abuse of Functions - misusing LLM's response generation to manipulate trust indicators like citations and metadata. |
AML.T0068 | LLM Prompt Obfuscation | N/A | Defense evasion technique for hiding malicious prompts rather than a primary threat. |
AML.T0069 | Discover LLM System Information | N/A | Discovery activity to understand LLM system configuration and capabilities. |
AML.T0069.000 | Special Character Sets | N/A | Discovery of system delimiters and special characters for later exploitation. |
AML.T0069.001 | System Instruction Keywords | N/A | Discovery of system keywords for manipulation purposes. |
AML.T0069.002 | System Prompt | N/A | Discovery of system prompts; overlaps with AML.T0056 but focuses on the discovery phase rather than extraction.
AML.T0070 | RAG Poisoning | #1 | Abuse of Functions - misusing RAG system's data indexing functionality to inject malicious content. |
AML.T0071 | False RAG Entry Injection | #1 | Abuse of Functions - misusing RAG's content processing to inject false document entries. |
AML.T0067.000 | Citations | #1 | Abuse of Functions - misusing LLM's citation generation functionality to manipulate trust indicators. |
AML.T0018.002 | Embed Malware | #7 | Malware - embedding foreign malicious code in model files that executes when the model is loaded, abusing execution capabilities. |
AML.T0010.004 | Container Registry | #10 | Supply Chain Attack - compromising container registries to distribute malicious AI containers. |
AML.T0072 | Reverse Shell | N/A | Command and control technique. The method of establishing the shell would determine TLCTC mapping. |
AML.T0073 | Impersonation | #9 | Social Engineering - impersonating trusted entities to manipulate targets into performing desired actions. |
AML.T0074 | Masquerading | N/A | Defense evasion technique for making malicious artifacts appear legitimate rather than a primary threat. |
AML.T0075 | Cloud Service Discovery | N/A | Discovery activity after gaining access to enumerate cloud services and resources. |
AML.T0076 | Corrupt AI Model | N/A | Defense evasion technique for evading model scanners rather than a primary threat. |
AML.T0077 | LLM Response Rendering | #1 | Abuse of Functions - misusing LLM's response rendering capabilities to exfiltrate data through hidden elements. |
AML.T0008.004 | Serverless | N/A | Resource development - acquiring serverless infrastructure for attack operations. |
AML.T0078 | Drive-by Compromise | #1→#3/#7 | Attack sequence: Abuse of Functions (misusing browser's designed web content processing) leading to Client Exploitation or Malware execution via malicious web content. |
AML.T0079 | Stage Capabilities | N/A | Resource development - staging attack capabilities on controlled infrastructure. |
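For teams that want to consume this table programmatically, the sketch below shows one possible Python encoding of a few representative rows. The structure and names (ATLAS_TO_TLCTC, tuples for sequences, lists for context-dependent alternatives) are assumptions of this illustration, not an official TLCTC or ATLAS artifact.

```python
# Hypothetical encoding of a few rows from the mapping table above.
# Each ATLAS ID maps to a list of alternatives; an alternative is either
# a tuple of TLCTC cluster numbers in execution order (a sequence) or the
# literal strings "N/A" / "IMP", matching the table notation.
ATLAS_TO_TLCTC: dict[str, list] = {
    "AML.T0000": ["N/A"],        # Search Open Technical Databases: reconnaissance
    "AML.T0010": [(10,)],        # AI Supply Chain Compromise: #10
    "AML.T0011": [(9, 7)],       # User Execution: sequence #9 -> #7
    "AML.T0020": [(10,), (1,)],  # Poison Training Data: #10 or #1, context-dependent
    "AML.T0048": ["IMP"],        # External Harms: impact, not a threat vector
    "AML.T0051": [(1,)],         # LLM Prompt Injection: #1, leading to various outcomes
}
```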
Key Observations
- Attack Sequences Dominate: Many AI attacks involve multi-stage sequences (e.g., #9→#7, #1→#7) rather than single-cluster techniques, reflecting AI's complex attack surface (a parsing sketch for this notation follows the list).
- Heavy Focus on #1 Abuse of Functions: AI-specific attacks predominantly misuse legitimate functionality (LLM prompts, inference APIs, model features) rather than exploiting code flaws.
- Trust Boundary Principle: Model/data manipulation techniques map differently based on trust boundaries - external/integrated components = #10 Supply Chain, internal systems = #1 Abuse of Functions.
- Foreign Code Distinction: Clear separation between function abuse (#1) using standard inputs and malware execution (#7) involving foreign code execution is critical for accurate mapping.
- Supply Chain Prominence: AI systems heavily rely on third-party components (models, datasets, frameworks), making #10 Supply Chain Attack highly relevant.
- Limited Traditional Exploits: Fewer techniques map to #2/#3 (Exploiting Server/Client) compared to traditional cybersecurity, reflecting AI's unique attack surface.
- MITRE Mixes Concepts: ATLAS includes reconnaissance, resource development, staging, impacts, and actual threats - requiring careful distinction in TLCTC mapping.
- Impact vs. Threat Confusion: Several techniques (AML.T0048 series) describe consequences rather than threat vectors, highlighting the importance of TLCTC's cause-focused approach.
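The sequence and alternative notation in the TLCTC column can be parsed mechanically, which helps when feeding the table into tooling. A minimal sketch, assuming the cell conventions used above; compound cells such as #1→#3/#7 or trailing [various] markers would need extra handling:

```python
import re

def parse_tlctc_cell(cell: str) -> list:
    """Parse a TLCTC cell such as '#9→#7', '#10 / #1', 'N/A', or 'IMP'.

    Returns a list of alternatives, each a tuple of cluster numbers in
    execution order, or the literal strings 'N/A' / 'IMP'.
    """
    cell = cell.strip()
    if cell in ("N/A", "IMP"):
        return [cell]
    alternatives = []
    for alt in cell.split("/"):              # '/' separates context-dependent options
        stages = re.findall(r"#(\d+)", alt)  # '→' chains sequence stages
        alternatives.append(tuple(int(s) for s in stages))
    return alternatives

assert parse_tlctc_cell("#9→#7") == [(9, 7)]
assert parse_tlctc_cell("#10 / #1") == [(10,), (1,)]
```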
Mapping Challenges
- Activity vs. Threat: Many ATLAS techniques describe preparatory activities (reconnaissance, resource development) rather than direct exploitation of vulnerabilities.
- Context Dependency: Some techniques can map to different TLCTC clusters depending on implementation (e.g., data poisoning via supply chain vs. direct system access); a small resolver sketch follows this list.
- Novel Attack Vectors: AI introduces new forms of "Abuse of Functions" that traditional frameworks don't explicitly address.
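To make the context dependency concrete, a hypothetical resolver could encode the trust-boundary rule directly. The function name, the set of context-dependent IDs, and the 'external'/'internal' parameter are all assumptions of this sketch:

```python
# Techniques whose TLCTC mapping depends on where the component sits
# relative to the trust boundary (see the corresponding table rows above).
CONTEXT_DEPENDENT = {"AML.T0018", "AML.T0018.000", "AML.T0018.001", "AML.T0020"}

def resolve_context(atlas_id: str, component_origin: str) -> int:
    """Resolve a context-dependent technique to a single TLCTC cluster.

    component_origin: 'external' for third-party data/models (trust in a
    supplier), 'internal' for self-developed components reached after access.
    """
    if atlas_id not in CONTEXT_DEPENDENT:
        raise ValueError(f"{atlas_id} is not context-dependent in this mapping")
    # External components: #10 Supply Chain Attack; internal: #1 Abuse of Functions.
    return 10 if component_origin == "external" else 1

print(resolve_context("AML.T0020", "external"))  # 10
print(resolve_context("AML.T0020", "internal"))  # 1
```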
Applying the Mapping
Organizations can use this mapping to:
- Translate ATLAS techniques into a risk management framework using TLCTC (a minimal lookup sketch follows this list)
- Identify the generic vulnerabilities that underlie specific AI attack techniques
- Apply appropriate controls based on the TLCTC framework
- Understand attack sequences and chains in AI systems
- Develop more comprehensive threat models that incorporate both frameworks
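As one way to operationalize the first two points, the hypothetical helper below aggregates every cluster a technique can implicate, so that sequence stages and context-dependent alternatives all surface during control selection. The encoding mirrors the earlier sketch; all names are illustrative:

```python
# Each ATLAS ID maps to a list of alternatives; each alternative is a tuple
# of TLCTC cluster numbers in execution order (same convention as above).
MAPPING = {
    "AML.T0011": [(9, 7)],        # User Execution: sequence #9 -> #7
    "AML.T0020": [(10,), (1,)],   # Poison Training Data: #10 or #1, context-dependent
    "AML.T0048": ["IMP"],         # External Harms: impact category, no threat cluster
}

def clusters_for(atlas_id: str) -> set[int]:
    """All TLCTC clusters a technique can implicate, across alternatives
    and sequence stages; 'N/A' and 'IMP' entries contribute nothing."""
    clusters: set[int] = set()
    for alternative in MAPPING.get(atlas_id, []):
        if isinstance(alternative, tuple):
            clusters.update(alternative)
    return clusters

print(clusters_for("AML.T0011"))  # clusters 7 and 9: model the full sequence
print(clusters_for("AML.T0020"))  # clusters 1 and 10: cover both trust-boundary contexts
```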
Conclusion
The ATLAS to TLCTC mapping provides a critical bridge between AI-specific threats and general cybersecurity frameworks. By translating specialized AI attack techniques into the standardized TLCTC framework, organizations can better integrate AI security into their broader security operations, apply consistent risk management approaches, and develop more effective countermeasures against emerging AI threats.
This integration highlights the patterns in AI attacks, particularly the preponderance of threats that abuse legitimate functions rather than exploiting code vulnerabilities. It also underscores the importance of supply chain security in AI systems and the need for comprehensive attack sequence modeling that captures the full lifecycle of AI-specific threats.