TLCTC Blog - 2025/05/09

Bridging the Chasm: How TLCTC Unifies NIST AI Risk Management with MITRE's Cyber Threat Intelligence

Artificial Intelligence (AI) is rapidly transforming our world, bringing immense opportunities alongside a complex new risk landscape. Guiding organizations through that landscape is the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1, January 2023), detailed further for generative AI in the NIST AI RMF Generative AI Profile (NIST AI 600-1, July 2024). These vital documents provide a high-level structure for identifying, assessing, and managing AI risks.

Simultaneously, the cybersecurity community relies on tactical frameworks like MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) to understand specific adversary techniques against AI. While NIST provides the "what" and "why" of AI risk management, and MITRE details the "how" of AI attacks, a crucial gap often exists: a standardized, strategic way to categorize the cyber threats that underpin many AI risks.

This is where the Top Level Cyber Threat Clusters (TLCTC) framework (White Paper Version 1.6.3, April 2025) offers a powerful solution, acting as a Rosetta Stone to connect these vital perspectives.

The NIST AI Risk Landscape: A Call for Cyber Clarity

NIST's AI RMF 1.0 establishes core trustworthiness characteristics, including the critical "Secure and Resilient" pillar. It rightly acknowledges in Appendix B that AI systems introduce unique cyber risks (e.g., model evasion, data poisoning, membership inference) not comprehensively covered by traditional software risk frameworks.

The Generative AI Profile (NIST AI 600-1) delves deeper, identifying risks "novel to or exacerbated by GAI." A key area is "Information Security," which includes threats like:

  • GAI enabling the creation of malware or sophisticated phishing.
  • Prompt injection attacks.
  • Data poisoning of training sets.
  • Increased attack surface for the GAI systems themselves.

While NIST provides invaluable guidance and suggested actions within its Govern, Map, Measure, and Manage functions, the cyber-specific, cause-side threats often remain at a high conceptual level. How do we systematically categorize the underlying cyber vectors that make an AI system not "Secure and Resilient," or that enable "Information Security" breaches in GAI?

Enter TLCTC: Structuring AI Cyber Risks

The TLCTC framework's 10 cause-oriented clusters provide the necessary granularity to dissect the cyber dimensions of AI risks:

AI as an IT System:

At its core, an AI system (models, data, infrastructure, APIs) is an IT system. As such, it is inherently vulnerable to the 10 TLCTCs:

  • #1 Abuse of Functions: This perfectly describes adversarial inputs or prompt injection attacks that manipulate the AI's intended functionality to produce unintended or malicious outputs.
  • #2 Exploiting Server / #3 Exploiting Client: These clusters cover vulnerabilities in the AI model's serving infrastructure, its APIs, and any client-side components that interact with the AI.
  • #4 Identity Theft: Compromised credentials can be used to access, modify, or exfiltrate AI models, training data, or control systems.
  • #10 Supply Chain Attack: This directly maps to risks like "data poisoning" of training data or the use of compromised pre-trained models or libraries. The "ATLAS to TLCTC Mapping" blog (TLCTC Blog, 2025/05/09) highlights MITRE ATLAS techniques like AML.T0019 (Publish Poisoned Datasets) falling squarely under TLCTC #10.
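The cluster assignments above can be expressed as a simple lookup table. The following Python sketch is purely illustrative: the scenario labels are hypothetical names, not terms from NIST or MITRE; only the cluster numbers and names follow the TLCTC white paper.

```python
# Illustrative mapping of common AI attack scenarios to TLCTC clusters.
# Scenario keys are hypothetical labels; cluster numbers/names follow
# the TLCTC white paper (v1.6.3).
AI_SCENARIO_TO_TLCTC = {
    "prompt_injection": (1, "Abuse of Functions"),
    "model_serving_api_flaw": (2, "Exploiting Server"),
    "stolen_mlops_credentials": (4, "Identity Theft"),
    "training_data_poisoning": (10, "Supply Chain Attack"),
    "compromised_pretrained_model": (10, "Supply Chain Attack"),
}

def tlctc_label(scenario: str) -> str:
    """Format the TLCTC cluster for a known AI attack scenario."""
    number, name = AI_SCENARIO_TO_TLCTC[scenario]
    return f"#{number} {name}"
```

For example, `tlctc_label("training_data_poisoning")` yields `#10 Supply Chain Attack`, reflecting the data-poisoning mapping described above.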

AI as a Powerful Threat Actor:

The Generative AI Profile (NIST AI 600-1) notes GAI can lower barriers for creating malicious content. TLCTC clarifies this by showing GAI as a potent enabler for existing cyber threat clusters:

  • GAI creating hyper-personalized phishing emails enhances #9 Social Engineering with unprecedented scale and effectiveness.
  • GAI generating polymorphic malware aids #7 Malware by creating novel variants that evade signature-based detection.
  • GAI automating vulnerability discovery accelerates exploitation under #2 Exploiting Server / #3 Exploiting Client.
  • GAI creating deepfakes of executives amplifies #4 Identity Theft and #9 Social Engineering for business email compromise attacks.

AI as a Powerful Defender:

Conversely, AI systems can strengthen defenses against the same TLCTC categories:

  • AI-powered anomaly detection can identify unusual patterns indicating #1 Abuse of Functions or #4 Identity Theft.
  • AI-based endpoint protection can detect and block novel #7 Malware through behavioral analysis rather than signatures.
  • AI systems can analyze communication patterns to detect sophisticated #9 Social Engineering attempts that might bypass traditional filters.
  • AI-driven supply chain risk management can help identify potential #10 Supply Chain Attack vectors before they impact systems.

The TLCTC framework provides a consistent taxonomy to categorize both offensive and defensive AI capabilities, enabling organizations to map threats and corresponding defensive measures within the same strategic framework.
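On the defensive side, the behavioral baselining mentioned above can be sketched with a deliberately tiny z-score detector. This is a toy stand-in under stated assumptions (per-account event counts as the only feature), not a production approach; real AI-based defenses use far richer features and models.

```python
import statistics

def flag_anomalies(event_counts, threshold=2.0):
    """Flag indices whose count sits more than `threshold` population
    standard deviations above the mean.

    A minimal illustration of anomaly detection that might surface
    #1 Abuse of Functions or #4 Identity Theft (e.g., a spike in
    per-account API call volume). Purely a sketch of the idea.
    """
    mean = statistics.fmean(event_counts)
    stdev = statistics.pstdev(event_counts)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [i for i, c in enumerate(event_counts)
            if (c - mean) / stdev > threshold]

# A single outlier among steady counts is flagged:
flag_anomalies([10, 11, 9, 10, 12, 10, 95])  # → [6]
```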

TLCTC: The Strategic Connector for NIST and MITRE

The true power of integrating TLCTC emerges when we see it as a bridge:

NIST (Strategic AI Risk Management):

  • AI RMF 1.0 Goal: AI systems should be "Secure and Resilient."
  • GAI Profile Risk: Manage "Information Security" risks like prompt injection and data poisoning.

TLCTC (Operational Strategic Cyber Threat Categorization):

  • Translates: "Secure and Resilient" for AI means protecting against the 10 TLCTCs. "Prompt injection" is an instance of #1 Abuse of Functions. "Data poisoning" is an instance of #10 Supply Chain Attack.

MITRE ATLAS/ATT&CK (Tactical Adversary Behavior):

  • Details: Specific TTPs from ATLAS (e.g., AML.T0051 "LLM Prompt Injection" or AML.T0054 "LLM Jailbreak" for #1 Abuse of Functions; AML.T0019 "Publish Poisoned Datasets" for #10 Supply Chain Attack) show how these TLCTC clusters manifest against AI systems.

This creates a clear, actionable lineage:

NIST (AI Risk Goal) → TLCTC (Cyber Threat Category) → MITRE (Specific Attack Technique)

Framework               | Level       | Purpose                               | Example
NIST AI RMF/GAI Profile | Strategic   | AI Risk Management                    | "Secure and Resilient" or "Information Security"
TLCTC Framework         | Operational | Strategic Cyber Threat Categorization | TLCTC-01.00 (Abuse of Functions), TLCTC-10.00 (Supply Chain Attack)
MITRE ATLAS/ATT&CK      | Tactical    | Adversary Behavior                    | AML.T0051 (LLM Prompt Injection), AML.T0019 (Publish Poisoned Datasets)
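Such a lineage can also be captured as a traceability record. The sketch below uses the two examples already given in this post; the record structure itself (field names, the `trace` helper) is an illustrative assumption, not part of any of the three frameworks.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskLineage:
    """One strategic-to-tactical trace: NIST goal -> TLCTC cluster -> ATLAS technique.

    Field names are illustrative; the values below come from the mappings
    discussed in this post.
    """
    nist_risk: str   # NIST AI RMF / GAI Profile risk area
    tlctc_id: str    # TLCTC cluster identifier
    tlctc_name: str
    atlas_id: str    # MITRE ATLAS technique ID
    atlas_name: str

LINEAGES = [
    RiskLineage("Information Security (GAI Profile)",
                "TLCTC-01.00", "Abuse of Functions",
                "AML.T0051", "LLM Prompt Injection"),
    RiskLineage("Information Security (GAI Profile)",
                "TLCTC-10.00", "Supply Chain Attack",
                "AML.T0019", "Publish Poisoned Datasets"),
]

def trace(lineage: RiskLineage) -> str:
    """Render one lineage as the NIST -> TLCTC -> MITRE chain."""
    return (f"{lineage.nist_risk} -> "
            f"{lineage.tlctc_id} ({lineage.tlctc_name}) -> "
            f"{lineage.atlas_id} ({lineage.atlas_name})")
```

Printing `trace(LINEAGES[0])` produces the full strategic-to-tactical chain for prompt injection, which is useful when risk registers must show how a high-level NIST risk decomposes into concrete techniques.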

Benefits of Integrating TLCTC with NIST's AI Risk Frameworks

  • Enhanced Clarity & Specificity: Moves beyond broad terms like "Information Security" to specific, cause-oriented cyber threat clusters.
  • Structured Risk Assessment (Map): Provides a consistent taxonomy to map AI cyber risks.
  • Targeted Measurement (Measure): Enables development of metrics for resilience against specific TLCTCs.
  • Actionable Mitigation (Manage): Allows for more precise control selection by understanding the root cyber threat vector.
  • Balanced Approach to AI's Dual Nature: Provides a framework to understand both AI threats and AI-powered defenses within the same taxonomy.
  • Improved Communication: Offers a common language for diverse stakeholders, from AI developers and risk managers to cybersecurity operations.
  • Stronger Bridge to Tactical Intelligence: Directly connects NIST's strategic guidance to the operational details provided by MITRE ATLAS and ATT&CK.

Moving Forward with Structured AI Cyber Risk Management

NIST's AI RMF 1.0 and the Generative AI Profile are foundational for responsible AI adoption. By integrating the TLCTC framework, organizations can bring a new level of structure and clarity to the cybersecurity dimensions of AI risk. This cause-side approach not only demystifies the threats but also empowers more effective, targeted, and strategic defense.

The emerging reality of AI as both a powerful threat actor and a powerful defender requires frameworks that can accommodate this duality. TLCTC provides the necessary structure to assess both the offensive capabilities that must be defended against and the defensive capabilities that can be leveraged, all within a consistent taxonomy that bridges strategic risk management and tactical security operations.

As AI continues to evolve, leveraging a universal cyber threat taxonomy like TLCTC will be indispensable for navigating its complexities and ensuring that AI systems are, indeed, secure and resilient, while also maximizing their potential as defenders against the very threats they might otherwise enable.

References

  • National Institute of Standards and Technology (NIST). (2023, January). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). https://doi.org/10.6028/NIST.AI.100-1
  • National Institute of Standards and Technology (NIST). (2024, July). Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1). https://doi.org/10.6028/NIST.AI.600-1
  • TLCTC.net. (2025, April). Top Level Cyber Threat Clusters, White Paper Version 1.6.3 [White paper].
  • TLCTC.net. (2025, May 9). ATLAS to TLCTC Mapping: Aligning AI Threats with Standardized Cyber Threat Clusters [Blog post].