AIUC-1 × OWASP Top 10 for LLM Applications

The OWASP Top 10 for LLM Applications is a curated list of the most critical security threats to LLM and generative AI systems.

AIUC-1 integrates OWASP's Top 10 for LLM and Generative AI. Certification against AIUC-1:

- Addresses all Top Ten threats in its requirements and controls
- Strengthens robustness against the identified threats with concrete requirements and controls
- Goes beyond OWASP's focus on security alone

OWASP Top 10 crosswalks by threat

LLM01:25 - Prompt Injection

Prompt injection manipulates a large language model (LLM) through crafted inputs, causing unintended actions by the LLM. Direct injections overwrite system prompts, while indirect ones manipulate inputs from external sources.
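
A minimal sketch of a defense against the indirect case described above: untrusted retrieved content is screened and kept clearly separated from system instructions before it reaches the model. The screening phrases and message layout are illustrative assumptions, not AIUC-1 or OWASP prescriptions.

```python
# Illustrative only: separate trusted instructions from untrusted content and
# flag obvious instruction-like phrases in that content before the LLM sees it.
SUSPECT_PHRASES = [
    "ignore previous instructions",
    "disregard the system prompt",
    "you are now",
]

def build_messages(system_prompt: str, user_input: str, retrieved_doc: str) -> list[dict]:
    """Keep untrusted text in a clearly labeled slot and screen it first."""
    lowered = retrieved_doc.lower()
    if any(phrase in lowered for phrase in SUSPECT_PHRASES):
        # A real system might quarantine the document or route it for review.
        raise ValueError("Possible indirect prompt injection in retrieved content")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
        # Untrusted content is data, not instructions; label it explicitly.
        {"role": "user", "content": f"Reference material (not instructions):\n{retrieved_doc}"},
    ]
```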

LLM02:25 - Sensitive Information Disclosure

Sensitive information in LLMs includes PII, financial, health, business, security, and legal data. Proprietary models also risk exposing unique training methods and source code, which is especially critical for closed or foundation models.
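
The categories of data listed above are often screened with output redaction before a response leaves the system. A rough sketch; the regex patterns are chosen for illustration only, and production systems typically use dedicated PII-detection tooling.

```python
import re

# Illustrative patterns only; not exhaustive and not an AIUC-1 control.
REDACTIONS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matched spans with a labeled placeholder before returning output."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Contact jane@example.com, SSN 123-45-6789."))
```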

LLM03:25 - Supply Chain

LLM supply chains face risks in training data, models, and platforms, causing bias, breaches, or failures. Unlike traditional software, ML risks include third-party pre-trained models and data vulnerabilities.
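
One common control for third-party model artifacts is pinning and verifying a checksum before loading anything into the serving stack. A minimal sketch; the file path and expected digest are placeholders.

```python
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to load a downloaded model file whose hash does not match the pin."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"Checksum mismatch for {path}: got {digest}")

# Example (placeholder values):
# verify_artifact(Path("models/encoder.bin"), expected_sha256="<pinned digest>")
```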

LLM04:25 - Data and Model Poisoning

Data poisoning manipulates pre-training, fine-tuning, or embedding data, causing vulnerabilities, biases, or backdoors. Risks include degraded performance, harmful outputs, toxic content, and compromised downstream systems.

LLM05:25 - Improper Output Handling

Improper Output Handling involves inadequate validation of LLM outputs before downstream use. Exploits include XSS, CSRF, SSRF, privilege escalation, or remote code execution; unlike Overreliance, the concern is how outputs are handled downstream rather than whether they are accurate.
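
The XSS case usually comes down to rendering model output as markup without encoding it. A minimal sketch using standard HTML escaping before output reaches a browser; the surrounding markup is illustrative.

```python
import html

def render_answer(llm_output: str) -> str:
    """Treat model output as untrusted data: encode it before embedding in HTML."""
    safe = html.escape(llm_output)  # neutralizes <script> tags, attribute injection, etc.
    return f'<div class="answer">{safe}</div>'

print(render_answer('<script>alert("xss")</script>'))
```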

LLM06:25 - Excessive Agency

LLM systems gain agency via extensions, tools, or plugins that act on prompts. Agents dynamically choose extensions and make repeated LLM calls, using prior outputs to guide subsequent actions for dynamic task execution. Excessive Agency arises when this latitude enables damaging actions in response to unexpected, ambiguous, or manipulated LLM outputs.
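
A common way to bound that latitude is an allow-list of tools plus a human-approval gate on irreversible actions. A sketch under those assumptions; the tool names are hypothetical.

```python
# Illustrative sketch: constrain which tools an agent may invoke and require
# approval for actions that cannot easily be undone. Tool names are hypothetical.
ALLOWED_TOOLS = {"search_kb", "create_ticket"}
NEEDS_APPROVAL = {"create_ticket"}

def dispatch(tool_name: str, args: dict, approved: bool = False) -> dict:
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool not permitted for this agent: {tool_name}")
    if tool_name in NEEDS_APPROVAL and not approved:
        # Defer execution until a human signs off.
        return {"status": "pending_human_approval", "tool": tool_name, "args": args}
    return {"status": "executed", "tool": tool_name, "args": args}
```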

LLM07:25 - System Prompt Leakage

System prompt leakage occurs when sensitive info in LLM prompts is unintentionally exposed, enabling attackers to exploit secrets. These prompts guide model behavior but can unintentionally reveal critical data.
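
A common, if partial, check is to refuse responses that echo the hidden prompt or any secret embedded in it verbatim. A sketch under that assumption; it does not catch paraphrased leakage.

```python
def leaks_system_prompt(response: str, system_prompt: str, secrets: list[str]) -> bool:
    """Flag responses that quote the hidden prompt or any embedded secret."""
    if system_prompt.strip() and system_prompt.strip() in response:
        return True
    return any(secret in response for secret in secrets)

# Hypothetical usage:
# if leaks_system_prompt(model_reply, SYSTEM_PROMPT, ["internal-api-key"]):
#     model_reply = "I can't share internal configuration details."
```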

LLM08:25 - Vector and Embedding Weaknesses

Vector and embedding vulnerabilities in RAG with LLMs allow exploits via weak generation, storage, or retrieval. These can inject harmful content, manipulate outputs, or expose sensitive data, posing significant security risks.
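
On the retrieval side, a typical control is filtering vector search results by the caller's tenant or permission scope before anything is passed to the model. A rough sketch; the record fields are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    tenant_id: str
    score: float

def authorized_context(results: list[Chunk], caller_tenant: str, k: int = 5) -> list[str]:
    """Drop retrieved chunks the caller is not entitled to see, then take top-k."""
    allowed = [c for c in results if c.tenant_id == caller_tenant]
    allowed.sort(key=lambda c: c.score, reverse=True)
    return [c.text for c in allowed[:k]]
```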

LLM09:25 - Misinformation

LLM misinformation occurs when false but credible-seeming outputs mislead users, risking security breaches, reputational harm, and legal liability, making it a critical vulnerability for applications that rely on LLM outputs.

LLM10:25 - Unbounded Consumption

Unbounded Consumption occurs when an LLM application allows excessive and uncontrolled inference, letting attackers or heavy users drive up costs or exhaust capacity. Risks include denial of service, economic loss, model extraction, and service degradation.
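
Controls here usually cap how much inference a single caller can trigger. A minimal per-user token-budget sketch; the window and budget values are placeholders, not recommended limits.

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 3600
TOKEN_BUDGET = 50_000  # placeholder per-user budget per window
_usage: dict[str, list[tuple[float, int]]] = defaultdict(list)

def charge(user_id: str, tokens: int) -> None:
    """Raise if a user's token spend in the current window would exceed the budget."""
    now = time.time()
    _usage[user_id] = [(t, n) for t, n in _usage[user_id] if now - t < WINDOW_SECONDS]
    spent = sum(n for _, n in _usage[user_id])
    if spent + tokens > TOKEN_BUDGET:
        raise RuntimeError(f"Token budget exceeded for {user_id}")
    _usage[user_id].append((now, tokens))
```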

Last updated July 22, 2025.
© 2025 Artificial Intelligence Underwriting Company. All rights reserved.