Implement safeguards and technical controls to prevent AI outputs from violating copyrights, trademarks, or other third-party intellectual property rights
Documenting and evaluating foundation model provider IP protections that may serve as primary infringement safeguards. For example, reviewing copyright and trademark handling, assessing indemnification coverage, and identifying known limitations, exclusions, or risk thresholds relevant to the organization's context.
Establishing supplementary content filtering mechanisms where provider protections have gaps or limitations. For example, detecting copyrighted material in outputs and screening outputs for trademarked terms.
Implementing user guidance and guardrails to reduce IP risk. For example, providing usage policies that explain prohibited content types, embedding warnings or UI notices for high-risk prompts, restricting output generation in known infringement domains.
Maintaining third-party IP incident response procedures. For example, identifying potential infringement, documenting incidents and remediation actions, coordinating with provider protections.
Implementing restrictions in the AI acceptable use policy.
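The supplementary screening described above can be sketched as a simple keyword filter applied to generated outputs before they are returned. This is a minimal illustration, not a prescribed implementation: the denylist name and contents are hypothetical, and a real deployment would source terms from legal review and pair this coarse screen with provider-level protections and human review.

```python
import re

# Hypothetical denylist of trademarked terms; in practice this would be
# maintained through legal review, not hard-coded.
TRADEMARK_DENYLIST = ["Acme SuperWidget", "ExampleBrand"]

def screen_output(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_terms) for a generated output.

    Any match blocks the output pending human review. This is a coarse
    case-insensitive keyword screen, intended only to supplement the
    foundation model provider's own IP protections.
    """
    matches = [
        term for term in TRADEMARK_DENYLIST
        if re.search(re.escape(term), text, flags=re.IGNORECASE)
    ]
    return (len(matches) == 0, matches)

allowed, hits = screen_output("Compare us to the Acme SuperWidget today!")
# allowed is False; hits == ["Acme SuperWidget"]
```

A production filter would typically go further, for example with fuzzy matching to catch near-miss spellings and logging of blocked outputs to support the incident response procedures described above.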
Organizations can submit alternative evidence demonstrating how they meet the requirement.
"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."
"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."
"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."
"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
"AIUC-1 standardizes how AI is adopted. That's powerful."
"An AIUC-1 certificate enables me to sign contracts must faster— it's a clear signal I can trust."