
AIUC-1 × NIST AI RMF

The NIST AI Risk Management Framework (AI RMF) is the United States government's voluntary framework for managing AI risks throughout the AI lifecycle, organized around four core functions: Govern, Map, Measure, and Manage.

AIUC-1 operationalizes the NIST AI RMF. Certification against AIUC-1:

Translates NIST's high-level actions into specific, auditable controls

Provides concrete implementation guidance for key areas such as harmful output prevention, third-party testing, and risk management practices

NIST AI RMF crosswalks by function

GOVERN 1.1: Legal and regulatory compliance
Legal and regulatory requirements involving AI are understood, managed, and documented.

GOVERN 1.2: Trustworthy AI policies
The characteristics of trustworthy AI are integrated into organizational policies, processes, and procedures.

GOVERN 1.3: Risk management processes
Processes and procedures are in place to determine the needed level of risk management activities based on the organization's risk tolerance.

GOVERN 1.4: Risk management governance
The risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities.

GOVERN 1.5: Risk monitoring and review
Ongoing monitoring and periodic review of the risk management process and its outcomes are planned, and organizational roles and responsibilities are clearly defined, including determining the frequency of periodic review.

GOVERN 1.6: AI system inventory
Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.
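
To make this concrete, the sketch below shows one shape an inventory record could take; the fields, risk tiers, and identifiers are illustrative assumptions, not prescribed by NIST or AIUC-1.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an organizational AI system inventory (illustrative)."""
    system_id: str                  # unique identifier, e.g. "support-chatbot-01"
    owner: str                      # accountable team or individual
    purpose: str                    # intended use, in plain language
    third_party_models: list[str] = field(default_factory=list)
    risk_tier: str = "unassessed"   # e.g. "low" / "medium" / "high", per org priorities
    last_reviewed: date | None = None

# Governance processes can then query the register, e.g. to surface
# high-risk systems that are overdue for review.
inventory: dict[str, AISystemRecord] = {}
```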

GOVERN 1.7: AI system decommissioning
Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization's trustworthiness.

GOVERN 2.1: Roles and responsibilities
Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.

GOVERN 2.2: AI risk training
The organization's personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.

GOVERN 2.3: Executive accountability
Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.

GOVERN 3.1: Diverse decision-making
Decision-making related to mapping, measuring, and managing AI risks throughout the lifecycle is informed by a diverse team (e.g., diversity of demographics, disciplines, experience, expertise, and backgrounds).

GOVERN 3.2: Human-AI oversight
Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.

GOVERN 4.1: Safety-first mindset
Organizational policies and practices are in place to foster a critical thinking and safety-first mindset in the design, development, deployment, and uses of AI systems to minimize negative impacts.

GOVERN 4.2: Risk documentation
Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate, and use, and communicate about the impacts more broadly.

GOVERN 4.3: Testing and incident sharing
Organizational practices are in place to enable AI testing, identification of incidents, and information sharing.

GOVERN 5.1: External feedback
Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those external to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI risks.

GOVERN 5.2: Feedback integration
Mechanisms are established to enable the team that developed or deployed the AI system to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.

GOVERN 6.1: Third-party risk policies
Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third party's intellectual property or other rights.

GOVERN 6.2: Third-party contingency
Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk.

MANAGE 1.1: Purpose achievement
A determination is made as to whether the AI system achieves its intended purpose and stated objectives and whether its development or deployment should proceed.

MANAGE 1.2: Risk prioritization
Treatment of documented AI risks is prioritized based on impact, likelihood, or available resources or methods.
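
One simple way to operationalize this prioritization, shown here as an illustrative sketch rather than anything mandated by either framework, is to order documented risks by an impact × likelihood score:

```python
# Illustrative prioritization: order documented risks by impact x likelihood
# (1-5 scales assumed) and treat the highest-scoring risks first.
risks = [
    {"name": "harmful output reaches end user", "impact": 5, "likelihood": 3},
    {"name": "PII leaked via prompt logs",      "impact": 4, "likelihood": 2},
    {"name": "stale knowledge base answers",    "impact": 2, "likelihood": 4},
]

for risk in sorted(risks, key=lambda r: r["impact"] * r["likelihood"], reverse=True):
    score = risk["impact"] * risk["likelihood"]
    print(f"score={score:>2}  {risk['name']}")
```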

MANAGE 1.3: Risk response planning
Responses to the AI risks deemed high priority, as identified by the MAP function, are developed, planned, and documented. Risk response options can include mitigating, transferring, avoiding, or accepting.

MANAGE 1.4: Residual risk documentation
Negative residual risks (defined as the sum of all unmitigated risks) to both downstream acquirers of AI systems and end users are documented.

MANAGE 2.1: Resource allocation
Resources required to manage AI risks are taken into account – along with viable non-AI alternative systems, approaches, or methods – to reduce the magnitude or likelihood of potential impacts.

MANAGE 2.2: Deployed system value
Mechanisms are in place and applied to sustain the value of deployed AI systems.

MANAGE 2.3: Unknown risk response
Procedures are followed to respond to and recover from a previously unknown risk when it is identified.

MANAGE 2.4: System deactivation
Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use.
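
A minimal sketch of one such mechanism follows: a centrally controlled kill switch checked before every model invocation, so an assigned owner can disengage the system without a redeploy. The flag name, storage, and fallback message are hypothetical.

```python
import os

def ai_enabled() -> bool:
    # In practice this would read a feature-flag service or config store;
    # an environment variable stands in for illustration.
    return os.environ.get("AI_SYSTEM_ENABLED", "true") == "true"

def handle_request(prompt: str) -> str:
    if not ai_enabled():
        # Fail over to a safe, non-AI path when the system is disengaged.
        return "This feature is temporarily unavailable. A human agent will follow up."
    return call_model(prompt)

def call_model(prompt: str) -> str:
    # Placeholder for the real model invocation.
    return f"model response to: {prompt}"
```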

MANAGE 3.1: Third-party monitoring
AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documented.

MANAGE 3.2: Pre-trained model monitoring
Pre-trained models which are used for development are monitored as part of AI system regular monitoring and maintenance.

MANAGE 4.1: Post-deployment monitoring
Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management.

MANAGE 4.2: Continual improvement
Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors.

MANAGE 4.3: Incident communication
Incidents and errors are communicated to relevant AI actors, including affected communities. Processes for tracking, responding to, and recovering from incidents and errors are followed and documented.

MAP 1.1: Context understanding
Intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include: the specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or product AI lifecycle; and TEVV and system metrics.

MAP 1.2: Interdisciplinary diversity
Interdisciplinary AI actors, competencies, skills, and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary collaboration are prioritized.

MAP 1.3: Mission alignment
The organization's mission and relevant goals for the AI technology are understood and documented.

MAP 1.4: Business value
The business value or context of business use has been clearly defined or – in the case of assessing existing AI systems – re-evaluated.

MAP 1.5: Risk tolerance
Organizational risk tolerances are determined and documented.

MAP 1.6: System requirements
System requirements (e.g., "the system shall respect the privacy of its users") are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.

MAP 2.1: Task definition
The specific tasks, and the methods used to implement them, that the AI system will support are defined (e.g., classifiers, generative models, recommenders).

MAP 2.2: Knowledge limits
Information about the AI system's knowledge limits and how system output may be utilized and overseen by humans is documented. Documentation provides sufficient information to assist relevant AI actors when making informed decisions and taking subsequent actions.

MAP 2.3: Scientific integrity
Scientific integrity and TEVV considerations are identified and documented, including those related to experimental design, data collection and selection (e.g., availability, representativeness, suitability), system trustworthiness, and construct validation.

MAP 3.1: Potential benefits
Potential benefits of intended AI system functionality and performance are examined and documented.

MAP 3.2: Potential costs
Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness – as connected to organizational risk tolerance – are examined and documented.

MAP 3.3: Application scope
Targeted application scope is specified and documented based on the system's capability, established context, and AI system categorization.

MAP 3.4: Operator proficiency
Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant technical standards and certifications – are defined, assessed, and documented.

MAP 3.5: Human oversight
Processes for human oversight are defined, assessed, and documented in accordance with organizational policies from the GOVERN function.

MAP 4.1: Legal risk mapping
Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or software – are in place, followed, and documented, as are risks of infringement of a third party's intellectual property or other rights.

MAP 4.2: Internal risk controls
Internal risk controls for components of the AI system, including third-party AI technologies, are identified and documented.

MAP 5.1: Impact assessment
Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) – based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data – are identified and documented.
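
As an illustrative sketch of how likelihood and magnitude can be documented per impact (the 1–5 scales and rating thresholds below are assumptions, not framework requirements):

```python
# Illustrative impact assessment: document likelihood and magnitude on
# 1-5 scales for each identified impact, beneficial as well as harmful.
def rating(likelihood: int, magnitude: int) -> str:
    score = likelihood * magnitude
    return "high" if score >= 15 else "medium" if score >= 8 else "low"

impacts = {
    "incorrect refund issued to customer": (4, 3),  # (likelihood, magnitude)
    "offensive response shown to user":    (2, 5),
    "faster ticket resolution":            (5, 2),  # beneficial impacts are documented too
}

for impact, (lik, mag) in impacts.items():
    print(f"{impact}: likelihood={lik}, magnitude={mag}, rating={rating(lik, mag)}")
```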

MAP 5.2: Stakeholder engagement
Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented.

MEASURE 1.1: Risk metrics selection
Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected for implementation, starting with the most significant AI risks. The risks or trustworthiness characteristics that will not – or cannot – be measured are properly documented.

MEASURE 1.2: Metric appropriateness
Appropriateness of AI metrics and effectiveness of existing controls are regularly assessed and updated, including reports of errors and impacts on affected communities.

MEASURE 1.3: Independent assessment
Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, AI actors external to the team that developed or deployed the AI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance.

MEASURE 2.1: TEVV documentation
Test sets, metrics, and details about the tools used during test, evaluation, validation, and verification (TEVV) are documented.

MEASURE 2.2: Human subject evaluations
Evaluations involving human subjects meet applicable requirements (including human subject protection) and are representative of the relevant population.

MEASURE 2.3: Performance demonstration
AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for conditions similar to deployment setting(s). Measures are documented.

MEASURE 2.4: Production monitoring
The functionality and behavior of the AI system and its components – as identified in the MAP function – are monitored when in production.
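
A minimal sketch of such monitoring, assuming hypothetical metric names and thresholds, could compare a rolling window of production behavior against limits set during the MAP function:

```python
from collections import deque

WINDOW: deque = deque(maxlen=500)   # most recent interactions
THRESHOLDS = {"refusal_rate": 0.20, "fallback_rate": 0.10}  # assumed limits

def record_interaction(refused: bool, fell_back: bool) -> None:
    WINDOW.append({"refused": refused, "fell_back": fell_back})

def check_thresholds() -> list[str]:
    """Return alert messages for any behavior metric above its threshold."""
    if not WINDOW:
        return []
    rates = {
        "refusal_rate": sum(i["refused"] for i in WINDOW) / len(WINDOW),
        "fallback_rate": sum(i["fell_back"] for i in WINDOW) / len(WINDOW),
    }
    return [f"{name} {value:.2f} exceeds {THRESHOLDS[name]:.2f}"
            for name, value in rates.items() if value > THRESHOLDS[name]]
```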

MEASURE 2.5: Validity and reliability
The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the conditions under which the technology was developed are documented.

MEASURE 2.6: Safety evaluation
The AI system is evaluated regularly for safety risks – as identified in the MAP function. The AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and it can fail safely, particularly if made to operate beyond its knowledge limits. Safety metrics reflect system reliability and robustness, real-time monitoring, and response times for AI system failures.

MEASURE 2.7: Security and resilience
AI system security and resilience – as identified in the MAP function – are evaluated and documented.

MEASURE 2.8: Transparency and accountability
Risks associated with transparency and accountability – as identified in the MAP function – are examined and documented.

MEASURE 2.9: Model explanation
The AI model is explained, validated, and documented, and AI system output is interpreted within its context – as identified in the MAP function – to inform responsible use and governance.

MEASURE 2.10: Privacy risk assessment
Privacy risk of the AI system – as identified in the MAP function – is examined and documented.

MEASURE 2.11: Fairness and bias
Fairness and bias – as identified in the MAP function – are evaluated, and results are documented.

MEASURE 2.12: Environmental impact
Environmental impact and sustainability of AI model training and management activities – as identified in the MAP function – are assessed and documented.

MEASURE 2.13: TEVV effectiveness
Effectiveness of the employed TEVV metrics and processes in the MEASURE function is evaluated and documented.

MEASURE 3.1: Emergent risk tracking
Approaches, personnel, and documentation are in place to regularly identify and track existing, unanticipated, and emergent AI risks based on factors such as intended and actual performance in deployed contexts.

MEASURE 3.2: Risk tracking adaptation
Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available measurement techniques or where metrics are not yet available.

MEASURE 3.3: User feedback systems
Feedback processes for end users and impacted communities to report problems and appeal system outcomes are established and integrated into AI system evaluation metrics.
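
To illustrate what "integrated into evaluation metrics" can look like in practice, here is a hedged sketch of a structured feedback intake whose reports feed a simple appeal-rate metric; the categories and fields are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FeedbackReport:
    user_id: str
    category: str       # e.g. "incorrect answer", "harmful content", "appeal"
    description: str
    output_id: str      # identifier of the system output being reported
    received_at: datetime

reports: list[FeedbackReport] = []

def submit_feedback(user_id: str, category: str, description: str, output_id: str) -> None:
    reports.append(FeedbackReport(user_id, category, description, output_id,
                                  datetime.now(timezone.utc)))

def appeal_rate(total_outputs: int) -> float:
    """Share of outputs appealed -- one way feedback can enter evaluation metrics."""
    appeals = sum(r.category == "appeal" for r in reports)
    return appeals / total_outputs if total_outputs else 0.0
```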

MEASURE 4.1: Context-specific measurement
Measurement approaches for identifying AI risks are connected to deployment context(s) and informed through consultation with domain experts and other end users. Approaches are documented.

MEASURE 4.2: Trustworthiness validation
Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are informed by input from domain experts and other relevant AI actors to validate whether the system is performing consistently as intended. Results are documented.

MEASURE 4.3: Performance tracking
Measurable performance improvements or declines, based on consultations with relevant AI actors (including affected communities) and field data about context-relevant risks and trustworthiness characteristics, are identified and documented.

Last updated July 22, 2025.
© 2025 Artificial Intelligence Underwriting Company. All rights reserved.