NIST AI Risk Management Framework (AI RMF)

National Institute of Standards and Technology (NIST)

Cross-sector
Risk management
Worldwide
Policy guidance
Private and public sector

Policy overview

The AI Risk Management Framework (AI RMF) is a voluntary framework that aims to equip organisations and individuals to manage the risks of AI and to promote the trustworthy and responsible development and use of AI systems. The AI RMF was developed by the National Institute of Standards and Technology (NIST) in partnership with the private and public sectors. It is intended as practical guidance that is flexible enough to evolve with the changing AI landscape and suitable for implementation by organisations with varying levels of capacity. The AI RMF was released on 26 January 2023. It aligns with existing AI risk management efforts and is complemented by the NIST AI RMF Playbook, the AI RMF Roadmap, AI RMF Crosswalks, and other related resources.

The AI RMF is intended for voluntary use by all AI actors. AI actors encompass organisations and individuals actively involved in the entire lifecycle of an AI system, including those who deploy or operate AI technologies.

The Framework comprises two parts. Part 1 addresses how organisations should frame AI-related risks, defines the intended audience, and analyses AI risks and trustworthiness. The AI RMF outlines the characteristics of trustworthy AI: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. Part 2 forms the core of the Framework, detailing four functions for organisations to address the risks of their AI systems in practice: 'Govern', 'Map', 'Measure', and 'Manage'. 'Govern' focuses on fostering a culture of risk awareness and management in the organisation and applies across all stages of the organisation's risk management procedures and processes. The 'Map' function refers to recognising the use-case context and identifying context-specific risks, while 'Measure' refers to the evaluation, analysis, and monitoring of identified risks. The last function, 'Manage', refers to prioritising risks and acting on them based on their anticipated impact. These four functions are further broken down into categories and subcategories in the AI RMF.
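The hierarchy described above (four core functions, each broken down into categories and subcategories) can be sketched as a simple data structure. This is an illustrative model only, not an official NIST artefact: the function names and purpose summaries come from the Framework, while the category field and any category names an organisation would fill in are assumptions for demonstration.

```python
from dataclasses import dataclass, field


@dataclass
class Category:
    """A category within an AI RMF function (names would come from the RMF text)."""
    name: str
    subcategories: list[str] = field(default_factory=list)


@dataclass
class RMFFunction:
    """One of the four core functions from Part 2 of the AI RMF."""
    name: str
    purpose: str
    categories: list[Category] = field(default_factory=list)


# Purposes paraphrased from the Framework's descriptions of each function.
AI_RMF_CORE = [
    RMFFunction("Govern", "Foster a culture of risk awareness and management"),
    RMFFunction("Map", "Recognise use-case context and identify context-specific risks"),
    RMFFunction("Measure", "Evaluate, analyse, and monitor identified risks"),
    RMFFunction("Manage", "Prioritise risks and act on them based on anticipated impact"),
]

for fn in AI_RMF_CORE:
    print(f"{fn.name}: {fn.purpose}")
```

An organisation adopting the Framework would populate each function's categories and subcategories from the AI RMF document and track its own evidence against them.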
