Decoding NIST’s AI RMF (Risk Management Framework): Overview and Critique

U.Y.
4 min read · May 10, 2024


Understanding the risks of a product is crucial for all project stakeholders, especially for those who have to approve the residual risks at the end of the day. Risk management has become even more important with the rise of AI solutions, as AI systems increasingly interact with, support decisions for, make decisions for, or take actions on behalf of a consumer (a human or a non-human identity).

I wanted to create an overview of NIST’s AI RMF for those who want to understand what the document explains and how to proceed with it. You may be wondering if I used an AI tool to create the overview, and the answer is “no” :)

Let me start by quoting the goal of the AI RMF below and offering some critique of the document:

“the goal of the AI RMF is to offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.”

First of all, it is worth mentioning that the document is written in plain language for a broad audience. Even if you have a non-technical background, I guarantee that you will understand most of the content, provided you read for understanding.

The framework document is also relatively new (version 1.0), and it will clearly grow on top of its current strong structure. But even in this first version, the authors did a great job of showing us the way.

You might ask why we need a risk management framework specifically for AI. It is because AI risks require a different perspective than the one we are used to. Of course, the risk level is still a combination of “impact” and “likelihood”, but assessing and managing AI risk requires a different love and care, a different relationship :) Actually, I will surprise you and suggest that you read “Appendix B: How AI Risks Differ from Traditional Software Risks” even before the main parts. This will help you keep the contrast of AI risks in mind as you read the document.
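As a refresher on that classic combination, here is a minimal sketch of conventional risk scoring. The scales, thresholds, and function name are my own illustrative assumptions; they are not taken from the AI RMF.

```python
# A toy risk matrix: risk = impact x likelihood.
# Scales and thresholds are illustrative assumptions, not from the AI RMF.

IMPACT = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}

def risk_level(impact: str, likelihood: str) -> str:
    """Combine impact and likelihood into a coarse risk level."""
    score = IMPACT[impact] * LIKELIHOOD[likelihood]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(risk_level("high", "possible"))  # -> "high"
```

For AI systems, the hard part is not this arithmetic but estimating the inputs, which is exactly where Appendix B helps.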

The document consists of two parts: “Part 1: Foundational Information” and “Part 2: Core and Profiles”.

Part 1 is more technical, or let’s say, has more technical jargon. The initial section of this part explains how organizations can frame (conceptualize) AI risks, including the challenges of AI risk management. I highly recommend reading “1.2.2 Risk Tolerance”, because risk tolerance is a very critical awareness point for everyone. In addition, many companies or industries are unclear about “how much risk tolerance is legal or compliant”. Such a lack of clarity will hinder the progress of AI in some of those industries. This topic deserves its own blog post.

Part 1 also explains the intended audience, and has a great section about AI risks and trustworthiness. The characteristics of trustworthy AI systems are summarized as: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. The document elaborates on each characteristic and gives great references for further details.

Section 5 (AI RMF Core) of Part 2 is the heart of the document. If you have limited time, read only this section, which is about 13 pages long. The AI RMF Core is clearly explained, and the framework is extremely easy to follow. The four functions of the framework are Govern, Map, Measure, and Manage, all of which are self-explanatory (see the diagram below). Each function is then broken down into categories and subcategories.

AI Risk Management Framework Core (Image Source: https://doi.org/10.6028/NIST.AI.100-1)
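If you think in code, here is one way to picture that hierarchy. The function names come from the framework; the dataclass layout and the example identifier in the comment are my own illustrative assumptions.

```python
# A minimal sketch of the AI RMF Core hierarchy:
# functions -> categories -> subcategories.

from dataclasses import dataclass, field

@dataclass
class Subcategory:
    id: str          # e.g. "GOVERN 1.1" (see the Playbook for real entries)
    description: str

@dataclass
class Category:
    id: str
    description: str
    subcategories: list[Subcategory] = field(default_factory=list)

@dataclass
class Function:
    name: str        # one of: Govern, Map, Measure, Manage
    categories: list[Category] = field(default_factory=list)

core = [Function("Govern"), Function("Map"), Function("Measure"), Function("Manage")]
```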

Part 2 concludes with AI RMF Profiles, which are defined as “implementations of the AI RMF functions, categories, and subcategories for a specific setting or application”. Profiles can be used for purposes like visualizing the current and desired state, or building commonalities between different sectors.
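To make the “current and desired state” idea concrete, here is a minimal gap-analysis sketch. The subcategory IDs and the status values are illustrative assumptions, not something the framework prescribes.

```python
# A toy gap analysis between a current profile and a target (desired) profile.
# Status values ("none"/"partial"/"full") are illustrative assumptions.

current = {"GOVERN 1.1": "partial", "MAP 1.1": "none", "MEASURE 2.1": "full"}
target = {"GOVERN 1.1": "full", "MAP 1.1": "full", "MEASURE 2.1": "full"}

# Keep only the subcategories where the current state falls short of the target.
gaps = {
    sub: (current.get(sub, "none"), want)
    for sub, want in target.items()
    if current.get(sub, "none") != want
}
print(gaps)  # -> {'GOVERN 1.1': ('partial', 'full'), 'MAP 1.1': ('none', 'full')}
```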

The document also has four great appendices that are well worth reading. The whole document is 42 pages, and I estimate that reading it would take 2 to 4 hours, depending on your experience with AI and risk management.

To repeat: if you have limited time, focus on Section 5 and Appendix B. But don’t forget to come back later and read all the other pages :)

Once you read the framework, you will definitely realize that the companion Playbook is another great resource. Besides walking you through the categories and subcategories of the framework, it offers so many additional resource links and references that it could make you the world’s greatest AI risk management expert.

Happy reading!
