LearnAIAgents
🏛️ Govern

NIST AI RMF

Govern, Map, Measure, Manage — and the GenAI Profile.

NIST AI Risk Management Framework — the voluntary baseline

The NIST AI RMF 1.0 (2023) is the US National Institute of Standards and Technology's framework for managing AI risks. It is voluntary, but it is widely referenced in procurement contracts, sectoral guidance, and state-level regulation. If you sell AI products in the US or into US-regulated industries, you are increasingly expected to be able to describe how your organisation aligns with it.

In July 2024, NIST published NIST-AI-600-1, the Generative AI Profile — a companion document that maps the AI RMF to the specific risks and practices of generative and agentic AI. This is the document that matters most for agent teams.

The four functions

The AI RMF organises practice into four functions. They are a cycle, not a checklist.

  • Govern. Cultivate a culture of risk management. Define roles, accountability, policies, incident response. In REMIT terms: this is REMIT-R (Responsibility).
  • Map. Understand the context in which the AI is used — users, stakeholders, impacts, risk tolerance. In REMIT terms: this is REMIT-E (Envelope) scoping.
  • Measure. Assess risks and benefits quantitatively and qualitatively — testing, evaluation, verification, validation. In REMIT terms: this is REMIT-M (Monitoring) pre-deployment.
  • Manage. Prioritise and act on identified risks — mitigate, accept, transfer, or avoid. In REMIT terms: this is REMIT-T (Trust) level and REMIT-M (Monitoring) post-deployment.

The Generative AI Profile — what it adds

The GenAI Profile (NIST-AI-600-1) lists twelve specific risks of generative and agentic AI, and 200+ concrete actions organisations can take across the four functions.

The twelve risks (paraphrased):

  1. CBRN information risks — chemical, biological, radiological, nuclear.
  2. Confabulation — hallucination and fabrication.
  3. Dangerous, violent, or hateful content generation.
  4. Data privacy — re-identification, privacy leakage.
  5. Environmental impact of compute.
  6. Harmful bias and homogenisation of outputs.
  7. Human-AI configuration — overreliance and deskilling.
  8. Information integrity — misinformation, synthetic content.
  9. Information security — prompt injection, data exfiltration.
  10. Intellectual property — training and output infringement.
  11. Obscene, degrading, or abusive content generation.
  12. Value chain and component integration — third-party models, agents, and data.

2025 updates to the Profile emphasised model provenance — traceability of model origin and training history, especially for open-source or third-party models — and deeper GenAI-specific evaluative tools.

Five concrete actions for a product/eng team

  • Identify your agent's position on the 12-risk list. Which of the twelve apply? Most agents will hit 3–5.
  • Map the agent via the MAP function — users, stakeholders, deployers, impacts. This is essentially the Agent Canvas with a stakeholder layer.
  • Measure against a golden dataset that includes adversarial and bias cases. Document methodology.
  • Manage by declaring your risk tolerance per category and your mitigations. Document the residual risk.
  • Govern by naming an accountable executive, setting a review cadence, and wiring the measurements into monitoring.

Sources