What M-24-10 does
- OMB M-24-10, issued March 28, 2024, is the government-wide policy that operationalizes AI governance, innovation, and risk management across federal agencies, pursuant to Executive Order 14110’s direction to standardize safe, secure, and rights-respecting AI use in the federal government [1][2].
- The memo sets binding requirements for agency structures, processes, and safeguards, including formal governance roles, public transparency, risk controls tailored to safety- and rights-impacting AI, and acquisition practices that ensure vendor-provided AI can be evaluated and monitored by agencies [1].
Required governance structures and roles
- Each agency must designate a Chief AI Officer (CAIO), a senior official responsible for agency-wide AI strategy, oversight, risk management, and alignment of AI initiatives with policy, legal, and ethical requirements; M-24-10 required designation within 60 days of issuance [1].
- CFO Act agencies must convene an AI Governance Board, chaired by the Deputy Secretary or equivalent and vice-chaired by the CAIO, comprising senior leaders across information technology, data, cybersecurity, privacy, evaluation, civil rights, and mission domains to coordinate AI policy, risk review, and portfolio management [1].
- M-24-10 requires integration of AI risk considerations into enterprise risk management and program decision-making, with clear accountability for approving and overseeing AI deployments [1].
Scope and definitions that drive controls
- M-24-10 differentiates two categories of AI use that trigger minimum safeguards: safety-impacting AI and rights-impacting AI, defined respectively by the potential for harm to physical safety or critical services and by the potential to affect protected rights of individuals or groups (a minimal classification sketch follows this list) [1].
- Rights-impacting AI includes uses that could materially affect individuals’ civil rights, civil liberties, or privacy, requiring heightened transparency, human alternatives where practicable, and mechanisms for contesting outcomes and seeking redress [1][2].
- Safety-impacting AI covers uses where failures could pose meaningful risks to health, safety, or critical infrastructure, requiring rigorous testing, independent evaluation, and continuous monitoring prior to and during operational use [1].
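These two categories can be represented as a small taxonomy when building a use-case register. The sketch below is a minimal Python illustration under stated assumptions: the category names follow the memo, but the data model, class names (`AIUseCase`, `ImpactCategory`), and the example use case are hypothetical; M-24-10 defines the categories without prescribing any schema.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ImpactCategory(Enum):
    SAFETY_IMPACTING = auto()   # failures could endanger health, safety, or critical infrastructure
    RIGHTS_IMPACTING = auto()   # use could materially affect civil rights, civil liberties, or privacy
    NEITHER = auto()            # outside the minimum-practice trigger categories


@dataclass
class AIUseCase:
    name: str
    purpose: str
    categories: set[ImpactCategory] = field(default_factory=set)

    @property
    def requires_minimum_practices(self) -> bool:
        # M-24-10's minimum practices attach to safety- and rights-impacting uses.
        return bool(self.categories & {ImpactCategory.SAFETY_IMPACTING,
                                       ImpactCategory.RIGHTS_IMPACTING})


# Hypothetical example: a rights-impacting use case triggers the minimum practices.
screening = AIUseCase(
    name="Benefits eligibility pre-screening",
    purpose="Flag applications for priority human review",
    categories={ImpactCategory.RIGHTS_IMPACTING},
)
assert screening.requires_minimum_practices
```

A single use case can be both safety- and rights-impacting, which is why the sketch models the categorization as a set rather than a single label.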
Minimum practices for safety- and rights-impacting AI
For both safety- and rights-impacting AI, agencies must implement documented minimum practices before deployment and throughout operation; M-24-10 set December 1, 2024 as the deadline, after which non-compliant uses must cease absent an approved extension or waiver (a minimal gating sketch follows this list):
- Conduct context-appropriate risk assessments, testing, and evaluation to validate intended performance, reliability, robustness, and safeguards, aligned to recognized frameworks and standards such as the NIST AI RMF [1][3].
- Ensure meaningful human oversight and fallback procedures, including the ability to monitor AI behavior, intervene, and disable or roll back deployments when performance or safety thresholds are not met [1].
- Establish continuous monitoring and incident response, including mechanisms to detect performance drift, unexpected behaviors, or harmful outcomes and to report, remediate, and learn from incidents [1].
- Implement privacy protections, access controls, data governance, and security measures commensurate with the sensitivity and impact of the AI use, consistent with existing federal privacy and cybersecurity requirements [1].
- Address discrimination and equity risks through testing and evaluation for disparate impact, bias mitigation strategies, and documented controls, particularly for rights-impacting AI uses [1][2].
- For rights-impacting AI, provide clear notices, accessible explanations, and, where practicable, options for human review or alternative non-AI processes for affected individuals, with routes to contest decisions and seek correction or redress [1][2].
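A minimal gating sketch, assuming an agency tracks documented evidence per practice in a simple mapping. The practice keys paraphrase the list above, and the gate function (`deployment_approved`) is a hypothetical illustration, not a mechanism the memo prescribes.

```python
REQUIRED_PRACTICES = [
    "impact_assessment",       # context-appropriate risk assessment
    "testing_and_evaluation",  # validated performance, reliability, robustness
    "human_oversight",         # ability to monitor, intervene, disable, roll back
    "ongoing_monitoring",      # drift detection and incident response
    "privacy_and_security",    # access controls, data governance, security measures
    "bias_evaluation",         # disparate-impact testing and mitigation
]


def deployment_approved(evidence: dict[str, str]) -> bool:
    """Approve deployment only when every required practice has documented evidence."""
    missing = [p for p in REQUIRED_PRACTICES if not evidence.get(p)]
    if missing:
        print(f"Blocked: no documented evidence for {missing}")
        return False
    return True


# A use case missing monitoring, oversight, privacy, and bias evidence is blocked:
deployment_approved({"impact_assessment": "PIA-2024-017",
                     "testing_and_evaluation": "T&E report v2"})
```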
Transparency and public reporting
- Agencies must maintain and publish a public inventory of AI use cases, on agency websites and via the government-wide AI.gov portal, describing each use case’s purpose, risk categorization, safeguards, and contact point for inquiries, to support transparency and public oversight (a hypothetical entry is sketched below) [1][4].
- M-24-10 requires plain-language disclosures for rights-impacting AI that explain how AI is used, what it does, and what protections are in place, enabling individuals to understand and engage with agency processes that use AI [1].
- Agencies must document and make available policies, procedures, and governance artifacts that evidence compliance with M-24-10’s minimum practices, facilitating external accountability and internal auditability [1].
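A hypothetical inventory entry illustrating the kinds of fields the transparency requirements call for. The field names, values, and contact address are illustrative assumptions, not the official OMB reporting schema.

```python
import json

inventory_entry = {
    "use_case_name": "Benefits eligibility pre-screening",
    "purpose": "Flag applications for priority human review",
    "risk_categorization": ["rights-impacting"],
    "safeguards": [
        "pre-deployment disparate-impact testing",
        "human review of all adverse determinations",
        "annual independent evaluation",
    ],
    "contact": "ai-inquiries@agency.example.gov",  # hypothetical point of contact
}
print(json.dumps(inventory_entry, indent=2))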
Acquisition and vendor management requirements
- M-24-10 directs agencies to embed AI-specific requirements into acquisition processes, including obtaining sufficient technical documentation and evaluation artifacts to assess the performance, risks, training data provenance, and safeguards of vendor-developed or vendor-provided AI systems (see the checklist sketch after this list) [1].
- Contracts for AI should provide agencies with rights and access necessary to test, evaluate, monitor, and manage AI systems over their lifecycle, including post-award rights that enable independent assessments and remediation actions if risks emerge [1].
- Agencies are to align acquisition practices with AI risk management and governance, ensuring that solicitations and evaluations consider minimum practices for safety- and rights-impacting AI before award and deployment [1].
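One way to make these acquisition expectations concrete is a solicitation checklist of vendor deliverables and access rights, checked against what an offer actually covers. The sketch below is an assumption-laden illustration; the deliverable names and descriptions are not contract language or memo text.

```python
VENDOR_DELIVERABLES = {
    "model_documentation": "intended use, known limitations, training data provenance",
    "evaluation_artifacts": "test results against agency-relevant benchmarks",
    "monitoring_access": "telemetry or interfaces sufficient for lifecycle monitoring",
    "independent_testing_rights": "agency or third-party evaluation permitted post-award",
    "remediation_terms": "defined process for addressing risks that emerge in operation",
}


def solicitation_gaps(offered: set[str]) -> set[str]:
    """Deliverables a vendor's offer does not cover."""
    return set(VENDOR_DELIVERABLES) - offered


# Hypothetical offer covering only documentation and test results:
print(solicitation_gaps({"model_documentation", "evaluation_artifacts"}))
```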
Standards and frameworks alignment
- M-24-10’s minimum practices draw on the NIST AI Risk Management Framework, and OMB encourages agencies to apply its core functions and profiles to AI risk identification, measurement, and mitigation across agency AI lifecycle processes and documentation (a mapping sketch follows below) [1][3].
- EO 14110 establishes the federal policy to ground AI governance in science-based standards and testing, which M-24-10 carries through by aligning agency requirements with NIST’s work and related federal standards activities [2][1][3].
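A minimal mapping sketch, assuming an agency ties each NIST AI RMF 1.0 core function to the lifecycle artifacts named in this section. GOVERN, MAP, MEASURE, and MANAGE are the framework’s actual core functions; the artifact names on the right are illustrative assumptions.

```python
RMF_TO_ARTIFACTS = {
    "GOVERN": ["AI Governance Board charter", "enterprise risk register entries"],
    "MAP": ["use-case inventory entry", "impact categorization record"],
    "MEASURE": ["test and evaluation plan", "bias and robustness results"],
    "MANAGE": ["monitoring dashboard", "incident response runbook"],
}
```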
Implementation approach for agencies
Near-term agency actions to meet M-24-10:
- Designate the Chief AI Officer and stand up the AI Governance Board with clear charters, decision rights, and escalation paths across IT, data, security, privacy, evaluation, civil rights, and mission leadership [1].
- Inventory and classify AI use cases, identifying those that are safety- or rights-impacting, and publish the public inventory on agency websites and provide entries for AI.gov, with descriptions of safeguards and points of contact [1][4].
- Establish minimum practice controls for identified safety- and rights-impacting use cases: risk assessments, testing and independent evaluation, human oversight and fallback, privacy and security protections, monitoring, incident response, and equity/bias mitigation as applicable [1].
- Update acquisition policies, templates, and contract clauses to require vendor documentation, testing access, and lifecycle monitoring rights for AI systems and services, ensuring pre-deployment conformance to M-24-10 minimum practices [1].
- Integrate NIST AI RMF processes into agency AI governance artifacts, including risk registers, evaluation plans, monitoring dashboards, and incident reporting workflows, and tie these into enterprise risk management (a drift-check sketch follows this list) [1][3].
- Publish plain-language notices and procedures for rights-impacting AI that enable affected individuals to understand, contest, and seek redress for AI-enabled decisions, and ensure availability of human alternatives where practicable [1][2].
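For the monitoring and incident-reporting step, a simple pattern is a drift check against a baseline recorded at deployment approval. The sketch below is illustrative only; the metric, threshold, and alerting mechanics are assumptions, since M-24-10 requires monitoring but does not prescribe implementation details.

```python
def check_for_drift(baseline_accuracy: float,
                    live_accuracy: float,
                    tolerance: float = 0.05) -> bool:
    """Flag an incident when live performance falls more than `tolerance`
    below the baseline approved at deployment (metric and threshold are
    illustrative assumptions)."""
    drifted = (baseline_accuracy - live_accuracy) > tolerance
    if drifted:
        print("Incident: performance below approved threshold; "
              "escalate per the incident-response workflow and consider rollback.")
    return drifted


check_for_drift(baseline_accuracy=0.92, live_accuracy=0.84)  # triggers the alert
```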
Dependencies, gaps, and oversight considerations
- Agencies will need cross-functional coordination among the CIO, CAIO, CDO, CISO, SAOP (Senior Agency Official for Privacy), Chief Evaluation Officer, civil rights officials, and mission owners to classify AI use cases and tailor minimum practices to operational contexts while meeting transparency and redress expectations [1].
- M-24-10 leaves agency discretion to define context-appropriate testing and evaluation rigor; a structured application of NIST AI RMF profiles and risk tiers can reduce variability and strengthen defensibility of controls across missions [1][3].
- EO 14110’s mandate for safe, secure, and rights-respecting AI sets expectations for independent testing and standards-based evaluation; agencies should prepare for external oversight (e.g., GAO, OIG) to assess how M-24-10 controls are implemented and evidenced [2][1].
Key takeaways for federal missions
- Governance is not optional: CAIO designation, AI Governance Boards, and documented risk processes are required wherever agencies use AI and must be demonstrable in audits and public materials [1].
- Minimum practices are binding for safety- and rights-impacting AI: testing, independent evaluation, human oversight, continuous monitoring, privacy and security controls, and equity safeguards must be in place before and during use [1].
- Transparency is part of compliance: agencies must maintain and publish AI use case inventories and explain rights-impacting uses in plain language, with avenues for contestation and redress [1][4].
Sources
- [1] OMB Memorandum M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence — https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10.pdf
- [2] Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence — https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
- [3] NIST AI Risk Management Framework 1.0 — https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
- [4] AI.gov Agency Uses of AI — Public Use Case Inventories — https://ai.gov/agency-uses-of-ai/