
Federal AI policy framework status, May 2026


What constitutes the federal AI policy framework now

The binding federal framework for agency AI use is set by Executive Order 14110 and OMB Memorandum M-24-10, which together define agency responsibilities for governance, risk management, transparency, and use of AI in mission delivery and procurement [1][2]. EO 14110 directs departments and agencies, under existing authorities, to advance safe, secure, and trustworthy AI, tasking OMB to issue implementation guidance for federal use and management of AI [1]. OMB’s final M-24-10 operationalizes that direction across the executive branch, establishing roles, minimum practices for higher-risk AI applications, and reporting requirements [2].

Core agency obligations under OMB M-24-10

  • Governance and leadership: Agencies must designate a Chief AI Officer and establish or leverage an AI governance body to coordinate policy, risk management, and adoption across the enterprise [2].
  • Use-case inventory and transparency: Agencies must maintain an internal inventory of AI use cases and publish a public-facing portion on at least an annual cadence to promote transparency and accountability [2].
  • Risk management baseline: Agencies must manage AI risks consistent with the NIST AI Risk Management Framework and implement minimum practices for safety-impacting and rights-impacting AI, including testing, monitoring, human oversight, and incident response [2][3].
  • Definitions and scoping: M-24-10 defines “safety-impacting” and “rights-impacting” AI and applies heightened safeguards and reviews before deployment and during operations for such systems [2].
  • Reporting and oversight: Agencies must provide OMB with implementation reporting and comply with timelines for governance setup, inventories, and control adoption as specified in the memorandum [2].
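M-24-10 does not prescribe a machine-readable schema for use-case inventories, but the obligations above can be sketched as a simple record structure. The field and function names below are illustrative assumptions, not an official schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch of an AI use-case inventory record reflecting
# M-24-10 obligations. Field names are assumptions, not official schema.
@dataclass
class AIUseCase:
    use_case_id: str
    title: str
    sponsoring_office: str
    safety_impacting: bool      # triggers minimum practices under M-24-10
    rights_impacting: bool      # triggers heightened safeguards and reviews
    publicly_releasable: bool   # eligible for the public-facing inventory
    minimum_practices: list[str] = field(default_factory=list)

    def requires_minimum_practices(self) -> bool:
        """Higher-risk AI (safety- or rights-impacting) must meet the
        memorandum's minimum practices before and during operation."""
        return self.safety_impacting or self.rights_impacting


def public_inventory(cases: list[AIUseCase]) -> list[AIUseCase]:
    """Filter the internal inventory down to its public-facing portion."""
    return [c for c in cases if c.publicly_releasable]
```

Keeping a single internal inventory and deriving the public subset from it, as sketched here, avoids maintaining two diverging lists.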

NIST risk and assurance pillars

NIST’s AI Risk Management Framework 1.0 provides the federal risk baseline with four core functions — Govern, Map, Measure, and Manage — to structure AI risk identification, measurement, mitigation, and continuous monitoring across the AI lifecycle [3]. M-24-10 references and leverages this framework to standardize federal risk practices, including documentation, testing, evaluation, verification, validation, and post-deployment monitoring for AI systems [2][3].
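For teams mapping program activities to the framework, the four core functions can be organized as a small reference structure. The example activities are paraphrases for illustration, not quotations from NIST AI 100-1:

```python
from enum import Enum

# The four core functions of the NIST AI RMF 1.0 (NIST AI 100-1).
class RMFFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

# Illustrative activities per function (paraphrased, not official text).
RMF_ACTIVITIES: dict[RMFFunction, list[str]] = {
    RMFFunction.GOVERN: ["establish accountability structures",
                         "set organizational risk tolerance"],
    RMFFunction.MAP: ["identify context and intended use",
                      "catalog potential risks"],
    RMFFunction.MEASURE: ["test, evaluate, verify, and validate",
                          "track risk metrics over time"],
    RMFFunction.MANAGE: ["prioritize and mitigate identified risks",
                         "monitor systems post-deployment"],
}
```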

Civil rights, safety, and nonbinding principles context

OSTP’s Blueprint for an AI Bill of Rights provides nonbinding principles — safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives — that inform agency design and evaluation practices, complementing the binding requirements of M-24-10 [4]. EO 14110 also emphasizes safety, security, civil rights, and equity considerations in federal AI development and use, reinforcing the need for safeguards in safety-impacting and rights-impacting systems [1][2].

Innovation levers and technical capacity

EO 14110 directs the federal government to promote AI innovation while managing risk, including by advancing standards, testing, and evaluation ecosystems [1]. NIST established the U.S. AI Safety Institute to develop measurement science, testbeds, and guidelines that support trustworthy AI, providing technical underpinnings for agencies’ evaluation and monitoring obligations under M-24-10 [2][5].

Scope and preemption boundaries

M-24-10 applies to executive departments and agencies and governs how federal entities acquire, develop, use, and manage AI; it does not purport to regulate state or private actors outside federal acquisition and use contexts [2]. EO 14110’s general provisions clarify that it shall be implemented consistent with applicable law and does not create any right or benefit enforceable at law or in equity, underscoring that it directs federal operations rather than establishing new justiciable obligations for non-federal parties [1].

Acquisition and cloud posture implications

M-24-10 addresses agency acquisition and use of AI-enabled systems, requiring agencies to apply risk management practices, transparency, and oversight to procured AI consistent with the memorandum’s safeguards and the NIST AI RMF [2][3]. For hosting federal AI workloads, agencies typically rely on cloud environments with appropriate federal authorizations; for example, Microsoft Azure Government is authorized at the FedRAMP High impact level, supporting deployments that require that baseline [6]. For defense missions, the DoD Cloud Computing Security Requirements Guide defines impact levels (e.g., IL2, IL4, IL5, IL6) and associated controls for DoD information systems, which inform environment selection and authorization pathways for AI workloads handling defense information [7].

Implementation priorities for CAIOs, CIOs, and mission owners

  • Stand up governance: Confirm designation of the Chief AI Officer and operation of an AI governance body with authority to set policy, approve use cases, and oversee risk management and reporting [2].
  • Inventory and transparency: Establish and maintain a comprehensive AI use-case inventory and prepare the public-facing inventory update schedule and content in line with M-24-10 requirements [2].
  • Risk controls for higher-risk AI: Identify safety-impacting and rights-impacting use cases and implement minimum practices before and during operation, including documented testing and evaluation, human oversight, monitoring, and incident handling consistent with NIST AI RMF functions [2][3].
  • Acquisition alignment: Embed M-24-10 and NIST AI RMF requirements into solicitations and contracts for AI-enabled systems, including vendor transparency, testing artifacts, and post-award monitoring provisions [2][3].
  • Technical assurance: Where applicable, leverage NIST AI Safety Institute outputs and test methods as they become available to strengthen pre-deployment evaluation and continuous monitoring regimes [5].
  • Cloud and data posture: Align hosting environments and data protections with applicable federal authorizations and impact categorizations, using offerings with appropriate FedRAMP and, for DoD workloads, CC SRG impact level support [6][7].
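The cloud and data posture priority above can be sketched as simple selection logic. The mapping below is an illustrative assumption for discussion only: actual impact categorization follows FIPS 199 and, for DoD, the CC SRG, and authorization decisions rest with each agency's authorizing official:

```python
# Illustrative sketch: choosing a minimum hosting baseline for a federal
# AI workload. The mapping is an assumption for discussion, not
# authorization guidance; real categorization follows FIPS 199 / CC SRG.

def minimum_hosting_baseline(is_dod: bool, handles_cui: bool,
                             rights_or_safety_impacting: bool) -> str:
    if is_dod:
        # DoD CC SRG impact levels; IL5/IL6 selection depends on data
        # sensitivity and classification beyond this simple sketch.
        return "CC SRG IL4 or above" if handles_cui else "CC SRG IL2"
    if handles_cui or rights_or_safety_impacting:
        # Higher-sensitivity civilian workloads commonly target FedRAMP High.
        return "FedRAMP High"
    return "FedRAMP Moderate"
```

A real implementation would take the full FIPS 199 categorization as input rather than the three boolean flags assumed here.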



References

  1. Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence — https://www.federalregister.gov/documents/2023/11/01/2023-24284/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
  2. OMB Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence — https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10.pdf
  3. NIST AI Risk Management Framework 1.0 (NIST AI 100-1) — https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
  4. Blueprint for an AI Bill of Rights — https://www.whitehouse.gov/ostp/ai-bill-of-rights/
  5. NIST launches U.S. Artificial Intelligence Safety Institute — https://www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute
  6. FedRAMP Marketplace listing: Microsoft Azure Government — https://marketplace.fedramp.gov/products?productName=Microsoft%20Azure%20Government
  7. DoD Cloud Computing Security Requirements Guide overview — https://public.cyber.mil/dccs/dccs-documents/