
Internal AI oversight structures with governance boards and use case registries

What OMB now requires

  • OMB M-24-10 mandates that each agency designate a Chief AI Officer (CAIO) to lead internal governance of AI across the enterprise and to coordinate with CIO, CDO, privacy, and security functions on policy, risk management, and reporting responsibilities.[1]
  • M-24-10 directs agencies to establish an AI Governance Board, chaired by the Deputy Secretary (or equivalent) and vice-chaired by the CAIO, composed of senior officials with relevant equities (e.g., CIO, CDO, CISO, Senior Agency Official for Privacy), to oversee AI strategy, risk controls, and portfolio-level decisions.[1]
  • Agencies must develop and publish a public AI use case inventory at least annually, using a schema specified by OMB, to provide transparency into the scope and status of AI uses, including whether any are rights-impacting or safety-impacting.[1]
  • The memorandum requires agencies to align their AI risk management practices to NIST’s AI Risk Management Framework (AI RMF 1.0), including its Govern, Map, Measure, and Manage functions, and to incorporate evolving test methods from the NIST AI Safety Institute where applicable.[1][2][3]
  • For AI that is rights-impacting or safety-impacting, M-24-10 prescribes minimum practices such as documented impact assessments, pre-deployment testing, ongoing monitoring, incident response, human oversight, and, where appropriate, fallback or opt-out mechanisms.[1]

Why use case registries matter

  • Use case inventories create portfolio visibility and an authoritative record that lets Governance Boards enforce policy, triage risk, and allocate oversight resources across the AI lifecycle.[1]
  • Public inventories operationalize transparency commitments under EO 14110 by communicating the nature and status of AI deployments to affected populations and external stakeholders.[4][1]
  • The inventory schema also standardizes reporting across agencies, enabling cross-government comparison and prioritization for testing, evaluation, and monitoring under the NIST AI RMF and the AI Safety Institute’s emerging evaluation ecosystem.[1][2][3]

Governance boards: scope, decisions, and workflows

  • Charters should vest the Board with authority to approve AI strategies, set portfolio risk thresholds, adjudicate approvals of rights-impacting uses, and assign accountability for remediation and decommissioning when controls fail, consistent with M-24-10.[1]
  • Boards should define an intake workflow that maps each proposed AI use to the AI RMF functions, identifies data sources and affected populations, and classifies whether the use is rights- or safety-impacting before any deployment decision (see the sketch after this list).[1][2]
  • OMB expects continuous monitoring: Boards must require performance, safety, and risk metrics; define triggers for material changes; and manage incident response and post-incident reviews for AI systems.[1]
  • For third-party or vendor-provided AI capabilities, Boards must ensure the same minimum practices apply and that acquisition artifacts document risk controls and monitoring commitments commensurate with rights and safety impact.[1]
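
As a concrete illustration, a minimal intake gate might look like the sketch below. This is a sketch under assumptions: the field names, the ImpactCategory labels, and the blocker strings are hypothetical conventions, not OMB schema terms; only the requirement that rights- and safety-impacting uses show impact assessments and pre-deployment testing before deployment comes from M-24-10.[1]

```python
from dataclasses import dataclass
from enum import Enum

class ImpactCategory(Enum):
    NEITHER = "neither"
    SAFETY_IMPACTING = "safety-impacting"
    RIGHTS_IMPACTING = "rights-impacting"

@dataclass
class IntakeSubmission:
    # Hypothetical intake fields; agencies would align these to OMB's schema.
    use_case_id: str
    purpose: str
    data_sources: list[str]
    affected_populations: list[str]
    impact_category: ImpactCategory
    impact_assessment_done: bool = False
    predeployment_testing_done: bool = False

def deployment_gate(s: IntakeSubmission) -> tuple[bool, list[str]]:
    """Return (approved, blockers). Rights- and safety-impacting uses
    must show M-24-10 minimum-practice artifacts before deployment."""
    blockers: list[str] = []
    if not s.data_sources:
        blockers.append("data sources not identified")
    if s.impact_category is not ImpactCategory.NEITHER:
        if not s.impact_assessment_done:
            blockers.append("impact assessment missing")
        if not s.predeployment_testing_done:
            blockers.append("pre-deployment testing missing")
    return (not blockers, blockers)
```

A production gate would also check human-oversight and fallback documentation; the point is that the Board’s decision criteria become executable checks rather than prose.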

Registries: minimum elements and stewardship

  • M-24-10 requires agencies to maintain a public inventory updated on a recurring cadence, covering at minimum purpose, status, and impact categorization, with a governance point of contact for each entry; an illustrative entry structure appears after this list.[1]
  • The CAIO is the accountable steward for the inventory, coordinating with mission owners and privacy, security, and data offices to ensure complete and accurate entries aligned to OMB’s schema.[1]
  • Agencies should reference the AI RMF and AI Safety Institute resources to determine the measurement and evaluation fields appropriate to each use case entry, especially as testing methods mature.[2][3]
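
To make the minimum elements concrete, the sketch below models a single inventory entry. The field names are illustrative stand-ins; the authoritative names and formats come from the schema OMB actually publishes.[1]

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class InventoryEntry:
    # Hypothetical fields approximating M-24-10's minimum elements.
    use_case_id: str
    name: str
    purpose: str
    status: str           # e.g., "in development", "deployed", "retired"
    impact_category: str  # "rights-impacting", "safety-impacting", or "neither"
    governance_poc: str   # governance point of contact
    last_reviewed: str    # ISO 8601 date

entry = InventoryEntry(
    use_case_id="AGY-2024-0042",
    name="Benefits claim triage assistant",
    purpose="Prioritize incoming claims for human review",
    status="deployed",
    impact_category="rights-impacting",
    governance_poc="caio-office@agency.example.gov",
    last_reviewed="2025-06-30",
)
print(json.dumps(asdict(entry), indent=2))  # serialize for publication
```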

Intersection with NIST AI RMF and testing

  • The AI RMF 1.0 provides the structured controls baseline that Governance Boards should adopt for decision criteria, documentation, and monitoring across the portfolio.[2]
  • The AI Safety Institute’s mission includes developing and disseminating measurement science and test methods; Boards should plan to incorporate validated tests (e.g., for robustness, safety, and harmful content) into pre-deployment and continuous monitoring for relevant use cases.[3]
  • Agencies can leverage the AI RMF to translate framework functions into concrete process steps and artifacts for intake reviews and inventory metadata, as in the sketch below.[2]
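
One way to operationalize that translation is a lookup from RMF functions to the intake artifacts a Board requires. Only the four function names come from the AI RMF; the artifact names are assumptions, not RMF or OMB terms.[2]

```python
# Illustrative mapping from AI RMF 1.0 functions to Board-required
# intake artifacts. Artifact names are hypothetical conventions.
RMF_ARTIFACTS: dict[str, list[str]] = {
    "GOVERN":  ["charter reference", "named accountable executive"],
    "MAP":     ["context statement", "data source list", "affected populations"],
    "MEASURE": ["test plan", "evaluation metrics", "known-limitations log"],
    "MANAGE":  ["monitoring plan", "incident-response runbook", "decommission criteria"],
}

def missing_artifacts(submitted: dict[str, list[str]]) -> dict[str, list[str]]:
    """Per RMF function, list the required artifacts not yet submitted."""
    return {
        fn: [a for a in required if a not in submitted.get(fn, [])]
        for fn, required in RMF_ARTIFACTS.items()
    }
```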

Implementation patterns agencies can adopt

  • Establish an AI Governance Board charter that explicitly ties decisions and artifacts to AI RMF functions and M-24-10 minimum practices, with named executive owners for each control family.[1][2]
  • Stand up an intake and registration process: require mission leads to submit proposed AI uses via a standardized form mapped to OMB’s inventory schema and the AI RMF “Map” function, and reject incomplete entries.[1][2]
  • Define rights- and safety-impacting classification criteria per M-24-10 and ensure escalated review paths, additional testing, and contingency planning are documented before deployment.[1]
  • Create portfolio monitoring dashboards that track metrics specified under the AI RMF “Measure” and “Manage” functions, and schedule periodic Board reviews against thresholds and trendlines (a minimal threshold check is sketched after this list).[2]
  • Publicly post the inventory at a stable agency URL and maintain a change log and contact information to meet transparency and accountability expectations under EO 14110 and M-24-10.[4][1]
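
The core logic behind such a dashboard reduces to comparing reported metrics against Board-approved thresholds. A minimal sketch, with made-up metric names and limits:

```python
# Hypothetical portfolio metrics and Board-set limits; neither the
# names nor the values are prescribed by M-24-10 or the AI RMF.
THRESHOLDS: dict[str, float] = {
    "false_positive_rate": 0.05,    # flag if the rate exceeds 5%
    "appeal_overturn_rate": 0.10,   # proxy for wrongful adverse decisions
    "mean_human_review_hours": 48,  # responsiveness of human oversight
}

def breaches(metrics: dict[str, float]) -> list[str]:
    """Return the metrics that exceed their Board-approved thresholds."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]

print(breaches({"false_positive_rate": 0.07}))  # ['false_positive_rate']
```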

Acquisition and third-party AI

  • M-24-10 applies minimum practices to AI uses delivered via contracts, grants, or cooperative agreements; Governance Boards should direct acquisition teams to embed risk, transparency, and monitoring requirements in solicitations and awards commensurate with impact categorization.[1]
  • Inventory entries must not omit third-party AI systems; Boards should require program offices to register vendor-provided AI uses and document how minimum practices will be met post-award.[1]

Microsoft and compliant cloud controls where relevant

  • Agencies deploying AI on Azure Government can leverage FedRAMP High and DoD SRG Impact Level authorizations (including IL5 in Azure Government and IL6 in Azure Government Secret) to meet infrastructure control baselines for sensitive workloads.[5][6]
  • Azure Policy can enforce governance at scale (e.g., resource tagging, configuration compliance, and policy-as-code), supporting inventory stewardship and the continuous control assurance expected under M-24-10 and the AI RMF; a small policy-as-code sketch follows this list.[7][1][2]
  • Microsoft’s Responsible AI Standard provides engineering requirements and documentation practices agencies can adopt or adapt for development teams, complementing (not substituting for) OMB and NIST requirements.[8][2]
  • Azure AI Foundry offers managed capabilities for model orchestration, evaluation, and content safety that can be integrated into pre-deployment testing and monitoring pipelines consistent with the AI RMF “Measure” and “Manage” functions.[9][2]
  • Note: Platform compliance and tooling do not satisfy policy obligations on their own; Governance Boards must still document impact assessments, human oversight, and transparency artifacts per M-24-10.[1]
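
For instance, an Azure Policy definition can deny deployment of resources that lack a tag linking them back to an inventory entry. The if/then rule below follows Azure Policy’s documented JSON grammar for a policy rule; the tag name aiUseCaseId is our own hypothetical convention, and a full definition would also carry fields such as displayName and mode.[7]

```python
import json

# Sketch of an Azure Policy "policyRule" body that denies resources
# missing an AI-inventory tag. The tag name is a hypothetical convention.
policy_rule = {
    "if": {
        "field": "tags['aiUseCaseId']",
        "exists": "false",
    },
    "then": {
        "effect": "deny",
    },
}
print(json.dumps(policy_rule, indent=2))  # embed in a full policy definition
```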

Known tensions and gaps

  • GAO’s Accountability Framework predates M-24-10 and uses different terminology and structuring of governance artifacts; agencies should reconcile GAO’s emphasis on program accountability with OMB’s newer role definitions (e.g., the CAIO role and AI Governance Boards) when building charters and processes.[10][1]
  • The AI Safety Institute’s test methods are evolving; Governance Boards should flag where validated tests are not yet available and document interim evaluation approaches aligned to AI RMF principles until authoritative methods mature.[3][2]
  • Inventories will only be as complete as intake coverage; Governance Boards should explicitly require registration of internal and external (vendor) AI uses and audit for completeness to avoid transparency gaps under M-24-10 (see the audit sketch after this list).[1]
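
The completeness audit itself can be a set difference between approved intake records and published inventory entries, as in this sketch (the data shapes are assumptions):

```python
def inventory_gaps(intake_ids: set[str],
                   inventory_ids: set[str]) -> dict[str, set[str]]:
    """Compare approved intake records against the published inventory.
    'unregistered': approved uses missing from the inventory;
    'orphaned': inventory entries with no matching intake record."""
    return {
        "unregistered": intake_ids - inventory_ids,
        "orphaned": inventory_ids - intake_ids,
    }

# e.g., inventory_gaps({"A", "B"}, {"B", "C"})
# -> {"unregistered": {"A"}, "orphaned": {"C"}}
```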

Action checklist for federal leaders

  • Designate or confirm the CAIO and constitute the AI Governance Board with a charter anchored in M-24-10 and the AI RMF.[1][2]
  • Launch a standardized intake and inventory process aligned to OMB’s schema; publish the inventory and maintain it on a recurring cadence.[1]
  • Classify AI uses for rights or safety impact and apply minimum practices with escalated testing and monitoring.[1]
  • Integrate AI RMF functions into lifecycle controls; adopt AI Safety Institute test methods as they become available.[2][3]
  • Embed acquisition requirements to ensure vendor AI uses meet the same minimum practices and are included in the inventory.[1]
  • Use compliant cloud controls (e.g., Azure Government authorizations, Azure Policy) to enforce configuration governance and support measurement and management.[5][7]



References

  1. Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (OMB M-24-10) — https://www.whitehouse.gov/omb/memoranda/2024/m-24-10/
  2. Artificial Intelligence Risk Management Framework (AI RMF 1.0) — https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.AI.100-1.pdf
  3. NIST Artificial Intelligence Safety Institute — https://www.nist.gov/aisi
  4. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110) — https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
  5. Azure Government compliance offerings — https://learn.microsoft.com/azure/azure-government/compliance/
  6. Azure Government Secret overview — https://learn.microsoft.com/azure/azure-government/secret/overview
  7. Azure Policy overview — https://learn.microsoft.com/azure/governance/policy/overview
  8. Microsoft Responsible AI principles and approach — https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai
  9. Azure AI Foundry documentation — https://learn.microsoft.com/azure/ai-foundry/
  10. Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities (GAO-21-519SP) — https://www.gao.gov/products/gao-21-519sp