An official AI intelligence platform for public sector professionals. All content generated and verified by Astra.
policy-brief

Build 2026 AI announcements: federal adoption checklist

Executive context

Federal AI adoption must align with OMB M-24-10, which requires agencies to govern AI use with inventories, impact assessments, safeguards for safety-impacting AI (including AI red-teaming), and appropriate transparency and reporting mechanisms [1]. The overarching Executive Order 14110 further directs agencies to advance safe, secure, and trustworthy AI, assigning roles to standards bodies and emphasizing risk management, safety, and security in both development and use [2]. NIST's AI Risk Management Framework (AI RMF 1.0) provides a voluntary but widely referenced structure to operationalize these requirements across the AI lifecycle through the Govern, Map, Measure, and Manage functions [3].

This brief does not summarize Build 2026 announcements; instead, it outlines what federal leaders should verify before piloting or procuring any newly announced Microsoft AI developer capabilities, grounded in the policy and compliance artifacts cited above [1][2][3].

What to verify in any Build 2026 AI developer announcements

  • Authorization boundary and hosting location:

    • For systems requiring FedRAMP High, verify the capability is available in Microsoft Azure Government and falls within an authorized boundary on that offering's FedRAMP Marketplace listing, not just in commercial Azure [4].
    • For DoD workloads, confirm the intended services are covered by a current Provisional Authorization at the required Impact Level (e.g., IL2/4/5/6) per the DoD Cloud Computing SRG and the DoD Authorized Cloud Service Offerings list, rather than assuming parity with commercial offerings [5][6].
  • Development and deployment platform:

    • If new features are delivered through Azure AI Foundry (Microsoft's environment for building, evaluating, and deploying generative AI applications), confirm their availability and support in the specific sovereign cloud environment you require (e.g., Azure Government) before planning pilots [7][4].
    • Use Azure Policy regulatory compliance initiatives to enforce baseline technical controls mapped to applicable frameworks during any pilot (e.g., built-ins for NIST SP 800-53 and FedRAMP), and verify control coverage and scope in your subscription before go-live [8][9].
  • Model safety, evaluations, and controls:

    • For safety-impacting AI use cases, plan for AI red-teaming and safety evaluations as required by OMB M-24-10; ensure any Microsoft-announced model or API integrates with or supports your evaluation workflow (e.g., harmful content filters, logging, and test harnesses) [1].
    • If using Microsoft content safety services, validate the coverage and limits of Azure AI Content Safety, including the categories it detects and how it logs and enforces policies in your environment [10].
  • Data protection and use limitations:

    • Confirm data residency, data processing, logging, and retention behaviors in sovereign clouds for any new API or tool; these must be consistent with your FedRAMP/DoD IL boundary and agency policies, not just commercial defaults [4][6].
    • For generative AI services (e.g., Azure OpenAI Service), verify the government-cloud deployment model, isolation characteristics, and compliance scope described for Azure Government before handling CUI or high-impact data [11][4].
  • Responsible AI governance:

    • Ensure adoption plans map to NIST AI RMF functions, including continuous monitoring of model performance and risks (Govern/Manage), and that your program documents context, intended use, and risks (Map/Measure) [3].
    • Align public transparency and notice for safety-impacting AI with OMB M-24-10, including documentation of safeguards and limitations prior to operational use [1].
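The verification gates above can be tracked with a lightweight pre-pilot checklist. The sketch below is illustrative only: the class and field names are our own, not an official schema or agency-approved tooling; the idea is simply that no capability clears for a pilot until every gate has documented evidence and passes.

```python
from dataclasses import dataclass, field

@dataclass
class CapabilityCheck:
    """One verification gate for a newly announced capability (illustrative fields)."""
    name: str
    passed: bool = False
    evidence: str = ""  # e.g., FedRAMP Marketplace listing, DoD CSO entry

@dataclass
class PilotReadiness:
    """Tracks whether a capability has cleared every pre-pilot gate."""
    capability: str
    checks: list = field(default_factory=list)

    def add(self, name: str, passed: bool, evidence: str = "") -> None:
        self.checks.append(CapabilityCheck(name, passed, evidence))

    def ready(self) -> bool:
        # Every gate must pass, and at least one gate must exist.
        return bool(self.checks) and all(c.passed for c in self.checks)

    def open_items(self) -> list:
        # Names of gates still blocking the pilot.
        return [c.name for c in self.checks if not c.passed]
```

In use, a team would add one check per bullet above (authorization boundary, platform support, evaluations, data protection, governance) and review `open_items()` at each gate review.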

Microsoft federal cloud and AI platform context

Azure Government is listed on the FedRAMP Marketplace with High authorization; agencies must confirm specific service coverage and authorizations relevant to their workloads on that listing prior to use [4]. For DoD, Impact Level requirements and acceptable hosting constructs are defined in the DoD Cloud Computing SRG; offerings and levels authorized for use are enumerated on the DoD Authorized Cloud Service Offerings list and should be checked for the precise services and regions intended for deployment [5][6].

Azure AI Foundry provides tooling to build and evaluate AI applications, including orchestration, evaluation, and deployment capabilities; agencies should verify whether Foundry features announced at Build are supported in Azure Government and can be governed with Azure Policy and your agency's control requirements [7][8]. Azure OpenAI Service has a government-cloud variant; agencies must validate its availability, isolation properties, and compliance scope in Azure Government documentation before handling sensitive data or integrating with mission systems [11]. Where content moderation or harm filtering is required, Azure AI Content Safety provides category-based detection and enforcement primitives that can be integrated into application pipelines, subject to the documented capabilities and limitations [10].
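As a sketch of how category-based enforcement might sit in an application pipeline, the helper below gates output on per-category severity scores. The four category names and the 0–7 severity scale mirror those documented for Azure AI Content Safety, but the thresholds and the function itself are local assumptions for illustration, not part of the Microsoft SDK.

```python
# Illustrative severity gate. Category names and the 0-7 scale follow Azure AI
# Content Safety documentation; the thresholds and this helper are assumptions,
# not SDK code. Agency policy should set the actual per-category limits.
THRESHOLDS = {"Hate": 2, "SelfHarm": 2, "Sexual": 2, "Violence": 2}

def allow_output(severities: dict) -> bool:
    """Pass only if every monitored category is at or below its threshold.

    `severities` maps category name -> detected severity (0 = none, 7 = worst).
    Categories absent from the analysis are treated as severity 0.
    """
    return all(severities.get(cat, 0) <= limit for cat, limit in THRESHOLDS.items())
```

A pipeline would call a gate like this on the analysis result for both prompts and model outputs, and log every blocked item for audit.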

Procurement and supply chain checkpoints

OMB M-22-18 requires agencies to obtain secure software development attestations from software producers consistent with NIST's Secure Software Development Framework (SSDF) for applicable software, which includes many developer tools, libraries, and services used to build or run AI systems [12][13]. When considering new Build-announced developer tools or services, ensure acquisition artifacts include the producer's M-22-18 attestation and identify how SSDF practices are satisfied for both the tool itself and any embedded components (e.g., model gateways, plugins, or SDKs) [12][13]. During pilot-to-production transition, use regulatory compliance initiatives and control mappings to enforce required technical controls (e.g., access control, audit and accountability, configuration management) across environments as part of your ATO package [8][9].
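A pre-award review of these artifacts amounts to a completeness check. The sketch below captures that idea; the dictionary keys and descriptions are illustrative placeholders, not a prescribed acquisition format.

```python
def procurement_gaps(artifacts: dict) -> list:
    """Return descriptions of missing supply-chain artifacts before award.

    `artifacts` maps an artifact key to its collected evidence (empty or
    missing means not yet obtained). Keys are illustrative placeholders.
    """
    required = {
        "m_22_18_attestation": "producer's M-22-18 secure software development attestation",
        "sbom": "SBOM covering the tool and embedded components (gateways, plugins, SDKs)",
        "ssdf_mapping": "statement of how SSDF (NIST SP 800-218) practices are satisfied",
    }
    return [desc for key, desc in required.items() if not artifacts.get(key)]
```

Contracting officers would run a check like this per offering and per embedded component, and attach the results to the acquisition file.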

30-day action plan for agency teams

  • Catalog interest: Identify any Build-announced capabilities you intend to test; document intended use, data sensitivity, and required impact level.
  • Verify boundary: Check FedRAMP Marketplace and DoD Authorized CSO listings for applicable authorizations; engage your Microsoft account team for formal service-scoped authorization evidence as needed [4][6].
  • Pilot in enclave: Stand up an isolated Azure Government pilot subscription with Azure Policy regulatory compliance initiatives enabled for your target frameworks (e.g., NIST SP 800-53/FedRAMP); require full logging and change control [8][9].
  • Plan evaluations: For safety-impacting AI, scope AI red-team and safety evaluation plans aligned to OMB M-24-10 and NIST AI RMF; ensure content safety controls and logging are enabled in the pilot [1][3][10].
  • Prep procurement: Collect M-22-18 secure software attestations and SBOMs where applicable; ensure contracts reference compliance obligations and sovereign cloud deployment requirements [12][13][4].
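One way to lay the five actions above onto a calendar is sketched below. The step durations are illustrative assumptions that sum to 30 days; teams should adjust them to their own review and approval cycles.

```python
from datetime import date, timedelta

# Illustrative durations summing to 30 days; adjust to your review cycles.
STEPS = [
    ("Catalog interest", 5),
    ("Verify boundary", 5),
    ("Pilot in enclave", 10),
    ("Plan evaluations", 5),
    ("Prep procurement", 5),
]

def schedule(start: date) -> list:
    """Map each step to an inclusive (name, start_date, end_date) window."""
    out, day = [], start
    for name, days in STEPS:
        out.append((name, day, day + timedelta(days=days - 1)))
        day += timedelta(days=days)
    return out
```

Note the steps can overlap in practice (e.g., procurement prep can begin while the enclave pilot runs); a strictly sequential plan is simply the conservative baseline.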



References

  1. OMB M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence – https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-AI.pdf
  2. Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence – https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
  3. NIST AI Risk Management Framework 1.0 (AI 100-1) – https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
  4. FedRAMP Marketplace listing for Microsoft Azure Government – https://marketplace.fedramp.gov/#!/product/microsoft-azure-government?sort=productName
  5. DoD Cloud Computing Security Requirements Guide v1r4 – https://dl.dod.cyber.mil/wp-content/uploads/cloud/cc_srgs/rm/DoD_CC_SRG_v1r4_20210412.pdf
  6. DoD Authorized Cloud Service Offerings (CSO) list – https://public.cyber.mil/dccs/
  7. What is Azure AI Foundry – https://learn.microsoft.com/azure/ai-foundry/what-is-azure-ai-foundry
  8. Azure Policy regulatory compliance – https://learn.microsoft.com/azure/governance/policy/concepts/regulatory-compliance
  9. NIST SP 800-53 Revision 5, Security and Privacy Controls – https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final
  10. Azure AI Content Safety overview – https://learn.microsoft.com/azure/ai-services/content-safety/overview
  11. Azure OpenAI Service in Azure Government – https://learn.microsoft.com/azure/ai-services/openai/overview-azure-government
  12. OMB M-22-18, Enhancing the Security of the Software Supply Chain through Secure Software Development Practices – https://www.whitehouse.gov/wp-content/uploads/2022/09/M-22-18.pdf
  13. NIST SP 800-218, Secure Software Development Framework (SSDF) – https://csrc.nist.gov/publications/detail/sp/800-218/final