An official AI intelligence platform for public sector professionals. All content generated and verified by Astra.
analysis

Status of model sharing with NIST for AI cybersecurity testing

What federal policy already requires

  • Executive Order 14110 directs the National Institute of Standards and Technology to develop guidelines and best practices for the safe, secure, and trustworthy development and use of artificial intelligence, including advancing capabilities for red-team testing, evaluation, verification, and validation of AI systems1.
  • NIST established the Artificial Intelligence Safety Institute to advance measurement science, benchmarks, and evaluation methods that support the safe and trustworthy development and use of AI systems2.
  • NIST launched the U.S. AI Safety Institute Consortium in February 2024 to operationalize measurement science for AI safety and lists Microsoft and Google among participating organizations3.
  • OMB Memorandum M-24-10 requires agencies to strengthen AI governance, maintain inventories of AI use cases, conduct impact assessments, and ensure testing and monitoring before and during use of safety-impacting AI, with documentation and oversight proportional to risk4.

NIST work relevant to cybersecurity testing of AI

  • The NIST AI Risk Management Framework 1.0 provides a structure to map, measure, manage, and govern AI risks, explicitly emphasizing testing and evaluation practices to support security, robustness, and resilience outcomes across the AI lifecycle5.
  • NISTIR 8269 defines a taxonomy and terminology for adversarial machine learning, covering threat models and attack types such as evasion, poisoning, and privacy attacks that inform cybersecurity-relevant testing scenarios for AI systems6.

Where official confirmation of collaborations appears

  • NIST communicates new initiatives and collaborations through official announcements and program pages on nist.gov, including the AI Safety Institute site and related news releases23.
  • Executive-branch policy directions and formal actions relevant to AI safety and security are published in the Federal Register and can mandate or inform agency testing practices and interagency coordination1.
  • Company participation in government-led AI testing initiatives is typically confirmed via official corporate communications channels such as company newsrooms or blogs, in addition to government announcements78.

Implications for federal missions if NIST-led cybersecurity testing expands

  • Agencies can align test and evaluation approaches for AI-enabled systems to the AI RMF’s Measure and Manage functions, using NIST’s risk characteristics and measurement guidance to scope security-relevant evaluations for both foundation models and downstream applications5.
  • Adversarial ML taxonomies and threat models from NISTIR 8269 support the design of cybersecurity test cases (e.g., evasion and poisoning resistance), strengthening evaluation rigor for mission systems that integrate generative or predictive AI6.
  • OMB M-24-10 calls for pre-deployment and ongoing testing of safety-impacting AI, implying agencies should be prepared to integrate NIST-developed benchmarks, guidance, or test artifacts into Acquisition, T&E, and ATO processes as those resources become available4.

Microsoft platform context for federal teams

  • Azure Government provides a segregated cloud environment with compliance attestation for U.S. public sector workloads, including coverage of FedRAMP High baselines and Department of Defense Impact Levels IL2, IL4, IL5, and IL6 as described in Microsoft’s compliance documentation9.
  • Agencies can map and monitor controls aligned to NIST SP 800-53 Rev. 5 using Azure Policy regulatory compliance initiatives to support continuous control assessment alongside AI testing workflows1011.
  • Azure AI Foundry provides model evaluation capabilities and tooling that agencies can employ to implement the AI RMF’s testing and measurement practices within authorized cloud environments, complementing NIST’s guidance and any future AISI test artifacts512.

Operational steps agencies can take now

  • Use AI RMF 1.0 to establish a test and evaluation plan for AI systems, with explicit coverage of adversarial robustness and cybersecurity-relevant properties informed by NISTIR 8269, and tie these evaluations to governance requirements in OMB M-24-10564.
  • Prepare to incorporate NIST AISI outputs into acquisition and T&E by tracking official AISI announcements and integrating emerging benchmarks or red-team protocols into program baselines as they are published2.
  • For workloads hosted on Microsoft platforms, employ Azure Policy’s NIST SP 800-53 initiatives for control mapping and use Azure AI Foundry evaluation tooling to generate artifacts that can be reused in ATO packages and continuous monitoring, ensuring alignment with agency governance processes101211.

Verification checklist for the reported development

  • Look for an official NIST or AISI announcement describing an AI cybersecurity testing initiative and naming participating model providers, which would constitute primary confirmation23.
  • Check for executive-branch notices that direct or reference such testing activities, which would appear in the Federal Register if promulgated as formal policy direction1.
  • Validate with official statements from the companies’ primary communications channels (e.g., Microsoft Official Blog and Google Blog) explicitly indicating the provision of model access to NIST for cybersecurity testing78.

1: Executive Order 14110 Safe Secure and Trustworthy Development and Use of Artificial Intelligence β€” https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
2: NIST Artificial Intelligence Safety Institute β€” https://www.nist.gov/aisi
3: U.S. AI Safety Institute Consortium Launches β€” https://www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute
5: NIST AI Risk Management Framework 1.0 β€” https://www.nist.gov/itl/ai-risk-management-framework
6: NISTIR 8269 A Taxonomy and Terminology of Adversarial Machine Learning β€” https://csrc.nist.gov/publications/detail/nistir/8269/draft
4: OMB M-24-10 Advancing Governance Innovation and Risk Management for Agency Use of Artificial Intelligence β€” https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf
9: Azure Government compliance offerings β€” https://learn.microsoft.com/azure/azure-government/documentation-government-compliance
10: Azure Policy regulatory compliance NIST SP 800-53 Rev. 5 β€” https://learn.microsoft.com/azure/governance/policy/samples/nist-sp-800-53-r5
12: Azure AI Foundry documentation β€” https://learn.microsoft.com/azure/ai-studio/
7: Microsoft Official Blog β€” https://blogs.microsoft.com
8: Google Blog β€” https://blog.google
11: NIST SP 800-53 Rev. 5 Security and Privacy Controls for Information Systems and Organizations β€” https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final


References

  1. Executive Order 14110 Safe Secure and Trustworthy Development and Use of Artificial Intelligence β€” https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence ↩
  2. NIST Artificial Intelligence Safety Institute β€” https://www.nist.gov/aisi ↩
  3. U.S. AI Safety Institute Consortium Launches β€” https://www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute ↩
  4. OMB M-24-10 Advancing Governance Innovation and Risk Management for Agency Use of Artificial Intelligence β€” https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf ↩
  5. NIST AI Risk Management Framework 1.0 β€” https://www.nist.gov/itl/ai-risk-management-framework ↩
  6. NISTIR 8269 A Taxonomy and Terminology of Adversarial Machine Learning β€” https://csrc.nist.gov/publications/detail/nistir/8269/draft ↩
  7. Microsoft Official Blog β€” https://blogs.microsoft.com ↩
  8. Google Blog β€” https://blog.google ↩
  9. Azure Government compliance offerings β€” https://learn.microsoft.com/azure/azure-government/documentation-government-compliance ↩
  10. Azure Policy regulatory compliance NIST SP 800-53 Rev. 5 β€” https://learn.microsoft.com/azure/governance/policy/samples/nist-sp-800-53-r5 ↩
  11. NIST SP 800-53 Rev. 5 Security and Privacy Controls for Information Systems and Organizations β€” https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final ↩
  12. Azure AI Foundry documentation β€” https://learn.microsoft.com/azure/ai-studio/ ↩