
AI for Veterans at VA

Key takeaways

  • VA’s National Artificial Intelligence Institute (NAII) is the department’s focal point for advancing and coordinating AI research and applications for Veterans, signaling an enterprise posture toward AI-enabled health and service innovation [1].
  • VA has an operational, clinically focused AI capability in REACH VET, which applies predictive analytics to health records to identify Veterans at statistically elevated risk of suicide and prompts proactive clinician outreach and care coordination [2].
  • EO 14110 and OMB M-24-10 together require federal agencies, including VA, to implement governance, risk management, testing and monitoring, and public AI use-case inventories for safety- and rights-impacting AI systems, shaping how VA must design, operate, and account for AI in healthcare, benefits, and service delivery contexts [3][4].
  • NIST’s AI Risk Management Framework provides a structured approach to managing AI risks (including fairness, explainability, and safety) that agencies can adopt to meet EO and OMB expectations in practice [5].
  • For mission workloads requiring cloud-based AI, Azure Government offers FedRAMP High authorizations and support for DoD Impact Levels, enabling deployment of AI capabilities under federal security baselines relevant to VA’s sensitive health and benefits data [6][7].

What VA is doing now

Enterprise posture: NAII

  • VA established the National Artificial Intelligence Institute in 2019 to accelerate AI research and implementation that improves Veteran outcomes, leveraging VA’s clinical, benefits, and research data assets and partnerships [1].
  • NAII’s mandate includes coordinating AI efforts across VA, engaging external collaborators, and translating AI research into practical solutions for Veterans’ health and services [1].

Clinical deployment: REACH VET

  • REACH VET is a predictive analytics program within VA’s suicide prevention enterprise that analyzes Veterans Health Administration data to identify individuals at statistically elevated suicide risk and directs clinicians to conduct proactive outreach and care coordination [2].
  • The program integrates with clinical workflows to ensure identified Veterans receive timely engagement and evidence-based services; operational materials and the program description are published by VA’s Office of Mental Health and Suicide Prevention [2].

Policy and governance requirements shaping VA AI

  • EO 14110 directs agencies to advance safe, secure, and trustworthy AI, including promoting responsible innovation, protecting privacy and civil rights, and managing risks in safety-critical domains such as healthcare [3].
  • OMB M-24-10 requires agencies to:
    • Designate a Chief AI Officer, establish AI governance structures, and develop agency AI strategies and plans [4].
    • Identify and manage safety-impacting and rights-impacting AI uses with pre-deployment testing and evaluation, ongoing monitoring, incident response, and documentation [4].
    • Maintain and publish AI use-case inventories and ensure transparency and oversight appropriate to each use case’s risk profile [4].
  • NIST’s AI RMF 1.0 offers a risk management structure (Govern, Map, Measure, Manage) and documented practices to operationalize trustworthy AI principles, supporting compliance with EO and OMB directives in healthcare and benefits contexts [5].
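The inventory requirement above can be made concrete with a small sketch. The field names below are illustrative assumptions, not OMB M-24-10's official inventory schema; the point is that each entry carries its risk classification, controls, and monitoring plan, and that completeness can be checked mechanically.

```python
# Illustrative AI use-case inventory entry with a minimal completeness check.
# Field names are hypothetical examples, not OMB M-24-10's official schema.

REQUIRED_FIELDS = {
    "use_case_name",
    "responsible_office",
    "purpose",
    "risk_classification",  # e.g., "safety-impacting", "rights-impacting", "neither"
    "controls",             # documented safeguards (testing, human review, etc.)
    "monitoring_plan",      # how performance and harms are tracked post-deployment
}

def validate_entry(entry: dict) -> list[str]:
    """Return a sorted list of missing or empty required fields."""
    return sorted(f for f in REQUIRED_FIELDS if not entry.get(f))

example_entry = {
    "use_case_name": "Clinical risk stratification for proactive outreach",
    "responsible_office": "Office of Mental Health and Suicide Prevention",
    "purpose": "Identify patients at statistically elevated risk for clinician outreach",
    "risk_classification": "safety-impacting",
    "controls": ["pre-deployment T&E", "clinician-in-the-loop review", "bias assessment"],
    "monitoring_plan": "Quarterly calibration and subgroup performance review",
}

missing = validate_entry(example_entry)
print("complete" if not missing else f"missing fields: {missing}")
```

A governance office could run such a check over the full inventory before each publication cycle, so no entry ships without a stated risk class and monitoring plan.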

Implications for Veterans’ healthcare, benefits, and service delivery

  • Healthcare: Safety-impacting AI such as clinical predictive models must undergo rigorous testing, validation, bias assessment, monitoring, and clinician-in-the-loop controls to meet OMB’s minimum practices, with governance that includes clear accountability and incident-handling pathways [4][5].
  • Benefits and service delivery: AI-enabled triage, routing, or decision support for claims or customer service would likely qualify as rights-impacting AI under OMB M-24-10, requiring inventory entries, impact assessments, and safeguards against disparate impact and erroneous automation [4][5].
  • Privacy and civil rights: EO 14110 prioritizes protections for privacy and civil rights, compelling VA to align AI deployments with applicable health privacy, anti-discrimination, and accessibility obligations across digital services [3].

Technical foundations and cloud posture

  • When deploying AI workloads in federal cloud environments, Azure Government provides FedRAMP High-authorized services and aligns with DoD Impact Level protections (IL2/IL4/IL5/IL6, depending on the service), offering a path to host sensitive health and benefits data and AI services under federal baselines [6][7].
  • Agencies can pair AI development with NIST AI RMF-driven evaluation and monitoring practices to ensure model performance, reliability, and post-deployment risk management in production environments [5].
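One concrete form of the post-deployment monitoring described above is input or score drift detection. The sketch below uses the Population Stability Index (PSI), a common drift statistic; the bin count, sample sizes, and the conventional 0.1/0.25 alert thresholds are illustrative assumptions, not values prescribed by the AI RMF or OMB.

```python
# Minimal drift-detection sketch using the Population Stability Index (PSI),
# comparing a model's production score distribution against its validation
# baseline. Thresholds and bin counts are illustrative, not policy-defined.
import math
import random

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a current score sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bin index for value x
        # Floor proportions so the log term stays finite for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((a - b) * math.log(a / b) for a, b in zip(p, q))

random.seed(0)
baseline = [random.gauss(0.3, 0.1) for _ in range(5000)]   # scores at validation time
current  = [random.gauss(0.3, 0.1) for _ in range(5000)]   # stable production scores
shifted  = [random.gauss(0.45, 0.1) for _ in range(5000)]  # simulated drift

print(f"stable PSI:  {psi(baseline, current):.3f}")   # < 0.1: commonly read as stable
print(f"shifted PSI: {psi(baseline, shifted):.3f}")   # > 0.25: commonly triggers review
```

In production this check would run on a schedule, with results logged to the system's monitoring record and threshold breaches routed into the incident-response pathway.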

Microsoft platform context for VA missions

  • Azure Government’s FedRAMP High and DoD Impact Level support provide compliant environments for running AI pipelines, data integration, and inferencing services tied to healthcare and benefits workloads [6][7].
  • Azure AI Foundry offers a managed approach to organizing model development, evaluation, and deployment; agencies should embed NIST AI RMF-aligned testing, bias checks, and monitoring into these pipelines to satisfy OMB’s minimum practices for safety- and rights-impacting AI [4][5].
  • Microsoft’s compliance documentation enables agencies to map cloud controls to policy requirements; mission owners must still implement application-level safeguards (human review, auditability, red-teaming where applicable) to meet EO and OMB expectations [3][4][6].

Risks, gaps, and governance considerations

  • Safety-impacting AI: Clinical predictive systems require continuous performance monitoring, drift detection, and equitable performance across subpopulations; OMB expects documented test and evaluation and ongoing monitoring for such uses [4].
  • Transparency and accountability: AI use-case inventories, public reporting, and governance roles (Chief AI Officer, responsible officials) are mandatory and should be visibly tied to each VA AI deployment to meet OMB requirements [4].
  • Data quality and integration: AI efficacy depends on high-quality, representative data; NIST emphasizes mapping and measuring processes to understand data provenance, limitations, and potential sources of harm [5].
  • Procurement and lifecycle assurance: Contracts for AI systems should embed EO/OMB-aligned requirements for testing, monitoring, incident response, and auditability; cloud control attestations (e.g., FedRAMP authorizations) do not substitute for application-level responsible AI controls [3][4][6].
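"Equitable performance across subpopulations" can be operationalized as a recurring per-group metric comparison. The sketch below computes recall by subgroup and flags disparities; the group labels, sample data, and the 0.1 disparity threshold are illustrative assumptions rather than values any policy defines.

```python
# Hedged sketch: compare a model's recall across subpopulations and flag
# disparities. Records are (group, true_label, predicted_label) triples;
# the 0.1 gap threshold is an illustrative assumption, not a policy value.

def recall_by_group(records: list[tuple[str, int, int]]) -> dict[str, float]:
    """Recall (true positives / actual positives) computed per group."""
    tp: dict[str, int] = {}
    pos: dict[str, int] = {}
    for group, truth, pred in records:
        if truth == 1:
            pos[group] = pos.get(group, 0) + 1
            if pred == 1:
                tp[group] = tp.get(group, 0) + 1
    return {g: tp.get(g, 0) / n for g, n in pos.items()}

def disparity_flag(recalls: dict[str, float], max_gap: float = 0.1) -> bool:
    """True if the gap between best- and worst-served groups exceeds max_gap."""
    return max(recalls.values()) - min(recalls.values()) > max_gap

sample = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]
recalls = recall_by_group(sample)
print(recalls)                   # group_a: 2/3, group_b: 1/3
print(disparity_flag(recalls))   # True: a 0.33 gap exceeds the 0.1 threshold
```

A flagged disparity would not by itself prove unfairness, but it gives the responsible official a documented trigger for deeper bias assessment, which is the kind of auditable control OMB's minimum practices call for.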

Action for VA mission leaders

  • Establish or reaffirm AI governance structures aligned to OMB M-24-10, with defined roles for safety-impacting AI, incident response, and model lifecycle oversight across VHA and VBA [4].
  • Use the NIST AI RMF to operationalize risk management: document system purpose and context (Map), define metrics and test-and-evaluation protocols (Measure), and monitor performance and harms post-deployment (Manage) [5].
  • Maintain a comprehensive AI use-case inventory and publish it as required; ensure each entry includes risk classification, controls, and monitoring plans to meet transparency obligations [4].
  • For cloud-hosted AI, select environments meeting FedRAMP High and applicable DoD Impact Levels; enforce application-layer responsible AI controls beyond infrastructure compliance baselines [6][7].
  • In clinical contexts (e.g., predictive analytics such as REACH VET), ensure clinician-in-the-loop workflows, bias and calibration assessments, and documented safety cases before scaling or modifying models [2][4].
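The calibration assessment mentioned in the last action item can be sketched concretely: a Brier score plus a binned reliability table comparing predicted risk against observed event rates. The bin count and simulated data below are illustrative assumptions; a real clinical model would need validation on actual outcome data under clinical governance.

```python
# Hedged sketch of a calibration check for a probabilistic risk model:
# Brier score plus a reliability table (mean predicted risk vs. observed
# event rate per bin). Data are simulated; bin count is illustrative.
import random

def brier_score(probs: list[float], outcomes: list[int]) -> float:
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def reliability_table(probs, outcomes, bins=5):
    """Per-bin (mean predicted risk, observed event rate, count)."""
    table = []
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        idx = [i for i, p in enumerate(probs)
               if lo <= p < hi or (b == bins - 1 and p == 1.0)]
        if idx:
            mean_p = sum(probs[i] for i in idx) / len(idx)
            obs = sum(outcomes[i] for i in idx) / len(idx)
            table.append((mean_p, obs, len(idx)))
    return table

# Simulate a well-calibrated model: events occur at the predicted rate.
random.seed(1)
probs = [random.random() for _ in range(10_000)]
outcomes = [1 if random.random() < p else 0 for p in probs]

print(f"Brier score: {brier_score(probs, outcomes):.3f}")
for mean_p, obs, n in reliability_table(probs, outcomes):
    print(f"predicted {mean_p:.2f} vs observed {obs:.2f} (n={n})")
```

For a well-calibrated model the observed rate tracks the mean predicted risk in every bin; a systematic gap in any risk band is the kind of finding that belongs in the documented safety case before a model is scaled or modified.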



References

  1. VA establishes National Artificial Intelligence Institute — https://www.research.va.gov/currents/1219-VA-establishes-National-Artificial-Intelligence-Institute.cfm
  2. REACH VET — Recovery Engagement and Coordination for Health–Veterans Enhanced Treatment — https://www.mentalhealth.va.gov/suicide_prevention/reachvet.asp
  3. Executive Order 14110 — Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence — https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
  4. OMB M-24-10 — Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence — https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10.pdf
  5. NIST AI Risk Management Framework (AI RMF 1.0) — https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
  6. Azure compliance offerings — FedRAMP — https://learn.microsoft.com/azure/compliance/offerings/fedramp
  7. Azure Government overview and DoD Impact Levels — https://learn.microsoft.com/azure/azure-government/documentation-government-overview