What changed and why it matters
- The federal zero trust mandate is grounded in OMB M-22-09 and NIST SP 800-207, directing agencies to implement identity-centric access, strong device posture, segmented networks, secure applications/workloads, data protections, and enterprise visibility/logging, with specific FY2024 milestones for multi-factor authentication, endpoint detection and response, encryption, and centralized event logging [1][2].
- AI policy—EO 14110 and OMB M-24-10—adds binding governance and risk-management requirements for agency AI uses, requiring inventories, designated leadership (CAIO), impact assessments, and safeguards for rights and safety; these obligations extend zero trust enforcement to AI models, agents, pipelines, and data, and necessitate integrating AI-specific risks and events into logging, monitoring, and access control [6][7][8][9].
- Agencies must align AI development and operations to secure software supply chain requirements (M-22-18, NIST SP 800-218), and ensure incident-ready logging (M-21-31), treating AI systems as first-class workloads, identities, and data assets within the zero trust architecture and maturity models used across government [5][12][3].
Mandate baseline: zero trust is policy, not optional
- EO 14028 directed a government-wide shift to zero trust and modernized cybersecurity, catalyzing OMB M-22-09’s prescriptive implementation guidance for agencies [4][1].
- NIST SP 800-207 defines zero trust architecture principles—continuous verification, least privilege, and assuming breach—informing agency designs and control implementations [2].
- OMB M-22-09 sets outcomes (e.g., phishing-resistant MFA, strong enterprise identity, encrypted DNS and web traffic, EDR deployment, and centralized logging) with timelines, expecting agencies to demonstrate measurable progress and target completion by the end of FY2024 across the pillars [1].
- CISA’s Zero Trust Maturity Model 2.0 operationalizes the pillars (Identity, Devices, Networks, Applications & Workloads, Data) with cross-cutting capabilities (Visibility & Analytics, Automation & Orchestration, Governance) and maturity stages that agencies use for planning and measurement [3].
- OMB M-21-31 mandates enterprise logging tiers, retention, and centralized visibility to support detection, investigation, and remediation; this logging scope must encompass workloads and data flows relevant to mission systems, including AI components [5].
The AI overlay: governance, risk, and secure design
- EO 14110 directs federal actions to ensure safe, secure, and trustworthy development and use of AI, tasking OMB and NIST for guidance and standards, and emphasizing security, privacy, and rights-preserving use across agencies [6].
- OMB M-24-10 establishes agency AI governance (e.g., Chief AI Officers, AI inventories), risk management practices (e.g., impact assessments, safeguards for safety, equity, and civil rights), and operating procedures for generative and other AI uses, creating binding requirements that intersect with cybersecurity and zero trust operations [7].
- NIST’s AI Risk Management Framework 1.0 provides functions (Govern, Map, Measure, Manage) and outcomes to identify and mitigate AI risks (e.g., data quality, robustness, transparency), which agencies should integrate with existing zero trust controls and continuous monitoring [8].
- CISA and international partners’ Secure AI System Development guidelines enumerate design-time, deployment, and operations practices (e.g., threat modeling AI-specific attacks, data provenance, secure pipeline and dependency management, robust logging and monitoring of AI system behavior) that fit within zero trust enforcement boundaries [9].
How AI changes application of zero trust pillars
Identity
- Human identities: enforce phishing-resistant MFA for staff, contractors, admins, and high-value asset users; align enterprise identity services to support strong authentication and policy-based access to AI tools and model endpoints [1][3].
- Non-human identities: register and govern service principals, API keys, and machine identities used by AI pipelines (training, inference, agents), applying least privilege, rotation, and policy-based authorization; treat automated agents as workloads requiring explicit trust policies per NIST ZTA [2][3].
Devices
- Extend endpoint controls (EDR, secure configuration, vulnerability management) to developer workstations, data science rigs, and accelerator clusters used for AI development and operations, maintaining attested device posture before granting access to datasets, model registries, or inference services [1][3].
Networks
- Apply microsegmentation and secure transport to isolate AI training environments, model registries, and inference endpoints; enforce application-layer access with continuous verification and avoid implicit trust based on network location per NIST ZTA [2][3].
Applications and workloads
- Treat AI components (data pipelines, model training/inference services, orchestration agents) as workloads subject to secure software development requirements, provenance tracking, dependency scrutiny, and attestation per OMB M-22-18 and NIST SP 800-218; include AI-relevant dependencies (frameworks, libraries) and model artifacts in supply chain risk management [12][11].
- Require secure-by-design patterns for AI (e.g., defense against model misuse and injection, robust input handling, constrained tool and data access) and document threat models across lifecycle stages per CISA’s AI guidelines [9].
Data
- Classify and tag training, fine-tuning, and inference data; enforce least privilege, encryption in transit and at rest, and continuous DLP monitoring; align data governance with AI RMF outcomes for data quality, provenance, and integrity within zero trust data controls [3][8].
- Ensure AI output logs and artifacts are governed as sensitive data when they can contain mission information, PII, or model secrets, with retention and access controls consistent with OMB logging policy [5][7].
Visibility, analytics, automation
- Integrate AI systems and pipelines into centralized logging (M-21-31), capturing authentication events, data accesses, model invocations, administrative actions, and anomalous behavior to support detection and response; define event schemas and retention to meet incident investigation requirements [5][9].
- Automate policy enforcement and continuous monitoring across identities, workloads, and data, using telemetry to drive risk-adaptive access in line with ZTA; CISA’s maturity model and DoD’s strategy both emphasize automation and analytics as core cross-cutting capabilities [3][10].
Conflict and gaps to surface
- Framework alignment: CISA’s ZTMM defines five pillars with cross-cutting capabilities, while DoD’s Zero Trust Strategy organizes seven pillars (user, devices, applications, data, network, visibility and analytics, automation and orchestration), creating differences in taxonomy that agencies must reconcile when coordinating joint missions or reporting maturity [3][10].
- AI control baselines: NIST AI RMF provides voluntary risk-management guidance, not a FISMA control catalog; agencies must map AI RMF outcomes to existing NIST SP 800-53 controls and OMB directives, as there is no distinct government-wide AI-specific technical control baseline mandated at this time [8][11][7].
- Event logging for AI: M-21-31 prescribes logging and retention, but does not enumerate AI-specific event categories; agencies must define and include AI-relevant telemetry (e.g., prompt/response audits where permitted, model admin actions) to meet investigative sufficiency, subject to privacy and civil rights safeguards required by M-24-10 [5][7].
- Supply chain for models: OMB M-22-18 mandates secure software development attestation, but does not prescribe a standardized “model bill of materials” artifact; agencies should include AI components within software attestation and provenance practices per SP 800-218 while monitoring evolving standards [12][11].
Implementation playbook for CIO/CISO/CAIO teams
- Governance and inventory
- Establish joint CIO–CISO–CAIO operating model; complete AI use inventory and designate authoritative systems-of-record for AI workloads, datasets, and identities; tie inventory records to zero trust access policies and logging scopes [7][3].
- Identity and access
- Enforce phishing-resistant MFA for all human users of AI tooling and administrative interfaces; implement managed machine identities with short-lived credentials, scoped tokens, and policy evaluation at every call to model endpoints and data stores [1][3][2].
- Data controls
- Tag AI-related datasets and outputs; enforce encryption and least privilege; implement DLP and continuous monitoring for sensitive content; validate data provenance and integrity per AI RMF [3][8].
- Secure software supply chain
- Require vendor and internal attestations for AI components per M-22-18; adopt SSDF practices (secure build, signed artifacts, dependency management) for model training/inference services and orchestration code; gate deployments on policy compliance [12][11].
- Logging and detection
- Extend M-21-31 logging to AI systems: collect authentication, authorization, model execution, configuration changes, data access, and anomaly signals; define playbooks for AI-specific threats (prompt injection, data exfiltration via outputs) per CISA guidance [5][9].
- Network and workload isolation
- Segment training clusters, registries, and inference endpoints; restrict east-west traffic; enforce application-layer access and zero trust brokers at boundaries per NIST ZTA [2][3].
- Privacy, civil rights, and ethics
- Conduct impact assessments for AI uses; embed safeguards for rights, equity, and safety; limit retention and access to sensitive prompts/outputs consistent with M-24-10 [7].
Microsoft platform context where relevant
- Azure Government is authorized at FedRAMP High, supporting agencies that require High baseline controls; agencies can leverage this authorization status when hosting zero trust-aligned workloads in Azure Government regions [13][14].
- DoD missions can use Azure Government DoD regions aligned to IL2/IL4/IL5 and IL6 for classified workloads, subject to program approvals, aligning infrastructure to zero trust control enforcement per mission requirements [15].
- Azure Policy offers built-in regulatory compliance initiatives that map to NIST SP 800-53 and FedRAMP baselines, enabling agencies to codify and audit zero trust-relevant configurations at scale; Defender for Cloud provides a regulatory compliance dashboard to monitor control adherence across workloads [16][18].
- Microsoft Entra ID supports phishing-resistant authentication with FIDO2 security keys, which agencies can use to meet M-22-09 phishing-resistant MFA objectives for human users accessing AI administration and developer tooling; machine identities should be governed via managed service principals and certificate-based auth aligned to least privilege [17][1].
Note: Vendor capabilities are implementation options; agencies must verify configuration and control coverage against OMB and NIST requirements and cannot treat vendor marketing as policy compliance [1][2][11].
What to report and measure
- Zero trust maturity metrics: identity coverage (phishing-resistant MFA adoption), device posture coverage (EDR, configuration), segmentation coverage, workload onboarding to policy enforcement, data classification/tagging, and cross-cutting automation and analytics status per CISA ZTMM [3].
- AI governance metrics: inventory completeness, impact assessments performed, safeguards implemented, and incident readiness (telemetry coverage, playbooks) per OMB M-24-10 and AI RMF integration [7][8].
- Logging sufficiency: percentage of AI workloads covered by M-21-31 logging tiers, retention compliance, and correlation with identity and data events [5].
- Software supply chain: SSDF adoption and attestation coverage for AI components, including dependencies and build pipelines, per M-22-18 and SP 800-218 [12][11].
Sources
[1] OMB M-22-09
[2] NIST SP 800-207
[3] CISA Zero Trust Maturity Model 2.0
[4] EO 14028
[5] OMB M-21-31
[6] EO 14110
[7] OMB M-24-10
[8] NIST AI RMF 1.0
[9] CISA/NCSC Secure AI System Development Guidelines
[10] DoD Zero Trust Strategy (2022)
[11] NIST SP 800-53 Rev. 5
[12] OMB M-22-18
[13] FedRAMP High Baseline
[14] FedRAMP Marketplace: Azure Government
[15] Microsoft: Azure Government DoD impact levels
[16] Microsoft: Azure Policy regulatory compliance initiatives
[17] Microsoft: Entra ID FIDO2 phishing-resistant authentication
[18] Microsoft: Defender for Cloud regulatory compliance dashboard