Bottom line
- Federal law enforcement’s AI posture is driven by Executive Order 14110 and OMB M‑24‑10: agencies must inventory AI uses, institute governance, and implement pre-deployment testing, ongoing monitoring, independent evaluation, incident response, and public transparency for “safety-impacting” and “rights-impacting” AI systems.[1][2][3]
- DHS has formalized department-wide AI safeguards and established an AI Task Force; operationally, CBP’s Traveler Verification Service supports biometric entry/exit, and TSA’s identity verification uses are governed under these safeguards.[4][5][6]
- DOJ and FBI activities include facial recognition within the FBI’s NGI program and civil rights enforcement guidance for algorithmic systems, alongside GAO oversight calling for accuracy and privacy controls.[7][8][9][10]
What the directives require of DOJ, FBI, and DHS
- Executive Order 14110 sets the federal policy to advance AI use while protecting safety, security, and civil rights; it assigns implementation to agencies, including DOJ and DHS, across governance, enforcement, and operational domains.[1]
- OMB M‑24‑10 operationalizes the EO: agencies must designate a Chief AI Officer, maintain public AI use case inventories, align risk management with NIST’s AI RMF, and apply additional guardrails for “safety-impacting” and “rights-impacting” AI (pre-deployment testing and independent evaluation, continuous monitoring, incident reporting, user notices, and public documentation).[2][12]
- Agencies’ non-classified AI use case inventories are posted on Performance.gov; DOJ and DHS are included in this governmentwide transparency requirement.[3]
Implications:
- Law enforcement components must treat identity, screening, investigative-lead generation, and adjudication-support tools as candidates for “rights-impacting” designation, triggering the full suite of M‑24‑10 safeguards.[2]
DHS: policy framework and operational AI
- Governance. DHS issued a department-wide AI policy in February 2024 that establishes use conditions, testing for accuracy and bias, governance review, transparency, and guardrails on sensitive use cases; the policy explicitly addresses facial recognition safeguards and civil rights protections.[4]
- Mission prioritization. DHS created an AI Task Force in April 2023 to accelerate responsible AI adoption for mission areas including interdiction of fentanyl, combating online child sexual exploitation, and supply-chain enforcement (e.g., forced labor detection).[5]
- CBP identity verification. CBP’s Traveler Verification Service (TVS) enables biometric entry/exit through facial matching, with detailed privacy, accuracy, and redress controls documented in the DHS/CBP PIA; TVS operates as an identity verification service interfacing with airline and airport partners under DHS oversight.[6]
- TSA identity verification. DHS’s AI policy describes facial recognition as a covered capability used within the department; TSA’s identity verification pilots and deployments are governed by the same department-wide safeguards (e.g., opt-in, testing, and monitoring).[4]
- Training and performance analytics. DHS Science & Technology’s ScreenADAPT uses machine-learning-driven feedback to improve Transportation Security Officer X-ray image screening performance and training effectiveness.[14]
Programmatic takeaways:
- DHS components must document AI uses in inventories, conduct bias/accuracy testing consistent with DHS policy, and ensure user notice and opt-in parameters for facial verification where applicable.[3][4]
DOJ and FBI: investigative AI, biometrics, and civil rights
- FBI investigative biometrics. The FBI’s Next Generation Identification (NGI) program includes the Interstate Photo System, which supports face recognition searches for authorized law enforcement purposes; NGI is a core national biometric capability used for investigative leads and identity services.[7]
- Oversight and accuracy. GAO’s review of FBI facial recognition called for stronger privacy and accuracy measures, including monitoring and testing system performance—recommendations that frame ongoing governance expectations for investigative AI.[10]
- Civil rights enforcement posture. DOJ has issued guidance and joined interagency enforcement statements clarifying that algorithmic systems are subject to federal civil rights laws (e.g., ADA, fair lending, employment), signaling scrutiny over bias, accessibility, and disparate impact from AI in public and private sectors.[8][9]
- Transparency. DOJ’s public AI use case inventory is required under OMB M‑24‑10, supporting external visibility into departmental AI uses without revealing sensitive tradecraft.[2][3]
Operational implications:
- FBI and DOJ components should treat face recognition and other algorithmic triage tools as “rights-impacting” where outputs inform investigative or adjudicative actions, triggering M‑24‑10’s testing, monitoring, and independent evaluation requirements.[2]
Testing, assurance, and external benchmarks
- NIST’s Face Recognition Vendor Test (FRVT) provides independent accuracy and demographic performance evaluations of face recognition algorithms, used by agencies and vendors to benchmark and select systems.[15]
- NIST’s AI Risk Management Framework 1.0 provides a governmentwide approach for mapping, measuring, and managing AI risks (validity, reliability, robustness, privacy, bias, accountability), which OMB directs agencies to use in their AI governance programs.[12][2]
Assurance practices that align with policy:
- Document model purpose, data provenance, and known limitations (AI RMF “MAP/MEASURE” functions).[12]
- Perform pre-deployment testing for accuracy and bias on representative operational data, and conduct periodic re-evaluations (OMB M‑24‑10 safeguards).[2]
- Provide user notice, opt-out/alternative procedures where required, and establish redress mechanisms (OMB M‑24‑10 and DHS AI policy requirements, as applicable).[2][4]
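The per-group testing these practices call for can be sketched concretely. The following is an illustrative Python sketch only, not an official test protocol: it computes false match rate (FMR) and false non-match rate (FNMR) per demographic group from hypothetical evaluation records, then flags groups whose FNMR diverges sharply from the best-performing group. The record format and the 2.0 disparity ratio are assumptions for illustration.

```python
# Illustrative pre-deployment check: match-error rates per demographic group,
# in the spirit of the M-24-10 / NIST AI RMF "measure" step.
# Record fields and thresholds below are hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, is_true_match, predicted_match) tuples.
    Returns {group: {"fmr": ..., "fnmr": ...}} where
    FMR  = false matches / true non-match trials,
    FNMR = false non-matches / true match trials."""
    counts = defaultdict(lambda: {"fm": 0, "nonmatch": 0, "fnm": 0, "match": 0})
    for group, truth, pred in records:
        c = counts[group]
        if truth:
            c["match"] += 1
            if not pred:
                c["fnm"] += 1
        else:
            c["nonmatch"] += 1
            if pred:
                c["fm"] += 1
    return {
        g: {
            "fmr": c["fm"] / c["nonmatch"] if c["nonmatch"] else None,
            "fnmr": c["fnm"] / c["match"] if c["match"] else None,
        }
        for g, c in counts.items()
    }

def flag_disparities(rates, max_ratio=2.0):
    """Flag groups whose FNMR exceeds max_ratio times the best group's FNMR.
    The 2.0 ratio is an illustrative threshold, not a regulatory one."""
    observed = {g: r["fnmr"] for g, r in rates.items() if r["fnmr"] is not None}
    if not observed:
        return []
    best = min(observed.values())
    return sorted(g for g, v in observed.items() if best > 0 and v / best > max_ratio)

# Hypothetical evaluation records: (group, ground-truth match?, predicted match?)
trials = (
    [("A", True, True)] * 98 + [("A", True, False)] * 2 +
    [("A", False, False)] * 99 + [("A", False, True)] * 1 +
    [("B", True, True)] * 90 + [("B", True, False)] * 10 +
    [("B", False, False)] * 99 + [("B", False, True)] * 1
)
rates = error_rates_by_group(trials)
print(rates["A"]["fnmr"], rates["B"]["fnmr"])  # 0.02 0.1
print(flag_disparities(rates))                 # ['B']
```

In practice the evaluation data should be representative of the operational population and capture conditions, and flagged disparities would feed the re-evaluation and documentation steps above rather than a simple pass/fail.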
Acquisition and infrastructure considerations
- Cloud compliance baselines. Law enforcement workloads that process PII, biometrics, or law enforcement sensitive data typically require FedRAMP High and, for DoD-affiliated missions, DoD SRG IL5/IL6; Azure Government maintains FedRAMP High authorization and DoD SRG IL5 coverage for many services, while Azure Government Secret holds IL6 provisional authorization.[16][17]
- Governance at scale. Agency governance teams can use platform policy engines (e.g., Azure Policy) to enforce configuration baselines, data residency, and control inheritance for AI services consistent with M‑24‑10 and NIST AI RMF-aligned controls.[18][2][12]
Caveat:
- Platform compliance enables but does not satisfy AI-specific safeguards; agencies must still perform use-case-level testing, monitoring, and documentation required by M‑24‑10.[2]
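To make the policy-as-code idea concrete, here is a minimal Python analogue of what a platform policy engine evaluates. This is not Azure Policy’s JSON definition syntax, and the rule names and resource attributes are hypothetical; the point is the pattern: each rule is a compliance predicate, and the engine reports every violated rule per resource.

```python
# Minimal policy-as-code sketch (illustrative; not Azure Policy syntax).
# Each rule denies resources whose attributes violate a baseline.
# Resource shapes and rule names are hypothetical.

RULES = [
    # (rule name, predicate that returns True when the resource is compliant)
    ("require-us-region", lambda r: r.get("region", "").startswith("us")),
    ("require-encryption-at-rest", lambda r: r.get("encryption_at_rest") is True),
    ("deny-public-network", lambda r: r.get("public_network") is not True),
]

def evaluate(resources, rules=RULES):
    """Return {resource_name: [violated rule names]} for non-compliant resources."""
    findings = {}
    for res in resources:
        violations = [name for name, ok in rules if not ok(res)]
        if violations:
            findings[res["name"]] = violations
    return findings

resources = [
    {"name": "ai-inference-1", "region": "usgov-virginia",
     "encryption_at_rest": True, "public_network": False},
    {"name": "ai-storage-1", "region": "westeurope",
     "encryption_at_rest": True, "public_network": True},
]
print(evaluate(resources))
# {'ai-storage-1': ['require-us-region', 'deny-public-network']}
```

A real policy engine adds what this sketch omits: continuous evaluation on resource changes, deny/audit/remediate effects, and inherited assignments across management scopes.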
Risks, gaps, and oversight signals
- Accuracy and bias remain central risks in identity and investigative applications; GAO’s findings on FBI facial recognition underscore the need for continuous performance monitoring and privacy protections.[10]
- Facial verification and recognition are clearly within DHS’s AI policy scope, with explicit safeguards; components must maintain opt-in, testing, and transparency commitments as deployments expand.[4]
- Transparency is improving via public AI inventories, but inventories omit classified/sensitive details; external assurance will continue to rely on PIAs, GAO/OIG reviews, and standards-based testing (e.g., FRVT).[3][15]
Action checklist for agency leads
- Classify law enforcement AI uses under M‑24‑10: identify which are “rights-impacting” or “safety-impacting” and apply required safeguards before scaling.[2]
- Align testing with NIST AI RMF and, for facial systems, reference FRVT results while validating with operational data representative of your population and conditions.[12][15]
- Ensure inventories, PIAs, user notices, and redress pathways are current and linked from public sites consistent with OMB requirements and component policies.[2][3][4]
- Use compliant cloud environments and enforce policy-as-code to ensure consistent control baselines for AI services and data (e.g., FedRAMP High, IL5/IL6 where required).[16][17][18]
- Coordinate with civil rights, privacy, and oversight offices early for reviews of AI-enabled identity, screening, and investigative tools to preempt compliance and trust issues.[2][4][8][9]
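As a starting point for the first checklist item, the triage step can be sketched as a small script. The signal categories and their mapping to designations below are illustrative assumptions, not the memo’s authoritative definitions; an actual determination requires agency legal, privacy, and policy review.

```python
# Illustrative use-case triage sketch (not an official M-24-10 determination):
# map a use case's functions to a presumptive risk designation and the
# safeguard set the memo requires before deployment. Field names are hypothetical.

RIGHTS_IMPACTING_SIGNALS = {
    "biometric_identification", "investigative_lead_generation",
    "benefits_adjudication", "screening_or_vetting",
}
SAFETY_IMPACTING_SIGNALS = {"critical_infrastructure_control", "emergency_dispatch"}

M_24_10_SAFEGUARDS = [
    "pre-deployment testing", "independent evaluation", "continuous monitoring",
    "incident reporting", "user notice", "public documentation",
]

def classify(use_case):
    """use_case: {"name": str, "functions": set of signal strings}."""
    funcs = use_case["functions"]
    designations = []
    if funcs & RIGHTS_IMPACTING_SIGNALS:
        designations.append("rights-impacting")
    if funcs & SAFETY_IMPACTING_SIGNALS:
        designations.append("safety-impacting")
    return {
        "name": use_case["name"],
        "designations": designations or ["neither (document rationale)"],
        "required_safeguards": M_24_10_SAFEGUARDS if designations else [],
    }

result = classify({"name": "face-verification-pilot",
                   "functions": {"biometric_identification"}})
print(result["designations"])  # ['rights-impacting']
```

Even as a sketch, this forces the useful discipline the memo intends: every inventoried use case either inherits the full safeguard set or carries a documented rationale for why it does not.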
References
- [1] Executive Order 14110 — Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence — https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
- [2] OMB M-24-10 — Advancing Governance, Innovation, and Risk Management for Agency Use of AI — https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-AI.pdf
- [3] Performance.gov — Federal Agency AI Use Case Inventories — https://www.performance.gov/ai/
- [4] DHS releases first-ever Department-wide Policy on the Use of Artificial Intelligence — https://www.dhs.gov/news/2024/02/06/dhs-releases-policy-use-artificial-intelligence
- [5] Secretary Mayorkas establishes Department Task Force on Artificial Intelligence — https://www.dhs.gov/news/2023/04/21/secretary-mayorkas-establishes-department-task-force-artificial-intelligence
- [6] DHS/CBP/PIA-056 — Traveler Verification Service Privacy Impact Assessment — https://www.dhs.gov/sites/default/files/publications/privacy-pia-cbp056-tvs-june2020_1.pdf
- [7] FBI Next Generation Identification (NGI) — Program Overview — https://www.fbi.gov/services/cjis/fingerprints-and-other-biometrics/ngi
- [8] DOJ and EEOC — The Americans with Disabilities Act and the Use of AI in Employment — https://www.ada.gov/resources/ai-employment/
- [9] Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (DOJ/FTC/CFPB/EEOC) — https://www.justice.gov/opa/pr/joint-statement-enforcement-efforts-against-discrimination-and-bias-automated-systems
- [10] GAO-16-267 — Facial Recognition Technology: FBI Should Better Ensure Privacy and Accuracy — https://www.gao.gov/products/gao-16-267
- [12] NIST AI Risk Management Framework 1.0 — https://www.nist.gov/itl/ai-risk-management-framework
- [14] DHS S&T — ScreenADAPT — https://www.dhs.gov/science-and-technology/news/2019/07/16/snapshot-screenadapt-improves-tsa-transportation-security
- [15] NIST Face Recognition Vendor Test (FRVT) — https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
- [16] Microsoft Azure Government — Compliance Offerings — https://learn.microsoft.com/en-us/azure/azure-government/compliance/
- [17] Azure Government Secret — DoD SRG IL6 — https://azure.microsoft.com/en-us/explore/global-infrastructure/azure-government-secret/
- [18] Azure Policy — Overview — https://learn.microsoft.com/en-us/azure/governance/policy/overview