
AI just left the lab: autonomy rules and software pathways now gate your program

The most consequential development on my beat isn’t a shiny new “AI strategy.” It’s the fact that the policy stack we’ve had since 2024 finally hardened into acquisition gates. PubSecAI’s overview of DoD AI posture nailed the baseline: autonomy is constrained by DoDD 3000.09, delivery is governed by the Adaptive Acquisition Framework (especially the Software Acquisition Pathway, MTA, and urgent pathways), and risk is anchored in OMB M‑24‑10 with NIST’s AI RMF and the Responsible AI playbook. That combination is no longer optional theater; it’s the checklist your milestone and fielding decisions will be judged against.

Here’s what that looks like in the trenches:

  • Human judgment isn’t a slogan. If your AI touches targeting, engagement, or decision loops with operational consequence, you must show concrete human‑on/in‑the‑loop controls mapped to CONOPS, with traceability into design, data, model training, and HMI. Paper “assurances” won’t cut it. Expect senior‑level scrutiny if autonomy creeps anywhere near prohibited territory, as the PubSecAI piece reminds us.

  • Software pathway or bust. If you’re still trying to shove AI into a hardware‑centric lifecycle, you’re burning time. The Software Acquisition Pathway’s continuous, iterative delivery is the only practical way to sustain models, data pipelines, and evaluation harnesses. That means you owe a credible release cadence, automated test evidence, and a sustainment plan that treats data, models, and evaluation artifacts as configuration items.

  • Risk governance is now a deliverable. OMB M‑24‑10 and NIST AI RMF aren’t “CDAO’s problem.” Programs must inventory AI use, map risks, and demonstrate mitigation across Govern‑Map‑Measure‑Manage. Translate that into your SEP/TEMP, cybersecurity strategy, and acquisition documentation. Red‑team and T&E aren’t side quests; they’re the path to fielding.

  • JADC2 is still mostly classified in the details; the acquisition reality isn’t. You’ll be judged on data interoperability, software modularity, and adherence to joint data/interface standards within your chosen AAF pathway. If you can’t demonstrate that your AI components plug into joint data fabrics without bespoke glue, you’re creating tomorrow’s Nunn‑McCurdy for pennies saved today.

I’m sympathetic to program managers being whipsawed by “go fast” and “be safe.” The way through is boring and disciplined:

  • Write it into the PWS/SOW. Require vendors to deliver: model cards and data lineage, evaluation datasets and harnesses, red‑team plans/results, human‑machine teaming analyses, and update pipelines (MLOps) tied to the release train. Make human‑judgment constraints testable requirements, not narrative.

  • Treat AI like software and safety‑critical systems at the same time. Models, data, prompts, and evaluation suites get CDRL numbers. Put them under configuration control. Define defect classes and acceptance criteria that include robustness, drift, and misuse.

  • Pair your cybersecurity and T&E leads early. Map AI‑specific risks into RMF artifacts and your TEMP. Align cyber test, model eval, and operational assessment so you can show a continuous thread from lab to field.

  • Lock down data rights and access now. If the contractor owns the training pipeline and you own only the executable model, you don’t own the capability. Negotiate technical data, training artifacts, and the right to retrain and re‑host.
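The configuration-control discipline above can be made concrete. Below is a minimal sketch of treating models, dataset snapshots, and evaluation suites as hash-baselined configuration items; the `ConfigItem` class, CDRL IDs, and function names are illustrative inventions for this example, not any program's actual tooling.

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class ConfigItem:
    """A controlled artifact: model weights, dataset snapshot, or eval suite."""
    cdrl_id: str   # hypothetical CDRL number assigned to the artifact
    kind: str      # "model" | "dataset" | "eval_suite"
    content: bytes # artifact bytes (in practice, a file or archive on disk)

    @property
    def digest(self) -> str:
        # A content hash makes drift detectable: any silent change to the
        # artifact changes the digest and fails baseline comparison.
        return hashlib.sha256(self.content).hexdigest()


def baseline(items):
    """Freeze a release baseline as {cdrl_id: digest}."""
    return {item.cdrl_id: item.digest for item in items}


def audit(frozen, items):
    """Return CDRL IDs whose current digest no longer matches the baseline."""
    current = baseline(items)
    return sorted(k for k, v in frozen.items() if current.get(k) != v)
```

The point of the sketch is the acceptance criterion: a fielding review can diff the frozen baseline against what is actually deployed and flag any artifact, including training data and eval harnesses, that changed outside the release train.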

What to watch and what to do:

  • Watch: updates from test authorities on AI red‑team/T&E expectations and how oversight bodies use AI inventories in program reviews. Also watch whether more programs formally shift to the Software Acquisition Pathway to legitimize continuous delivery.
  • Do: stand up your AI inventory per OMB M‑24‑10, designate models/data/eval assets as configuration items, update your SEP/TEMP with AI‑specific test and human‑judgment controls, and price these as deliverables in your next mod or solicitation. If your acquisition strategy doesn’t show how you’ll continuously test and govern AI, expect a hard “no” at fielding.
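An inventory record doesn’t need to be elaborate to be useful. Here is a minimal sketch of one, assuming illustrative field names rather than the official OMB schema: the key move, consistent with M‑24‑10, is flagging safety‑ and rights‑impacting use cases so heightened minimum practices attach to them.

```python
from dataclasses import dataclass


@dataclass
class AIUseCase:
    # Illustrative fields only; consult the official OMB inventory
    # guidance for the actual required schema.
    name: str
    purpose: str
    safety_impacting: bool
    rights_impacting: bool
    risk_mitigations: list


def needs_minimum_practices(use_case: AIUseCase) -> bool:
    """Simplified check: M-24-10 applies heightened minimum practices
    to safety-impacting or rights-impacting use cases."""
    return use_case.safety_impacting or use_case.rights_impacting
```

Even a flat list of records like this gives a program review something auditable: which use cases exist, which ones trip the heightened bar, and what mitigations are on contract.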

*Dana Cole is a PubSecAI editorial persona — an AI-generated voice written to represent practitioner perspectives in the defense sector. Views expressed are analytical commentary, not official guidance.*