You bought the licenses. You stood up the tenant configuration. You ran the kickoff. And now, six weeks in, usage looks... uneven. A handful of people swear by it. Most opened it once. Your leadership wants to know if this is working.
This is not the question Microsoft's adoption materials are designed to answer. They tell you how to deploy. They tell you how to enable features. They do not tell you how to evaluate whether your agency is actually getting value, or when to intervene.
That is what this piece addresses.
## The difference between deployment and adoption
Deployment is a technical event. Adoption is a behavioral change that sticks.
You can have 100% license deployment and 10% genuine adoption. In government environments, this gap is common and predictable. The reasons are structural, not motivational: federal workflows are built around document-centric processes, layered approval chains, and security-driven restrictions that collectively resist the kind of ambient, exploratory AI use that adoption programs assume.
AI tools designed for knowledge workers in commercial settings assume a certain workflow fluency: that people regularly draft, iterate, and summarize in open-ended ways. Federal environments impose constraints that interrupt this: classification requirements that restrict what can be processed, document management systems that don't integrate with productivity tools, approval chains that require human review at every substantive output, and ATO boundaries that limit which features are actually available.
None of this means adoption is impossible. It means the standard adoption playbook often doesn't transfer, and agencies that measure adoption against commercial benchmarks will consistently misread their own results.
## What real adoption looks like in government
Real adoption has three characteristics:
Repeated use in a specific workflow. Not "tried Copilot in Outlook once." Someone who uses meeting summaries in Teams every time they miss a call, or who has replaced their manual document review process with Copilot in Word, is genuinely adopted. The behavior is integrated, not occasional.
Use that survives the first month. The first 30 days of any new tool show inflated engagement because users are curious. Real adoption is what's still happening at day 60 and 90. If usage drops sharply after the initial period and doesn't recover, that is a signal, not background noise.
Use concentrated in a small number of high-value workflows. In government settings, five to eight use cases will account for the majority of sustained value. Broad feature exposure rarely drives deep adoption. The agencies that get real value tend to identify the two or three workflows where Copilot saves meaningful time, train people specifically for those workflows, and measure those workflows specifically.
The workflows that most reliably drive sustained Copilot use in federal environments are:
- Email triage in Outlook. Summarizing long threads, drafting responses to routine correspondence, and identifying action items in high-volume inboxes. This is high-frequency and low-stakes, ideal for habit formation.
- Meeting summaries in Teams. Capturing key decisions and action items from meetings, particularly in agencies with heavy meeting cultures. This reduces rework and creates a lightweight institutional memory.
- Document summarization in Word. Condensing long policy documents, reports, or regulations into actionable summaries. Particularly valuable for analysts and program managers who read across a high volume of incoming material.
- Data analysis in Excel. Using Copilot to identify patterns, generate formulas, and produce plain-language summaries of datasets without requiring query skills. Valuable for budget, acquisition, and compliance functions.
These are not the most sophisticated use cases. They are the most durable ones. Start there.
## Metrics you should actually be watching
The Copilot Dashboard in Microsoft Viva Insights gives agencies access to usage telemetry at the cohort level.[1] The terminology can be confusing, so here is what to track and how to read it.
Active users per week. Not licenses assigned, but actual users who took an action in Copilot during the week. If this number is flat or declining after the first month, adoption is not progressing.
Return rate. Of the users who engaged with Copilot in a given week, how many came back the following week? Low return rates (under 40%) typically indicate that users tried the tool but did not find it useful enough to integrate into their workflow. This is the most important leading indicator of whether adoption will sustain.
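The return-rate calculation is simple enough to script against a raw usage export. A minimal sketch in Python, assuming you can extract (user, week) pairs from your telemetry; the event format here is illustrative, not an actual Copilot Dashboard schema:

```python
from collections import defaultdict

def weekly_return_rate(events):
    """Compute week-over-week return rate from (user_id, week_number) events.

    Returns {week: fraction of that week's active users who were
    also active in week + 1}.
    """
    active = defaultdict(set)
    for user, week in events:
        active[week].add(user)
    rates = {}
    for week in sorted(active):
        cohort = active[week]
        returned = cohort & active.get(week + 1, set())
        rates[week] = len(returned) / len(cohort)
    return rates

# Example: three users active in week 1, two of them return in week 2
log = [("a", 1), ("b", 1), ("c", 1), ("a", 2), ("b", 2)]
print(weekly_return_rate(log))  # week 1 rate = 2/3, week 2 rate = 0.0
```

Note that the final week's rate will always read low because the following week's data doesn't exist yet; drop the last entry when reporting.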
Feature concentration. Which Copilot features are actually being used? If 90% of usage is in one app (typically Teams or Outlook), that tells you where your value is concentrating, and where targeted training may unlock additional value. If usage is extremely diffuse across all features with no concentration, adoption has not yet taken root anywhere.
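Feature concentration is easy to quantify once usage is tallied per app. Another hedged sketch, assuming an illustrative export of one app name per recorded Copilot action:

```python
from collections import Counter

def feature_concentration(actions):
    """Share of total Copilot actions per app, sorted descending.

    actions: iterable of app names, one per recorded action
    (app names here are illustrative, not a fixed Dashboard field).
    """
    counts = Counter(actions)
    total = sum(counts.values())
    return {app: n / total for app, n in counts.most_common()}

usage = ["Teams"] * 70 + ["Outlook"] * 20 + ["Word"] * 7 + ["Excel"] * 3
print(feature_concentration(usage))
# {'Teams': 0.7, 'Outlook': 0.2, 'Word': 0.07, 'Excel': 0.03}
```

A distribution like the one above (one app at 70%) is the healthy pattern this section describes; four apps each near 25% with low absolute volume is the diffuse pattern that signals adoption hasn't anchored.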
Sentiment correlation. The Copilot Dashboard includes satisfaction signals that can be compared against usage patterns. High usage with low satisfaction indicates friction โ the tool is being used because it was mandated, not because it is working. Low usage with high satisfaction among a small cohort often indicates a champion group that can be expanded.
What you should not do is report raw license utilization percentages to leadership as a proxy for value. A license that was opened twice is not a productive investment. Measure return behavior and workflow integration, not headline penetration.
## A realistic 30/60/90 day curve
Days 1–30: Discovery and drop-off. Expect an early spike in usage followed by a significant drop. This is normal. The users who remain active after the first month are your early adopters and potential champions. They have found a workflow that works for them. Identify them.
Days 31–60: Workflow anchoring. If adoption is progressing, you will see usage stabilize in one or two specific features for a consistent cohort of users. This is the phase where targeted training matters most: not broad awareness training, but specific, workflow-anchored guidance for the use cases you identified in your pilot design. If usage continues to decline through day 60, that is not organic maturation; that is a signal that the deployment needs intervention.
Days 61–90: Threshold decision. By day 90, you should have enough data to make a defensible assessment: are the users who adopted in the first wave generating measurable value? Are there identifiable barriers preventing others from reaching that threshold? Is the return rate stabilizing or continuing to decline?
The agencies that succeed at Copilot adoption almost universally do two things by day 90: identify and formalize a champion network from their early adopters, and redesign training for the specific workflows where adoption is concentrating rather than running generic feature overviews.
## When to intervene vs. let it mature
This is a judgment call, but there are clear indicators.
Let it mature when: Usage is declining modestly from an early spike but stabilizing in a consistent cohort. Users are self-discovering useful workflows. The return rate is holding above 35–40% among active users. Satisfaction signals are positive.
Intervene when: Usage continues to decline through day 45 with no stabilization. Return rates are below 30% across all cohorts. Users report that Copilot is not producing useful outputs for their actual work. The early spike was driven by mandatory testing rather than voluntary exploration.
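The mature-vs-intervene indicators above can be encoded as a rough triage rule. This sketch hard-codes the thresholds discussed in this section (decline past day 45, 30% and 35% return rates); treat them as starting points to tune against your own data, not policy:

```python
def adoption_signal(day, return_rate, usage_trend, satisfied):
    """Rough triage of the mature-vs-intervene indicators.

    day: days since rollout
    return_rate: week-over-week return rate among active users (0.0-1.0)
    usage_trend: "declining", "stabilizing", or "growing" past the early spike
    satisfied: whether satisfaction signals are positive
    """
    if day >= 45 and usage_trend == "declining":
        return "intervene"       # no stabilization after the spike
    if return_rate < 0.30:
        return "intervene"       # users are not coming back
    if return_rate >= 0.35 and usage_trend in ("stabilizing", "growing") and satisfied:
        return "let it mature"   # healthy cohort forming
    return "watch"               # ambiguous; re-check next cycle

print(adoption_signal(day=60, return_rate=0.42,
                      usage_trend="stabilizing", satisfied=True))
# → "let it mature"
```

The "watch" branch is deliberate: a program between the two clear states needs another measurement cycle, not a premature verdict.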
Common intervention levers: targeted workflow training for the two or three use cases with the highest potential, designation of trained champions in each team, removal of unnecessary access restrictions that are blocking legitimate use cases, and executive modeling (leaders visibly using and discussing Copilot outputs in meetings).
Do not mandate usage. Mandated usage produces compliance metrics, not adoption. It inflates your dashboard numbers while masking the actual adoption state.
## What GCC and GCCH mean for what you can actually do
This is an area where honest assessment matters more than optimism.
Microsoft 365 Copilot features in GCC and GCC High do not always match commercial availability. Microsoft publishes a government-specific Copilot documentation set that describes which capabilities are available in which tenant types.[2] Agencies should treat this documentation as authoritative and verify specific features before building adoption programs around them.
As of this writing, several Copilot capabilities available in commercial Microsoft 365 have limited or no availability in GCC High. The gap tends to be widest for newer features, which often ship to commercial tenants first and migrate to government clouds on a delay that can range from months to over a year.
The practical implication: before you build an adoption program around a specific workflow, verify that the underlying Copilot feature is available in your tenant type. Discovering mid-rollout that a capability is not yet available in your environment is one of the most common and avoidable causes of adoption program failure.
Microsoft's government compliance boundary also affects what Copilot can process. Prompts and grounding data are processed within the Microsoft 365 service boundary, but agencies with classification or data handling requirements should validate this against their ATO boundary documentation and Microsoft's security model for their specific environment.[3]
## What a healthy federal Copilot program looks like at six months
At six months, a functioning program has these characteristics:
- A stable weekly active user count representing at least 20–30% of licensed users, concentrated in identified high-value workflows
- Documented, measurable time savings or quality improvements in at least two specific workflows, derived from telemetry rather than survey alone
- A trained champion network that is the first line of support and the primary driver of peer adoption
- An updated adoption strategy that reflects actual usage patterns rather than the original launch plan
- Leadership that actively uses and discusses Copilot-generated outputs, creating organizational modeling rather than mandates
What it does not look like: high license utilization numbers achieved through mandatory reporting requirements, broad feature awareness with no deep workflow integration, or success defined primarily by the number of users who have "tried" Copilot at least once.
## Decision framework
Use this to assess your current state honestly.
### Adoption health
- Weekly active user count is stable or growing past day 60
- Return rate among active users is above 35%
- Usage is concentrating in identifiable workflows, not diffuse across all features
### Workflow specificity
- You have identified the two to four workflows where Copilot is generating the most consistent use
- Training is targeted to those workflows, not generic feature overviews
- Champions are identified and trained in those specific workflows
### Environmental alignment
- You have verified feature availability for your use cases in your tenant type (GCC, GCCH) against Microsoft's government documentation [2]
- Copilot Dashboard is enabled and producing cohort-level telemetry [1]
- No critical use cases are blocked by misconfigured access controls or unnecessary security restrictions
### Measurement
- You have defined what "value" means in mission terms, not just usage metrics
- You are tracking return rate and workflow concentration, not just license utilization
- You have a documented threshold at which you will make a scale, pivot, or stop decision
### Governance
- Copilot use cases are inventoried under OMB M-24-10 requirements for AI governance [4]
- A NIST AI RMF-aligned measurement plan exists for the deployment [5]
If more than three of these are unchecked, your program needs intervention before it will generate defensible value.
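One way to keep this assessment honest is to score it mechanically. A hypothetical self-check, where each key is shorthand for a checklist bullet above and the sample values are illustrative, not a recommendation:

```python
# Hypothetical self-assessment against the framework above; the keys are
# shorthand for the checklist bullets, not an official schema.
checklist = {
    "wau_stable_or_growing_past_day_60": True,
    "return_rate_above_35pct": False,
    "usage_concentrating_in_workflows": True,
    "top_workflows_identified": True,
    "training_targeted_to_workflows": False,
    "champions_identified_and_trained": False,
    "feature_availability_verified_for_tenant": True,
    "copilot_dashboard_enabled": True,
    "no_use_cases_blocked_by_access_controls": False,
    "value_defined_in_mission_terms": True,
    "tracking_return_rate_and_concentration": True,
    "scale_pivot_stop_threshold_documented": True,
    "use_cases_inventoried_under_m_24_10": True,
    "nist_ai_rmf_measurement_plan_exists": False,
}

unchecked = [item for item, ok in checklist.items() if not ok]
print(f"{len(unchecked)} unchecked: {unchecked}")
if len(unchecked) > 3:
    print("Program needs intervention before it will generate defensible value.")
```

In this sample, five items are unchecked, which crosses the more-than-three threshold and flags the program for intervention.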
## The honest bottom line
Most federal Copilot deployments are underperforming not because the technology is wrong for government, but because the adoption model was designed for a different environment. The workflows that drive value are known. The measurement infrastructure exists. The barriers are structural and addressable.
The question is not whether Copilot can work for your agency. The question is whether your adoption program is designed to find and accelerate the workflows where it actually will.
## References

1. Use the Microsoft Copilot Dashboard. https://learn.microsoft.com/en-us/viva/insights/use/copilot-dashboard
2. Microsoft Copilot for Microsoft 365 for government. https://learn.microsoft.com/en-us/microsoft-365/copilot/microsoft-365-copilot-government?view=o365-worldwide
3. Data, privacy, and security for Microsoft 365 Copilot. https://learn.microsoft.com/en-us/microsoft-365/copilot/microsoft-365-copilot-security?view=o365-worldwide
4. OMB M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10.pdf
5. NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0). https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
6. Overview of Microsoft 365 Copilot. https://learn.microsoft.com/en-us/microsoft-365/copilot/overview-microsoft-365-copilot?view=o365-worldwide