Most teams don’t struggle because they can’t calculate Earned Value. They struggle because Earned Value often tells an incomplete story.
EVM is good at measuring performance against a baseline, provided the baseline scope stays stable. But that's not how real programs behave. Requirements evolve. Clarifications happen. Interfaces shift. Testing uncovers realities that early documents didn't anticipate. Suppliers change something upstream and it ripples through design. Anyone who has lived inside a major program knows this isn't the exception; it's the operating condition.
And when scope moves but the baseline doesn’t move cleanly with it, Earned Value starts to punish the wrong thing.
The classic scenario
You’ve seen it:
- Requirements change
- Engineering adjusts
- Hours increase
- ACWP climbs
- BCWP/BCWS don’t change (at least not in sync)
- Suddenly you’re “overrunning”
The program may actually be executing well. But the metrics make it look like performance failure because the reporting doesn’t understand scope evolution.
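To make the mechanics concrete, here is a small worked example with purely illustrative numbers, using the standard EVM formulas (CV = BCWP - ACWP, CPI = BCWP / ACWP):

```python
# Illustrative numbers only: a work package where scope grew mid-period
# but the baseline was never adjusted to match.
bcws = 100_000   # planned value to date, per the original baseline
bcwp = 100_000   # earned value: all *baselined* work is complete
acwp = 130_000   # actuals include ~30k of effort on the added scope

cv  = bcwp - acwp   # cost variance: -30,000
cpi = bcwp / acwp   # cost performance index: ~0.77

print(f"CV = {cv:,}  CPI = {cpi:.2f}")
# The team executed the baselined work on budget, yet the metrics
# report an overrun because the scope delta never entered BCWS/BCWP.
```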
“Isn’t this what BCRs are for?”
It’s a fair question—because yes, most mature EVMS environments already have a formal mechanism intended to handle scope changes: the baseline change request (BCR).
In theory, BCRs keep the PMB aligned by approving replans and adjusting future BCWS when scope is added, deleted, or re-sequenced. That governance mechanism matters. It’s how organizations prevent the baseline from becoming pure fiction.
But here’s the catch: BCRs typically update budgets. They rarely explain budgets at the level program teams actually need to manage change.
Most BCR packages capture the change as narrative plus summary budget moves—sometimes with an engineering change reference like an ECP/ECN. What they usually don’t provide is structured, persistent lineage back to what changed in the engineering definition of scope:
- which requirement(s) changed
- how they changed (added, clarified, rewritten, tolerance tightened, etc.)
- where the change mapped into the WBS/work packages
- what portion of the cost movement can be attributed to that specific scope delta
So even when the EVMS process is “working,” teams still struggle to answer the deeper question leadership always asks: what changed, exactly, and what did it do to cost?
This gap is less about policy and more about data capture and structure. Without requirement-level traceability into the cost structure, scope change remains governable—but not learnable.
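To make "structured, persistent lineage" concrete, here is a minimal sketch of what one scope-change record might capture. The field names and the change taxonomy are illustrative, not any standard:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class ChangeType(Enum):
    """Illustrative taxonomy; tailor to your own change board's language."""
    ADDED = "added"
    REWRITTEN = "rewritten"
    CLARIFIED = "clarified"
    TOLERANCE_TIGHTENED = "tolerance_tightened"
    REMOVED = "removed"

@dataclass
class ScopeChangeRecord:
    requirement_id: str           # e.g. "SYS-REQ-0412" (hypothetical ID scheme)
    baseline_version: str         # version the change was made against
    new_version: str
    change_type: ChangeType
    change_date: date
    ecp_ref: str | None = None    # engineering change reference, if any
    bcr_ref: str | None = None    # baseline change request, if any
    wbs_elements: list[str] = field(default_factory=list)  # where the change lands
    est_cost_delta: float = 0.0   # attributed cost movement, program currency
```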
The missing link is requirements-to-cost lineage
This is why the “digital thread” conversation matters—but not in the abstract.
Most digital thread efforts connect requirements to engineering artifacts: models, designs, verification, test. That’s valuable. But cost and program controls often sit off to the side, operating on their own structures: WBS, work packages, budgets, labor categories, actuals.
When requirements change, estimating and controls do respond. Budgets move. Plans get updated. But the “why” is usually trapped in documents, email threads, and individual expertise. The systems might show that budget moved, but they don’t preserve the causal chain in a form that can be searched, analyzed, and reused over time.
Without that linkage, cost variance becomes a debate instead of an analysis.
What “scope-aware” controls actually look like
If you treat requirements as versioned artifacts—meaning you keep a baseline, you capture the next version, and you can see what changed between them—you can turn scope evolution into structured information.
This isn’t about creating more paperwork. It’s about capturing change in a way that enables better decisions.
At a practical level, scope-aware program controls start to look like this (a code sketch follows the list):
- A requirement changes, and the system captures what changed from the prior baseline (a real diff, not a vague narrative).
- The change is classified in a way practitioners understand—scope add, rewrite, clarification, tolerance update, removal.
- The requirement is mapped to the parts of the WBS and work packages it drives.
- Cost movement can be attributed back to the change, so when budgets move (including through a BCR), there’s a traceable rationale behind the numbers.
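As a rough illustration of the first two items, here is a minimal Python sketch using the standard-library difflib to produce a real diff between requirement versions; the requirement text, ID, and classification tags are hypothetical:

```python
import difflib

def requirement_diff(old_text: str, new_text: str) -> list[str]:
    """Return a unified diff between two versions of a requirement's text."""
    return list(difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(),
        fromfile="baseline", tofile="proposed", lineterm=""))

old = "The pump shall deliver 40 +/- 5 L/min at 25 C."
new = "The pump shall deliver 40 +/- 2 L/min at 25 C."

for line in requirement_diff(old, new):
    print(line)

# A reviewer (or a rules/ML classifier) would tag this diff and map it
# to the affected work packages, e.g.:
change = {"requirement_id": "SYS-REQ-0412",        # hypothetical ID
          "change_type": "tolerance_tightened",
          "wbs_elements": ["1.2.3 Pump Assembly", "1.4.1 Qual Test"]}
```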
Now, instead of saying “cost went up,” you can say:
“Cost increased because this requirement was added/rewritten, it impacted these work packages, and here’s the expected delta.”
That’s a different conversation.
This is where AI becomes useful (and why it often fails without the right structure)
There’s a lot of noise in “AI for estimating.” The hard truth is that machine learning cannot rescue weak structure.
If your data consists of dates, totals, and hours, a model can predict an output, but it can’t explain why. That’s not acceptable in high-stakes acquisition environments.
AI becomes useful when the data includes lineage—what changed, when it changed, what type of change it was, where it landed in the WBS, and how those changes behaved historically. With that structure, you can start to identify patterns like:
- late-phase scope adds tend to be disproportionately expensive
- certain subsystems show volatility signals earlier than others
- specific change types correlate strongly with EAC drift
- some requirement clusters drive rework more than new build
And importantly, you can explain the model’s reasoning using modern interpretability methods. That’s the difference between “the model says so” and “here’s what signals drove the forecast.”
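As a hedged sketch of what this can look like in practice, assume the lineage has been flattened into a table with one row per change event (the file name and column names below are assumptions, not a schema). A gradient-boosted model plus scikit-learn's permutation importance gives a first-pass, explainable view of which signals drive the forecast:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

# One row per scope-change event, with lineage-derived features and the
# realized cost delta as the target (hypothetical extract and columns).
events = pd.read_csv("change_events.csv")
features = ["phase_pct_complete",  # how late in the program the change hit
            "change_type_code",    # encoded add/rewrite/clarify/tolerance/remove
            "n_wbs_elements",      # breadth of WBS impact
            "subsystem_code",      # encoded subsystem
            "prior_changes_12mo"]  # churn history for that requirement cluster
X, y = events[features], events["realized_cost_delta"]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Interpretability: which lineage signals actually drive the forecast?
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:22s} {score:.3f}")
```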
The big payoff: separating scope-driven variance from performance-driven variance
This is the part that changes the game for program controls.
When you have requirements-to-cost lineage, you can start to partition variance into two categories (a simple decomposition sketch follows the list):
- variance caused by execution performance (productivity, rate, process, schedule inefficiency)
- variance caused by scope evolution (requirement changes, clarifications, additions)
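A minimal sketch of that partition, assuming the lineage records give you a reasonably complete attribution of actual cost to approved scope changes:

```python
def partition_cost_variance(bcwp: float, acwp: float,
                            scope_cost_delta: float) -> dict[str, float]:
    """Split cost variance into a scope-driven and a performance-driven part.

    scope_cost_delta is the summed, attributed cost of scope changes (from
    the lineage records) that actuals absorbed but the baseline did not.
    A simple additive split, assuming attributions are reasonably complete.
    """
    total_cv = bcwp - acwp                  # standard EVM cost variance
    scope_cv = -scope_cost_delta            # variance explained by scope growth
    performance_cv = total_cv - scope_cv    # what's left is execution
    return {"total": total_cv, "scope": scope_cv, "performance": performance_cv}

# Using the earlier illustrative numbers: 30k of actuals traced to added scope.
print(partition_cost_variance(bcwp=100_000, acwp=130_000,
                              scope_cost_delta=30_000))
# {'total': -30000, 'scope': -30000, 'performance': 0}
```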
That separation brings sanity back to decision-making. It improves internal trust. It strengthens negotiations. And it makes EVM more aligned with reality.
It also upgrades the BCR process itself. BCRs don’t go away—they get stronger. The justification becomes cleaner, more consistent, and easier to defend because it’s anchored to structured scope change rather than purely narrative explanation.
Where to start (without boiling the ocean)
The good news is you don’t need perfection to get value.
Start with:
- versioning requirement baselines (even if it’s just major increments)
- capturing meaningful diffs between versions
- mapping high-impact requirement areas to WBS/work packages
- recording change events in a structured way (not just as documents)
- linking those events to estimate deltas and actuals
Once you do that, the organization becomes learnable over time. You stop relearning the same lessons every program.
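As one possible shape for "recording change events in a structured way," here is a minimal sketch: an append-only event log plus a naive roll-up of attributed estimate deltas by WBS element. The file name, fields, and even-split allocation are all assumptions to tailor:

```python
import json
from collections import defaultdict
from pathlib import Path

LOG = Path("scope_changes.jsonl")   # hypothetical append-only event log

def record_change(event: dict) -> None:
    """Append one structured change event (a record, not a document)."""
    with LOG.open("a") as f:
        f.write(json.dumps(event) + "\n")

def cost_delta_by_wbs() -> dict[str, float]:
    """Roll attributed estimate deltas up to WBS elements for reporting."""
    totals: dict[str, float] = defaultdict(float)
    for line in LOG.read_text().splitlines():
        ev = json.loads(line)
        per_wbs = ev["est_cost_delta"] / max(len(ev["wbs_elements"]), 1)
        for wbs in ev["wbs_elements"]:
            totals[wbs] += per_wbs   # naive even split; refine with real drivers
    return dict(totals)
```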
Closing thought
The future of program management isn’t just better dashboards. It’s better structure.
When scope is versioned, traceable, and connected to cost, you stop arguing about variance and start managing it. You can see what changed. You can quantify impact. You can forecast what’s likely next. And you can make Earned Value reflect the world your teams actually operate in.
