Why execution keeps failing for predictable reasons
This is Article 6 in a seven-part series examining what the NCSU 2026 Top Risks Report surfaces, and what it doesn’t. The report is serious work. It captures what more than 1,500 board members and senior executives say keeps them up at night. It is not designed to explain why the same patterns keep reappearing year after year. That is what this series examines.
By the end of the quarter, most leadership teams are looking at some version of the same thing.
A set of reports. A set of metrics. A set of risks.
Registers are reviewed. Items are updated. Probabilities adjusted. Impacts re-evaluated. New risks added. Old ones closed. The exercise feels disciplined. Structured. Responsible.
And yet, beneath that activity, a quieter pattern tends to persist.
Execution did not land as expected. It rarely does.
Some initiatives slowed. Others drifted. A few required an intervention. Deadlines moved. Costs shifted. Decisions took longer than anticipated. Coordination became more complicated. In some cases, outcomes were delivered, but at a higher organizational cost than anyone planned for.
These are not anomalies. They are recurring patterns. And, strikingly, they rarely influence what gets recorded in the risk register, if one exists.
Most risk registers are not designed to capture the conditions and patterns that give rise to risks. They capture what might happen. They do not explain why certain conditions persist.
This is where the conversation on capability becomes important and where it almost always breaks down.
Capability Is Referenced. Rarely Defined.
Across sectors, the word “capability” has become common in executive conversations. We talk about building it, strengthening it, assessing it, and investing in it.
But when you look closely, capability is almost always treated as a proxy for something else, usually skills.
Do we have the right expertise? The right people? Do we need training? Do we need to hire?
Skills matter. But they are only one part of what determines whether something can actually be executed under real conditions.
Here is an example of what I mean.
A financial services organization, well run by most external measures, had spent two years building out a new client onboarding capability. They invested in training. They hired specialists. They redesigned the process. On paper, everything looked right. By every skills-based measure, the team was ready.
Eighteen months in, onboarding times had improved only marginally. Error rates remained stubbornly high. The executive team was frustrated. The people doing the work were demoralized. Everyone agreed the skills were there.
What no one had looked at was the conditions under which that capability was expected to operate.
The team was simultaneously carrying three other transformation priorities. Decision rights for non-standard cases were unclear, creating escalation loops that consumed hours of productive time each week. The data systems feeding the onboarding workflow had not been integrated with the new process design, so workarounds had become routine. Leadership direction on prioritization shifted every six to eight weeks, each shift requiring the team to reorient mid-stream.
The skills were present. The capacity was fragmented. The conditions were working against execution.
That is a capability problem. But it was never recorded as one. Capability, in this sense, was never considered a risk at the start of the initiative.
The Missing Structure: Skills, Capacity, Conditions
Capability is not determined by skills alone. It is determined by the interaction of three elements.
Skills: what people know and can do.
Capacity: the time, attention, and cognitive bandwidth available to apply those skills.
Conditions: the environment in which those skills are expected to be exercised.
Most organizations invest heavily in skills. Some attention is given to capacity, usually in the form of workload conversations. Almost nothing is systematically designed around conditions.
And yet conditions are often the decisive factor.
Conditions are not abstract. They show up in very concrete ways: clarity of priorities, stability of direction, quality of decision-making, sequencing of initiatives, consistency of processes, alignment of incentives, degree of interruption and rework, and number of competing demands.
When these conditions are coherent, capable people perform consistently. When they are not, even highly skilled teams begin to produce variable outcomes, not because they lack competence, but because the system is working against them.
The NCSU 2026 Top Risks Report identifies execution risk, resource risk, and change-related resistance as recurring concerns across sectors. What it does not identify is that these are often the same risks in different clothing. Resistance is frequently capacity saturation by another name. Resource constraints often mask conditions that make effective use of resources impossible, even when those resources are technically available. The register describes the symptoms. It does not reach the structure beneath them.
What the Risk Register Misses
Consider a pattern most leadership teams will recognize.
An initiative is flagged as “at risk” due to resource constraints. Additional resources are allocated. The risk is downgraded. Progress resumes, temporarily.
What remains unexamined is why those resources were constrained in the first place and will be again.
Was it a skills issue? Or was it capacity saturation across too many parallel priorities? Or was it conditions – conflicting direction, unclear sequencing, decision bottlenecks – that made effective use of resources impossible, regardless of how many were added?
If the underlying condition is not addressed, the pattern repeats. The register updates. The system does not.
From a distance, these look like isolated execution issues. Up close, they are signals of capability strain – not because people lack skill, but because capacity is fragmented and conditions are unstable.
The Question Quarter-End Reviews Almost Never Ask
Quarter-end reviews tend to focus on outcomes. What was delivered? What slipped? What needs attention? Risk updates follow the same logic.
But this is precisely the moment where a more important question should be asked:
What did this quarter reveal about our system of capabilities?
Where did capacity thin out? Where did conditions degrade? Where did skilled teams struggle to perform consistently? Where did decision quality drop under pressure? Where did coordination become heavier than it should have been?
These are not secondary observations. They are early indicators of where risk is generated within the organization, before it surfaces as an entry in a register.
The predictability of execution failure is what makes this so consequential. When priorities are not clearly defined, everything becomes urgent. When too many initiatives run in parallel, capacity is fragmented. When decision rights are unclear, escalation increases. When sequencing is weak, dependencies collide. When conditions are unstable, variability rises across the board.
None of these is captured effectively as a discrete risk. But together, they shape the operational system’s ability to execute. And they are visible to anyone who knows how to look.
The Link Back to Strategy and Change
This is where capability becomes the connective tissue of the entire system.
Strategy introduces pressure. Change determines how that pressure is applied. Capability determines whether the system can absorb it.
If the strategy expands without clear boundaries, the pressure exceeds what the system was designed to handle. If change is layered without sequencing, load accumulates faster than capacity can recover. If capability is assumed rather than deliberately designed, strain emerges quietly and shows up late, usually as a risk that surprises no one in hindsight.
Risk, understood this way, is not just an external threat to be catalogued. It is the internal system signalling that the relationship between strategy, change, and capability is breaking down. The register records the signal. It does not interpret it. That is up to the humans in the system.
What Designing Capability Actually Requires
The shift is not about abandoning risk registers. It is about recognizing their limits and building something alongside them that addresses what they cannot see.
Designing capability deliberately means defining not just what needs to be done, but what will not be done, in line with the strategy. It means sequencing change based on absorption capacity rather than urgency. It means protecting capacity where it is critical and stabilizing conditions where variability is highest. It means clarifying decision structures to reduce friction and the competing demands that dilute focus before work even begins.
These are not risk management activities in the traditional sense. They are capability design decisions. And they belong in the same conversation as strategy, not after it.
At the enterprise level, this often looks like choosing a few critical capabilities, mapping the skills, capacity, and conditions they actually run on, and then making explicit design choices about what will change and what will stop.
The Question to Take Forward
As you review your risk register this quarter, the question is not whether it is complete.
It is whether it is pointing to the right things.
Are you reporting on risks? Or are you describing the symptoms of capability gaps that have never been formally named?
Because until capability is understood as the combination of skills, capacity, and conditions, and is designed with the same rigour applied to financial models or technology platforms, execution will continue to fail for entirely predictable reasons.
Not because the risks were unknown.
Because the system was never built to handle what it is asked to carry.
So, before you close out this quarter, add one disciplined move: choose one strategically critical capability, map the skills, capacity, and conditions it actually depends on, and name where those conditions are working against execution. Turn those findings into explicit design decisions, not just new entries in the register. If you repeat that work quarter after quarter, your risk profile will change for reasons the register alone could never explain.
If something specific came to mind while you were reading this, that’s a signal. That conversation is worth having.
If you want a brutally honest outside view before you commit to another quarter or year of “more initiatives, same results,” I run a capability review with senior teams. We take one critical capability, map the skills, capacity, and conditions it actually runs on, and surface the structural risks your register is currently treating as noise.
Contact: Dragica@uvidi.ca