What the 2026 Risk Report Doesn’t Say and Why It Matters

A seven-part executive series on why strategy, change, and risk keep disconnecting

When strategy, change, and risk operate in separate orbits, something predictable happens: strategy becomes abstract, change becomes disruptive, and risk surfaces only after decisions are already made. The executive team agrees AI is strategic. Six months later, IT is managing pilots, operations is absorbing new workflows, and the CFO is asking what this is all supposed to add up to. No one is confused about what’s happening; they’re unclear about what it’s for.

The 2026 Top Risks Report, produced by the NC State ERM Initiative in collaboration with Protiviti, captures what many executives are already experiencing: rising complexity, mounting execution pressure, and growing uncertainty, particularly around AI. It is a serious, well-researched document and a useful reflection of today’s executive climate.

What risk reports are not designed to do is explain why these pressures keep converging inside organizations in the same way: why well-intentioned strategies struggle to land, why change initiatives generate friction rather than momentum, and why risk repeatedly surfaces downstream of decisions rather than alongside them.

This seven-part series examines the structural gaps beneath today’s most visible risks—the places where strategy, change, and risk disconnect in predictable patterns.

We begin where most executive conversations begin today: AI.

AI Without Strategic Intent

Why So Many AI Investments Feel Urgent and Still Go Nowhere

Artificial intelligence has become unavoidable in executive conversations. Boards ask about it, investors expect it, and leadership teams feel pressure to demonstrate movement. Yet in many organizations, that movement is accompanied by a quiet unease. Activity is visible, spending is real, and progress is reported, but direction remains oddly difficult to articulate.

Behind closed doors, many senior leaders admit they can describe what their organization is doing with AI, but not what it’s supposed to become because of it. The gap between action and intent is subtle, but it is where strategic momentum erodes.

The prevailing explanation for this unease is that AI itself is inherently risky. The technology is evolving quickly, regulation is still forming, data exposure is growing, and the workforce is not fully prepared. These concerns are valid, but they’re downstream. The root cause is more fundamental.

In most organizations, AI is being introduced after strategy has already been declared, rather than as part of how strategy is formed. It arrives as a capability to be deployed, an initiative to be managed, or an efficiency to be captured. Rarely does it enter the conversation as a business model question. Leadership teams often move quickly to experimentation and implementation before they have resolved what role AI is meant to play in value creation, differentiation, or long-term advantage.

As a result, AI activity accumulates without a unifying logic. Tools multiply, pilots proliferate, and integration efforts intensify. Operating models begin to shift under the weight of new demands, yet the underlying strategic narrative remains implicit or contested. The organization is busy, but not oriented.

This is why AI investments feel urgent and still go nowhere. The problem is not execution failure in the traditional sense. It is not that teams are incapable or resistant. It is that AI has become active in the absence of an explicit strategic intent strong enough to guide coherent design decisions.

When intent is unclear, AI stops functioning as a strategic lever and becomes a requirement—something the organization adopts to remain credible, current, and competitive. In that state, AI behaves like a hygiene factor: necessary to prevent falling behind, insufficient to create advantage.

The problem is that hygiene factors, by definition, don’t compound. You can’t invest your way into differentiation when the activity is fundamentally defensive. Leaders sense this, which is why so many AI conversations feel oddly flat despite significant spending. The organization is moving, but nothing is building.

This shows up clearly in how organizations allocate capital. Cybersecurity leads investment priorities at 43%, followed by business process improvements at 35% and infrastructure modernization at 33%. Customer experience—the only growth-oriented priority in the top five—ranks fifth at 27%. The pattern is unmistakable: organizations are spending defensively, protecting what exists rather than building what’s next.

This mismatch creates a particular kind of risk that is easy to misdiagnose. Because the organization is moving quickly, leaders blame pace. Because the technology is complex, they blame uncertainty. Because governance questions multiply, they blame compliance gaps.

But the strain isn’t coming from AI itself; it’s coming from motion without meaning.

AI sits between what the organization is and how it creates value. When that boundary is clear, AI can be used deliberately to reinforce or reshape how value is created and delivered. When it is not, AI exposes the gap. Systems are reconfigured, roles are redefined, and workflows are altered in response to tools rather than direction.

Consider what just happened: “operations and legacy IT infrastructure unable to meet performance expectations” jumped from the 13th-ranked risk last year to 4th this year, the single largest jump of any risk measured. The explanation isn’t mysterious. Organizations spent the past year accelerating AI deployments while their infrastructure, processes, and people were already strained. The operating model is buckling under demands it was never designed to meet.

In that environment, risk does not announce itself early. It accumulates the way water damage does – invisible inside the walls until the ceiling buckles. It shows up as integration friction, decision delays, inconsistent prioritization, and declining confidence in outcomes. By the time it’s recognized as risk, it’s already structural.

What is often described as AI risk is, in reality, a design failure. The organization hasn’t failed to implement AI – it has failed to decide what AI is meant to serve. This is the consequence of allowing strategic ambiguity to persist while execution accelerates.

When leaders address this explicitly, the dynamic changes quickly. AI stops feeling urgent and starts feeling purposeful. Decisions about speed become selective rather than reactive. Capability is assessed before deployment, not after strain appears. Risk becomes visible earlier, as a signal of system limits rather than a post-hoc compliance concern.

The most important question is not how fast an organization should move with AI. It is whether leadership has articulated, clearly enough to design around, what role AI is meant to play in the organization’s future.

Until that question is answered, AI will continue to feel risky, not because it is powerful, but because it is directionless.

Next in the series: Competitive Pace Isn’t the Real Risk
Why speed becomes dangerous when strategy is unclear
