Santosh Sahoo

The Operating Model Is the AI Strategy

Most enterprise AI strategies spend 80% of their attention on the technology and 20% on the operating model. That ratio needs to flip. The operating model is where AI programs succeed or fail.

I've reviewed a lot of enterprise AI strategies. Most of them have the same structural problem.

They are technology strategies wearing the clothing of business strategies.

They spend most of their pages on model selection, infrastructure architecture, and technology roadmap. They spend very few pages on the operating model — how the humans and technology will work together, who is accountable for what, how decisions will be made, and what changes will be required in how the organization actually operates.

That's exactly the wrong ratio. And it's why so many AI programs produce impressive demonstrations and disappointing production deployments.

What an Operating Model Is

An operating model is the design of how an organization delivers its work — the structures, processes, accountabilities, and behaviors that translate strategy into outcomes.

For an AI program, the operating model questions include:

Who decides which use cases AI will be applied to, and based on what criteria?
How do humans and AI divide the work in each use case?
Who is accountable when AI-assisted work produces a wrong outcome?
How does the organization learn from failures and iterate?
How does it scale what works?

These are not technology questions. They're organizational design questions. And the answers to them determine whether AI creates value or creates cost and confusion.

Why Operating Models Are Hard

Operating model design is harder than technology selection for several reasons.

It involves changing how humans work, not just what tools they have. Most organizations underestimate this. They buy the technology and assume the behavioral change will follow. It doesn't.

It requires resolving accountabilities that are often unclear or contested. Who owns AI governance? Who decides when to override AI recommendations? Who is responsible for AI quality? These questions surface organizational ambiguities that exist independently of AI — and AI makes them urgent.

It creates winners and losers in terms of how work is distributed. Some roles will have more interesting work as AI handles the routine. Others will feel displaced. Managing that transition is a people and culture challenge, not a technology challenge.

What Good Operating Model Design Looks Like

The enterprises I see moving fastest on AI have a few things in common.

They defined the operating model before they selected the technology. They understood how they wanted humans and AI to work together, what governance they needed, and what accountabilities they required — and then they selected technology that fit the operating model, not the reverse.

They piloted and iterated before they scaled. They picked a specific use case, designed the operating model for it, ran it in production, learned from it, and refined it before deploying at scale. The operating model for pilot one is never the operating model for the scaled program.

They invested in change management at the same level as technology implementation. They recognized that getting people to work differently with AI requires the same attention as getting systems to work together.

The Ratio That Actually Works

A well-structured enterprise AI program allocates roughly equal attention to three things: technology selection and implementation, operating model design, and change management.

The technology-heavy programs — 80% technology, 20% everything else — produce impressive demos. The balanced programs produce production value.

If your AI strategy document is mostly about models, infrastructure, and architecture — you have a technology document, not an AI strategy. The operating model is the strategy. Start there.

Views are personal.