
Design an AI operating model that scales. Learn the three structural archetypes, CAIO role, portfolio management, and talent structure enterprises need to move beyond pilots.
AI Initiatives Operating Model Guide [2026]
TLDR: Most enterprises have launched AI initiatives. Very few have built the organizational infrastructure to sustain them. The operating model — how AI work is owned, funded, governed, and connected to business outcomes — is the difference between a portfolio of disconnected pilots and a program that compounds value over time. This guide explains the three dominant operating model archetypes, how to choose between them, what the leadership structure needs to look like, and how AI portfolio management actually works in organizations that get it right.
Best For: CEOs, COOs, CIOs, and VP Operations at enterprises with 1,000+ employees who have moved beyond early AI experimentation and are designing or redesigning the organizational structure that will carry their AI program through scale.
Why the operating model determines whether AI scales or stalls
Most enterprises approach AI transformation as a technology problem. They invest in platforms, hire data scientists, commission proofs of concept, and wait for results. When those results don't materialize at scale (BCG's research shows only 25% of companies have successfully scaled AI to deliver significant business value), the reflex is to invest more in technology. The actual problem is almost never the technology. It's the absence of an operating model that connects AI initiatives to business accountability.
An AI initiatives operating model is the set of structures, roles, decision rights, and funding mechanisms that determine how AI work gets prioritized, resourced, executed, and measured across an enterprise. Without it, every business unit runs its own experiments on its own timeline with its own tools. Governance is reactive. Duplication accumulates silently. And the organization never builds the institutional capability that makes each successive AI initiative faster and cheaper than the last. With it, the enterprise learns. Individual projects contribute to shared infrastructure. Governance becomes a velocity enabler rather than a bottleneck. The question shifts from "can we run an AI pilot?" to "how quickly can we take a proven use case to production?"
Deloitte's 2026 State of AI in the Enterprise report found that governance readiness sits at only 30% among companies already deploying AI, compared to 43% for technical infrastructure and 40% for data management. That gap is not a technology gap. It's an operating model gap. Enterprises are building the car before they have decided who drives it or what happens when it takes a wrong turn.
The operating model question is also more urgent than it was two years ago. Gartner predicts that 40% of enterprise applications will include task-specific AI agents by 2026, up from less than 5% in 2025. Gartner also predicts that more than 40% of agentic AI projects will be canceled by 2027, not because the technology doesn't work, but because organizations are deploying it without the structural foundations that would make it accountable. The operating model is what separates the 60% that scale from the 40% that get shut down.
The three dominant operating model archetypes
Enterprises designing or redesigning their AI operating model typically choose between three structural archetypes, each with distinct tradeoffs. McKinsey's research on gen AI operating models shows that more than 50% of businesses have adopted a centrally led organization for gen AI, even in cases where their usual setup for data and analytics is relatively decentralized. That tendency toward centralization in early stages is rational. It's also temporary.
The centralized model places all AI capability, governance, and budget authority in a single enterprise-level function, typically reporting to the CTO, CIO, or CAIO. This function sets standards, owns infrastructure, selects tools, and manages the portfolio of active initiatives. Business units submit use cases and receive resources from the center. The strengths are clear: standards are enforced consistently, duplication is minimal, and the organization builds deep expertise in one place rather than shallow expertise distributed across dozens of teams. The limitations are equally clear: the central function becomes a bottleneck as the portfolio grows, business units experience the process as slow and disconnected from their operational reality, and the talent pool for AI work is artificially constrained to what the central team can absorb.
The hub-and-spoke model distributes execution while maintaining central standards. A central AI function (often organized as an AI Center of Excellence) owns governance, infrastructure, tooling standards, and shared platforms. Business units own use case prioritization and day-to-day execution, supported by AI champions or embedded AI talent who sit within the business unit but connect to the center. This is the most commonly adopted model for enterprises that have moved past the earliest stages of AI deployment, and the research on its effectiveness is consistent. IBM research found that CAIOs operating in centralized or hub-and-spoke models achieve 36% higher ROI than those in decentralized structures, a gap that reflects the compounding effect of shared infrastructure, consistent governance, and institutional learning.
The federated model gives individual business units significant autonomy over their AI programs, with the central function playing a coordination and standards-setting role rather than an authority role. This model works in highly diversified enterprises where business units operate in genuinely different markets, regulatory environments, and technology contexts. Forcing a single tool or workflow standard across all units would impose more cost than it saves. The risk is duplication and governance erosion as business units optimize for their own timelines rather than enterprise-wide accountability. McKinsey's research suggests that federated design fits organizations at higher levels of AI maturity, when business units have developed the internal capability to govern AI work responsibly without central oversight at every decision point.
How to choose the right model for your organization
The operating model decision is not primarily a preference question. It's a function of where the organization currently sits on three dimensions: AI maturity, organizational complexity, and regulatory context.
Organizations in the earliest stages of AI deployment, running fewer than ten active initiatives with most talent concentrated in a central team, almost always benefit from a centralized model or a light hub-and-spoke variant. The priority at this stage is not speed of business unit execution. It's building the shared infrastructure, governance discipline, and institutional knowledge that will make later distribution possible. Organizations that skip this stage and move immediately to federated models consistently report higher duplication, more compliance incidents, and longer time-to-production for new initiatives.
As AI maturity grows and the portfolio expands beyond what a central team can manage without becoming a bottleneck, the hub-and-spoke model becomes the right structure. The transition point is typically when the portfolio exceeds 15–20 active initiatives across three or more business units, or when business units are generating use case demand faster than the central function can evaluate and resource. The hub-and-spoke model does not reduce central governance; it distributes execution while keeping standards, infrastructure, and portfolio-level visibility centralized. As we cover in our AI transformation roadmap, the operating model evolution from centralized to hub-and-spoke is a structural milestone in its own right, not an organic outcome of portfolio growth.
The regulatory context also shapes the operating model decision independently of maturity. In financial services, healthcare, insurance, and other regulated industries, McKinsey's analysis consistently finds that centralized AI governance delivers better compliance outcomes and faster deployment timelines for regulated use cases. When AI models are making or informing decisions that carry regulatory exposure, the traceability and auditability requirements favor central oversight over distributed execution.
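For readers who want the decision logic in one place, here is a minimal sketch that encodes the rough heuristics described in this section. The function name, inputs, and thresholds (initiative count, business unit spread, regulatory exposure, and whether an AI Champion network exists) are illustrative assumptions drawn from the guidance above, not a formal scoring model.

```python
# Illustrative sketch only: encodes the rough heuristics described in this section.
# Function name, inputs, and thresholds are assumptions for clarity, not a formal model.

def recommend_operating_model(active_initiatives: int,
                              business_units_with_demand: int,
                              has_regulated_use_cases: bool,
                              champion_network_in_place: bool) -> str:
    """Suggest an operating model archetype from the rough thresholds in this guide."""
    # Early-stage portfolios: build central capability and governance discipline first.
    if active_initiatives < 10:
        return "centralized (or a light hub-and-spoke variant)"

    # Regulated decision-making keeps governance central even when execution is
    # distributed, so hub-and-spoke is preferred over a fully federated design.
    if has_regulated_use_cases:
        return "hub-and-spoke with centralized governance of regulated use cases"

    # Typical transition point: 15-20+ active initiatives across 3+ business units,
    # but only once embedded AI Champions can absorb distributed execution.
    if active_initiatives >= 15 and business_units_with_demand >= 3:
        if champion_network_in_place:
            return "hub-and-spoke"
        return "centralized (stand up the AI Champion network before distributing)"

    return "centralized or light hub-and-spoke"


# Example: 18 initiatives across 4 business units, no regulated use cases,
# champion network already in place -> "hub-and-spoke"
print(recommend_operating_model(18, 4, False, True))
```

The specific cutoffs matter less than the structure of the decision: the choice is a function of observable portfolio facts, not of executive preference.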
The leadership structure that makes AI programs accountable
The structural question (centralized, hub-and-spoke, or federated) is inseparable from the leadership question. Who owns AI across the enterprise, and where do they sit in the organization? These decisions determine whether AI initiatives have the authority to compete for resources, the visibility to attract executive attention, and the accountability to be held to business outcomes rather than technical outputs.
The emergence of the Chief AI Officer role reflects a growing recognition that AI strategy requires dedicated executive ownership that neither the CTO nor the CIO typically provides. IBM research shows that only 26% of organizations currently have a CAIO, up from 11% in 2023, and 66% of current CAIOs expect most organizations to have one within two years. The organizations that have appointed CAIOs reporting directly to the CEO or COO see measurable differences: 10% greater ROI on AI spend, and 24% greater likelihood of outperforming peers on innovation. These are not trivial margins, and they're attributable not to the individual in the role but to the structural effect of having a dedicated executive owner who controls budget, sets priorities, and is held accountable for business outcomes across the enterprise AI portfolio.
Below the executive level, the operating model requires two layers of leadership that most enterprises underinvest in. The first is the AI Program Lead: a senior operational role that owns day-to-day portfolio management, vendor relationships, and the governance processes that the CAIO designs. This is not a technology role. It's a program management role with deep AI literacy, and it's the function that most determines whether governance works in practice rather than on paper. The second layer is the AI Champion network embedded in each business unit. AI Champions are typically senior individual contributors or team leads with enough domain credibility to translate between business needs and AI capability, and enough AI literacy to evaluate whether a proposed solution is technically credible. They are not members of the central AI team, but they are connected to it through training, shared tooling access, and regular portfolio review sessions. This network is how the hub-and-spoke model actually works operationally, and its absence is the most common reason hub-and-spoke implementations collapse back into de facto centralization.
How AI portfolio management works in practice
Most enterprises manage their AI initiatives the way they manage IT projects: as individual requests to be evaluated, funded, and tracked in isolation. This approach fails at scale because AI initiatives are not independent. They share data infrastructure, model infrastructure, governance overhead, and talent. Managing them in isolation means invisible cumulative infrastructure costs, duplicated foundational work, and no visibility into where organizational capability is actually building versus where it is quietly stalling.
Effective AI portfolio management treats the initiative portfolio as a single system with four distinct tiers that require different management approaches. The first tier is foundational infrastructure: the data platforms, model infrastructure, security frameworks, and governance tooling that every initiative depends on. These investments are shared costs, and they should be funded and managed separately from individual initiatives rather than baked into individual project budgets where they become invisible. The second tier is production deployments — initiatives that have completed pilot validation, achieved production readiness, and are operating against defined business KPIs. These are managed like operational programs, with SLAs, performance monitoring, and regular review against the ROI case that justified the deployment. The third tier is active pilots, operating against pre-defined success criteria and timelines. The fourth tier is the use case pipeline: candidate initiatives that have been identified but not yet resourced, maintained as a prioritized backlog that the portfolio review cycle can draw from as capacity opens up.
Gartner's guidance recommends that CFOs build a balanced portfolio that includes productivity use cases, targeted process improvements, and selective transformational bets, explicitly because AI does not produce one uniform type of value and should not be evaluated through a single ROI framework. Productivity use cases are low-risk, fast-to-validate, and generate the organizational confidence that justifies investment in more complex transformation bets. Transformational bets are high-risk, long-horizon, and require executive protection from the quarterly pressure to show returns. A portfolio that contains only quick wins will not build the organizational capability that creates durable advantage. A portfolio that contains only transformational bets will not survive the first funding review cycle after results fail to materialize on schedule. The balance between these tiers, and the discipline to maintain it as individual stakeholders lobby for their priorities, is what separates portfolio management from project tracking.
The portfolio review cadence is the mechanism that keeps the model honest. As we have covered in our analysis of why AI projects fail to deliver ROI, the absence of a formal review process is one of the most reliable predictors of stalled programs, because without a regular moment where the portfolio is assessed against business outcomes, underperforming initiatives accumulate rather than being shut down or restructured. A quarterly portfolio review that covers production performance, pilot progress against success criteria, and pipeline prioritization is the minimum viable governance rhythm for an enterprise running more than ten active AI initiatives.
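To make the tier structure and review rhythm concrete, the sketch below models the portfolio as a single system. The class names, fields, and enum values are hypothetical illustrations of the four tiers and the quarterly review agenda described above, not a prescribed schema or tool.

```python
# Minimal sketch of the four-tier portfolio and quarterly review agenda described above.
# Class names, fields, and enum values are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from enum import Enum


class Tier(Enum):
    FOUNDATIONAL_INFRASTRUCTURE = "shared platforms, funded separately from individual projects"
    PRODUCTION_DEPLOYMENT = "live, measured against business KPIs"
    ACTIVE_PILOT = "running against pre-defined success criteria and timelines"
    PIPELINE = "identified but not yet resourced"


@dataclass
class Initiative:
    name: str
    tier: Tier
    owning_unit: str
    business_kpis: list = field(default_factory=list)      # relevant for production deployments
    success_criteria: list = field(default_factory=list)   # relevant for active pilots


@dataclass
class Portfolio:
    initiatives: list = field(default_factory=list)

    def by_tier(self, tier):
        return [i for i in self.initiatives if i.tier == tier]

    def quarterly_review_agenda(self):
        """The minimum governance rhythm: production performance, pilot progress,
        and pipeline prioritization, each drawn from its own tier."""
        return {
            "production_performance": self.by_tier(Tier.PRODUCTION_DEPLOYMENT),
            "pilot_progress": self.by_tier(Tier.ACTIVE_PILOT),
            "pipeline_prioritization": self.by_tier(Tier.PIPELINE),
        }


# Example: one hypothetical initiative per tier, feeding a quarterly review
portfolio = Portfolio(initiatives=[
    Initiative("data platform", Tier.FOUNDATIONAL_INFRASTRUCTURE, "enterprise IT"),
    Initiative("invoice triage", Tier.PRODUCTION_DEPLOYMENT, "finance", business_kpis=["cycle time"]),
    Initiative("contract review pilot", Tier.ACTIVE_PILOT, "legal", success_criteria=["accuracy target"]),
    Initiative("demand forecasting", Tier.PIPELINE, "supply chain"),
])
agenda = portfolio.quarterly_review_agenda()
```

Keeping foundational infrastructure as its own tier is what makes shared costs visible rather than letting them disappear into individual project budgets.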
The talent question that most enterprises answer too late
Every AI operating model lives or dies on talent. Not just the technical talent to build and run models, but the operational talent to manage programs, the business talent to identify and validate use cases, and the change management talent to drive adoption when new workflows replace familiar ones. Most enterprises underinvest in the last three categories while competing fiercely for the first.
Gartner's 2026 research found that acquiring and developing AI and digital talent is CFOs' top near-term challenge, a finding that reflects how talent has become the binding constraint on AI program velocity. The talent shortage is real, but it's often misdiagnosed. The scarcest resource in most enterprise AI programs is not machine learning engineers or data scientists. It's people who can operate at the intersection of business domain expertise and AI literacy: people who understand what a procurement workflow looks like in practice and can evaluate whether an AI solution will hold up under real conditions. This is the AI Champion profile, and it's almost impossible to hire from outside. It must be developed from within the business, which is why the reskilling investment needs to precede the operating model redesign rather than follow it.
Deloitte's 2026 findings show that 53% of organizations are prioritizing education of the broader workforce to raise overall AI fluency, and 48% are designing formal upskilling and reskilling strategies. These are not the same thing. Broad AI fluency (helping every employee understand what AI can and cannot do, how to work alongside it, and when to push back on outputs that don't make sense) is a change management investment. Targeted reskilling of the AI Champion layer is a capability investment. Both are necessary, and the sequencing matters. Broad fluency without targeted capability development produces awareness without operational change. Targeted capability development without broad fluency produces AI programs that experts build and organizations resist adopting.
The talent model also needs to account for the long-term trajectory of roles. New positions (AI collaboration designers, AI operations leads, model performance analysts) are emerging faster than traditional HR functions can evaluate and price them. The operating model needs to include explicit talent planning for these roles, not as an appendix to the technology roadmap, but as a parallel workstream with its own milestones and resourcing commitments. Organizations that treat talent planning as something that happens after the operating model is designed consistently encounter the same problem: the model is built, the governance is in place, and there is nobody to run it.
Moving from model to maturity
Designing the operating model is not the end of the work. It's the beginning of a maturation process that unfolds over two to four years and requires active management at every stage. Most enterprises go through three recognizable phases, and the transitions between phases are where programs most commonly stall or regress.
The first phase is establishment: getting the basic structure in place. This means standing up the steering committee or CAIO function, defining the operating model archetype, establishing the governance policy framework, and building the portfolio management infrastructure. At this phase, the operating model is largely aspirational. The governance processes are new and untested. The AI Champion network is nascent or nonexistent. The portfolio contains a small number of pilots with varying levels of rigor. The work at this phase is less about deploying AI and more about building the organizational infrastructure that will make AI deployment consistently successful. Most enterprises reach the establishment phase within the first six to twelve months of a deliberate operating model design effort.
The second phase is integration: embedding the operating model into how the enterprise actually makes decisions. Governance processes become part of standard project approval workflows rather than parallel processes. Portfolio reviews are on the executive calendar as standing agenda items. Business units are generating use case demand through the portfolio intake process rather than approaching the central AI team ad hoc. The AI Champion network is active, trained, and connected to the center. This is the phase where the operating model either takes root in the organization's operating rhythm or gets bypassed by business units that find it easier to work around it than through it. Our guide on implementing AI without replacing legacy systems covers the technical dimension of this integration challenge; the organizational dimension requires the same discipline applied to process and governance.
The third phase is optimization: using portfolio data and governance experience to continuously improve the operating model itself. At this phase, the portfolio review cycle generates data on time-to-production, ROI realization rates, and governance incident patterns that inform operating model decisions: which use case categories should be fast-tracked, where the AI Champion network needs reinforcement, which business units are building genuine capability and which are still depending on central team resources for work they should own. This is the phase most enterprises have not reached yet. Deloitte's research found that only 11% of organizations have deployed agentic AI systems in production, despite 38% piloting them, which suggests the majority are still working through establishment and integration rather than optimizing a mature model.
The operating model is not a document or an org chart. It's a set of behaviors: how decisions actually get made, how resources actually flow, how accountability actually works. These behaviors develop through iteration and reinforcement over time, not from publishing a governance policy. The enterprises that get this right are not the ones that design the most sophisticated model on paper. They're the ones that start with a clear, simple structure, run it consistently, and adjust based on what they learn. The data shows that two to three years of that discipline is what separates the organizations that scale AI from the ones that keep announcing new pilots.
Frequently Asked Questions
What is an AI initiatives operating model?
An AI initiatives operating model is the set of structures, roles, decision rights, and funding mechanisms that determine how AI work gets prioritized, resourced, executed, and measured across an enterprise. It answers who owns AI strategy, how initiatives get approved and funded, how governance works in practice, and how individual projects connect to enterprise-wide accountability for business outcomes.
What are the three main AI operating model archetypes?
The three archetypes are centralized, hub-and-spoke, and federated. Centralized models place all AI authority in a single enterprise function. Hub-and-spoke models distribute execution to business units while keeping standards, infrastructure, and portfolio oversight central. Federated models give business units significant autonomy, with central functions playing coordination and standards-setting roles. McKinsey research shows more than 50% of businesses use a centralized model in their early AI stages.
Which operating model delivers the best ROI?
Hub-and-spoke models consistently outperform fully centralized or decentralized alternatives. IBM research found that organizations using centralized or hub-and-spoke models achieve 36% higher ROI on AI investment than those with decentralized structures. The advantage comes from shared infrastructure, consistent governance, and institutional learning that compounds across initiatives rather than restarting with every project.
What does a Chief AI Officer do, and do we need one?
The CAIO owns enterprise AI strategy, portfolio management, and cross-functional AI governance — roles that neither the CTO nor CIO typically performs with sufficient focus. Only 26% of organizations currently have a CAIO, but IBM data shows companies with a CAIO see 10% greater ROI on AI spend and are 24% more likely to outperform peers on innovation. The role is most valuable when it reports directly to the CEO or COO with authority over budget allocation and portfolio prioritization.
How should enterprises fund AI initiatives?
AI programs require two distinct funding streams — one for foundational infrastructure shared across all initiatives, and one for individual pilots and production deployments. Bundling infrastructure costs into individual project budgets makes shared investments invisible and consistently leads to underinvestment in the foundations that make individual initiatives faster and cheaper. Gartner recommends treating the AI portfolio as a balanced mix of productivity use cases, targeted process improvements, and transformational bets — each evaluated against different ROI horizons.
What is an AI Center of Excellence, and when does it make sense?
An AI Center of Excellence is the central function in a hub-and-spoke operating model, responsible for governance standards, shared infrastructure, tooling selection, and portfolio oversight. It makes sense for organizations that have moved beyond early experimentation and need to scale AI across multiple business units without losing consistency or duplicating foundational work. The CoE is not a delivery function — it is an enabling and governing function that business units draw on for standards and shared capability.
What is an AI Champion, and why does the role matter?
An AI Champion is a senior domain expert embedded in a business unit who operates as the connection point between business operations and the central AI function. They translate business needs into AI use case requirements, evaluate whether proposed solutions are operationally credible, and drive adoption of new AI-enabled workflows within their domain. The AI Champion network is the mechanism that makes hub-and-spoke models work in practice, and its absence is the most common reason those models fail to distribute execution effectively.
How should enterprises manage a portfolio of AI initiatives?
Effective AI portfolio management treats initiatives across four tiers: foundational infrastructure, production deployments, active pilots, and the use case pipeline. Each tier requires different management cadence and accountability structures. A quarterly portfolio review covering production performance, pilot progress, and pipeline prioritization is the minimum governance rhythm for an enterprise running more than ten active AI initiatives. Without this cadence, underperforming initiatives accumulate rather than being restructured or shut down.
Why do so many AI initiatives fail to scale?
Gartner predicts more than 40% of agentic AI projects will be canceled by 2027 due to escalating costs, unclear business value, or inadequate risk controls — not because the technology does not work. The operating model failures behind these cancellations are consistent: no portfolio-level visibility, unclear accountability for outcomes, governance that activates after problems emerge rather than preventing them, and talent structures that cannot sustain operational AI at scale.
When should an enterprise evolve its operating model from centralized to hub-and-spoke?
The transition is typically warranted when the portfolio exceeds 15–20 active initiatives across three or more business units, or when business unit use case demand is outpacing the central function's capacity to evaluate and resource it. The transition requires standing up the AI Champion network and embedding AI talent in business units before the central team reduces its hands-on role — not after. Operating model evolution that leaves business units without capability support before the center scales back is the most reliable path to adoption failure.
How does the AI operating model connect to production readiness?
Production readiness is a governance gate, not a technical milestone — and the operating model determines whether that gate works. A production readiness review requires named reviewers with defined criteria, a clear path for issues that fail the review, and executive authority to delay deployment when criteria are not met. Organizations without these structural elements consistently advance initiatives to production before they are ready, then encounter the visible failures that erode executive confidence in the broader AI program. The AI production readiness checklist covers the criteria; the operating model determines who applies them.
What talent roles does an AI operating model require beyond technical AI skills?
The most underinvested roles are operational rather than technical: AI Program Lead (portfolio management, governance process ownership, vendor coordination), AI Champions (domain-embedded use case translators and adoption drivers), and AI Operations Analysts (model performance monitoring, exception tracking, continuous improvement). These roles require AI literacy but not technical AI expertise. They are the connective tissue between strategy and execution, and their absence is why well-designed operating models fail to produce the outcomes that the governance documents promise.
How long does it take to build a mature AI operating model?
Most enterprises move through three phases over two to four years: establishment (6–12 months, building basic structure and governance), integration (12–24 months, embedding the operating model into decision-making rhythms), and optimization (24+ months, using portfolio data to continuously improve the model itself). The timeline compresses for organizations that enter with strong governance experience and expands for those navigating significant change management complexity. The most reliable predictor of timeline is not technology readiness — it is the quality and consistency of executive commitment to operating model discipline through the integration phase.
How does the operating model handle AI governance in regulated industries?
Regulated industries require more centralized control of AI governance, even when the operating model distributes execution. In financial services, healthcare, and insurance, AI models making or informing regulated decisions — credit scoring, claims processing, hiring recommendations — require traceability, auditability, and explainability that distributed governance cannot reliably provide. McKinsey's analysis of regulated industry AI deployments consistently finds that centralized governance of regulated use cases delivers better compliance outcomes and faster deployment timelines than distributed alternatives, even in organizations that use hub-and-spoke models for non-regulated domains.
What does AI operating model maturity look like in practice?
A mature AI operating model is visible in how the organization actually behaves, not in its governance documents. Portfolio reviews happen on schedule with executive attendance. Business units generate use case demand through the intake process rather than working around it. The AI Champion network is active and trained. Production deployments are monitored against defined KPIs and reviewed quarterly. New initiatives move through pilot-to-production in a consistent, predictable timeline. The central AI function is a strategic asset rather than a bottleneck. These behaviors do not emerge from policy — they develop through consistent enforcement and iterative improvement over two to three years of deliberate operating model management.