Operating Models and What Success Looks Like
Why enterprise AI programs succeed or stall, and how to tell which is happening to yours.
Cite as: The Applied Layer. (2026). Operating Models and What Success Looks Like. The Applied Layer. https://appliedlayer-ai.com/briefings/pillar-operating-models

Pillar 3: Operating Models and What Success Looks Like
The question
Why does one organisation ship a working AI assistant in nine months while a peer with a larger budget is still circling its third pilot? The difference is rarely the model. It is the operating model, the team shape, and the cadence at which the programme can absorb signal from production.
Editorial thesis
Operating model dominates technology choice as the determinant of enterprise AI outcomes. Two firms with comparable ambition, similar vendor stacks, and similar talent pools routinely produce divergent results because their operating models differ. Operating models in use cluster into four archetypes on a 2×2 grid: Centralized Platform, Centralized Delivery, Federated Platform, and Federated Delivery / CoE. Six conditions of success account for most of the variance among healthy programs, and each archetype makes some easier and others harder.
Key findings (from the anchor research)
- Operating model dominates technology choice as the determinant of enterprise AI outcomes. JPMC, DBS, Morgan Stanley, Sanofi and Walmart show what coherent operating models look like; McDonald’s, Air Canada, Zillow, and the Klarna customer-service reversal show what happens when coherence is missing.
- Operating models cluster into four archetypes on a 2×2 grid (centralization × platform-vs-delivery orientation), each with predictable strengths and failure modes.
- Six conditions of success account for most of the variance among healthy programs: production reach, evaluation in production, integration to systems of record, governance integrated with delivery, talent retained around an applied-layer practice, and stable executive sponsorship.
- The archetype-by-condition interaction matrix (the report’s synthesis figure) captures interactions regular enough to guide intervention. It is the report’s original framework contribution.
- Time-to-second-use-case is among the strongest programme health metrics. Reorganisations follow architecture more reliably than architecture follows reorganisations.
What is filed under this pillar
- Anchor research: “Operating Models and What Success Looks Like”, the flagship survey of operating-model archetypes and the conditions of success.
- Briefings on team shape, governance cadence, programme health metrics (forthcoming).
[upgrade-prompt target="member"] Become a Member, free in 60 seconds, to read the underlying research and briefings. [/upgrade-prompt]
Member view
The flagship Pillar 3 research, “Operating Models and What Success Looks Like”, is the canonical anchor for this pillar. Members can read the full report, including the four archetypes, the six conditions of success, and the synthesis figure (the operating-model x success-condition interaction matrix).
Briefings filed beneath this pillar walk through specific archetype transitions, governance cadence, and programme health metrics as they are published.
[upgrade-prompt target="patron"] Patron unlocks methodology notes, the full bibliography with annotations, and primary research data. £15 per month. [/upgrade-prompt]
Patron view, methodology and primary data
The methodology note and full bibliography for the flagship operating-models research live in the Patron-tier section of the anchor piece. The annotated bibliography is exportable as BibTeX. The synthesis matrix is downloadable as CSV/JSON for use in internal operating-model design exercises.
Patrons receive new pieces in this pillar 7 days before they go live for Members and 14 days before they go fully public.
Membership
Become a Member to receive new briefings as they are published.
