Gartner surveyed 782 infrastructure and operations leaders. Only 28% of their AI projects fully delivered on ROI. One in five failed outright. 57% had experienced at least one failure.
That's not a niche finding. That's the majority of organisations running AI programs.
The failures aren't random
Melanie Freeze, Gartner's Director of Research, identified a consistent thread: teams expect AI to fix deep operational problems fast. When results don't come quickly, confidence collapses and projects stall.
"The 20% failure rate is largely driven by AI initiatives that are either overly ambitious or poorly scoped. AI that doesn't fit into the organisation's operations simply can't deliver ROI."
The areas where AI struggles most — auto-remediation, self-healing infrastructure, agent-led workflow automation — are exactly the environments where edge cases are frequent and reliability is non-negotiable. Current tools can't yet perform consistently there. The ambition was ahead of the tooling.
Two more failure drivers, cited by 38% of respondents each: skill gaps, and poor data quality or limited data access. Not the model. Not the vendor. The foundations.
The gap between ambition and execution is a delivery problem, not a technology problem. Most technology conversations don't acknowledge that distinction, which is why the same failures keep recurring.
What the 28% do differently
The organisations that succeed aren't using better AI. They're running it differently.
Three patterns from the data:
- They don't run AI as a side experiment. High-performing teams deploy AI inside systems people already use — which cuts adoption friction and makes results visible faster. The opposite of the pilot-that-never-scales.
- They get executive alignment before they start. Projects with leadership backing survive the rough patches. Without it, the moment results are slow or the business case gets fuzzy, the project gets cut.
- They start narrow. IT service management and cloud operations came up most often as starting points — well-defined environments with measurable ROI and structured processes. Not the most complex, highest-ambiguity problem in the organisation.
The board pressure problem
A lot of AI initiatives are still funded by individual business units, with no CFO or CEO oversight. As AI infrastructure spend rises — Gartner projects it'll account for more than half of global IT spend this year — that's changing. Boards are asking harder questions, and leaders running projects on "trust us, the value is coming" are increasingly exposed.
A Harris Poll found that 98% of CIOs said board pressure to demonstrate ROI was increasing, and 71% believed their AI budget would be cut or frozen if targets weren't met by the end of H1.
That's not a comfortable position to be in if you skipped the delivery discipline at the start.
What this actually means
AI doesn't fail because the technology doesn't work. It fails when it's disconnected from operational reality — when expectations weren't grounded, data wasn't ready, people weren't brought along, and scope was too ambitious to produce early wins.
None of these are new failure modes. ERP rollouts, CRM implementations, digital transformation programs — same pattern, for thirty years. The hype cycle just makes them harder to see coming.
The teams getting ROI are running AI like a delivery program: scoped business cases, realistic timelines, executive accountability, and a concrete plan for how people will actually use it. Not a science experiment.