Fail Fast to Win Big: Rethinking the path to AI Success in the Public Sector

Date posted
7 November 2025
Reading time
5 minutes
Joe Driscoll
Data Scientist · Kainos

Why embracing failure is critical to unlocking the value of AI

A recent report from MIT’s NANDA initiative found that up to 95% of Generative AI projects are failing to deliver measurable impact, pointing to flawed enterprise integration and focusing on the wrong problems. But this isn’t a new problem. Even before the GenAI boom, some estimates put failure rates for AI projects at over 80%. 

If AI is supposed to be so transformative, why is it proving so hard to extract real value from it, especially for government departments and large organisations pushing to adopt it quickly? 

At Kainos, we’ve worked closely with some of the UK’s largest public sector departments and enterprise clients, supporting AI programmes from early scoping through to delivery and scale. We’ve experienced the challenges first hand and helped find what makes these programmes succeed. 

This post shares some of those observations and the strategies we’ve seen work in turning AI from a liability into an asset. 

Rethinking Success: Why aiming for Active Failure is a Competitive Advantage

From what we’ve seen first hand, the issue isn’t a lack of ambition or expertise. It’s that many organisations are approaching AI as a guaranteed investment when it’s still fundamentally an innovation activity. 

Traditional software procurement processes in the public sector are intentionally geared towards in-depth discovery, detailed requirements gathering and long-term commitment. This approach is justified when it’s clear that the software in question will solve the problem, and when its failure rates and the possible impacts of those failures are well understood. It doesn’t transfer well into the world of AI, where performance, risks and impacts often can’t be fully appreciated without domain-specific testing and assurance. 

AI projects often attract significant senior leadership interest, budget and strategic attention. As a result, projects that once had high expectations are allowed to limp along far longer than they should, because: 

  • Significant time is spent on prioritisation and down selection of projects to take forward before any technical work has even been started, resulting in bias towards continuation. 
  • Stopping a project, at any point, is perceived as a negative result due to wasted resources. 


Rather than stubbornly resist the often-reported AI project failure statistics, organisations should embrace them and adapt their strategy to find value despite the high failure rate. If up to 95% of AI projects across a portfolio are going to fail, then failure isn’t a risk; it’s almost a certainty. The real risk is in allowing those projects to carry on when they could have been stopped earlier. The goal, therefore, should be to fail on purpose, early and cheaply. Failing early and cheaply frees up resources to focus on the projects that come out on top. 

What this looks like in practice: 

  • Don’t overdo early non-technical discovery activities, use-case documentation and prioritisation processes. Collect just enough information to roughly plan a short, sharp technical discovery, then move on.  
  • Commit early to a limited timeline for demonstrating viability of the project. A two week “technical discovery” where some real data is used to test different AI components of the solution should provide enough evidence to decide whether or not to continue and how to define the success criteria. A fast “no” is better than a slow “maybe”. 
  • Agree success criteria up front before continuing beyond technical discovery. Importantly this should include criteria for data availability (more on that soon). 
  • Reassess frequently. Set up regular but light touch review points. 
  • Create a culture that embraces failure as a necessary part of the innovation process, so that decision makers can be objective in their assessments. 
  • Double down on the projects that make it. Use the resources freed up by failing early to accelerate the initiatives that show value. 

In short: lose small, win big. 


Data First. Still and Always. 

Even with current state-of-the-art AI models, the availability and quality of data will always be one of the biggest predictors of project success.  

It’s very easy to be swayed by what could be possible once the data is made available, but this can result in wasted effort developing a solution only for the data never to arrive, or to arrive in a drastically different form from what was expected. 

Data availability and quality should be top-tier criteria, not just at kick-off, but throughout the project lifecycle. Treat every promising use case as a data problem first, AI problem second. If it’s not clear what the real data is going to look like, where it’s going to come from, or when it’s going to be available, that’s a sign to pause. 

Bridging the Gap: Deployed AI Teams 

In an ideal world, every team would have an embedded AI professional fluent in both the domain and the data. But with data professionals in short supply, that’s rarely realistic.  

Centralised AI teams can be a solution for this, but have the potential to result in a disconnect between the AI experts and business owners. Under this model, highly skilled AI teams may build technically impressive solutions that simply don’t fit user needs, or worse, solve non-existent problems.  

Instead, we’ve helped organisations adopt a Deployed AI Team model, in which the centralised AI group temporarily embeds smaller sub-teams in business units for use-case-specific solution development. 

When this works well, it unlocks: 

  • Collaboration across silos 
  • Reusability of assets and tools 
  • Shared standards for quality and ethics 

But this model runs into trouble when business users don’t feel ownership of the project. The work gets seen as a “side of desk” activity, limiting user engagement, and projects run on despite not showing value because: 

  • Timelines don’t get agreed at the start of the project, making it easier to just continue because it was never agreed when to stop. 
  • Clear success criteria aren’t agreed early enough in the project, so the stop/go decision becomes subjective and puts personal pressure on the decision makers. 
  • It’s not clear who is accountable for making the stop/go decision, resulting in a sort of “bystander effect”. 

To help make it work, we recommend: 

  • Being clear on who is responsible for making the stop/go decision. Who represents the User and who represents the AI portfolio? They will likely have different views. 
  • Co-defining goals with business stakeholders 
  • Setting expectations around time and input, recognising that user engagement is a significant indirect investment. 
  • Introducing internal charging (even if symbolic) to build accountability for outcomes 

Minimising Friction: Why You Need to Prototype Fast 

You can’t assess an AI solution on paper. Users need to see it in action, with real data, to evaluate whether it meets their needs. Not perfect, not production-ready, but real enough to learn from. 

The easier and faster you can get a prototype into the hands of users, the faster you can either realise its potential and invest further, or fail on purpose, saving time and money. 

We’ve helped organisations set up AI development platforms that give AI teams a head start by providing pre-approved, secure environments for rapid prototyping, with access to the right tools, data, and controls, without the setup overheads. This kind of infrastructure investment pays for itself in velocity. 

An AI project is far more likely to fail because it doesn’t work for users than because it didn’t scale for production. Start small, test early, learn fast. 

Summary: The Key Takeaways for Maximising AI ROI 

The key point is that failure isn’t the enemy; wasted effort is. In the AI space, a high failure rate isn’t just inevitable, it’s desirable. It means you’re exploring, testing and innovating. 

To succeed with AI in your organisation, try the following: 

  • Fail Fast, On Purpose - Set short timelines and clear success criteria to stop unviable AI projects early. 
  • Data First - Prioritise real, available, and high-quality data before investing in AI solutions. 
  • Embed AI with the Business - Place AI teams within business units to ensure alignment and shared ownership. 
  • Prototype Early - Quickly build working prototypes with real data to test value before scaling. 

That’s how we can embrace failure to unlock the value of AI. 

About the author

Joe Driscoll
Data Scientist · Kainos