The UK public sector is sitting on a productivity time bomb, and the clock is ticking. A recent, brutally frank government report, developed with Bain & Company, revealed that clinging to outdated legacy IT systems is costing the taxpayer an astonishing £45 billion in lost productivity every single year. This isn’t the cost of a few failed projects; it’s the ongoing, systemic price of technological decay. For leaders in both the public and private sectors, this isn’t just an IT issue—it’s a fundamental threat to operational resilience, service delivery, and economic competitiveness.
This guide is for the leader who sees the storm clouds gathering. It’s for the Permanent Secretary, the CIO, the founder who knows that another high-stakes technology programme cannot be allowed to join the graveyard of good intentions. We will move beyond the usual platitudes and get to the heart of why these initiatives fail, and what you, as a leader, can actually do about it.
What you’ll learn:
- The Anatomy of Failure: Why the real cost of a doomed project isn’t the money you waste, but the opportunity you lose—and the early warning signs you can spot in the first eight weeks.
- The Antidote That Pays for Itself: A 7-step playbook for a “Discovery” phase that de-risks your investment and delivers a 400% return before a single line of code is written.
- What to Do on Monday Morning: Three direct questions to ask your team that will instantly reveal the health of your most critical technology projects.
The Sobering Reality: Why Your Project is Statistically Doomed
The first step in solving a problem is to appreciate its scale. When it comes to major software and technology programmes, the numbers are not just bad; they are consistently, terrifyingly bad. This isn’t about blaming individuals; it’s about acknowledging a systemic weakness that costs the UK billions.
The true cost of failure is not what you spend, but what you lose
While headlines love to focus on spectacular budget overruns, the real financial damage lies in the silent, grinding inefficiency of the systems that failed projects were meant to replace. The government’s 2025 State of Digital Government Review puts this in stark terms: the £45 billion annual productivity gap is the value the UK forfeits every year because public services are not fully digitised and modernised.
This isn’t a marginal problem. Over a quarter (28%) of all IT systems in central government are officially classified as “legacy”—a figure that has worryingly increased from 26% the previous year. In some departments, the proportion of outdated systems is as high as 70%. These systems are caught in a financial death spiral:
- Crippling Maintenance Costs: They cost three to four times more to maintain than modern alternatives. The UK government spends a staggering £2.3 billion a year—nearly half its IT budget—just keeping these creaking platforms alive.
- The Funding Trap: This exorbitant maintenance spend starves the very transformation programmes needed to escape the cycle. The government’s own review found that 28% of the most critical, “red-rated” legacy systems have no funding allocated for their remediation.
On top of this slow-motion disaster, we have the acute pain of direct project failures. A National Audit Office (NAO) report from early 2025 highlighted five critical digital programmes where costs ballooned by £3 billion—a 26% increase over their initial budgets. We’ve seen this story before with Universal Credit (a £912 million, 45% cost increase) and the infamous abandoned NHS patient record system, which has so far cost taxpayers nearly £10 billion, a colossal overrun on its original £6.4 billion estimate.
Reality Check: The True Cost of a Single Failure
The abandoned NHS National Programme for IT has cost the UK taxpayer nearly £10 billion so far, against an original budget of £6.4 billion. The final bill continues to rise due to ongoing contractual disputes and the cost of replacing the failed systems. It stands as one of the most expensive public sector IT fiascos in history.
Source: Public Accounts Committee, The Guardian
This isn’t just a public sector malaise. The globally respected Standish Group’s CHAOS reports have chronicled this for decades. Their 2020 data shows only 31% of software projects succeed (on time, on budget, with required features). A full 19% fail outright, and 50% are “challenged”—late, over-budget, or stripped of features. For large, complex projects, the kind that define careers and organisations, the success rate plummets to less than 10%.
The conclusion is inescapable. The most significant financial risk you face as a leader is not the headline cost of one failed project. It is the silent, compounding, multi-billion-pound opportunity cost of the organisational paralysis that prevents you from modernising.
Early warning signs: How to spot a failing project in the first 8 weeks
Catastrophic failures are rarely a surprise. They are the predictable result of foundational flaws that are present—and visible—from day one. A vigilant leader, asking the right questions, can spot these red flags long before the budget starts to burn. Here’s what to look for in the first two months.
The ‘Project Peril’ Diagnostic:
- The Solution Masquerading as a Problem: The project brief begins with a pre-ordained solution: “We need to build an AI chatbot,” or “We must implement the Acme CRM platform.” This is a fatal error. It bypasses the single most important question: what is the user’s problem we are trying to solve? As the Government Digital Service (GDS) manual wisely states, you must first interrogate the solution and reframe it as a problem.
- The “Too Busy” User: You’re told that the people who will actually use the system—the nurse, the call centre agent, the citizen—are “too busy with their day job” to attend workshops. This is the single biggest predictor of failure. For over 25 years, the Standish Group has identified user involvement as the number one factor for project success. Its absence guarantees you will build the wrong thing.
- Vague Definitions of Success: You ask, “What does success look like?” and receive a flurry of ambiguous, unquantifiable, and often contradictory answers. One stakeholder wants cost savings, another wants new features, and a third wants user satisfaction, but none of it is measured. This points to a critical misalignment, a key failure driver identified by both Gartner and Forbes.
- The ‘Big Bang’ Delusion: The project plan is a multi-year Gantt chart culminating in a single, high-stakes “go-live” event. This monolithic, “big bang” approach has been repeatedly condemned by the NAO as exceptionally risky because it prevents learning and course correction along the way.
- The Ghostly Sponsor: The executive sponsor gives a rousing speech at the kick-off meeting and is never seen again. They are not actively removing obstacles, defending the team from political interference, or making tough priority calls. A lack of executive support is the second-most cited reason for project failure.
- The Resource Mirage: The project is immediately starved of the people and budget it was promised. Key team members are pulled away to fight other fires, signalling that the organisation’s real priorities lie elsewhere. A Gartner survey found that a “change in the organization’s priorities” was the top reason for IT project failure, cited by 39% of respondents.
- A Culture of Unspoken Truths: Early conversations about risks, dependencies, or the sheer difficulty of the task are met with awkward silence or are deferred. A culture of what one report calls “denial adopted in preference to hard truths” quickly takes root.
The common thread here is that these early warning signs are not technical; they are behavioural. They are about a lack of clarity, commitment, and courage to face reality before the real spending begins. This empowers any leader, regardless of their technical background, to be an effective interrogator of their own technology programmes.
The systemic sickness: Why good people and good money aren’t enough
If project failure were a rare event, we could blame it on isolated mistakes. But its chronic nature, particularly in the UK public sector, points to a deeper, systemic illness. Throwing more money or more well-intentioned people at the problem won’t cure it, because the system itself is often the patient.
The 2025 State of Digital Government Review identified five root causes that should be deeply unsettling for any leader:
- Leadership & Incentives: The system doesn’t reward the right things. There is “little reward for prioritising an agenda of service digitisation, reliability, or risk mitigation.” Leaders are simply not “paid, promoted, or valued for doing so.”
- Structure & Fragmentation: Government departments often act as isolated islands, with “limited mechanisms to contract services from each other.” This prevents standardisation, stifles the reuse of good solutions, and forces everyone to reinvent the wheel at enormous expense.
- Measurement: You can’t manage what you don’t measure. The public sector lacks “consistent metrics of digital performance.” Without baseline data on service quality, user experience, or cost, investment decisions are based on guesswork, not evidence.
- The Skills Chasm: This is the most persistent complaint from the NAO and others. Uncompetitive pay and poor career paths have led to a critical “lack of digital and procurement capability” inside government. This gap is plugged with expensive contractors—who can cost two to three times more than civil servants—which “degrades institutional knowledge” and creates dependency.
- Short-Term Funding: The Treasury’s funding models “prioritise new programmes at the expense of continuous improvement”. This thinking treats technology like a one-off building project, not a living service that needs constant care and evolution. It is the direct cause of the legacy IT crisis.
These flaws converge in the government’s broken approach to procurement. The NAO finds that departments are over-reliant on a few large suppliers, lack the in-house skills to manage complex digital contracts, and frequently exclude their own digital experts from the commercial process. The result? Contracts are awarded without any real assessment of whether the proposed solution is even feasible.
This paints a stark picture. The UK’s public sector is attempting to solve 21st-century digital challenges using a 20th-century industrial-era operating model. Its structures for funding, hiring, and buying are fundamentally misaligned with—and often actively hostile to—the agile, user-focused, and iterative methods that are proven to deliver successful technology.
The Antidote: Discovery as a Strategic Imperative
If the problem is systemic, the solution must be equally fundamental. It requires a foundational shift in how we approach technology investment. The single most powerful tool for making this shift is a short, sharp, evidence-gathering phase at the very beginning of any initiative: a Discovery.
The foundational shift: From ‘building a thing’ to ‘solving a problem’
The most profound change a leader can champion is to ban the question, “What should we build?” and replace it with, “What problem must we solve for whom?” A Discovery is the formal, disciplined process for answering that second question.
Pioneered by the Government Digital Service (GDS), the Discovery phase is the bedrock of a de-risking process that moves from Discovery (understanding the problem) to Alpha (testing potential solutions) and Beta (building an initial service with real users) before going Live.
The purpose of a Discovery is not to write a project plan. It is to:
- Deeply understand users: Their context, their goals, and their pain points, through direct research.
- Challenge risky assumptions: To surface and test the hidden beliefs that could sink the project later.
- Map the constraints: To understand the policy, operational, and technical hurdles before you commit to a specific path.
- Quantify the problem: To understand how much the problem is currently costing, creating a baseline against which you can measure the value of any solution.
- Generate evidence for a decision: The primary output is a body of evidence that allows leaders to make a conscious, informed choice: commit to the next phase, pivot to a different approach, or—crucially—stop the project and save millions.
A Discovery is not a project planning stage; it is a strategic decision-making framework. Its most valuable output is not a plan, but clarity and confidence for the high-stakes investment decision that follows. It reframes the work from a delivery exercise to an intelligence-gathering operation, where the most valuable outcome can be the decision not to proceed.
The ‘Discovery that pays for itself’ playbook
A well-run Discovery is not a long, academic study. It is a rapid, intense, and highly structured process, typically lasting 4-8 weeks, that generates value far exceeding its cost. It requires a small, dedicated, multi-disciplinary team. Here is a playbook for getting it right.
1. Goal: Frame the Problem, Not the Solution.
- Action: Assemble key stakeholders. Aggressively interrogate the initial request (“We need a new website”). Ask “Why?” until you arrive at the root user need or business outcome. Write it down as a clear problem statement. Crucially, agree what is not in scope.
- Result: A single, clear mission that focuses the entire team. You stop the project from building a beautiful solution to the wrong problem.
2. Goal: Understand the User’s World.
- Action: Get the team out of the building (literally or virtually). Conduct one-to-one interviews and observation sessions with at least a dozen real users. Collaboratively map their end-to-end journey, highlighting every frustration and workaround.
- Result: Genuine empathy and a journey map that reveals the true opportunities for improvement, rather than just paving over a broken process with a digital veneer.
3. Goal: Map the Technical and Operational Territory.
- Action: Conduct a rapid technical assessment of existing systems, data sources, and APIs. Talk to the people who run the current service. Map the immovable policy, legal, and operational constraints.
- Result: A clear-eyed view of the landscape. You identify the biggest integration nightmares and bureaucratic blockers when they are just lines on a diagram, not failing code in a testing environment.
4. Goal: Expose and Test Risky Assumptions.
- Action: Get the team to list every single thing they are assuming to be true for the project to succeed (e.g., “Users will be happy to download an app,” “We can get clean data from Department X”). Prioritise the most dangerous assumptions and design cheap, fast ways to test them.
- Result: Evidence replaces belief. The project’s biggest risks are confronted and neutralised when it costs pounds to do so, not millions.
5. Goal: Make Ideas Tangible.
- Action: Ban 100-page documents. Instead, build low-fidelity prototypes—clickable mockups, even paper sketches—to bring potential solutions to life. Test these prototypes with users to get concrete, visceral feedback.
- Result: Stakeholders and users can react to something real, providing far richer feedback than abstract reports. Ideas are validated or invalidated in days, not months.
6. Goal: Define ‘Done’ and Get a Recommendation.
- Action: Based on all the evidence, define clear, measurable success criteria for what a potential solution would need to achieve. Generate a prioritised list of hypotheses to test in a follow-on Alpha phase.
- Result: The team produces a clear, evidence-based recommendation: proceed with a specific, de-risked approach; pivot to a more valuable opportunity; or stop now.
7. Goal: Hold the ‘Go/No-Go’ Showcase.
- Action: Hold a formal showcase for the executive sponsor and key stakeholders. Walk them through the evidence: the user journey, the prototypes, the risks, the recommendation. Ask for a clear, unambiguous decision and the resources for the next phase.
- Result: The project proceeds (or stops) with full leadership alignment and a shared, realistic understanding of the problem, the risks, and the potential value.
This is where a strategic partner like Devsultants can be invaluable. We provide the specialist user research, technical architecture, and product strategy skills that are often missing in-house. We bring an independent perspective to challenge assumptions and ensure rigour, and we can provide the DV/SC-cleared teams ready to work in sensitive government environments from day one.
The financial case: A pragmatic ROI model for Discovery
Investing in a Discovery is not an optional extra or a “nice to have.” It is the single highest-leverage financial decision a leader can make on a technology programme. The business case is not built on speculative benefits, but on the direct, quantifiable avoidance of predictable waste.
The logic is simple and powerful. Industry studies consistently show that avoidable rework consumes 40-50% of the total cost of a typical software project. This is the cost of fixing requirement errors, redesigning features that users reject, and rebuilding components that don’t integrate. It is pure waste.
Simultaneously, research by IBM and others has proven the exponential cost of fixing errors late in the process. A mistake that costs £1 to fix during design can cost £6 to fix during development, £15 during testing, and up to £100 after the product has been released.
A Discovery phase, which typically costs 5-10% of the total project budget, is designed specifically to prevent these early-stage errors. By ensuring user needs are understood, requirements are clear, and technical risks are identified upfront, a well-run Discovery can eliminate the vast majority of this rework—some sources suggest an 80% reduction in rework effort is achievable.
This allows us to build a simple, powerful ROI model:
The Discovery ROI Calculator
| Total Project Budget | Expected Rework Cost (at 50%) | Discovery Cost (at 8%) | Rework Avoided (at 80%) | Net Saving | ROI on Discovery |
|---|---|---|---|---|---|
| £1,000,000 | £500,000 | £80,000 | £400,000 | £320,000 | 400% |
| £5,000,000 | £2,500,000 | £400,000 | £2,000,000 | £1,600,000 | 400% |
| £20,000,000 | £10,000,000 | £1,600,000 | £8,000,000 | £6,400,000 | 400% |
The maths is compelling. For a £5 million programme, you invest £400,000 in a Discovery. This investment prevents £2 million of predictable rework. The net saving is £1.6 million. This isn’t a speculative benefit; it’s the direct financial result of not having to pay teams of people to fix mistakes that should never have been made. It is an insurance policy with a guaranteed positive return.
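The arithmetic behind the table can be sketched as a small model. This is a minimal illustration using the guide’s own assumed rates (50% expected rework, 8% Discovery cost, 80% rework avoided)—these are illustrative figures, not universal constants, so substitute your own portfolio’s data before relying on the output.

```python
def discovery_roi(project_budget: float,
                  rework_rate: float = 0.50,
                  discovery_rate: float = 0.08,
                  avoidance_rate: float = 0.80) -> dict:
    """Return the cost/saving breakdown for a Discovery investment.

    Default rates are the illustrative assumptions used in this guide.
    """
    expected_rework = project_budget * rework_rate      # waste if you skip Discovery
    discovery_cost = project_budget * discovery_rate    # up-front investment
    rework_avoided = expected_rework * avoidance_rate   # waste prevented
    net_saving = rework_avoided - discovery_cost
    return {
        "expected_rework": expected_rework,
        "discovery_cost": discovery_cost,
        "rework_avoided": rework_avoided,
        "net_saving": net_saving,
        "roi_pct": net_saving / discovery_cost * 100,
    }

for budget in (1_000_000, 5_000_000, 20_000_000):
    r = discovery_roi(budget)
    print(f"£{budget:,}: net saving £{r['net_saving']:,.0f} "
          f"({r['roi_pct']:.0f}% ROI on Discovery)")
```

Because every figure is a fixed percentage of the budget, the ROI is constant across project sizes; the model becomes genuinely informative when you vary the rates to test how sensitive the saving is to a less effective Discovery (say, only 50% of rework avoided).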
Learning from the Trenches: Case Studies in Risk and Mitigation
These are not theoretical risks. They have happened, at massive scale and cost, to UK organisations. The following anonymised vignettes, based on real, high-profile government failures, illustrate what happens when the principles of Discovery are ignored.
Three cautionary tales
Vignette 1: The Rushed Mandate (“Project SwiftStart”)
- The Real Story: Based on the Department for Transport’s Shared Services Centre.
- The Narrative: A central department mandates a new shared services platform for its agencies, promising £57 million in savings. The deadline is aggressive and politically driven. The project is rushed to meet “overly optimistic deadlines.” The system is barely tested. On launch day, it greets some users in German and is so unstable that only two of the seven agencies can use it. The project ultimately costs £81 million, turning the promised saving into a £138 million net loss.
- The Lesson: A deadline dictated by politics without a corresponding reality check from users and technology is a recipe for failure. A Discovery would have immediately flagged the unrealistic timeline and lack of buy-in from the agencies, forcing a difficult but necessary strategic conversation before millions were wasted on a doomed implementation.
Vignette 2: The Unreliable Partner (“Project Keystone”)
- The Real Story: Based on the Libra system for Magistrates’ Courts.
- The Narrative: A contract is awarded to a major IT supplier for £146 million. Before the ink is dry, the supplier raises the price to £184 million. Ten months later, they demand more money. An independent review finds their financial model is “unreliable.” But the department, now deeply committed, feels it is “too important to let the supplier default” and renegotiates the price up to £319 million. The supplier still fails to deliver the core software. A different firm has to be brought in to rescue the project, while the original supplier walks away with tens of millions for its failure.
- The Lesson: A lack of in-house commercial and technical expertise makes an organisation a weak and vulnerable client. A Discovery, including rigorous due diligence on the supplier’s technical proposal and financial viability, would have exposed the flawed bid before the contract was signed, preventing the department from being held hostage by a failing partner.
Vignette 3: The Grand Design (“Project Unity”)
- The Real Story: Based on the NHS National Programme for IT (NPfIT).
- The Narrative: A breathtakingly ambitious, top-down programme aims to create a single, centralised IT system for the entire NHS. The design is monolithic, with little meaningful consultation with the doctors and nurses who would use it—there was “no time to engage with users.” The sheer scale and complexity prove unmanageable. By the time parts of the system are ready, the technology is already obsolete. The programme is eventually dismantled after a decade of delays and disputes, having cost nearly £10 billion and delivering only a fraction of its promised value.
- The Lesson: Monolithic, “big bang” projects are almost guaranteed to fail in complex, dynamic environments like healthcare. A Discovery-led approach would have rejected the grand design. It would have started by breaking the problem down into small, manageable pieces—solving one concrete problem for one group of users in one hospital. It would have delivered value iteratively, learning and adapting based on real-world feedback, instead of failing spectacularly at massive scale.
The strategic partner: How Devsultants embeds de-risking
Avoiding these fates requires more than just a different process; it demands a different kind of partnership. You don’t just need a supplier to build what you ask for; you need a strategic partner to help you ensure you’re asking for the right thing.
This is how Devsultants works:
- We bring the missing skills: We provide the senior, specialist expertise in user research, data enrichment, AI and RAG strategy, and cloud architecture that the NAO has identified as critically lacking in-house.
- We provide an independent challenge function: As an external partner, we are empowered to ask the uncomfortable questions and challenge the core assumptions that internal teams may be afraid to voice. We help you build a robust, evidence-based business case before you commit to major expenditure.
- We build capability, not dependency: Our model is to work in blended teams, upskilling your people in modern, user-centred methods through practices like pair-programming and coaching. We aim to leave your organisation more capable than we found it.
- We operate with cleared expertise: For our public sector clients, we provide DV/SC-cleared delivery teams who understand the security and sensitivity requirements of government work, de-risking your project from day one.
- We focus on measurable checkpoints: We replace long, opaque projects with a rhythm of short, focused phases. The Discovery/Alpha/Beta model ensures there are regular, evidence-based decision gates where you can assess progress and ROI, keeping you in full control of your investment.
Your Mandate for Change
As a leader, you have the power to change this narrative. You don’t need to be a technologist to steer your organisation towards better outcomes. You just need to ask better questions.
What to do on Monday morning
You can start de-risking your technology portfolio today. This week, pick your most important, most expensive, or most worrying technology programme and ask the project leader these three questions:
- “Show me the problem.” Ask for the single-sentence problem statement they are solving, for a specific, named user. If they show you a feature list, a technology diagram, or a Gantt chart, you have a red flag.
- “Who is our user sponsor?” Ask for the name of the end-user (not their manager) who is embedded with the team, attending daily meetings, and providing constant feedback. If that person doesn’t exist, you have a red flag.
- “What is our single most dangerous assumption?” Ask the team to name the one thing that must be true for this project to succeed, which they have not yet proven with evidence. If they can’t answer, or claim there are no major assumptions, you have your biggest red flag of all.
Starting the right conversation
Navigating the complexities of a major technology programme is one of the toughest challenges a leader can face. The risks are high, but the rewards for getting it right—in efficiency, service quality, and competitive advantage—are immense.
If this guide has raised questions about your own technology investments, we can help you frame the right conversation. Devsultants offers a confidential, no-obligation Strategic Risk Assessment Session for senior leaders. In 90 minutes, we will help you apply the principles in this guide to your specific context, equipping you with a clear framework to challenge your teams and de-risk your decisions.
This isn’t a sales pitch. It’s an opportunity to leverage our experience from the front lines of major UK programmes to ensure your next investment is set up for success from day one.
To schedule your session, contact Andrew.
Works cited
1. UK government admits over 25% of its digital systems are outdated - Tech Monitor, https://www.techmonitor.ai/digital-economy/government-computing/legacy-technology-costs-uk-public-sector-45bn-annually
2. Digital Transformation in Government - UK Parliament, https://researchbriefings.files.parliament.uk/documents/POST-PN-0743/POST-PN-0743.pdf
3. State of digital government review - GOV.UK, https://www.gov.uk/government/publications/state-of-digital-government-review/state-of-digital-government-review
4. The Cost of Legacy Software in the UK: When and How to Modernize? - Netguru, https://www.netguru.com/blog/legacy-software-cost-uk
5. NAO highlights critical gaps in Government digital procurement, https://www.government-transformation.com/transformation/nao-highlights-critical-gaps-in-government-digital-procurement
6. Abandoned NHS IT system has cost £10bn so far - The Guardian, https://www.theguardian.com/society/2013/sep/18/nhs-records-system-10bn
7. Chaos Report — why this study about IT project management is so unique - The Story, https://thestory.is/en/journal/chaos-report/
8. CHAOS Report on IT Project Outcomes - OpenCommons, https://opencommons.org/CHAOS_Report_on_IT_Project_Outcomes
9. IT Project Failure Rates: Facts and Reasons - Faeth Executive Coaching, https://faethcoaching.com/it-project-failure-rates-facts-and-reasons/
10. How the discovery phase works - Service Manual - GOV.UK, https://www.gov.uk/service-manual/agile-delivery/how-the-discovery-phase-works
11. The CHAOS Report, https://www.csus.edu/indiv/v/velianitis/161/chaosreport.pdf
12. Why Software Projects Falter (And How To Succeed) - Forbes, https://www.forbes.com/councils/forbestechcouncil/2024/11/07/why-software-projects-falter-and-how-to-succeed/
13. Why Software Projects Fail and How To Get it Right - Zibtek, https://www.zibtek.com/blog/top-reasons-software-projects-fail-and-how-to-get-it-right/
14. Main report (text only) - Audit Scotland, https://audit.scot/uploads/docs/report/2017/briefing_170511_digital_future.rtf
15. The Standish Group report 83.9% of IT projects partially or completely fail - TIGO Solutions, https://en.tigosolutions.com/the-standish-group-report-839-of-it-projects-partially-or-completely-fail
16. Most IT Projects Fail. Will Yours? - Project Smart, https://www.projectsmart.co.uk/it-project-management/most-it-projects-fail-will-yours.php
17. UK government must rethink tech procurement, says NAO - Digit.fyi, https://www.digit.fyi/uk-government-must-rethink-tech-procurement-says-nao/
18. Discovery Phase - Digital Marketplace, https://www.applytosupply.digitalmarketplace.service.gov.uk/g-cloud/services/794639020296196
19. Government Digital Service (GDS) Service Standard Discovery phase - Digital Marketplace, https://www.applytosupply.digitalmarketplace.service.gov.uk/g-cloud/services/366976227271883
20. Product Discovery Phase Services Company - Lionwood.software, https://lionwood.software/services/discovery-phase/
21. Measuring the Cost of Software Quality of a Large Software Project at Bombardier Transportation - ResearchGate, https://www.researchgate.net/publication/236398768_Measuring_the_Cost_of_Software_Quality_of_a_Large_Software_Project_at_Bombardier_Transportation
22. The Cost of Finding Bugs Later in the SDLC - Functionize, https://www.functionize.com/blog/the-cost-of-finding-bugs-later-in-the-sdlc
23. Cost to Fix Bugs and Defects During Each Phase of the SDLC - Black Duck Blog, https://www.blackduck.com/blog/cost-to-fix-bugs-during-each-sdlc-phase.html
24. Biggest UK Government Project Failures - YourShortlist, https://yourshortlist.com/biggest-uk-government-project-failures/
25. Case Study 1: The £10 Billion IT Disaster at the NHS - Henrico Dolfing, https://www.henricodolfing.com/2019/01/case-study-10-billion-it-disaster.html