Beyond the Buzzwords: How to Actually Translate Your Business Goals into Technical Requirements

The figures are consistently, eye-wateringly stark. The National Audit Office (NAO), the UK’s public spending watchdog, regularly uncovers billions in waste on government programmes. A recent NAO report revealed that cost increases on just five digital change programmes amounted to a staggering £3 billion. Separately, a forthcoming government report concedes that failures in legacy IT systems cost the taxpayer an estimated £45 billion a year in unrealised savings and lost productivity. These aren’t just numbers on a spreadsheet; they represent delayed services, frustrated citizens, and a colossal drain on the public purse. They are, in short, the anatomy of modern government failure.
This raises an uncomfortable question for any leader accountable for a major budget: if everyone in the boardroom agrees on the strategic goal, why do so many digital projects still deliver the wrong thing, late and wildly over budget? The answer is that these are rarely pure technology failures. They are failures of translation. They happen in the chasm between a clear business objective and the detailed, technical instructions a delivery team needs to build the right thing. The path from “improve citizen outcomes” to a functioning digital service is littered with the ghosts of ambiguity, assumption, and miscommunication.
This article is a guide to navigating that path. It’s a playbook for non-technical leaders to de-risk their digital investments by applying a disciplined, evidence-led process. It’s about moving beyond buzzwords and building a bridge of clarity between your strategic intent and your team’s daily work.

What you’ll learn

  • How to apply the government’s own de-risking playbook—the Discovery phase—to any major project, public or private.
  • A practical, step-by-step framework for turning a vague Key Performance Indicator (KPI) into a crystal-clear, testable backlog of work.
  • How to spot the three classic pitfalls that derail digital projects before they even start, and the precise questions you must ask to prevent them.

The cost of getting this wrong is not abstract. Industry analysis draws a direct causal line between vague requirements and project disaster. One landmark study found that poor requirements practices add an average premium of 60% to a project’s time and budget. Another suggests that a shocking 60-80% of total development cost is spent on rework, much of it caused by fixing issues that arose from poorly defined requirements.
Applying this logic to the NAO’s findings is sobering. If a 60% premium for poor requirements is a reliable benchmark, it suggests that of the £3 billion overspend on those five government programmes, up to £1.125 billion (the 0.6/1.6 share of an inflated total) could be directly attributed to the failure to properly define and manage what needed to be built. This transforms the conversation. Investing in a rigorous definition process isn’t a “nice-to-have” cost centre; it is one of the highest-leverage financial controls a leader can implement. A six-to-eight-week Discovery phase, costing perhaps £150,000 to £250,000, is not an expense; it is an insurance policy against multi-million-pound failure.
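For leaders who want to sanity-check that figure, the arithmetic fits in a few lines. A minimal sketch in Python (the function is purely illustrative; the 60% premium and £3 billion overspend are the figures cited above):

```python
def avoidable_overspend(total_overspend: float, premium: float = 0.60) -> float:
    """Estimate the share of an overspend attributable to poor requirements.

    If poor requirements inflate cost by `premium` (0.60 for a 60% premium),
    the avoidable share of the inflated total is premium / (1 + premium).
    """
    return total_overspend * premium / (1 + premium)

# The NAO's five programmes: a £3bn overspend against the 60% benchmark.
print(f"£{avoidable_overspend(3_000_000_000):,.0f}")  # £1,125,000,000
```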

First, Put Down the Solution: The Discipline of Discovery

The most expensive mistake is building the wrong thing perfectly; the cheapest way to avoid it is to obsess over the problem first.
The UK government, having learned many of these lessons the hard way, has codified a powerful de-risking methodology. Mandated by the Government Digital Service (GDS) and the Central Digital and Data Office (CDDO), the phased approach of Discovery, Alpha, Beta, and Live is the gold standard for delivering complex digital services. While born in the public sector, its principles are universally applicable for de-risking any significant investment.
The Discovery phase is the critical first step. Its sole purpose is to understand the user need, the policy intent, and the real-world constraints (be they technical, legislative, or operational) before a single line of code is written or a solution is chosen. It is a period of intense, structured learning designed to challenge assumptions and, crucially, to stop the organisation from building something for which there is no genuine user need. Trying to skip Discovery is like a surgeon refusing to look at an X-ray before picking up a scalpel.

Reframing the Problem: The Leader’s First Job

The first, and most important, contribution a leader can make is to frame the challenge correctly. Too often, projects begin with a solution masquerading as a problem. This is known as “solutioneering”—the premature leap to an answer.

  • The Wrong Brief: “We need a new CRM system to improve staff retention.”
  • The Right Question: “Our caseworkers are spending 70% of their time collating information from three different legacy systems, leading to high error rates and burnout. How might we reduce that administrative burden so they can focus on proactive, high-value work?”

The first statement presupposes the answer is a specific piece of technology. The second defines a measurable problem rooted in a user’s experience. The GOV.UK Service Manual is explicit: you must interrogate the proposed solution and reframe it as a problem to be solved. This act of reframing opens up the possibility of better, cheaper, or simpler solutions that might not involve building a complex new system at all.

The Outputs of a Good Discovery

A well-run Discovery phase, typically lasting four to eight weeks, doesn’t deliver software. It delivers evidence and clarity. As a leader, you should expect to receive:

  • A Discovery Report: Not a 100-page tome, but a concise, evidence-based summary. It should include validated user needs, user personas, journey maps showing how users currently tackle the problem, a clear list of identified constraints, and a firm recommendation on whether the problem is worth solving and if the project should proceed to an Alpha phase.
  • A Quantified Problem: A clear-eyed assessment of how much the problem is currently costing your organisation, whether in wasted staff hours, operational inefficiency, missed revenue, or regulatory risk.
  • Stakeholder Alignment: Tangible proof—perhaps in the form of signed-off problem statements or workshop outputs—that all senior stakeholders have a shared and documented understanding of the problem being addressed.

This is where specialist partners add immense value. At Devsultants, our DV/SC-cleared teams are experts in running these rigorous, evidence-led Discovery phases, ensuring your investment is grounded in real user needs and technical feasibility from day one.

Reality Check: The Price of Poor Planning

A study by IAG Consulting found that poor requirements practices are a direct cause of project failure. In companies with poor practices, the average project with a $3 million budget ultimately cost $5.87 million—a premium of over 95%—and was more likely to become a “run-away” project, exceeding its original budget by more than 160%.

The Translation Engine: A Leader’s Guide from Goal to Backlog

A business goal is not a requirement; it’s the start of a chain of evidence that must be traceable from top to bottom.
How do you get from a high-level ambition to something a team can actually build and test? You follow a structured, repeatable process that translates strategy into specifics.

  • Before (The Vague Idea): “We need to improve our online business licensing service to increase renewal rates by 10%.”
  • After (The Crisp Requirement): A prioritised backlog of user stories, each with clear, testable acceptance criteria, ready for a development team to pull into their next sprint.

This transformation doesn’t happen by magic. It happens through a deliberate cascade of activities.

The 5-Step Translation Cascade

This framework breaks down the process into manageable stages, each with a clear goal, action, and result.

Step 1: Goal: Deconstruct the KPI.

  • Action: Convene a workshop with key stakeholders. Go beyond the 10% target and ask the critical questions: “Who are the users involved in the ‘renewal’ process?” and “What specific behaviours or pain points are causing people to drop out?”
  • Result: A shared understanding emerges. The 10% goal is not a single problem. It’s driven by two distinct groups: time-poor business owners who find the online process confusing, and internal caseworkers who are too bogged down in manual checks to process applications efficiently. The problem is now correctly framed around user experiences.

Step 2: Goal: Define the People (Personas).

  • Action: Armed with a clearer problem, the team must now conduct direct user research—interviews, contextual inquiry, and observation—with real people from both groups. The insights from this research are then synthesised into evidence-based personas. These are not marketing demographics; they are rich portraits of user behaviours, goals, and frustrations.
  • Result: Two primary personas are created, grounded in real data:
    • “Anika, the experienced caseworker”: She is motivated by accuracy and compliance but is deeply frustrated by having to switch between three different systems to verify a single piece of information. Her unstated need is for a single source of truth.
    • “Saleem, the time-poor electrician”: He needs to renew his business licence quickly between jobs, usually on his mobile phone. He is frustrated by government jargon and being forced to re-enter information (like his business address) that he knows the council already holds.

Step 3: Goal: Capture the Need (User Stories).

  • Action: Translate the goals and frustrations of Anika and Saleem into the simple but powerful user story format: “As a [type of user], I want to [perform an action], so that I can [achieve a goal].” This format, mandated by the GDS Service Manual, forces the team to anchor every feature in a specific user need and its underlying purpose.
  • Result: A list of user stories begins to form the project backlog (a minimal data sketch of this structure follows the list). These are often grouped into larger themes, or “epics”:
    • Epic: Caseworker Efficiency
      • Story 1: “As Anika, I want to view a citizen’s complete application history and all required identity checks on a single screen so that I can approve a standard renewal in under three minutes.”
    • Epic: Citizen Self-Service
      • Story 2: “As Saleem, I want to pre-populate my renewal form with the details from my previous application so that I can complete the process with minimal typing.”
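Backlog tools vary, but the structure behind every story is the same. Here is a minimal data sketch in Python, assuming illustrative field names rather than any standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """One backlog item in the 'As a..., I want..., so that...' format."""
    persona: str      # the type of user
    action: str       # what they want to do
    goal: str         # why it matters to them
    epic: str         # the larger theme the story belongs to
    acceptance_criteria: list[str] = field(default_factory=list)

    def __str__(self) -> str:
        return f"As {self.persona}, I want to {self.action}, so that {self.goal}."

story_1 = UserStory(
    persona="Anika",
    action="view a citizen's complete application history and all "
           "required identity checks on a single screen",
    goal="I can approve a standard renewal in under three minutes",
    epic="Caseworker Efficiency",
)
print(story_1)
```

Capturing stories as structured data like this also pays off later: each story can carry explicit links to its epic and acceptance criteria, which is exactly what the traceability work described below depends on.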

Step 4: Goal: Make it Real (Artefacts & Criteria).

  • Action: With a user story defined, the team must now specify what “done” looks like in tangible terms. This involves creating low-fidelity wireframes—simple, black-and-white sketches of the user interface—to quickly test layout ideas with users. They also write formal Acceptance Criteria, which are the pass/fail conditions for the story.
  • Result:
    • Wireframe: A simple box-and-line drawing of the “single screen” for Anika is created. It’s not pretty, but it can be put in front of real caseworkers for feedback in hours, allowing for rapid iteration before any costly development begins.
    • Acceptance Criteria for Story 1: “It’s done when: 1. All historical application data from systems X, Y, and Z is visible without scrolling. 2. A ‘one-click approve’ button is present for cases meeting pre-defined eligibility rules. 3. The approval action is logged immutably in the audit trail.” (The sketch after this list shows how criteria like these become automated checks.)
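Well-written acceptance criteria are one short step from automated tests. A minimal sketch in Python of the three criteria for Story 1 as pass/fail checks; the dictionary shape is a hypothetical stand-in for whatever interface the real system exposes:

```python
# Hypothetical snapshot of what the single screen displays for one case.
screen_state = {
    "data_sources_visible": {"system_x", "system_y", "system_z"},
    "buttons": {"one_click_approve"},
    "audit_log": [{"action": "approve", "case_id": "LIC-1042"}],
}

def ac1_full_history_on_one_screen(state: dict) -> bool:
    # AC1: data from systems X, Y and Z is all present on one screen.
    return {"system_x", "system_y", "system_z"} <= state["data_sources_visible"]

def ac2_one_click_approve_present(state: dict) -> bool:
    # AC2: a one-click approve control exists for eligible cases.
    return "one_click_approve" in state["buttons"]

def ac3_approval_is_audited(state: dict, case_id: str) -> bool:
    # AC3: the approval action is recorded in the audit trail.
    return any(entry["action"] == "approve" and entry["case_id"] == case_id
               for entry in state["audit_log"])

assert ac1_full_history_on_one_screen(screen_state)
assert ac2_one_click_approve_present(screen_state)
assert ac3_approval_is_audited(screen_state, "LIC-1042")
```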

Step 5: Goal: Set the Rules of the Game (DoR/DoD).

  • Action: The team establishes a quality contract with itself and its stakeholders. This takes the form of two critical checklists: the Definition of Ready (DoR) and the Definition of Done (DoD).
  • Result:
    • Definition of Ready (DoR): This is the entry gate for development. It’s a checklist that confirms a story is clear, feasible, and ready to be worked on. For example: “A story is ‘Ready’ when it has clear acceptance criteria, any associated wireframes are approved by the user researcher, and all external dependencies have been identified.”
    • Definition of Done (DoD): This is the exit gate. It’s a formal, shared understanding of the quality standard that every piece of work must meet to be considered complete and potentially releasable. For example: “Work is ‘Done’ when the code is peer-reviewed, all automated tests are passing, it meets accessibility standards, and it has passed security scans.”

While agile purists note that the Definition of Ready is an optional practice and the Definition of Done is the formal Scrum commitment, this distinction misses a crucial point for large, complex organisations like those in the UK public sector. In these environments, where ambiguity is a primary source of waste, the DoR and DoD serve a vital cultural purpose. They are not just technical checklists; they are social contracts that act as guardrails. The DoR forces a conversation about clarity before money is spent writing code, protecting the team from pressure to start work on under-specified or vague requests. The DoD forces a conversation about quality before a feature can be declared “finished,” preventing the accumulation of technical debt. For a leader, the key is not the Scrum-purity of the process, but the outcome it produces: predictability. By asking, “What is our team’s shared agreement on what ‘ready’ and ‘done’ mean?”, a leader is not micromanaging; they are performing essential governance and ensuring the team has a mechanism to manage delivery risk.
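To make that mechanism tangible: a DoR is ultimately a list of yes/no checks, which means it can even be applied automatically. A lightweight sketch in Python, using the example DoR above (the field names are illustrative, not a standard):

```python
# Each check pairs a machine-readable key with its plain-English meaning.
DEFINITION_OF_READY = [
    ("has_acceptance_criteria", "Story has clear, testable acceptance criteria"),
    ("wireframes_approved",     "Associated wireframes approved by user researcher"),
    ("dependencies_identified", "All external dependencies identified"),
]

def is_ready(story: dict) -> tuple[bool, list[str]]:
    """Return whether a story passes the DoR, plus any failing checks."""
    failures = [desc for key, desc in DEFINITION_OF_READY if not story.get(key)]
    return (not failures, failures)

# A story with criteria but no approved wireframes fails the gate,
# with the unmet checks listed for the refinement conversation.
ready, failures = is_ready({"has_acceptance_criteria": True})
print(ready, failures)
```

In practice these checklists live in refinement sessions, not in code; the point of encoding one is simply that the social contract becomes explicit and auditable.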

The Three Pitfalls That Will Scupper Your Project (and How to Fix Them)

Your project’s biggest threats aren’t technical; they’re human habits that can be fixed with the right questions.
Even with a solid process, ingrained organisational habits can derail a project. A savvy leader knows how to spot these early warning signs and what questions to ask to get things back on track.

Pitfall 1: The Lure of ‘Solutioneering’

  • Definition: The irresistible, reflexive urge to jump to a solution before the problem is deeply understood. It’s hearing “our intranet is confusing” and immediately briefing a design agency for a visual refresh, rather than first investigating why it’s confusing and for which users.
  • The Cost: This is the primary cause of wasted effort. Resources are poured into building elegant solutions to the wrong problems, potentially making things worse. This is a major driver of the rework that can consume up to 80% of a development budget. It also stifles innovation by shutting down the exploration that could lead to simpler, more effective answers.
  • The Leader’s Fix: Champion a culture of inquiry. When a solution is proposed, relentlessly ask “Why?” until you get to the root user problem. Reward teams for the quality of their problem analysis, not the speed of their answers. Solutioneering is the corporate equivalent of a doctor prescribing surgery based on a phone call.

Pitfall 2: The ‘Proxy User’ Trap

  • Definition: Relying on the opinions of people who are not your end-users as a substitute for direct user research. These “proxies” are often well-meaning subject matter experts, senior managers, or support staff who are close to the user but are not the user themselves.
  • The Cost: Proxies are not a reliable source of truth. They provide a filtered, second-hand account coloured by their own biases and perspectives. They can give you the “highlights” of a user’s experience but they inevitably miss the critical nuance, the unstated needs, and the subtle behaviours that can only be uncovered through direct observation. Building a service based on proxy feedback often results in a product that is technically correct but practically unusable, creating friction and failing to meet the actual user’s goal.
  • The Leader’s Fix: Mandate real evidence. Ask your team, “When did we last test this assumption with real end-users? Show me the video clips or the research findings.” If the answer is, “Well, the Head of the Department thinks…”, a major red flag should go up. Ensure your project has the time and budget for continuous, direct user research from the start. A proxy user is to a real user what a travel brochure is to a holiday: a glossy, sanitised version that omits all the important details.

Pitfall 3: The Ghost of Non-Functional Requirements (NFRs)

  • Definition: If functional requirements define what a system does, non-functional requirements (NFRs) define how well it does it. They are the critical “-ilities”: security, reliability, scalability, performance, accessibility, and maintainability. Forgetting them is like designing a new hospital but forgetting to specify that it needs to be sterile, have a reliable power supply, or be accessible to wheelchair users.
  • The Cost: This is arguably the most catastrophic and common failure mode in large-scale IT. A system that is functionally complete but is insecure, painfully slow, or crashes under load is an abject failure. Ignoring NFRs until the end of a project leads to massive, costly rework, blown budgets, profound user dissatisfaction, and significant security vulnerabilities. In a government context, this can mean failing to meet legal obligations (like the Public Sector Bodies Accessibility Regulations) or, worse, exposing sensitive citizen data to attack.
  • The Leader’s Fix: Elevate NFRs to first-class citizens. They are not optional extras. They must be defined, quantified, and tested from the very beginning of the project and must be an integral part of the team’s Definition of Done; one way to make them concrete is sketched after this list. Functional requirements get you to the launch party. Non-functional requirements are what stop the building from burning down during it.
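What “defined, quantified, and tested” looks like in practice: NFR targets written down as data a delivery pipeline can assert against, not adjectives in a document. A minimal sketch in Python; the metric names and thresholds are hypothetical examples:

```python
# Illustrative NFR targets, expressed as data rather than aspirations.
NFR_TARGETS = {
    "p95_response_ms": 2000,   # 95th percentile response time under 2 seconds
    "concurrent_users": 500,   # sustained load without degradation
    "uptime_percent": 99.5,    # availability over a rolling month
    "wcag_level": "AA",        # accessibility standard to meet
}

def check_performance(measured_p95_ms: float) -> bool:
    """Pass/fail gate a CI pipeline could run after a load test."""
    return measured_p95_ms <= NFR_TARGETS["p95_response_ms"]

# A measured p95 of 1450ms passes; anything over 2000ms fails the build.
assert check_performance(1450), "p95 response time breaches the NFR target"
```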

The table below provides a simple diagnostic tool for leaders to identify these pitfalls in their own meetings and reviews.

| The Pitfall | The Symptom (What you’ll hear) | The Leader’s Fix (What you should ask) |
| --- | --- | --- |
| Solutioneering | “We just need to build an app.” / “The answer is obviously a new AI-powered dashboard.” / “Let’s just copy what the other department did.” | “What is the specific, measurable user problem we are trying to solve here?” / “What evidence do we have that this is the most important problem for us to solve right now?” |
| The Proxy User Trap | “The Head of Operations says users want…” / “We’re the experts, we know what our users need.” / “We don’t have time or budget to talk to real users for this.” | “Can you show me the research findings from direct engagement with end-users in the last month?” / “Who are our most vulnerable or hard-to-reach users, and how are we ensuring their needs are met?” |
| The Ghost of NFRs | “Let’s just focus on getting the features working first.” / “We’ll worry about performance and security later.” / “That’s a job for the infrastructure team.” | “What are the specific, testable targets for system response time, user concurrency, and data security?” / “How are these critical quality attributes reflected in our Definition of Done?” |

Maintaining the Golden Thread: From Business Case to Live Service

If you can’t trace a line of code back to a business objective, you’re wasting money.
In an environment of intense public and political scrutiny, being able to demonstrate value for money is paramount. The Public Accounts Committee (PAC) and the NAO exist to hold government to account for the effective use of public funds. The key to satisfying this scrutiny is traceability.
This is where the Requirements Traceability Matrix (RTM) comes in. Far from being a bureaucratic exercise, an RTM is a powerful tool that creates a “golden thread” connecting every element of a project. It provides a clear, auditable, and bidirectional map linking artefacts together:
Business Objective ↔ Key Performance Indicator ↔ User Need ↔ User Story ↔ Acceptance Criteria ↔ Test Case ↔ Code Commit ↔ Defect.
For a leader, this is your evidence. It is the definitive proof that every pound of taxpayer or shareholder money was spent on delivering a validated requirement that directly serves a strategic goal.

Traceability in Practice

  • Data-Driven Change Management: When a stakeholder requests a change mid-project, the RTM allows you to instantly assess the impact. You can see precisely which other requirements, designs, and tests will be affected. This transforms a potentially political, opinion-based decision into a rational, data-driven conversation about cost, risk, and benefit.
  • Bulletproof Audit and Compliance: For any organisation operating in a regulated environment (which includes all of government), an RTM is non-negotiable. It is the primary tool for demonstrating to auditors that you have met all necessary standards, from security protocols to accessibility laws.
  • Connecting to the Business Case: A strong business case outlines the expected benefits of an investment. A robust traceability practice ensures that the product being built is precisely the one that will deliver those benefits. This is a core part of the de-risking and business case development expertise that Devsultants provides to its clients.

The true value of a traceability matrix is not in looking backward to prove what was done, but in looking forward to manage what might happen next. It is not a static report for an audit file; it is a dynamic risk management tool for project governance. When a critical defect is discovered, or a key requirement must change, the RTM is the map that shows the blast radius. It answers the crucial question: “If we touch this, what else breaks?” A leader should therefore view the RTM not as a compliance checkbox, but as a live dashboard for strategic oversight. The health of the traceability matrix is a direct proxy for the team’s control over the project’s complexity. A powerful question to test this is: “If we had to cut 20% of the scope tomorrow to meet a new budget, how would our traceability matrix help us make the least damaging decision?” This reframes the RTM from a technical artefact into what it truly is: a system for making better strategic decisions under pressure.
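That “blast radius” question is, at heart, a graph traversal. A minimal sketch in Python, using illustrative artefact IDs rather than any particular tool’s format:

```python
from collections import defaultdict, deque

# Each edge links an artefact to the artefacts derived from it
# (objective -> KPI -> user need -> story -> criteria -> test -> commit).
rtm = defaultdict(list)
for parent, child in [
    ("OBJ-1", "KPI-1"), ("KPI-1", "NEED-3"), ("NEED-3", "STORY-12"),
    ("STORY-12", "AC-12.1"), ("AC-12.1", "TEST-88"), ("TEST-88", "COMMIT-a1b2"),
]:
    rtm[parent].append(child)

def blast_radius(artefact: str) -> set[str]:
    """Everything downstream of an artefact: what is affected if we change it."""
    seen, queue = set(), deque([artefact])
    while queue:
        for child in rtm[queue.popleft()]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

# Changing user need NEED-3 touches its story, criteria, test, and commit.
print(blast_radius("NEED-3"))  # {'STORY-12', 'AC-12.1', 'TEST-88', 'COMMIT-a1b2'}
```

In practice the matrix lives in a requirements tool or even a spreadsheet, but the underlying query is the same: follow the links downstream and report everything you touch.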

Conclusion: What to Ask Your Team on Monday Morning

Translating goals into requirements is not a dark art; it is a discipline. It requires rigour, evidence, and a relentless focus on the user. As a leader, you don’t need to be a technical expert to drive this discipline. You just need to ask the right questions.
Here are four questions to ask your team on Monday morning to take the temperature of your most critical digital projects. They are simple to ask, but impossible to answer well without the right processes in place.

  1. “Can you show me the single sentence that defines the user problem we’re solving, and how we’re measuring success?” (This tests for a clear, shared problem statement versus solutioneering).
  2. “Who are our three most important user personas, and what’s the most recent piece of feedback we’ve had from talking to people like them directly?” (This tests for genuine user-centricity versus the proxy user trap).
  3. “What’s the one screen or feature that must not fail, and what are our agreed, measurable targets for its speed, reliability, and security?” (This tests for prioritised, quantified non-functional requirements).
  4. “If a critical part of the project is delayed, can you show me the map that links it back to the business objective it supports?” (This tests for traceability and the ability to conduct impact analysis).

These questions can be tough. Getting the answers right requires a disciplined, evidence-led approach from the very start. If you’re looking to de-risk your next major investment and build confidence that you’re solving the right problem in the right way, our team at Devsultants can help. We specialise in the discovery, de-risking, and delivery of critical services for organisations like yours.
Let’s have a conversation about building a solid foundation for your success.

Downloadable Asset: 10 Questions to Turn Goals into Specs

This checklist provides a structured path from a high-level business goal to a specific, actionable requirement.

  1. The Goal: What specific, measurable business metric (KPI) are we trying to move? (e.g., “Reduce average call handling time by 20%”).
  2. The Problem: What is the underlying user problem that is driving this metric? (e.g., “Users are calling because they cannot find their case status online, forcing agents into lengthy system searches.”).
  3. The Users: Who are the primary user groups (personas) affected by this problem? (e.g., “New applicants,” “Experienced caseworkers”).
  4. The Context: In what situation or environment do these users typically face this problem? (e.g., “An applicant on a mobile phone with a slow connection,” “A caseworker with three applications open at once.”).
  5. The User Need: In the user’s own words, what do they need to do? (e.g., “I just need to see where my application is up to without having to call someone.”).
  6. The User Story: How do we frame this need as an actionable development task? (e.g., “As a busy applicant, I want to see a clear, simple status for my application on the service homepage, so that I don’t have to call for an update.”).
  7. The Acceptance Criteria: What are the 3-5 objective, testable conditions that will prove this story is “done”? (e.g., “Status is visible on login,” “Status updates in real-time,” “Status uses plain English.”).
  8. The “How Well” Question (NFRs): How fast, secure, and accessible must this feature be to be successful? (e.g., “The status must load in under 2 seconds,” “It must be fully accessible to screen reader users.”).
  9. The “What If” Question (Risks): What is the single biggest risk to delivering this feature (e.g., data source reliability), and what is our plan to mitigate it?
  10. The “Why” Question (Traceability): How does completing this specific story demonstrably contribute to our original goal of reducing call handling time by 20%?

Works cited

1. Government’s approach to technology suppliers: addressing the challenges - National Audit Office, https://www.nao.org.uk/wp-content/uploads/2025/01/governments-approach-to-technology-suppliers-addressing-the-challenges.pdf
2. UK government admits over 25% of its digital systems are outdated - Tech Monitor, https://www.techmonitor.ai/digital-economy/government-computing/legacy-technology-costs-uk-public-sector-45bn-annually
3. IT Pays a Price for Poor Requirements Practices - ADTmag, https://adtmag.com/articles/2008/02/07/it-pays-a-price-for-poor-requirements-practices.aspx
4. Nonfunctional Requirements Explained: Examples, Types, Tools - Modern Requirements, https://www.modernrequirements.com/blogs/what-are-non-functional-requirements/
5. Discovery stage: exploring the problem - digital.gov.au, https://www.digital.gov.au/policy/digital-experience/toolkit/service-design-and-delivery-process/discovery-stage-exploring-problem
6. Agile delivery - Service Manual - GOV.UK, https://www.gov.uk/service-manual/agile-delivery
7. SERVICE DEFINITION - GOV.UK, https://assets.applytosupply.digitalmarketplace.service.gov.uk/g-cloud-14/documents/93304/686296452782991-service-definition-document-2024-05-04-1200.pdf
8. Digital Scotland Service Manual - Service Manual, https://servicemanual.gov.scot/
9. What a discovery is and why we’re doing this discovery - Design histories, https://design-histories.education.gov.uk/communicating-allocations/what-a-discovery-is
10. How the discovery phase works - Service Manual - GOV.UK, https://www.gov.uk/service-manual/agile-delivery/how-the-discovery-phase-works
11. What is the discovery phase? - YouTube, https://www.youtube.com/watch?v=UVX1BT0oxWU
12. A beginner’s guide to the GDS discovery phase by Alex Hill - Bootcamp - Medium, https://medium.com/design-bootcamp/a-beginners-guide-to-the-gds-discovery-phase-a693d461260a
13. Requirements gathering challenges and solutions - Breadcrumb Digital, https://www.breadcrumbdigital.com.au/requirements-gathering-challenges-and-solutions/
14. The Pitfalls of Solutioneering by Bagavath Mohan - Bootcamp - Medium, https://medium.com/design-bootcamp/the-pitfalls-of-solutioneering-41e7bd8b5c21
15. Website Project: Discovery Report - Rushmoor Borough Council, https://www.rushmoor.gov.uk/media/cutianew/website-discovery-report.pdf
16. Step 7: Present your findings - Digital.gov, https://digital.gov/guides/hcd/discovery-concepts/present
17. Discovery phase - Digital NSW, https://www.digital.nsw.gov.au/delivery/digital-service-toolkit/delivery-manual/discovery-phase
18. Government Digital Service (GDS) Service Standard Discovery phase - Digital Marketplace, https://www.applytosupply.digitalmarketplace.service.gov.uk/g-cloud/services/366976227271883
19. 7 Tips to Writing Great Agile Marketing User Stories - AgileSherpas, https://www.agilesherpas.com/blog/great-agile-marketing-user-stories
20. User researcher - Government Digital and Data Profession Capability Framework, https://ddat-capability-framework.service.gov.uk/role/user-researcher
21. Personas - Centre for Digital Public Services, https://digitalpublicservices.gov.wales/guidance-and-standards/meet-user-needs/service-design-tools/personas
22. User Segmentation - Personas - GOV.UK, https://assets.applytosupply.digitalmarketplace.service.gov.uk/g-cloud-14/documents/92308/958506117634597-service-definition-document-2024-05-01-1552.pdf
23. Understanding your users: User personas - Content style guide - Service manual, https://service-manual.ons.gov.uk/content/writing-for-users/user-personas
24. Accessibility Personas - alphagov, https://alphagov.github.io/accessibility-personas/
25. Understanding users who do not use digital services - GOV.UK, https://www.gov.uk/service-manual/user-research/understanding-users-who-dont-use-digital-services
26. Writing user stories - Service Manual - GOV.UK, https://www.gov.uk/service-manual/agile-delivery/writing-user-stories
27. Learning about users and their needs - Service Manual - GOV.UK, https://www.gov.uk/service-manual/user-research/start-by-learning-user-needs
28. User Stories Examples and Template - Atlassian, https://www.atlassian.com/agile/project-management/user-stories
29. Wireframe - Victorian Government, https://www.vic.gov.au/wireframe
30. What Is a Wireframe? + How to Create One - Coursera, https://www.coursera.org/articles/wireframe
31. www.interaction-design.org, https://www.interaction-design.org/literature/topics/wireframe#:~:text=Wireframes%20are%20basic%20visual%20representations,UX%20(user%20experience)%20design.
32. What is Wireframing? — updated 2025 IxDF - The Interaction Design Foundation, https://www.interaction-design.org/literature/topics/wireframe
33. What is a wireframe? A guide for non-designers - Balsamiq, https://balsamiq.com/blog/what-are-wireframes/
34. Wireframing for data-driven applications - Small Multiples, https://smallmultiples.com.au/articles/wireframing-for-data-driven-applications/
35. resources.scrumalliance.org, https://resources.scrumalliance.org/Article/definition-vs-ready#:~:text=Although%20these%20two%20terms%20seem,is%20ready%20to%20work%20on.
36. Definition of ready and definition of done: What’s the difference? - Boost, https://www.boost.co.nz/blog/2022/06/definition-ready-definition-done
37. Definition of Done vs Definition of Ready - LiminalArc - LeadingAgile, https://www.leadingagile.com/2021/08/definition-of-done-vs-definition-of-ready/
38. Definition of Ready vs. Definition of Done: Understanding the Differences - Scrum Alliance, https://resources.scrumalliance.org/Article/definition-vs-ready
39. What Is the Difference Between the Definition of Done (DoD) and the Definition of Ready (DoR)? - Scrum.org, https://www.scrum.org/resources/blog/what-difference-between-definition-done-dod-and-definition-ready-dor
40. The costs of bad requirements in software projects - wobe-systems, https://www.wobe-systems.com/en/what-are-the-costs-of-bad-requirements-in-software-projects/
41. Requirement Gathering - Challenges and Solution in Software Development - GeeksforGeeks, https://www.geeksforgeeks.org/software-engineering/requirement-gathering-challenges-and-solution-in-software-development/
42. Solutionising : r/businessanalysis - Reddit, https://www.reddit.com/r/businessanalysis/comments/114oj1e/solutionising/
43. What’s the Problem With Proxy Users? - Mind the Product, https://www.mindtheproduct.com/whats-the-problem-with-proxy-users/
44. How we do user research when we can’t speak to real users - DWP Digital, https://dwpdigital.blog.gov.uk/2019/09/26/how-we-do-user-research-when-we-cant-speak-to-real-users/
45. 1. Understand users and their needs - Digital Scotland Service Manual, https://servicemanual.gov.scot/understand-users-needs
46. Non-Functional Requirements: Tips, Tools, and Examples - Perforce Software, https://www.perforce.com/blog/alm/what-are-non-functional-requirements-examples
47. What are Non-Functional Requirements (NFRs) and why are they important? - Acceler8 Consultancy, https://acceler8consultancy.com/what-are-non-functional-requirements-and-why-are-they-important/
48. What are non-functional requirements (NFRs) in project management? - Planio, https://plan.io/blog/non-functional-requirements-nfrs/
49. Functional Vs. Non-Functional Requirements: Why Are Both Important? - Inoxoft, https://inoxoft.com/blog/functional-vs-non-functional-requirements-why-are-both-important/
50. Secure by Design Principles - UK Government Security, https://www.security.gov.uk/policy-and-guidance/secure-by-design/principles/
51. Public Accounts Committee - Summary, https://committees.parliament.uk/committee/127/public-accounts-committee/
52. Public Accounts Committees - Commonwealth Parliamentary Association UK, https://www.uk-cpa.org/what-we-do/public-accounts-committees
53. About us - Value for money audit - Audit Commission, https://www.aud.gov.hk/eng/aboutus/about_valm.htm
54. How to Create a Requirements Traceability Matrix — with Examples - Perforce Software, https://www.perforce.com/blog/alm/how-create-traceability-matrix
55. What is a Requirements Traceability Matrix (RTM)? - Perforce Software, https://www.perforce.com/resources/alm/requirements-traceability-matrix
56. How to Make a Requirements Traceability Matrix (RTM) - Project Manager, https://www.projectmanager.com/blog/requirements-traceability-matrix
57. Requirement Traceability Matrix: Definition, Types & Benefits - Saviom Software, https://www.saviom.com/blog/requirement-traceability-matrix-and-why-is-it-important/