
NHS 10 year plan: Can using AI offer a solution to the NHS’ problems?
Health systems globally are under increasing strain. In England, the number of people with major illness is projected to rise by 2.5 million by 2040, reaching over 9.1 million.1 Meanwhile, hospital bed numbers have fallen, waiting lists have soared, and the NHS faces persistent workforce shortages. With public finances under pressure and service demand escalating, the challenge is clear: we cannot deliver 21st century care using 20th century models.
Last week saw the launch of the NHS 10 year health plan for England, which advocates digital solutions for a range of problems, from community diagnostics to improving patient safety at a system level. Given how little AI is currently integrated into healthcare, framing artificial intelligence (AI) as the solution to complex problems, promising improved productivity, earlier diagnosis, and administrative relief, appears ambitious to say the least. The few successful examples—fracture detection, sepsis prediction, and discharge summaries—are important but limited in scale. For AI to have a meaningful impact, it must be deployed not just as a technological fix, but as a catalyst for systemic reform.
Although often referred to as a single entity, AI is a collective term for a multitude of technologies. It encompasses machine learning algorithms that detect patterns, natural language processing that reads clinical text, and computer vision systems that interpret medical images. More recently, generative AI tools—such as large language models like GPT—can summarise notes, answer clinical queries, and even simulate patient interactions.
In other sectors, these tools have transformed how companies work. In finance, AI identifies fraud in real time. In e-commerce, algorithms personalise advertising, forecast demand, and manage inventory. In life sciences, DeepMind’s AlphaFold successfully predicted the structure of over 200 million proteins,2 while Insilico Medicine recently completed a phase 2a trial of a drug designed using generative AI.3 These successes reflect not just technical excellence, but enabling environments: high quality data, clearly defined problems, robust infrastructure, and interdisciplinary teams. Healthcare has much to learn from this.
The limited impact that AI has had on healthcare is not, therefore, a failure of the technology—it’s a failure of system readiness. Three structural barriers stand out.
1. Fragmented data infrastructure
Unlike retail or banking, where data flow seamlessly between platforms, healthcare systems remain siloed. In the UK, interoperability remains patchy despite standards such as Health Level 7 (HL7) and Fast Healthcare Interoperability Resources (FHIR). NHS trusts still charge each other for access to patient data. AI tools rely on structured, real time, machine readable data—yet most of our data remain locked in disconnected systems. Until we treat data interoperability as strategic infrastructure, scalable AI will remain out of reach.
2. Governance and public trust
AI systems trained on incomplete or biased data risk exacerbating existing health inequalities. Underrepresented populations—notably from minoritised ethnic or socioeconomic groups—are often excluded from training datasets. Tools can hallucinate, mislead, or over-refer. Moreover, a model can be accurate and still generate harmful effects if deployed without understanding system dynamics. Current regulation focuses on static performance metrics, but we need a shift towards dynamic oversight, independent audits, and real world monitoring.
3. Culture and capability
Even the most powerful AI tool is ineffective if not adopted. Clinicians need to understand, trust, and work alongside AI—not just as users, but as co-designers. This demands a new culture of experimentation and learning, supported by education, protected time, and new roles such as clinical informaticians and AI implementation leads. Procurement processes must also evolve: AI tools cannot be evaluated like fixed medical devices. They are dynamic, iterative, and require flexible service contracts with clear performance metrics.
This brings us to one further barrier: the temptation to use AI merely to make existing processes more efficient, or to close training and experience gaps in the workforce. Faster radiology reporting, automated triage, and quicker form filling, while helpful within the existing system, risk missing the point. AI's real value lies in enabling care models that were previously unfeasible.
Consider the coronary care unit, introduced in the 1960s. It halved mortality for cardiac patients—not by inventing new drugs, but by redesigning workflows, staffing, and monitoring around patients’ needs. AI must be approached in the same way—not as a bolt-on, but as a driver of new models of care.
While AI-enabled imaging could vastly increase diagnostic throughput, without investment in specialist interpretation and follow-up capacity, bottlenecks simply shift downstream. Similarly, automating administrative tasks may free up clinician time—but to what end, if workflows remain rigid?
AI should help health systems rebalance constrained resources, particularly labour. Skilled clinicians take years to train and cannot be scaled on demand. AI offers an opportunity to extend the scope of non-physician roles, support community based diagnostics, and reduce low value workload. These shifts won’t happen automatically. They require system level planning, cross professional collaboration, and political will.
The risk is not that AI will fail, but that it will be deployed too conservatively. As Carl Benedikt Frey, economist and professor of AI and work, wrote, “Had the 19th century focused solely on better looms and ploughs, we would enjoy cheap cloth and abundant grain—but there would be no antibiotics, jet engines, or rockets. Economic miracles stem from discovery, not repeating tasks at greater speed.”
Healthcare is not a production line. AI will not deliver miracles through marginal efficiency gains. But if used imaginatively, it could support discovery: of new models of delivery, new workforce configurations, and new ways of organising care around people rather than processes.
The conversation about AI in healthcare must move beyond hype and scepticism. The real question is not “What can AI do?” but “What do we want health systems to become and how can AI help facilitate that?” Used wisely, AI offers a rare opportunity to redesign healthcare to meet the needs of tomorrow—not just by doing more, but by doing better.
Footnotes
- Competing interests: None declared.
- David Strain is an associate professor at the University of Exeter Medical School. He is working closely with a multidisciplinary team from the Health Economics and Outcomes Research group to explore what can be achieved using AI within the health service.
- Provenance and peer review: not commissioned, not externally peer reviewed.