
The untapped potential in NHS EPRs got me thinking about the time we cut preventable VTE by 48%
Published: 13 May 2025
Turning clinical audit into action
By Clare Fountain
The Health Foundation recently released a timely piece on electronic patient records (EPRs) and NHS strategy. It makes a strong case for the potential of EPRs to deliver safer, more efficient care – if we can get the design, integration, and usage right.
Yet as the report makes clear, potential doesn’t equal impact. We still face major gaps in how data is shared and used at the front line.
Data needs to flow to those who need it, in real time, to support better decision making.
This got me reflecting on a project I led over a decade ago, during a sabbatical from the NHS at Baylor Health System in Texas (now Baylor Scott & White) in 2009-11. It was a different context, but the principles – especially around standardisation, reliability, and frontline usability – feel more relevant than ever.
Real-time guidance, not retrospective learning
At Baylor, we set out to tackle the preventable harm caused by hospital-acquired venous thromboembolism (VTE) – a familiar NHS priority too.
Instead of relying on static protocols or disconnected audits, we embedded a standardised, evidence-based VTE order set directly into the EPR – as a decision support tool. I worked with clinicians to turn US VTE guidelines into an algorithm, coded into the EPR. It was tailored in real time to each patient, triggered by their risk factors at admission.
The headlines:
- The process was clinically led, mandated by all 13 hospital Clinical Directors, and focused on process reliability, not just education.
- We didn’t ask clinicians to remember the guidelines; we built them into the workflow – making it easier to do the right thing than not.
- At the same time as standardising the default, clinicians could use their clinical experience to deviate, and document why.
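To make the idea concrete, here is a minimal, hypothetical sketch of how such an EPR-embedded trigger might work. The risk factors, weights, and thresholds below are invented for illustration only; the actual Baylor algorithm was derived from US VTE guidelines and is not reproduced here.

```python
# Illustrative rule-based VTE decision-support trigger.
# All risk factors, weights, and thresholds are hypothetical,
# not the Baylor algorithm or any published risk score.

from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    immobile: bool          # reduced mobility on admission
    active_cancer: bool
    prior_vte: bool
    high_bleed_risk: bool   # contraindication to chemical prophylaxis

def vte_order_recommendation(p: Patient) -> str:
    """Return a default prophylaxis order at admission.

    The clinician can deviate from the default and document why.
    """
    score = 0
    if p.age >= 70:
        score += 1
    if p.immobile:
        score += 3
    if p.active_cancer:
        score += 3
    if p.prior_vte:
        score += 3

    if score < 4:
        return "no routine prophylaxis; reassess daily"
    if p.high_bleed_risk:
        return "mechanical prophylaxis only"       # physical, not chemical
    return "chemical + mechanical prophylaxis"     # combined default

# Example: an immobile 75-year-old with prior VTE, no bleeding risk
print(vte_order_recommendation(Patient(75, True, False, True, False)))
```

The point of the design is the default: the safe, guideline-based order is generated automatically at admission, and deviation (not adherence) is what requires an active decision and documentation.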
The result? A 48% drop in hospital-acquired VTE
The impact of the initiative, documented in my MSc dissertation, was clear:
- VTE prophylaxis prescribing became more consistent and appropriate (often chemical and physical prophylaxis combined)
- Real-time data improved adherence and feedback loops
- Hospital-acquired VTE fell by 48% in just six months
These results weren’t achieved through heroic effort or top-down compliance. They came from:
- Standardising what we knew worked
- Making it visible and usable at the point of care
- Focusing on system reliability, not individual blame
It aligned well with improvement theories I’d studied, including Juran and Deming’s concepts of ‘designing in quality’ – standardise until you need to customise – and Nolan’s framework of reliability in healthcare systems.
When we moved beyond hoping clinicians would remember the latest, changing guidance and started designing for consistent performance, things changed.
The opportunity for the NHS
We’re seeing promising signs of clinical decision support in UK EPR systems – particularly in primary care, where tools like EMIS and SystmOne routinely include alerts, risk scores, and prescribing prompts. But secondary care adoption is more uneven.
While some trusts using systems like Cerner or Epic are embedding real-time decision support, others are still grappling with fragmented workflows and inconsistent functionality. That’s a major opportunity: to standardise not just data collection, but decision support – and to design it into clinician workflows, just as we did at Baylor.
To make real-time decision support work reliably at scale, we need agreed definitions of what ‘good care’ looks like in each clinical scenario. Ideally these should be evidence-based, with clinician and patient input, regularly updated, and already widely recognised.
That’s where existing work done in support of national clinical audits could fit in.
The NCAPOP programme, commissioned by HQIP, already provides a rich set of measures grounded in clinical and patient consensus, often aligned with NICE guidelines. These aren’t just useful for retrospective performance reports – they could also be used to power real-time prompts and care pathways within EPRs.
Given that we already have these measures across a huge number of specialties, are we maximising their benefit? Could we move from collecting data retrospectively to using these same measures to support care as it happens?
Decision support tools may improve the consistency and outcomes of direct care, as my project showed. And the digital data they generate in doing so is then ripe for secondary use, such as comparative national clinical audit, and for real-time outcomes monitoring.
I’ve seen what’s possible when evidence is built into systems, not just policies. When standardisation reduces variation where we don’t need it, we free up doctors’ cognitive load to focus on clinical judgement for the complex and nuanced decisions where it matters most.
In the NHS today, we are reaching 100% EPR coverage. But reading the Health Foundation report made me wonder:
- How do we enable this kind of intelligent, real-time support at scale across a health system with multiple EPR providers, local variations, and varied digital maturity?
- Does AI bring new opportunity for real-time clinical decision support using national clinical audit measures?
I hope that, before too many more years pass, we will see for every national clinical audit:
- Data captured once, in real time at the point of care
- Data collated nationally, for benchmarking and equity analysis
- Data interrogated locally, for variation and improvement
- Measures used to support evidence-based care now, with real-time decision support
This isn’t about replacing audit. It’s about realising the full value of the work already being done – turning measures into daily decision support, in addition to retrospective systemic learning.
A role for HQIP and our partners
HQIP has been at the forefront of clinical audit for over 15 years. From the outset, we have championed the use of digital, point-of-care data as the gold standard for audit. But we now need to go further.
We are proactively engaging with EPR vendors to explore how audit measures could be integrated into their platforms. We want to work with clinical leaders, digital teams, and patient representatives to co-design decision support tools rooted in the very measures that national audits already use.
If we succeed, we can reduce unwarranted variation, drive up the quality of care, and generate real-time data that serves both direct care and systemic learning.
The Health Foundation’s call to action is clear. Real-time data is only valuable if it informs real-time decisions.
We already have the measures. We are rapidly acquiring the infrastructure. The missing link is intelligent design and shared commitment.
I have seen what is possible when evidence is embedded into systems, not policies. When we stop hoping clinicians will remember the latest guidance and start building reliability into the workflow, the results speak for themselves.
The question is no longer whether we can do this. It is whether we are willing to.
More information
- Discover more about how HQIP supports organisations to use clinical audit to drive improvement
- Contact us at [email protected] for an initial conversation