Throwing the baby out with the bathwater?

Published: 08 Jan 2019

Why calls for a national quality improvement programme overlook the debt owed to clinical audit.

The recent article from Boyle and Keep is interesting in that it sets out several perceived downsides of clinical audit, be it national or local. However, it suggests that replacing it with a National Quality Improvement Programme would rectify these perceived drawbacks.

Before we ditch, wholesale, a national audit programme that is the envy of the world, we suggest several approaches to address these concerns. Indeed, do we not need national audits and quality improvement (QI) working hand in hand?

The National Clinical Audit and Patient Outcomes Programme (NCAPOP) – commissioned by the Healthcare Quality Improvement Partnership (HQIP) and funded by NHS England and the Welsh Government – comprises 30 audits focused on high-priority clinical areas of patient care and their associated pathways.

In addition, we advise a series of non-HQIP-funded audits covering a further 50 clinical areas or topics; the Royal College of Emergency Medicine runs several such programmes outside NCAPOP. We also manage the Clinical Outcomes Review Programme, which uses a different methodology (deep dives into clinical pathways of care) to review services following a particular event, condition or procedure.

Demonstrable improvement

Boyle and Keep imply that the national audit programme has not led to improvement. It has, although many other factors have contributed. The programme itself has tracked these improvements and pointed the way to subsequent areas for improvement, allowing clinical teams to concentrate their efforts where their services most need them. In addition, the programme is the best source of information to support other initiatives such as Getting It Right First Time and the Model Hospital.

Virtually every surgical intervention included in the programme has demonstrated improvement over the years, despite surgeons now operating on patients with greater co-morbidity. The audits cannot claim to be the sole cause, but they have contributed by shining a light on areas of care, giving local teams insight into where improvement is needed, and bringing together system levers such as the best practice tariff (BPT) and Commissioning for Quality and Innovation (CQUIN) national goals through data provision.

Similarly, we have seen such improvements in chronic diseases such as diabetes. The hip fracture audit, which concentrates on the NICE standards and is allied to the BPT, has demonstrated significantly improved performance, such that hip fracture treatment in England is the envy of the world (Fig 1).

Lastly, wholesale reorganisations of patient care pathways, such as those recently seen in stroke and trauma care, would not have been possible without the stroke (SSNAP) and trauma (TARN) audits. Their data improved the measurement of care processes across these pathways and was subsequently used to track quality and the associated improvement goals, saving more lives.

“Benchmarking is one of the most powerful tools we have”

The authors argue that the effectiveness of clinical audit in improving care has been mixed. They seem surprised that the programme was never set up to provide an “effective measurement tool to benchmark care against other units and provide detailed quality assurance information”. It is true that improving care following an audit requires more than simply presenting the results. However, benchmarking is one of the fundamentals linking measurement to improvement. All colleagues in this area of activity know that benchmarking is one of the most powerful tools we have: no clinician, unit manager or commissioner wants to see their service at the wrong end of a benchmarking graph, and NCAPOP audits provide this. In addition, the units and services at the best end of the spectrum can be drawn on in different ways to share and spread best practice, and to learn from failure.

The Institute for Healthcare Improvement (IHI) Model for Improvement teaches that a benchmark is a noun, whereas benchmarking is a verb: it requires exploring and investigating why the ‘benchmark’ number was achieved (Lloyd 2017).

National clinical audits are valuable resources that help teams, before starting their improvement journeys, answer the following Model for Improvement questions:

  • Do we know how good we are?
  • Do we know where we stand relative to the best?
  • Over time, where are the gaps in our practice that indicate a need for change (i.e. improvement)?
  • In our efforts to improve, what’s working?
  • Do we know/understand where variation exists in our organisation?

National clinical audit fundamentally provides quality assurance (QA) and stimulates QI. When the programme was set up, the former predominated; now, with a much more mature programme, it concentrates on the latter. At HQIP we spend a lot of time ensuring that the QI message is clear; increasingly we commission to achieve, with each iteration of an audit, exactly what the authors envisage for their new QI Programme, i.e. ‘less data entry, real-time national benchmarking and an increased emphasis on testing interventions’.

Efficiency and financial targets versus quality and patient care

The outputs of audit were never meant to be confined to educational meetings, and this highlights why NHS services may have a block in using data for improvement, whether that data comes from clinical audits, inspections, QI projects or other sources. The feedback loop is missing: too much focus falls on the information found, or on the ‘Do’ in Plan-Do-Study-Act (PDSA) cycles. With this loop incomplete, Boards and practices focus on efficiency and financial targets rather than on all domains of quality and patient care.

There needs to be a culture of improvement across all organisations from patient to Board if QI is to be embedded. Too often QI is seen as a silver bullet – as described by Dixon-Woods and Martin (2016) – to replace other failed improvement approaches such as audit.

However, we know that the same organisational behaviours and issues will still be pertinent further down the line, irrespective of the QI methodology or other method chosen. Successful organisations that keep on improving have a culture and an approach in which research shows what is possible, audit shows what is actual in practice, and QI is the work of making the ‘possible’ actual. To be successful, Dixon-Woods and Martin (2016) state, the use of QI methodology in these organisations must be supported by significant investment in leadership, expertise and resources for change.

Culture and support

However, we also recognise, as outlined by Boyle and Keep in relation to pain relief in Accident and Emergency departments, that simply showing performance graphs is not enough. Clinicians have many skills, but QI techniques are not inherent and need to be included in curricula and CPD, and this teaching may take many forms. I agree that ‘audits’ in which junior doctors are asked to review notes as a ‘project’ are virtually useless; a recent survey by the Royal College of Physicians confirmed as much, finding that such an approach virtually ensured that juniors vowed never to do such an ‘audit’ again. Such notes reviews should be banned unless led, from the front, by a skilled consultant or primary care trainer.

National audit results can bring demonstrable value to multiple audiences within the clinical hierarchy if their utility is appropriately promoted. HQIP has been working with national audit providers to produce one-page PDF summaries (Fig 1) of the key findings from the latest iteration of each audit, made widely available and viewable alongside all other topics in the programme. These are designed to be used by Trust Boards, medical directors, governance groups and clinical teams to review performance, identify problems and deliver change.

Easy access to reported data, together with a variety of visualisation and data-manipulation tools, helps teams use their own national audit data as the backbone of a PDSA cycle or similar local initiative, focusing on one area to bring about improvement. All medical staff are required to take part in quality improvement activities, and ready access to this information removes the need to trawl for further data. It also allows Trusts to ensure that the scarce resources devoted to training in this field focus on the improvement rather than the measurement. This is particularly so for junior doctors, for whom training in QI is most important.

I would conclude that we need to harness and synchronise QA and QI to get the most out of audit. Indeed, the challenges to clinical audit and QI are many, and include how we can introduce more routinely collected data into the programme so that the burden on the service is further reduced and the benefits maximised.

Danny Keenan
Medical Director
Healthcare Quality Improvement Partnership
(with help and contributions from the staff in HQIP)

 

Example of hip fracture audit results
Fig 1. A one-page PDF displaying the hip fracture audit results for a specified hospital for 2017/18, with the key audit themes displayed on the left and the hospital’s results, benchmarked across English Trusts, displayed on the right. Use the NCAB search facility to access benchmarked audit data, searchable by speciality, Trust, hospital or unit.

References
Dixon-Woods, M., Martin, G. (2016) Does quality improvement improve quality? Future Hospital Journal 3(3): 191–4.
Lloyd, R. (2017) Quality Health Care: A Guide to Developing and Using Indicators (2nd edn). Burlington, MA: Jones and Bartlett Publishers.