Quality in Primary Care Open Access


Quality Improvement Report - (2013) Volume 21, Issue 2

Frameworks for improvement: clinical audit, the plan–do–study–act cycle and significant event audit

Steve Gillam MD FFPH FRCP FRCGP[*]

Department of Public Health and Primary Care, Institute of Public Health, University of Cambridge, UK

A Niroshan Siriwardena MMedSci PhD FRCGP

Professor of Primary and Prehospital Health Care, Community and Health Research Unit (CaHRU), University of Lincoln, UK

*Corresponding Author:
Steve Gillam
Department of Public Health and Primary Care
Institute of Public Health, University of Cambridge
Robinson Way, Cambridge CB2 2SR, UK.
Email: sjg67@medschl.cam.ac.uk

Received date: 7 February 2013; Accepted date: 21 February 2013


Abstract

This is the first in a series of articles about quality improvement tools and techniques. We explore common frameworks for improvement, including the model for improvement and its application to clinical audit, plan–do–study–act (PDSA) cycles and significant event analysis (SEA), examining the similarities and differences between these and providing examples of each.

Keywords

clinical audit, general practice, primary care, plan–do–study–act cycles, quality improvement, significant event analysis

Different perspectives on quality

Quality, which has been defined as ‘the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge’,[1] may be seen from different stakeholder perspectives. The ‘desired’ outcomes may be subtly different for managers, patients and clinicians. Patients clearly want treatment that works, and place a high priority on how that treatment is delivered. Clinicians focus on effectiveness, and want to provide treatment that works best for each of their patients. Managers are rightly concerned with efficiency, and seek to maximise the population health gain through best use of increasingly limited budgets. The range of different outcomes desired demonstrates the multidimensional nature of quality. The first stage in any attempt to measure quality is therefore to think about what dimensions are important for you.

Evaluating quality

Evaluation has been defined as ‘a process that attempts to determine, as systematically and objectively as possible, the relevance, effectiveness and impact of activities in the light of their objectives, e.g. evaluation of structure, process and outcome, clinical trials, quality of care’. Where do we start when thinking about evaluation of a service in the National Health Service (NHS)? Avedis Donabedian distinguished four elements:[2]

• structure (buildings, staff, equipment)

• process (all that is done to patients)

• outputs (immediate results of medical intervention)

• outcomes (gains in health status).

Thus, for example, evaluation of the new screening algorithms for the early detection of cancer in primary care[3,4] will need to consider:

• the cost of implementing the programme (additional consultations, investigations and referrals)

• the numbers of patients screened, coverage rates for defined age ranges and gender, number and proportion of patients screened who are referred, time to referral from first consultation or number of consultations before referral, numbers of true and false positives and negatives (process; a worked sketch of these measures follows this list)

• number of new cancers identified, treatments performed (outputs)

• cancer incidence, prevalence and mortality rates, together with patient experience (outcomes).
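
These process measures reduce to simple arithmetic on routinely collected counts. The sketch below is purely illustrative: the counts are hypothetical and are not drawn from the screening programmes cited above.

# Illustrative sketch with hypothetical counts for a screening programme;
# none of these figures come from the studies cited in the text.
eligible_population = 4000   # patients in the defined age and gender range
screened = 3100              # patients actually screened
referred = 420               # screened patients referred for investigation
true_positives = 35          # referrals in whom cancer was confirmed
false_positives = referred - true_positives

coverage = screened / eligible_population
referral_proportion = referred / screened
positive_predictive_value = true_positives / referred

print(f"Coverage: {coverage:.1%}")
print(f"Proportion of screened patients referred: {referral_proportion:.1%}")
print(f"Positive predictive value of referral: {positive_predictive_value:.1%}")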

This distinction is helpful because for many interventions it may be difficult to obtain robust data on health outcomes unless large numbers are scrutinised over long periods. For example, when evaluating the quality of hypertension management within a general practice, you may be reliant on intermediate outcome or process measures (the proportion of the appropriate population screened, treated and adequately controlled) as a proxy for health status outcomes. The assumption here is that evidence from larger-scale studies showing that control of hypertension reduces subsequent death rates from heart disease will be reflected in your own practice population’s health experience. There are three main types of quality measure in health care: consumer ratings, clinical performance data, and effects on individual and population health.

The model for improvement

The Institute for Healthcare Improvement’s (www.ihi.org) model for improvement provides the basis for the commonly used quality improvement techniques of clinical audit and plan–do–study–act (PDSA) cycles.[5] It is summarised in three simple questions:

• What are we trying to achieve?

• How will we know if we have improved?

• What changes can we make to improve?

How these questions are applied in practical frameworks for improvement is described in more detail below.

Clinical audit

The clinical audit cycle (see Figure 1) involves measuring performance against one or more predefined criteria and standards, implementing change and then reassessing performance against the standard until that standard is achieved or a new standard is set. The greatest challenge is to make the necessary adjustments and re-evaluate performance; in other words, to complete the cycle.


Figure 1: The clinical audit cycle.

Clinical audit is therefore a systematic process involving the stages outlined below.

Identify the problem or issue

Selecting an audit topic should answer the question ‘What needs to be improved and why?’. This is likely to reflect national or local standards and guidelines where there is definitive evidence about effective clinical practice. The topic should focus on areas where problems have been encountered in practice.

Define criteria and standards

Audit criteria are explicit statements that define what elements of care are being measured (e.g. ‘Patients with asthma should have a care plan’). The standard defines the level of care to be achieved for each criterion (e.g. ‘Care plans have been agreed for over 80% of patients with asthma’). Standards are usually agreed by consensus but may also be based on published evidence (e.g. childhood vaccination rates that confer population herd immunity) or on the results of a previous (local, national or published) audit.
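
As a minimal sketch of how a criterion and standard translate into measurement (the register figures below are hypothetical, chosen only to illustrate the calculation):

# Hypothetical register extract: does the practice meet the standard
# 'care plans agreed for over 80% of patients with asthma'?
patients_with_asthma = 250   # patients on the asthma register (hypothetical)
with_care_plan = 187         # of whom a care plan has been agreed (hypothetical)
standard = 0.80              # agreed standard for this criterion

performance = with_care_plan / patients_with_asthma
met = "met" if performance > standard else "not met"
print(f"Care plans agreed for {performance:.1%} of patients; standard of {standard:.0%} {met}")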

Monitor performance

To ensure that only essential information is collected, details of what is to be measured must be established from the outset. Sample sizes for data collection are often a compromise between the statistical validity of the results and the resources available for data collection (and analysis).
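
One way to make that compromise explicit is the standard normal-approximation formula for estimating a proportion, n = z²p(1 − p)/d². The sketch below is offered only as an illustration; the article itself does not prescribe a particular sample-size method.

import math

def sample_size_for_proportion(expected_p=0.5, margin=0.05, z=1.96):
    # n = z^2 * p * (1 - p) / d^2, the usual normal approximation for a
    # proportion; expected_p = 0.5 is the most conservative assumption,
    # margin is the acceptable absolute error, z = 1.96 gives ~95% confidence.
    return math.ceil(z ** 2 * expected_p * (1 - expected_p) / margin ** 2)

print(sample_size_for_proportion(margin=0.05))   # about 385 records
print(sample_size_for_proportion(margin=0.10))   # about 97 records

In a small practice the second, looser margin may be all that available time allows, which is precisely the compromise described above.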

Compare performance with criteria and standards

This stage identifies divergences between actual results and standards set. Were the standards met and, if not, why not?

Implement change

Once the results of the audit have been discussed, an agreement must be reached about recommendations for change. Using an action plan to record these recommendations is good practice. This should include who has agreed to do what and by when. Each point needs to be well defined, with an individual named as responsible for it, and an agreed timescale for its completion.

Complete the cycle to sustain improvements

After an agreed period, the audit should be repeated. The same strategies for identifying the sample, methods and data analysis should be used to ensure comparability with the original audit. The re-audit should demonstrate that any changes have been implemented and improvements have been made. Further changes may then be required, leading to additional re-audits. An example audit is shown in Box 1.


The PDSA cycle

The PDSA cycle takes audit one stage further (Figure 2) by focusing on the development, testing and implementation of quality improvement.


Figure 2: The plan–do–study–act cycle.

The PDSA cycle involves repeated rapid small-scale tests of change, carried out in sequence (changes tested one after another) or in parallel (different people or groups testing different changes), to see whether and to what extent the changes work, before implementing one or more of these changes on a larger scale. The following stages are involved.

• First, develop a plan and define the objective (plan).

• Second, carry out the plan and collect data (do), then analyse the data and summarise what was learned (study).

• Third, plan the next cycle with necessary modifications (act).

Plan

Develop a plan for the change(s) to be tested or implemented. Make predictions about what will happen and why. Develop a plan to test the change. (Who? What? When? Where? What data need to be collected?)

Do

Carry out the test by implementing the change.

Study

Look at data before and after the change. Usually this involves using run or control charts together with qualitative feedback. Compare the data with your predictions. Reflect on what was learned and summarise this.
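
A common way to operationalise the control-chart element of this step is a p-chart, with limits at the overall proportion plus or minus three standard errors. The sketch below uses hypothetical weekly figures; it does not reproduce the azathioprine data shown in Figures 3 and 4.

import math

# Hypothetical weekly monitoring data: (patients managed correctly, patients reviewed).
weeks = [(18, 30), (20, 30), (19, 30), (26, 30), (27, 30), (28, 30)]

total_correct = sum(n for n, _ in weeks)
total_reviewed = sum(d for _, d in weeks)
p_bar = total_correct / total_reviewed     # centre line of the p-chart

for week, (n, d) in enumerate(weeks, start=1):
    sigma = math.sqrt(p_bar * (1 - p_bar) / d)
    upper = min(1.0, p_bar + 3 * sigma)    # upper control limit
    lower = max(0.0, p_bar - 3 * sigma)    # lower control limit
    p = n / d
    note = " (outside control limits)" if not lower <= p <= upper else ""
    print(f"Week {week}: {p:.2f}, limits {lower:.2f} to {upper:.2f}{note}")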

Act

Plan the next test, determining what modifications should be made. Decide whether to implement one or more successful changes in full. An example is shown in Box 2.


Significant event analysis (SEA)

Significant event analysis is a very different approach to quality improvement that involves the structured investigation of individual episodes which have been identified by a member or members of the health care team as ‘significant’ (see Box 3). SEA improves the quality and safety of patient care by encouraging reflective learning and, where necessary, the implementation of change to minimise recurrence of the events in question.[6] It can improve risk management, enhance patient safety and facilitate the reporting of patient safety incidents by health care practitioners.


SEA has been described as the process by which ‘individual cases, in which there has been a significant occurrence (not necessarily involving an undesirable outcome for the patient), are analysed in a systematic and detailed way to ascertain what can be learnt about the overall quality of care and to indicate changes that might lead to future improvements’.[7] The aim of SEA is to:

• gather and map information to determine what happened

• identify problems with health care delivery

• identify contributory factors and root causes

• agree what needs to change and implement solutions.

Common causes of significant events

There are many types of significant event. Most are multifactorial in origin, and for this reason SEA often explores issues such as:

• information: e.g. potentially important data overlooked on comorbidities (e.g. previous bronchospasm when considering beta blockers), previous drug side effects or allergies, potential interactions

• patient factors: e.g. the doctor failed to check that the patient understood the reasons for treatment, the dosing, timing, and stop and start dates, or knew the possible side effects

• professional factors: poor communication skills, lack of medical knowledge or skills, mistakes due to pressure of time, unnecessary interruptions, stress, etc.

• systems failure: e.g. lack of education, training or supervision, poor identification of roles and responsibilities, lack of detailed guidelines or protocols, lack of audit or regular reviews.

Six steps in SEA

1 Identify and record significant events for analysis and highlight these at a suitable meeting. Enable staff to routinely record significant events using a log book or pro forma.

2 Collect factual information, including written and electronic records, and the thoughts and opinions of those involved in the event. This may include patients or relatives or health care professionals based outside the practice.

3 Meet to discuss and analyse the event(s) with all relevant members of the team. The meeting should be conducted in an open, fair, honest and non-threatening atmosphere. Notes of the meeting should be taken and circulated. Meetings should be held routinely, perhaps as part of monthly team meetings, when all events of interest can be discussed and analysed, allowing all relevant staff to offer their thoughts and suggestions. The person you choose to facilitate a significant event meeting, or to take responsibility for an event analysis, will again depend on team dynamics and staff confidence.

4 Undertake a structured analysis of the event. The focus should be on establishing exactly what happened and why. The main emphasis is on learning from the event and changing behaviours, practices or systems, where appropriate. The purpose of the analysis is to minimise the chances of an event recurring. (On rare occasions it may not be possible to implement change. For example, the likelihood of the event happening again may be very small, or change may be out of your control. If so, clearly document why you have not taken action.)

5 Monitor the progress of actions that are agreed and implemented by the team. For example, if the head receptionist agrees to design and introduce a new protocol for taking telephone messages, progress on this new development should be reported back at a future meeting.

6 Write up the SEA once changes have been agreed. This provides documentary evidence that the event has been dealt with. It is good practice to attach any additional evidence (e.g. a copy of a letter or an amended protocol) to the report. The report should be written up by the individual who led on the event analysis, and should include the following:

• date of event

• date of meeting

• lead investigator

• what happened

• why it happened

• what has been learned

• what has been changed.

It is good practice to keep the report anonymous so that individuals and other organisations cannot be identified.

Purists may wish to seek educational feedback on the SEA once it has been written up. Research has repeatedly shown that around one third of event analyses are unsatisfactory, mainly because the team has failed to understand why the event happened or to take necessary action to prevent recurrence. Sharing the SEA with others, such as a group of GPs or practice managers, provides an opportunity for them to comment on your event analysis and also learn from what you have done (see Box 4).


Closely related to SEA, root cause analysis is a method of problem solving that seeks to identify the underlying causes after an event has occurred.[7]

Clinical audit, PDSA and SEA compared


All three techniques involve gaining a deeper understanding and reflecting on what we are trying to achieve and what changes can be made to improve (see Table 1). SEA is now routinely used in UK general practice as part of the requirement for the revalidation of doctors. Clinical audit is also commonly used, although unfortunately many ‘audits’ do not complete the cycle. PDSA cycles are less well understood by many practitioners, and most have little practical experience of PDSA. Clinical audit and PDSA use a measurement process before and after implementing one or more changes to assess whether improvement has actually occurred. However, this is usually a single measure before and after the change in clinical audit, whereas PDSA involves continuous repeated measurement using statistical process control with run or control charts (see Figures 3 and 4). SEA should ideally lead to changes in policy or practice but does not involve measuring the effects of this. The main difference between clinical audit and PDSA is that audit involves implementation of change after the first measurement followed by a further measurement, whereas PDSA involves continuous measurement during implementation of multiple changes conducted in sequence (i.e. one after the other) or in parallel (i.e. different individuals or groups implementing different changes at the same time).


Figure 3: Run chart showing the effect of quality improvement in the monitoring of azathioprine.


Figure 4: Control chart showing the effect of quality improvement in the monitoring of azathioprine. W1, week 1; W6, week 6; W8, week 8.

Peer Review

Commissioned; not externally peer reviewed.

Conflicts of Interest

None declared.

References