Quality in Primary Care Open Access


Quality Improvement Report - (2014) Volume 22, Issue 2

Evaluating improvement

A Niroshan Siriwardena MMedSci PhD FRCGP*

Professor of Primary and Prehospital Health Care

Community and Health Research Unit (CaHRU), Faculty of Health and Social Sciences, University of Lincoln, UK

Steve Gillam MD FFPH FRCP FRCGP

Department of Public Health and Primary Care, Institute of Public Health, University of Cambridge, UK

Corresponding Author:
Professor A. Niroshan Siriwardena
Community and Health Research Unit (CaHRU)
School of Health and Social Care, College of Social Science
University of Lincoln, Brayford Pool, Lincoln LN6 7TS, UK
Email: nsiriwardena@lincoln.ac.uk

Received date: 16 February 2014; Accepted date: 17 February 2014

Abstract

Evaluating quality improvement interventions requires a variety of methods. These range from quantitative methods, such as randomised controlled trials, to quasi-experimental (controlled before-and-after and interrupted time series) and uncontrolled before-and-after studies, including clinical audits, to determine whether improvement interventions have had an effect. Qualitative methods are often also used to understand how or why an intervention was successful and which components of a complex or multifaceted intervention were most effective. Finally, mixed methods designs such as action research or case study methods are widely used to design and evaluate improvement interventions.

Keywords

case study methods; evaluation; experimental methods; general practice; primary care; qualitative studies; quality improvement

Introduction

A range of methods has been used to evaluate quality improvement interventions. These vary in the rigour of the methods used and in their ability to attribute improvement to the intervention in question. Designs range from randomised controlled trials, where attribution is clearer, to other types of experimental method including quasi-experimental designs, such as non-randomised control group (sometimes called controlled before-and-after) or interrupted time series methods, to uncontrolled before-and-after studies (including clinical audits), where attribution is less certain (Figure 1).[1]

Figure 1: Experimental studies used to evaluate improvement interventions (adapted from Ukuomunne et al[4])

Improvement interventions are often complex (that is, multiple rather than single) and pragmatic, so that ‘real-world’ designs are called for, involving evaluation of complex interventions. Improvement often involves a series of interventions including education (of professionals and/or patients), reminders (to professionals and/or patients), audit and feedback, or other measures which vary in content, intensity or timing between different intervention sites, so that it is not always clear which components in the so-called ‘black box’ of the intervention are effective.[2]

In order to understand how or why an intervention works, it is often necessary to use methods such as surveys or qualitative interviews, focus groups, documentary (textual) analysis, observational or ethnographic methods. It may also be necessary to combine quantitative and qualitative methods, for example with case study methods, or to work with participants to design the evaluation, for example using action research methods. Quality improvement methods themselves can also be used to evaluate improvement, which adds to the complexities of improvement evaluations.[3]

Designing evaluations

A starting point for designing an evaluation is the logic model. Logic models can also be used to design improvement interventions by defining the population and problem that the intervention is aimed at, specifying inputs (resources provided for planning, implementation and evaluation), outputs (healthcare processes implemented and the population actually reached) and longer term outcomes, measured in terms of health and wider benefits or harms, whether intended or incidental and in the short, medium or long term.[3]

In an evaluation logic model, we can add to this by specifying the evidence or data to be collected and the method that will be used to analyse the data. For example, the logic model for an evaluation of a national quality improvement collaborative designed to improve care for acute myocardial infarction and stroke in ambulance services is shown in Figure 2.[5]

Figure 2: Ambulance Services Cardiovascular Quality Initiative (ASCQI): evaluation logic model[5]

The figure shows that we collected quantitative data, survey data (pre- and post-intervention) and qualitative data from observations and meetings, and analysed these using a mixture of time series, qualitative analysis, pattern matching to link time series and qualitative findings, and comparison of different sites (cross-case synthesis) to develop an explanation of what happened, as well as why and how this came about as a result of the collaborative.
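As an illustration only, the elements of an evaluation logic model like this one can be captured in a simple data structure. The sketch below is in Python; the class, field names and example entries are assumptions for illustration and are not drawn from the ASCQI documentation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """Illustrative structure for an evaluation logic model (field names are assumed)."""
    population: str                                     # who the intervention is aimed at
    problem: str                                        # the quality gap being addressed
    inputs: List[str] = field(default_factory=list)     # resources for planning, implementation and evaluation
    outputs: List[str] = field(default_factory=list)    # processes implemented and population actually reached
    outcomes: List[str] = field(default_factory=list)   # short-, medium- and long-term benefits or harms
    evidence: List[str] = field(default_factory=list)   # data to be collected for the evaluation
    analysis: List[str] = field(default_factory=list)   # methods used to analyse the data

# Hypothetical instance loosely modelled on the collaborative described above
example = LogicModel(
    population="Patients with acute myocardial infarction or stroke attended by ambulance services",
    problem="Variation in delivery of recommended prehospital care",
    inputs=["collaborative facilitation", "staff time", "data collection"],
    outputs=["care processes delivered", "services and patients reached"],
    outcomes=["improved care processes", "better patient outcomes"],
    evidence=["routine quantitative data", "pre- and post-intervention surveys", "observation and meeting records"],
    analysis=["interrupted time series", "qualitative analysis", "pattern matching", "cross-case synthesis"],
)
print(example.analysis)
```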

We describe the individual methods used to determine effect sizes of improvement interventions and to understand how or why an intervention was successful or which components of a complex multifaceted intervention were most effective.

Randomised controlled trials

Because improvement interventions usually involve the education of healthcare staff together with other multiple components, the most common type of randomised controlled trial (RCT) used is the cluster randomised controlled trial (CRCT). CRCTs randomise practitioners or groups of practitioners (in a practice, organisation or area), rather than individual patients, to an intervention or control group.

CRCTs are used because educational interventions for professionals cannot be switched on and off with different patients, i.e. professionals are not able to implement their learning with one patient randomised to the intervention while forgetting what they have learnt with another patient allocated to a control group.

The unit of analysis in CRCTs can be at the level of the unit of randomisation or at the level of the patient. Although many design flaws of RCTs can also apply to CRCTs (e.g. allocation bias, volunteer bias), there are additional features that should be considered in CRCTs.

These include the potential correlation of outcomes between patients in clusters (termed the intracluster correlation), which occurs because these patients tend to be more similar to each other than to a randomly selected patient. There is an additional risk of patients in control clusters receiving the intervention. This can occur because professionals in the intervention arm move to the control cluster (i.e. switch organisations or locations) or because those in the control arm learn about the intervention from colleagues in the intervention arm, an occurrence termed contamination.
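Intracluster correlation also inflates the sample size a CRCT needs relative to individual randomisation, conventionally expressed as the design effect DEFF = 1 + (m − 1) × ICC, where m is the average cluster size. The sketch below uses illustrative numbers only; the function name and values are assumptions, not taken from the article.

```python
def design_effect(cluster_size: float, icc: float) -> float:
    """Design effect for a cluster randomised trial: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

# Illustrative values: 20 patients per practice, intracluster correlation of 0.05
m, icc = 20, 0.05
deff = design_effect(m, icc)          # 1.95
n_individual = 300                    # sample size if patients were randomised individually
n_cluster = n_individual * deff       # about 585 patients needed under cluster randomisation
print(f"Design effect: {deff:.2f}, inflated sample size: {n_cluster:.0f}")
```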

An example of a CRCT for an improvement intervention is shown in Box 1. In this example, the unit of randomisation and the unit of analysis were both the practice.

Box 1: Example of a cluster randomised controlled trial of an improvement intervention

Before and after studies

Single group before-and-after (or pre–post intervention) studies without a control group, sometimes termed pre-experimental studies, are often used in improvement studies. An example is shown in Box 2.

Box 2: Example of an uncontrolled before-and-after study

Pre-experimental designs suffer from significant and often irremediable flaws. It may be impossible to determine whether an improvement or other change in outcome is due to the intervention itself or to a confounding or alternative explanation, such as an external factor or a natural change over time, referred to as a secular trend. Outcomes may also be altered because participants change their behaviour as a result of being observed (the Hawthorne effect) or because of regression to the mean, where outlying values tend to move back towards the mean. However, such studies may be useful for developing an improvement intervention prior to more rigorous testing.
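A small simulation can make regression to the mean concrete: if sites are selected for attention because of poor baseline measurements, their follow-up measurements tend to improve even when nothing has changed. The sketch below uses purely illustrative numbers and hypothetical variable names.

```python
import random

random.seed(42)

def measure(true_quality: float, noise: float = 10.0) -> float:
    """A single noisy measurement of a practice's underlying quality score."""
    return true_quality + random.gauss(0, noise)

# 1000 practices, all with the same underlying quality (no real differences, no intervention)
practices = [60.0] * 1000
baseline = [measure(q) for q in practices]

# Select the 'worst' practices at baseline, as an audit might
selected = [i for i, score in enumerate(baseline) if score < 50]

# Re-measure the selected practices with no intervention at all
follow_up = [measure(practices[i]) for i in selected]

mean_baseline = sum(baseline[i] for i in selected) / len(selected)
mean_follow_up = sum(follow_up) / len(selected)
print(f"Selected practices: baseline mean {mean_baseline:.1f}, follow-up mean {mean_follow_up:.1f}")
# The follow-up mean moves back towards 60 purely through regression to the mean
```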

Quasi-experimental studies

Quasi-experimental trials are more robust than pre-experimental studies, but less so than randomised controlled trials. There are two main types of quasi-experimental study: the non-randomised controlled before-and-after study and the (interrupted) time series study. In the controlled before-and-after design, an intervention is administered to a study group and compared with a control group who continue as usual. An example is shown in Box 3. Confounding may be due to external influences on outcomes occurring between the pre- and post-intervention phases. Potential sources of bias include selection bias from the non-random selection of intervention and control groups or areas, leading to baseline imbalance in outcomes or other differences between the two groups. Regression to the mean and differences in secular trends between groups may also occur in such studies.

Box 3: Example of a controlled before-and-after study
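Controlled before-and-after data are commonly summarised with a difference-in-differences estimate, which subtracts the change seen in the control group from the change seen in the intervention group to allow for secular trends. A minimal sketch with illustrative figures (not taken from the article), which assumes the two groups would otherwise have followed parallel trends:

```python
# Illustrative mean performance scores (e.g. % of patients receiving recommended care)
intervention_pre, intervention_post = 62.0, 78.0
control_pre, control_post = 60.0, 68.0

change_intervention = intervention_post - intervention_pre   # 16 percentage points
change_control = control_post - control_pre                  # 8 percentage points (secular trend)

# Difference-in-differences: the part of the improvement attributable to the intervention
did = change_intervention - change_control                    # 8 percentage points
print(f"Difference-in-differences estimate: {did:.1f} percentage points")
```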

The interrupted time series design looks at data for the outcome of interest for a period before, during and after the intervention, and therefore takes secular trends into account. However, this design can be affected by loss to follow-up (or attrition), Hawthorne effects, or contamination. An example is shown in Box 4.

Box 4: Example of an interrupted time series study
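Interrupted time series data are often analysed with segmented regression, estimating a change in level and a change in slope at the point the intervention is introduced. The sketch below, using simulated monthly data and the statsmodels library, is illustrative only; a real analysis would also need to consider autocorrelation and seasonality.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative monthly data: 24 months, intervention introduced at month 12
months = np.arange(24)
df = pd.DataFrame({
    "time": months,
    "intervention": (months >= 12).astype(int),    # 0 before, 1 after the intervention
    "time_after": np.clip(months - 12, 0, None),   # months elapsed since the intervention
})

# Simulated outcome: baseline trend plus a step change and slope change after month 12
rng = np.random.default_rng(0)
df["outcome"] = 50 + 0.5 * df["time"] + 5 * df["intervention"] + 1.0 * df["time_after"] + rng.normal(0, 2, 24)

# Segmented regression: 'intervention' estimates the level change, 'time_after' the slope change
model = smf.ols("outcome ~ time + intervention + time_after", data=df).fit()
print(model.params)
```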

Qualitative methods

Although experimental methods can show the extent of any change resulting from an improvement initiative, they cannot explain why or how the change occurred without using qualitative methods (Box 5). Qualitative methods can take the form of interviews (of patients or practitioners or both), focus groups and observations, including ethnographic methods, and these can provide in-depth information about how and why an improvement intervention might be working.

Box 5: Example of qualitative methods used to evaluate an improvement intervention

Action research, case study and mixed methods

Evaluations of improvement often involve mixed methods, combining quantitative and qualitative methods to determine both the effect size and the determinants of an improvement intervention. Action research studies involve participants to a greater or lesser extent in the conception, design and evaluation of an intervention, as well as in evaluating its effects.

Case study methods may be based on a single case or multiple cases.[11] They combine methods to develop an explanatory model for why an intervention might work in some cases and not in others. For example, in the Ambulance Services Cardiovascular Quality Initiative (Figure 2 and Box 6), we combined interrupted time series and multiple case study methods, matching the patterns of change in ambulance services with a detailed analysis of changes within each service to develop an explanation of what led to differences in improvement.

Box 6: Example of a mixed methods (case study) evaluation: the Ambulance Services Cardiovascular Quality Initiative (ASCQI)

Peer Review

Commissioned; not externally peer reviewed.

Conflicts of Interest

None declared.

References