Editorial - (2009) Volume 17, Issue 3
Foundation Professor of Primary Care, School of Health and Social Care, University of Lincoln, UK
Quality improvement initiatives are a ubiquitous feature of modern healthcare systems because of actual and perceived gaps in the quality of healthcare delivery.[1,2] However, such initiatives are often not evaluated at all or, when an evaluation is conducted, it is done poorly.[3]
Quality improvement methods are increasingly being used to aid the diffusion of innovations in health and can be used as a research tool to model and design complex healthcare interventions.[4] However, as well as being components of quality improvement programmes, they can sometimes be a useful adjunct to other, more traditional evaluation methods, thus serving a dual role.
Evaluation is often undertaken to determine the quality of care being provided by an individual, team or service where quality is taken to mean the effectiveness, efficiency, safety or patient experience of that care.[1] Evaluation is also undertaken to ensure that the aims of care are being met, to provide information for service users, commissioners, healthcare providers or other stakeholders about the quality of services being provided, and finally to establish the basis for future improvements. Quality improvement research is applied research involving evaluation of quality improvement initiatives which is aimed at informing policy and practice.[5] Current guidelines for reporting quality improvement include ‘descriptions of the instruments and procedures (qualitative, quantitative or mixed) used to assess the effectiveness of implementation, the contributions of intervention components and context to effectiveness of the intervention and the impact on primary and secondary outcomes’.[6]
A useful starting point for an evaluation is a logic model, which sets out the clinical population and problem that the healthcare intervention is aimed at; the inputs (the resources provided for planning, implementation and evaluation); the outputs (the healthcare processes implemented and the population that is actually reached); and the longer-term outcomes, measured in terms of health and wider benefits or harms, whether intended or incidental, in the short, medium or long term (see Figure 1).[7]
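By way of a rough sketch only (the service and every element below are hypothetical), the components of a logic model can first be written down as a simple structured record, to which measures can later be attached:

```python
# Hypothetical logic model for a smoking-cessation service, written as a plain
# structured record so that measures can later be attached to each element.
logic_model = {
    "population_and_problem": "Adult smokers registered with the practice",
    "inputs": ["Trained cessation adviser", "Clinic room two sessions/week",
               "Pharmacotherapy budget"],
    "outputs": ["Number of advice sessions delivered", "Proportion of smokers reached"],
    "outcomes": {
        "short_term": "Quit attempts at 4 weeks",
        "medium_term": "Sustained quitters at 12 months",
        "long_term": "Reduction in smoking-related admissions",
    },
}

for element, content in logic_model.items():
    print(f"{element}: {content}")
```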
A logic model can be expanded, either as a whole or in specific areas, to form a ‘cause and effect’ (sometimes called a fishbone or Ishikawa) diagram (see Figure 2). The central line, representing the patient pathway, is affected by patients themselves, but also by the other inputs and outputs (processes) as patients travel through the healthcare system being evaluated.[8]
Traditional evaluation methods look at the structure, processes (outputs) or outcomes of care using various qualitative or quantitative methods (see Box 1).[9]
However, a number of quality improvement methods can also be used for evaluation, and these overlap considerably with traditional evaluative techniques (Box 2). These methods have the potential to enable a better understanding of the processes of care and, importantly, to shed light on how to improve upon these.
Clinical audit, which is the ‘systematic, critical analysis of the quality of medical care, including the procedures used for diagnosis and treatment, the use of resources and the resulting outcome for the patient’[10] builds evaluation into the process. It involves measurement of care (‘how are we doing?’) against established criteria and standards (‘what should we be doing?’) through which performance and changes in performance can be measured (‘have the changes we have made led to improvement?’). Audit can be, and has been, used as an evaluation method, even in randomised studies.
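The arithmetic of an audit cycle is straightforward and can be sketched as follows; the criterion, the 90% standard and the counts below are hypothetical and purely illustrative:

```python
# Hypothetical audit of a single criterion, e.g. "diabetic patients with an
# HbA1c recorded in the last 12 months", against an agreed standard of 90%.
STANDARD = 0.90  # 'what should we be doing?'

def audit_cycle(name: str, met: int, total: int) -> float:
    """Report performance for one audit cycle ('how are we doing?')."""
    performance = met / total
    verdict = "meets" if performance >= STANDARD else "falls short of"
    print(f"{name}: {met}/{total} = {performance:.1%} "
          f"({verdict} the {STANDARD:.0%} standard)")
    return performance

# First cycle, change implemented, then re-audit
# ('have the changes we have made led to improvement?').
first = audit_cycle("Cycle 1", met=132, total=180)
second = audit_cycle("Cycle 2", met=167, total=180)
print(f"Absolute improvement: {second - first:.1%}")
```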
Significant event audit is another technique that is frequently used to evaluate care, particularly care that is considered to fall below standards or that is outstandingly good.[11] It is a powerful tool for evaluating healthcare processes by attempting to understand the detailed factors that led to care being outside the norm, but it can also help improve communication, team building and quality.[12]
Plan, do, study, act (PDSA) cycles are another means of investigating care processes while rapidly implementing evidence-based or common-sense changes to processes of care, enabling changes to be spread more easily and effectively.[13] The third stage of the PDSA cycle involves studying the effect of a change using numerical or qualitative data – even with small-scale changes, the effect over time on processes of care can be measured and analysed using statistical process control techniques. The PDSA model is a useful means of evaluating while introducing rapid change to healthcare processes.[14]
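As an illustrative sketch only (the measure and the weekly figures are hypothetical), a simple run-chart ‘shift’ rule is one way the study stage might test whether a small change has altered the process:

```python
from statistics import median

# Hypothetical weekly process measure (e.g. % of referrals triaged within 48 h):
# eight weeks before and eight weeks after a small PDSA change.
baseline = [62, 58, 65, 60, 63, 59, 61, 64]
post_change = [66, 70, 68, 72, 69, 71, 74, 73]

centre = median(baseline)  # centre line taken from the baseline period

# Run-chart 'shift' rule: six or more consecutive post-change points on the
# same side of the baseline median suggest a non-random change.
run = 0
longest_run = 0
for value in post_change:
    run = run + 1 if value > centre else 0
    longest_run = max(longest_run, run)

print(f"Baseline median: {centre}")
print(f"Longest run above the median after the change: {longest_run}")
if longest_run >= 6:
    print("Signal: the change is associated with a shift in the process.")
else:
    print("No signal yet: continue the PDSA cycles and keep measuring.")
```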
Focus groups and individual interviews are important traditional techniques for gathering data about patients’ and staff members’ experiences of services. An important quality improvement tool, which is a development from this, is the ‘discovery interview’.[15] This narrative technique involves listening to the stories of patients and carers about the care that they have received in order to understand experiences from a user perspective. Other narrative techniques for quality improvement research and evaluation include naturalistic story gathering during a project, collective sense-making of a complete project by a participant observer, and the organisational case study.[5]
Root cause analysis is a specific type of significant event analysis which aims to find explanations for adverse or untoward events through the systematic review of written and oral evidence to establish underlying causes.[16] The analysis involves defining the problem, gathering evidence, identifying possible root causes and the underlying reasons for these and then deciding which causes are amenable to change. This leads to recommendations, the effect of which can be further evaluated.[17]
The Pareto (or 80/20) principle (see Figure 3) describes how a relatively small number of key causes will lead to most of the important outcomes; for example, 80% of outputs, outcomes or harms are due to 20% of inputs or causes. This can help to distinguish the most important causes.[18]
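A Pareto analysis simply ranks causes and accumulates their contribution; the sketch below, using hypothetical counts of reasons recorded for delayed discharges, illustrates how the ‘vital few’ causes emerge:

```python
# Hypothetical counts of causes contributing to an outcome
# (e.g. reasons recorded for delayed discharges over a quarter).
causes = {
    "awaiting social care package": 48,
    "awaiting transport": 21,
    "medication not ready": 14,
    "awaiting specialist review": 9,
    "paperwork incomplete": 5,
    "other": 3,
}

total = sum(causes.values())
cumulative = 0
print(f"{'Cause':<32} {'n':>4}  {'cum %':>6}")
for cause, n in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += n
    share = cumulative / total
    flag = "  <- 'vital few'" if share <= 0.80 else ""
    print(f"{cause:<32} {n:>4}  {share:6.1%}{flag}")
```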
Process mapping can describe the patient journey through the system of care, and even complex pathways can be visualised using spaghetti diagrams or ‘swim lane’ diagrams (see Figure 4) to separate processes into different job roles or team activities.
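A process map need not be graphical to be useful; as a rough illustration (the pathway and roles below are hypothetical), the same information can be held as an ordered list of steps and grouped into ‘lanes’ by role:

```python
# Hypothetical referral pathway expressed as ordered (role, step) pairs;
# grouping the steps by role gives a text-only approximation of a swim-lane view.
pathway = [
    ("GP", "Assess patient and decide to refer"),
    ("GP", "Dictate referral letter"),
    ("Practice administrator", "Type and send referral"),
    ("Hospital booking team", "Triage referral and allocate clinic slot"),
    ("Consultant", "See patient in outpatient clinic"),
    ("Consultant", "Write back to GP with management plan"),
]

lanes = {}
for order, (role, step) in enumerate(pathway, start=1):
    lanes.setdefault(role, []).append((order, step))

for role, steps in lanes.items():
    print(f"{role}:")
    for order, step in steps:
        print(f"  step {order}: {step}")
```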
Components of a process which are critical to quality (CTQ) can be represented as a CTQ tree (see Figure 5). Such evaluations can determine whether the right treatment is given by the right person at the right time and place.[19]
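As a hypothetical illustration, a CTQ tree can be expressed as a simple hierarchy running from a broad patient need, through its drivers, to measurable requirements:

```python
# Hypothetical CTQ tree: a broad patient need broken down into drivers,
# each with measurable 'critical to quality' requirements.
ctq_tree = {
    "need": "Timely, safe outpatient follow-up",
    "drivers": [
        {"driver": "Right time",
         "ctq": ["Appointment within 6 weeks of referral",
                 "Less than 5% of clinics cancelled"]},
        {"driver": "Right person",
         "ctq": ["Seen by a clinician with the relevant specialty interest"]},
        {"driver": "Right place",
         "ctq": ["Offered a clinic within 30 minutes' travel"]},
    ],
}

print(ctq_tree["need"])
for branch in ctq_tree["drivers"]:
    print(f"  {branch['driver']}")
    for requirement in branch["ctq"]:
        print(f"    - {requirement}")
```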
Another important aspect of evaluation is the human factors involved in change.[20] Ownership of change is particularly important for healthcare professionals, such as doctors and nurses, who, at the front line of care, have the power to promote or subvert change. This idea, the inverted pyramid of control,[21] has been applied to health care to emphasise the importance of clinical leadership.[22] An understanding of internal strengths and challenges (weaknesses), as well as external opportunities and threats, together with individual and group drivers of, and barriers to, change is critical to successful health services; this approach has its basis in Lewin’s ‘force-field theory’.[23]
Comparing and benchmarking individual or organisational performance using statistical process control can help to identify differences or gaps in performance,[24] enabling ‘special causes’ to be highlighted and explanations to be sought, with a view to changing practice to improve performance (Figure 6).
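The sketch below illustrates the benchmarking calculation with hypothetical data for six practices: each practice’s achievement of an indicator is compared against three-sigma limits around the overall proportion, and points falling outside the limits are flagged as possible ‘special causes’:

```python
from math import sqrt

# Hypothetical achievement of a quality indicator in six practices:
# (practice, number achieving the indicator, number eligible).
practices = [
    ("A", 88, 100), ("B", 154, 180), ("C", 60, 95),
    ("D", 210, 240), ("E", 41, 70), ("F", 120, 130),
]

achieved = sum(k for _, k, _ in practices)
eligible = sum(n for _, _, n in practices)
p_bar = achieved / eligible  # overall (common-cause) proportion

for name, k, n in practices:
    p = k / n
    sigma = sqrt(p_bar * (1 - p_bar) / n)  # binomial standard error at this denominator
    lower, upper = p_bar - 3 * sigma, p_bar + 3 * sigma
    flag = "special cause?" if p < lower or p > upper else "common cause"
    print(f"Practice {name}: {p:.1%} (limits {lower:.1%} to {upper:.1%}) -> {flag}")
```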
Statistical process control charts plotted against time can also show where improvements have occurred in response to planned interventions,[25] and feedback using this technique as part of ongoing evaluation can contribute to improvement.[26,27]
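An individuals (XmR) chart is one common form of such a chart; the sketch below, using hypothetical monthly waiting-time data and the conventional 2.66 limit factor, shows how points falling beyond a control limit after a planned intervention signal a real change:

```python
from statistics import mean

# Hypothetical monthly measure (e.g. mean days from referral to appointment):
# twelve baseline months followed by six months after a planned intervention.
baseline = [34, 31, 36, 33, 35, 32, 37, 34, 33, 36, 35, 34]
post = [30, 28, 27, 25, 26, 24]

centre = mean(baseline)
moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
mr_bar = mean(moving_ranges)
# 2.66 is the conventional constant for individuals (XmR) control limits.
lcl, ucl = centre - 2.66 * mr_bar, centre + 2.66 * mr_bar

print(f"Centre line {centre:.1f}, control limits {lcl:.1f} to {ucl:.1f}")
for month, value in enumerate(post, start=len(baseline) + 1):
    signal = ("below the lower limit: improvement signal"
              if value < lcl else "within limits")
    print(f"Month {month}: {value} ({signal})")
```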
Larger-scale or more robust evaluations may require more complex techniques, such as quasi-experimental methods (including time-series or non-randomised control group designs), as well as cost analysis.[28,29]
Quality improvement methods, despite their increasing application to health services,[30] have not been widely considered or used as part of healthcare evaluation but could provide a useful addition to the evaluative techniques that are currently in use.