Quality in Primary Care

Clinical Governance in Action - (2005) Volume 13, Issue 3

Evaluation of an interface audit programme

Margaret Chawke RN DipN (Lon) BSc (HSM)

Clinical Governance Facilitator

J Grellier BSc MSc

Clinical Governance Facilitator

Steven Smith MB BS DRCOG MRCGP*

Clinical Adviser

Clinical Governance Resource Group, Dulwich Hospital, London, UK

Corresponding Author:
Dr Steven Smith
Clinical Adviser
Clinical Governance Resource Group
1st Floor West Wing, Dulwich Hospital
East Dulwich Grove, London SE22 8PT, UK
Tel: +44 (0)20 7737 4000, extension 6543
Email: steve.smith@selssp.nhs.uk

Received date: 28 January 2005; Accepted date: 3 May 2005


Abstract

Interface issues are probably the most difficult areas in which quality needs to be improved within the NHS. The issues relate to the way in which different cultures and working practices try to engage with each other. In 2000, the Clinical Audit and Effectiveness Network (CAEN) in southeast London responded to the need for dynamic support in implementing change across the local interfaces by commissioning an interface audit programme. The objective of this programme is to facilitate the implementation of change, in discrete areas of patient care, across the interfaces of local healthcare organisations and their partner organisations. The local clinical governance resource group (CGRG), on behalf of the partner organisations, manages the programme. This study presents the findings of the evaluation of this programme (September 2000–June 2004), which includes five interface audit projects undertaken within the period. These are in the areas of stroke, coronary heart disease, antenatal education, deliberate self-harm and emergency contraception. All projects span two or more organisations, are multidisciplinary and involve primary and secondary healthcare teams. The evaluation involves a retrospective analysis of the projects using quantitative and qualitative methods. Notwithstanding the very small sample size, the findings of the evaluation provide significant insights suggesting that modification of the project approach could enhance the programme's potential as a model for implementing change in this complex and dynamic environment.

Keywords

audit, clinical governance, evaluation, interface audit

Introduction

The importance of developing models and tools for managing change across the boundaries of healthcare organisations is unambiguous given the emphasis on partnership working, seamless care, cross-boundary collaboration, and patient and user involvement promoted in all UK government health policy guidelines in recent years.

The varying size, nature, environment and scope of change problems found within the NHS clearly require an equally diverse range of change solutions. Clinical audit, described as ‘a quality improvement process that seeks to improve patient care and outcomes’, is one such model that is often employed to implement change in discrete areas of patient care.[1] Although this activity has historically been confined largely to single organisations (and often to single departments within them), a growing number of interface audit projects have been reported in the literature in recent years.[2] Despite this, literature specific to the evaluation of interface audit remains scarce. The sole national survey of interface audit activity reported in the literature found that the majority of projects stopped short of implementing change and that audit cycles were incomplete.[3] It concluded that audit was not yet reaching its potential to improve the quality of care.

Given the current emergence of clinical governance from a peripheral position to one of real and continuing priority in the NHS, and the key position of clinical audit within its framework, the need to prove the value of clinical audit grows.[4] At the same time, the importance of evaluation in promoting learning from quality improvement programmes is gaining wide recognition.[5]

Background

The interface audit project approach has been adapted from clinical audit and project management methodology. This model assesses how stakeholder organisations deliver changes as a prerequisite for taking on each project. The model therefore seeks to identify all key players involved in the likely change management early in the project, and to maintain their involvement at all stages. High standards of communication with the key players are seen as central to the success of the project. The project seeks to define an agreed business case early, which describes the project and its potential benefits and risks. This business case is then reviewed and maintained throughout the life of the project.

The projects are managed by the clinical governance resource group (CGRG) on behalf of the participating trusts. Each of the three clinical governance facilitators responsible for managing the projects has been trained in PRINCE2 project management methodology,[6] and each has substantial experience of working with clinical audit. The steering group comprises key stakeholders representing the participating organisations and is responsible for directing the project.

The selection criteria for the clinical audit interface projects are as follows:

• the topic must reflect the local priorities of primary care groups (PCGs)/primary care trusts (PCTs), Health Improvement Modernisation Plans (HIMPs) and national priorities such as National Service Frameworks (NSFs) and National Institute for Clinical Excellence (NICE) guidance

• the topic needs to engage all sides of the interface and have relevant change issues for each

• the topic must be do-able in the sense that:

– there is a clear statement of aims

– there are clear change mechanisms available to the project once the audit data have been reported

• key stakeholders are identified and have signed up to delivering changes.

Method

Documentation pertaining to all five projects was audited using a checklist of 29 assessment points as cited in Principles for Best Practice in Clinical Audit.[1]

The audit aimed to ascertain the degree to which the projects’ audit process had adhered to best practice in this area.

Two online surveys were used to solicit the opinions of key stakeholders in each of the five projects. The first surveyed the opinions of the former members of the projects’ steering groups. The second surveyed the opinions of the project facilitators. Opinions were collected on the success or failure of each project, lessons learnt and factors influencing the project.

The data from the five projects were aggregated for the purpose of evaluating the interface audit programme as a whole. The data were collected, collated and analysed by the CGRG.

Findings and analysis

Documentation audit: best practice in clinical audit

All five projects were assessed using the NICE assessment points for best practice in clinical audit.[1] Two facilitators assessed the project documentation for explicit evidence of criteria having been met at each assessment point. Where evidence was not found, a response of ‘no information’ was recorded. Figure 1 presents the findings for each grouping of criteria, showing the percentage of criteria that were achieved across all projects.

Figure 1: Performance against criteria for best practice in clinical audit
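
For readers interested in how such adherence figures can be derived, the following is a minimal sketch in Python. The project names, criteria and verdicts are invented for illustration; they are not the projects’ actual data.

```python
# Minimal sketch of checklist scoring. Only an explicit 'met' counts as
# achieved, mirroring the rule that absent evidence was recorded as
# 'no information'. All names and verdicts below are invented.
example_results = {
    "stroke": {"clear aims": "met", "users involved": "no information"},
    "coronary heart disease": {"clear aims": "met", "users involved": "met"},
}

def percent_achieved(results, criterion):
    """Percentage of projects with explicit evidence that a criterion was met."""
    verdicts = [proj[criterion] for proj in results.values() if criterion in proj]
    met = sum(1 for v in verdicts if v == "met")
    return 100 * met / len(verdicts)

for criterion in ("clear aims", "users involved"):
    print(f"{criterion}: {percent_achieved(example_results, criterion):.0f}%")
```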

The findings in the area of involving service users reflect developments in this area over the life of the programme. Only two of the projects had involved users. However, these two projects had entirely met the criteria at this assessment point.

Online survey of steering groups’ members

The poor response rate (29%) from members of former steering groups can be largely attributed to turnover of post-holders. In addition, members of a steering group whose project had been completed two years previously felt unable to comment given the pace and volume of change in the intervening period.

Impact of projects on relationships, systems, services and patient care

An online survey of members of the former five steering groups yielded a combination of qualitative and quan-titative data from 17 respondents (out of a possible 58).

The first section asked respondents to indicate their views of the project’s impact on four key outcome areas: relationships, systems, services and patient care. These outcomes were cited by the King’s Fund as influencing perceptions of success within a project.[7] Figure 2 represents the findings in these areas. Most respondents indicated that these areas had ‘somewhat improved’ as a result of the project.

Figure 2: Steering groups’ members’ views of the projects’ impact on relationships, systems, services and patient care

Projects’ impact on skills, knowledge and change in practice

The survey also asked respondents to estimate the influence of the project in three key areas: improved knowledge, improved skills and the sustaining of practice change (see Figure 3). Again, these were cited by the King’s Fund as influencing perceptions of success within a project.[7]

Figure 3: Steering groups’ members’ views of the influence of the project on improved knowledge, improved skills and the sustaining of change in practice

Steering groups’ perceptions of project leadership

Respondents were asked who, in their opinion, was responsible for the project and how well they felt this worked. Although responsibility for projects lay in principle with the individual steering groups, the findings suggest that members of these groups felt this responsibility lay elsewhere.

Only three of the 14 respondents to this question considered project responsibility to be with the steering group. The majority (eight respondents) considered the project to be the responsibility of the project facilitator. Another three respondents felt the project was the responsibility of an individual clinician whose area was identified as requiring the greatest number of changes.

Where the CGRG or the steering group were perceived to have responsibility, respondents commented that this had worked well. Where an individual clinician was perceived to have responsibility, respondents commented that this did not work well. One respondent suggested that a senior manager/clinician should provide leadership to the project.

Steering groups’ members’ perceptions of project success

The majority (11 out of 14) of respondents perceived the project was successful to some degree in terms of ‘meeting objectives’. Three respondents perceived the project had failed using this definition.

Eight respondents perceived the project was a success in terms of ‘learning from experience’. Six respondents perceived the project had failed using this definition.

Steering groups’ members’ opinions on lessons learned

Ten respondents replied to the question ‘what lessons, if any, have you learned from your work with this project?’.

Four respondents felt the large size of the project was an issue. Three respondents felt a shorter time frame for projects was required. Three respondents commented on the changing nature of the project.

Online survey of project facilitators

Project facilitators’ opinions on project leadership

All five respondents perceived responsibility for the project to be with the steering group.

However, all respondents noted that this worked quite well during the initial stages of the projects but was ineffective in driving the implementation of change.

All respondents perceived an executive role to be preferable to the consensus approach in leading the project.

Project facilitators’ opinions on project success

Questions that asked project facilitators whether they considered the project to have been a success elicited the following responses: three projects were judged by their respective facilitators to have been partly successful in ‘meeting objectives’, and one project was judged to have been a complete success in ‘meeting objectives’. One respondent felt unable to comment as the project had already ended when they took up post.

All five projects were judged to have been successful in ‘learning from experience’.

Project facilitators’ opinions on lessons learned

Lessons learned by the project facilitators led them to express the following opinions:

• responsibility for implementing change needs clearer definition

• accountability for providing project leadership should be with a credible chair/executive

• the time frame should be shortened to reduce the project’s exposure to an unstable project environment

• the scope of the change programme should be reduced as project complexity increases

• barriers to change need to be re-explored in greater depth before embarking on a change action plan.

Discussion

What went well with the evaluation method

The audit of project documentation measured adherence to the criteria for best practice in clinical audit and proved relatively straightforward to administer.[1] Much of the documentation needed to judge these criteria was appropriately archived within our systems.

The online surveys were quick to set up and their results easy to analyse.

Limitations of the evaluation method

The evaluation methodology employed in this study had certain aspects that proved somewhat limiting. Because of these factors, it has proved difficult for this methodology to demonstrate objective, clear-cut improvements to patient care.

Overall, the small number of respondents to the survey places limitations on the representativeness of the findings. Nevertheless, the experiences of these individuals and the lessons they identify remain valid and applicable to the further development of interface audit.

A problem arose in retrospectively seeking information about the projects. Two of the projects had ended two years previously and the majority of steering group members had left the organisation. Those remaining felt that the pace and amount of change that had taken place in the interim meant they were unable to say what effect the project had had over and above other initiatives.

Gaps in documentation were found, perhaps unsurprisingly. In particular, these related to a lack of clarity about the rationale underpinning decision-making processes. The audit process has raised the team’s awareness of this important area. The team is working towards a system that better logs issues as they arise and the decisions made to resolve them. This will be done with the PRINCE2 project management tools, by maintaining an issue log and reporting the management of such issues to the project board, and will ensure that a comprehensive audit trail exists documenting the project processes.
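
As a concrete illustration of the kind of issue logging intended, here is a minimal sketch in Python. The record fields, names and dates are hypothetical assumptions for illustration; they are not PRINCE2’s standard forms or our actual records.

```python
# A minimal sketch of an issue log entry. All fields and values below
# are invented for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class IssueLogEntry:
    issue_id: int
    raised_on: date
    raised_by: str
    description: str
    decision: str = ""               # resolution agreed, with its rationale
    reported_to_board: bool = False  # whether the project board was informed

log: list[IssueLogEntry] = []
log.append(IssueLogEntry(
    issue_id=1,
    raised_on=date(2004, 3, 1),
    raised_by="steering group",
    description="key clinician leaving post mid-project",
))

# Recording the decision alongside its rationale preserves the audit
# trail that the documentation audit found to be missing.
log[0].decision = "reassign audit lead (rationale: continuity of data collection)"
log[0].reported_to_board = True
```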

What the evaluation showed

Areas of good practice

• The interface projects were managed in accordance with the NICE criteria for best practice in clinical audit.[1]

• The two projects that involved users wholly met the NICE criteria for involving users in clinical audit.[1]

• The majority of responses from stakeholders indicate that relationships, systems, services and patient care were improved by the interface projects.

• The majority of responses from stakeholders indicate that the interface projects influenced improvements in knowledge, skills and practice change.

• The majority of stakeholder respondents perceived the projects to have been successful in ‘meeting objectives’.

• The continued implementation of PRINCE2 project management methods will ensure that the interface projects are managed consistently and professionally when measured against the NICE criteria.

Areas for improvement

Ownership of the project was uncertain, as reflected in stakeholders’ perceptions of responsibility for the project. Most of the steering group members attributed responsibility for the project to one individual, not to the steering group as a whole. In addition, a bottom-up approach was preferred, which, while giving a good sense of involvement, did not take advantage of board-level input that could have given weight to the project.

A paradox exists between using a consensus partnership model on the one hand and having an individual stakeholder drive the project on the other. Up to now, the interface projects have concentrated on a partnership model to ensure equal buy-in and involvement on both sides of the interface. The evaluation suggests that the balance of responsibility should be shifted more towards a credible individual in an executive role, while maintaining partnership working.

The length of these projects presented a major risk. Clinical personnel changed frequently. New priorities arose at a rapid pace, delaying some implementation of change and rendering parts of the projects out of date. For this reason we have concluded that we must seek ways to reduce timescales substantially. A classic audit methodology used in this setting takes approximately 18 months before resulting in changes to practice. It then takes a further substantial period while reaudit takes place to identify whether changes have actually worked. We are currently piloting a smaller-scale rapid-cycle audit methodology, adapted from Langley et al’s ‘Plan, Do, Study, Act’ (PDSA) improvement model.[8] Using PDSA cycles means the clinical team can identify areas of change that they would like to work on early in the project. This has the advantage that changes are made from the beginning, which maintains enthusiasm. Feedback is quick, and subsequent redesign can therefore take place in a cyclical manner.
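
To make the rapid-cycle structure concrete, the sketch below models a PDSA loop in Python. The change, target, measurement function and numbers are hypothetical placeholders rather than our actual audit tooling.

```python
# A minimal sketch of the rapid-cycle PDSA structure described above.
# All names and numbers here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Change:
    description: str   # the small change the clinical team wants to test
    target: float      # level of compliance the team is aiming for (0-1)

def measure_compliance(cycle: int) -> float:
    """Placeholder for a quick small-sample re-audit; in practice this
    would sample a handful of case notes rather than run a full audit."""
    return min(0.60 + 0.10 * cycle, 1.0)  # dummy improving trend

def run_pdsa(change: Change, max_cycles: int = 4) -> None:
    for cycle in range(1, max_cycles + 1):
        # Plan: state the change being tested and the expected performance.
        print(f"Cycle {cycle}: testing '{change.description}'")
        # Do: the team applies the change on a small scale (not modelled here).
        # Study: re-measure quickly instead of waiting for a full re-audit.
        observed = measure_compliance(cycle)
        print(f"  observed compliance {observed:.0%} vs target {change.target:.0%}")
        # Act: adopt, adapt or abandon, feeding the decision into the next cycle.
        if observed >= change.target:
            print("  target met - adopt the change")
            return
        print("  target not met - adapt the change and run another cycle")

run_pdsa(Change("discharge summary sent to the GP within 48 hours", 0.80))
```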

Despite its limitations, this evaluation has proved a catalyst for us to rethink our strategies in delivering effective interface projects that can achieve palpable changes to patient experiences. We are now in the process of implementing these changes to our working models, and plan to repeat the evaluation after the next group of projects has been completed.

Conflicts of Interest

None.

References