Research Paper - (2010) Volume 18, Issue 5
2 Postdoctoral Researcher, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, USA
4 Senior Associate, Center for Practice Innovation, American College of Physicians, USA
5 Senior Vice President, Division of Medical Practice, Professionalism and Quality, American College of Physicians, USA
Received date: 28 September 2009; Accepted date: 25 May 2010
Background Small primary care practices may face difficulties in staying abreast of patient safety recommendations and implementing them. Some safety issues, however, may be easily and inexpensively addressed, given the necessary information on what is required.

Aim To assess changes in patient safety measures in small practices and describe simple mechanisms that appear to have facilitated change.

Methods The design uses pre–post bivariate tests to determine the effect of a quality improvement intervention provided by the Center for Practice Innovation (CPI) of the American College of Physicians (ACP) to 34 small internal medicine practices. Compliance with safety measures was reassessed in 30 practices after the intervention. The CPI intervention involved two site visits, a practice assessment, self-selection of clinical, operational and financial focus areas for improvement and ongoing ‘directed guidance’ of the practices in their efforts, including weekly ‘Practice tips’ email alerts. Data used in this study came from the practice assessment form completed by the CPI team, which included 21 safety measures. The Wilcoxon signed-rank test and McNemar’s test were used to compare the practices’ safety compliance before and after the intervention.

Results Many safety measures had high compliance rates at the first site visit; for other safety measures, fewer than half the practices followed the recommended procedures. The intervention was associated with statistically significant positive change on over 70% of the 21 safety issues. The positive effects were most profound in safety measures regarding how a practice managed sharps, hazardous materials, medications and vaccines.

Conclusion This study provides insights into mechanisms that assist practices to make initial steps to improve patient safety and care quality. The study also suggests that with concrete recommendations, small practices can make significant changes in a short period of time and at relatively low cost.
Keywords: general practice, patient safety, primary care
How this fits in with quality in primary care
What do we know?
Primary care settings face patient safety issues, as do other healthcare settings. While some issues are complex to address, best practices in other areas can be complied with easily, given the right information.
What does this paper add?
This paper evaluates the patient safety-related results of a customised quality and practice management improvement intervention undertaken in small American primary care practices. The results can be useful to primary care providers, managed care organisations, regulatory agencies and others interested in primary care safety issues.
Introduction

Most physician office visits in the USA occur in medical practices of fewer than five physicians.[1] Many small practices lack the human, financial and technical resources to make improvements in office management and quality of care.[2–4] However, one area in which improvement is usually not very costly, and where shortfalls may be primarily due to a lack of awareness, is that of basic patient safety tasks and procedures, such as disposal of sharps, sample medication management and refrigeration of vaccines.
A wide range of topics can be considered to be patient safety issues in primary care, including inappropriate prescribing, failure to address the health literacy needs of patients and misdiagnosis or missed diagnoses. Receiving less attention, but nevertheless important, are simple procedural steps that protect patients from accidental punctures, use of expired or impotent medications and other therapeutic or diagnostic misadventures. Wilson, Pringle and Sheikh cite three simple ‘areas for immediate action’ that even ‘overwhelmed’ primary care practices can undertake to make patients safer:
• ensuring that messages are taken in a safe manner through the use of message books
• placing sharps boxes on a shelf, out of the reach of children
• identifying patients who do not attend their warfarin checks so that they can be offered safer alternatives such as aspirin.[5]
Also underscoring the need for greater attention to patient safety issues in primary care in the USA, the Joint Commission recommended National Patient Safety Goals (NPSGs) for ambulatory care settings (as well as for other settings).[6] Among the goals relevant specifically to primary care are:
• Goal 1 – Improve the accuracy of patient identification
• Goal 2 – Improve the effectiveness of communication among caregivers
• Goal 3 – Improve the safety of using medications
• Goal 7 – Reduce the risk of healthcare associated infections
• Goal 8 – Accurately and completely reconcile medications across the continuum of care
• Goal 13 – Encourage patients’ active involvement in their own care as a patient safety strategy.
To meet these goals, primary care practices must implement some basic patient protections, including such activities as proper supply management, storage, labelling, instrument calibration and chart documentation, in addition to more complex efforts like ensuring appropriate training, practising appropriate hand hygiene and reconciling old, current and new medication lists. Some of these procedures protect staff as well as patients.
Several projects have sought to assist primary care practices in quality and practice improvement activities, including the Improving Performance in Practice (IPIP) programme, the TransforMED programme and a regional health network effort in Wisconsin organised by Gundersen Lutheran Medical Center.[7,8] The Royal College of General Practitioners (RCGP) in the UK has introduced the Quality Practice Award, which features a nine-step process that practices may undergo to earn the award. They must meet defined criteria in six modules or areas that focus on patient-centred care, management of illness, records, special groups, becoming a learning organisation and the practice team. The process involves advice and assessment visits by the RCGP.[9]
In 2006, the ACP Center for Practice Innovation (CPI) undertook a practice improvement project specifically designed for small practices. The CPI was created to assist 34 small primary care practices in improving quality and efficiency, with funding support from the Physicians’ Foundation for Health Systems Excellence (PFHSE). The quality improvement team at the CPI collected data on safety practices in 2006 and 2007, in volunteer practices ranging in size from one to six physicians. The aim of this study was to assess change in patient safety measures in these organisations, and to describe the simple mechanisms that appear to have facilitated change.
Methods

Study participants and the intervention
Thirty-four small internal medicine practices from a field of 99 complete applications were invited and agreed to participate in a two-year pilot of practice management (PM) and quality improvement (QI) activities tailored to the primary care setting. Invited practices were selected on the basis of:
1 practice size (to include representation of solo practices and those with up to six clinicians)
2 diversity in patient factors such as ethnicity and disease conditions
3 apparent dedication to making practice improvements and
4 geographic location where clusters were identified among applicants to minimise travel.
Practice location varied, including suburban, urban and rural areas.
The CPI intervention involved two site visits, a three-hour assessment of the practice by the CPI team, and ongoing ‘directed guidance’ of the practices in their efforts to improve self-selected clinical, operational and financial focus areas. This intensive customised support differentiates the CPI project from other initiatives that might uniformly apply preselected interventions to each participant. The CPI support required two staff to dedicate about two hours daily and consisted of helping practices to find existing tools, sometimes customising or developing them for the practice, answering questions and responding to practice needs to facilitate quality and operational improvements. Practices also participated in seven hour-long didactic conference calls on topics of interest to them that were not necessarily related to their safety or clinical performance improvement targets. A detailed description of the intervention can be found in another ACP CPI report.[10]
One piece of the intervention that CPI staff developed in response to the practice assessment was a ‘Practice tips’ email alert. These brief alerts covering a range of useful topics (see Box 1) were sent on a weekly basis to all participants, with an emphasis on remediable patient safety deficits. The selection of these topics was based on findings from the CPI team’s site visits to each practice. The practice tips were intended to alert practices to patient safety risks and provide practices with actionable suggestions for fixing the patient safety problems identified by the CPI team. The alerts presented aggregate data, to protect anonymity while providing practices with evidence that most participants were not performing these safety procedures or tasks in their offices, thus nudging practices to review their own compliance. Box 1 lists the topics of the alerts sent to the practices. Figure 1 shows a sample ‘Practice tips’ email alert on proper sharps disposal.
Data collection
Two members of the CPI team had previous experience in infection control procedures and had led the preparation of a community health centre for ambulatory reaccreditation by the Joint Commission. As a result, the team decided to incorporate some of the Joint Commission patient safety standards and metrics applicable to small practices into the CPI site visit data collection process. Data used in this analysis came from the practice assessment form, filled out by CPI team members upon review of practice operations and clinical records during the three-hour practice assessment on each of the two site visits, which occurred about one year apart. Altogether, 21 measures (listed in Table 1) were assessed. Not all safety measures were assessed each time in every practice due primarily to the time pressures of a one-day site visit. Also, sometimes no opportunity to observe a specific safety standard presented itself. The visit schedules were not extended because the CPI staff sought to minimise the intrusion on practices and the costs associated with making the visits.
Figure 1 Sample ‘Practice tips’ email alert on proper sharps disposal. The alert asked: ‘Do you have safety-type sharps containers? Sharps boxes should be of the straight-drop type and mounted on the wall at a level where even the shortest member of your staff can clearly see the top of the box’, and then presented the aggregate site-visit findings regarding sharps boxes.
Statistical analyses
We calculated descriptive estimates and pre–post intervention analyses of statistical difference. The Wilcoxon signed-rank test and McNemar’s test were used to compare each practice’s performance before and after the intervention, depending on the response levels for each safety measure. We performed the Wilcoxon signed-rank test, a non-parametric alternative to the paired t-test, on safety measures that had three response categories (i.e. yes, sometimes and no). For safety measures that had two response categories (i.e. yes and no), we used McNemar’s test, a non-parametric method for paired nominal data, to assess whether the differences between pre- and post-intervention were significant. All statistical analyses were performed using Stata statistical software, Version 8 (StataCorp, College Station, Texas). Johns Hopkins researchers conducted the analyses reported here as secondary analyses of de-identified data, deemed exempt from review by the Johns Hopkins School of Public Health Institutional Review Board.
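The analyses themselves were run in Stata 8; as a minimal sketch only, the same paired tests can be expressed in Python with SciPy and statsmodels. The pre/post scores and the 2 × 2 table below are invented for illustration and are not study data.

```python
# Minimal sketch of the paired tests described above, on invented data
# (the study itself used Stata 8; these numbers are illustrative only).
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

# Three-level measure (no = 0, sometimes = 1, yes = 2), one pair per practice.
pre = np.array([0, 1, 0, 2, 1, 0, 1, 2, 0, 1])    # hypothetical first-visit scores
post = np.array([1, 2, 1, 2, 2, 1, 1, 2, 2, 2])   # hypothetical second-visit scores
stat, p = wilcoxon(pre, post)                      # Wilcoxon signed-rank test
print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.3f}")

# Two-level measure (yes/no): McNemar's test on the 2x2 table of paired
# outcomes. Rows = pre (no, yes), columns = post (no, yes); only the
# off-diagonal (discordant) cells drive the test.
table = np.array([[3, 12],   # pre = no:  3 stayed no, 12 became yes
                  [1,  9]])  # pre = yes: 1 became no,  9 stayed yes
result = mcnemar(table, exact=True)  # exact binomial form, suited to small samples
print(f"McNemar: statistic = {result.statistic}, p = {result.pvalue:.3f}")
```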
Results

In the first round of site visits, all 34 practices were assessed. Four practices dropped out of the project: two due to a self-perceived lack of time or ability to implement improvements, one to begin a specialty residency, and a fourth that was asked to leave the project due to unresponsiveness. In the second round of site visits, the 30 practices that remained in the project were assessed a second time. As mentioned above, not all safety measures were assessed in each practice during each visit, primarily due to time constraints. Figures 2–5 show the number of practices included in the calculations for each safety measure at the first visit (pre-intervention) and at the second visit (post-intervention). The number of practices in which a measure was assessed in the first site visit ranged from 10 to 34 (29–100% of those possible). The number of practices in which a measure was assessed in the second site visit ranged from 14 to 30 (47–100% of those possible). Table 1 indicates the number of practices for which each measure was assessed in both the pre-intervention and post-intervention periods; the number of practices in which a measure was assessed on both site visits ranged from 9 to 29 (30–97% of possible assessments).
Figures 2 to 5 show the compliance rates with each measure in the pre-intervention period as compared to the post-intervention period for all sites assessed on a measure (in pre-intervention only, post-intervention only or in both time periods). Notably, on many safety measures – such as hand washing, training staff on new equipment, having fire extinguishers present and appropriate storage of medication – a large majority of practices were already compliant at the first site visit. For many other measures, however, fewer than half the practices followed recommended procedures at the first site visit. The figures demonstrate improvement by the second site visit in the percentage of compliant practices for almost all safety measures.
For measures that were assessed in a given site during both time periods, Table 1 confirms that the intervention was associated with statistically significant positive effects on over 70% of the 21 safety issues assessed. The intervention appeared to have the most profound effects in the following safety measures:
1 sample medications are managed appropriately
2 sharps are secured
3 sharps boxes are mounted, locked with safety covers
4 hazardous waste receptacles are clearly labelled
5 hazardous waste materials are appropriately stored
6 medications and vaccines are properly stored
7 a temperature log is maintained for refrigerators
8 vaccine information is documented and
9 vaccine information sheets are provided (P-values all <0.01).
In general, these safety measures assessed how a practice managed sharps or hazardous materials, as well as medications and vaccines. Among the measures demonstrating sizable improvement, seven out of nine were the subject of a ‘Practice tips’ email alert over the course of the CPI project (those listed above as 1, 2, 3, 6, 7, 8 and 9). In addition to practices that showed positive or negative changes after the intervention, some practices showed no change on a given safety measure.
Measures that did not show improvement included ‘good hand-washing techniques practised’, ‘two identifiers used routinely in patient care documentation’, ‘staff are trained and assessed on equipment and procedures’, ‘‘‘clean’’ (unopened) supplies are stored appropriately’, ‘fire extinguishers are present’, ‘had a record of fire extinguisher inspection’ and ‘storage of medication appropriate’. As already indicated, good compliance with several of these measures was observed at the first site visit, leaving little room for improvement. For ‘two identifiers used routinely in patient care documentation’, the P-value approaches statistical significance at the 0.05 level, suggesting that the small sample size may have prevented us from detecting a true effect on that measure. Power calculations were not done a priori because the CPI sought to maximise the number of sites assisted, within the constraints of available funding.
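To illustrate this power constraint with McNemar’s exact test (again using invented counts, not study data): when every discordant practice changes in the same direction, the two-sided exact P-value is 2 × 0.5^b for b discordant pairs, so at least six discordant pairs are needed to reach P < 0.05.

```python
# Invented counts, not study data: with few discordant pairs, McNemar's
# exact test cannot reach p < 0.05 even when every discordant practice
# improved (changed from 'no' pre-intervention to 'yes' post-intervention).
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

for improved in (5, 6):
    # 2x2 paired table: rows = pre (no, yes), columns = post (no, yes);
    # the concordant diagonal cells do not affect the test.
    table = np.array([[4, improved],
                      [0, 10]])
    p = mcnemar(table, exact=True).pvalue  # equals 2 * 0.5**improved
    print(f"{improved} discordant pairs, all improving: exact p = {p:.4f}")
```

With five discordant pairs the exact P-value is 0.0625 at best; with six it is 0.0312.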
Discussion

In a project to improve patient care quality and practice management in 34 small internal medicine practices, the CPI assessed the practices’ basic safety procedures. Many of the practices had not implemented several of the 21 safety measures the CPI team reviewed. In addition to other elements of directed guidance, the CPI sent weekly ‘Practice tips’ email alerts that presented overall compliance data across the 34 practices. The CPI team did not necessarily anticipate ahead of time that the tips would result in much change in processes. However, CPI staff now believe that the tips did influence practices to improve, because they presented aggregated data in a non-judgemental way while supplying logical, achievable short-term recommendations that appealed to physicians’ sense of safety for their staff and patients. Between the first and second rounds of CPI site visits, the practices as a group made statistically significant changes in the proportion assessed as compliant with basic safety measures. Most of the measures with significant improvement over time were the focus of a ‘Practice tips’ email alert. Other guidance provided by the CPI focused primarily on making improvements in clinical performance measures and so was not likely to have been as important to changes in these patient safety measures.
Although the CPI team was somewhat surprised by the initial lack of implementation of basic safety measures, they were impressed by the quick responses of these practices once the deficits were brought to their attention. This study suggests that by providing brief non-judgemental explanations with concrete recommendations for fixing certain discrete patient safety issues, such as proper securing of sharps, professional societies or safety groups can help small practices to make significant changes in a short period of time and at relatively low cost.
Several of the patient safety improvements made by practices in the CPI project are related to the NPSGs for ambulatory care, particularly Goals 1, 3, 7, 8 and 13. With relevance to Goal 1 – ‘Improve the accuracy of patient identification’ – the CPI measured the reliable use of two identifiers on all patient information. A fundamental supposition of Goal 3 – ‘Improve the safety of using medications’ – is that medications and vaccines are stored and labelled properly. Goal 7 – ‘Reduce the risk of healthcare associated infections’ – clearly envisions appropriate hand hygiene and proper sharps disposal, and Goal 8 – ‘Accurately and completely reconcile medications across the continuum of care’ – must be supported by accurate documentation of all medications in the first place. Finally, for Goal 13 – ‘Encourage patients’ active involvement in their own care as a patient safety strategy’ – providing vaccine information sheets is an applicable strategy. While some of the NPSGs have slightly different main focuses compared to what was measured in the CPI project, many of the items measured in the project support the realisation of the NPSG sub-goals. The fact that many practices in this project did not initially dispose of sharps properly or fully document medications indicates that some preliminary steps may be necessary for many small medical offices to begin to achieve the official, higher-order aims of the NPSGs.
Among the limitations of this analysis is the relatively short, one-year period between measurements, which was set by the availability of funding and may not have been enough time for practices to make changes. There may have been some selection bias introduced by the fact that practices volunteered and were by design located near at least a few other practices. In addition, as a quality improvement intervention (as opposed to a research study), this project used an observational design in lieu of a randomised controlled trial. As a result, we cannot definitively ascribe causality to the CPI intervention. Finally, the lack of two measurements for several of the safety items reduced the sample available to assess change, and may make it difficult to reach statistical significance on some items that may have improved in reality. We believe the pattern of the missing data is largely random, however, since it was usually related to time pressures during the site visits.
Despite its limitations, this study is one of the few that have quantified compliance with a range of basic safety measures in a group of small practices. Further, significant effects could be detected despite the small sample sizes, owing to the substantial progress made by medical offices in adhering to recommendations by the second round of site visits. In addition, the study suggests a replicable path to improving patient safety in small practices across the nation.
Conclusion

Small internal medicine practices were able to improve their compliance with basic safety tasks and procedures after receiving actionable advice from the CPI of the American College of Physicians.
The success enjoyed by practices participating in the CPI project provides some insight into mechanisms that can be used to help practices take first steps toward meeting NPSGs. A non-punitive, primarily informational campaign by medical societies or the Joint Commission could help practices address some of the patient safety deficits that they may not even realise they currently have. The success of such an approach would be contingent on providing concrete recommendations for change and having some mechanism in place to monitor changes in compliance over time.
Acknowledgements

We would like to acknowledge the hard work undertaken by the 34 practices that participated in the ACP CPI project.
Funding

This research was supported by funding from the Physicians’ Foundation for Health Systems Excellence and in-kind contributions from the American College of Physicians.
Ethical approval

Ethical approval was not required by the Johns Hopkins School of Public Health Institutional Review Board, as this research was deemed secondary data analysis and therefore not human subjects research. The original project was conducted by the ACP CPI as a quality improvement activity, and therefore was not considered to be research.
Peer review

Not commissioned; externally peer reviewed.
Conflicts of interest

None.