Continuous quality improvement (CQI) - both a management and a problem-solving methodology - was pioneered in other industries but has inherent appeal in medicine because it uses tools of scientific origin to measure and improve performance.1 Today, implementing a CQI program is expected or required in most health settings. The literature is replete with opinions, anecdotes, and various studies showing how CQI can benefit patients, clinicians, and health care organizations. The Joint Commission on Accreditation of Healthcare Organizations' regulatory initiatives and National Patient Safety Goals, which require CQI monitoring, have been among the most significant change-producing forces improving care.2 Although CQI principles are considerably simpler than those of organic chemistry or pharmacokinetics, implementing them and sustaining improvement somehow challenge many hospital pharmacy programs.
Regardless, many CQI projects' long lead time (3 to 5 years to improvement), incremental results, spotty applicability, taxing requirements, heavy paper burden, and marginal benefits have prompted critics to speak out.3 They ask, "Does CQI as we use it work?" The irony is that, although health care providers value "evidence-based guidelines" when they treat patients, few well-designed studies have evaluated whether CQI improves health care quality effectively.4,5 Often, one sees improvement but cannot verify that the intervention employed caused the improvement. Before-and-after comparisons are easy but risky; background factors can confound outcomes seriously.
Further, budget constraints, competing demands, organizational pressures, and rapidly changing technology can lead managers to give quality assurance low priority. Some organizations (the Institute for Healthcare Improvement and the Leapfrog Group, specifically), frustrated with health care's slow progress toward seemingly clear-cut goals, are examining ways to use proven interventions to make significant gains faster. Somehow, it is necessary to put science back into CQI.
Does CQI Work?
To determine whether CQI works, Alemi et al looked at 92 improvement projects conducted in 32 large health care organizations over 3 years (Table).6 Because some projects were ongoing when data collection ended, the average time to first tangible result was 17 months or longer. Participating organizations met an average of 14 times per project, with a mean meeting duration of 1.5 hours. Most projects were not developed to cut costs, and of those that were, only 6% did. The results were clear: in some cases patient outcomes improved, but the impact was quite variable, and financial impacts were small to nonexistent, even in projects designed to save money.
Therefore, CQI works sometimes, not all the time. CQI works best when:
Sustained over an adequate duration
Conducted at an appropriate pace
Focused on significant problems
CQI as Behavioral Change
It is not the "what," it is the "how" that makes the difference between success and failure when starting a CQI program. Leadership buy-in is essential, but the system must be driven by patient and clinician needs. The most popular models use Shewhart cycles (plan change, do change, study results, act on results). CQI participants apply Shewhart cycles quite differently, based on their training and needs,7 making it difficult to compare projects or organizations. Experienced project coordinators can ensure that the cycle moves briskly and that appropriate help is consulted when necessary. For example, inefficient, poorly planned meetings are a known problem. Often, lengthy but infrequent meetings bog down project participants.6 Instead of meeting monthly for 90 minutes, participants might be more productive and motivated if they met weekly for 20 to 30 minutes. Also, including a nucleus of physicians is essential to sustain change.4
Appreciate Pilot Studies
Often, tangible results are delayed as project participants look at an overwhelming number of patients, samples, or locations and try to anticipate every potential problem. Using pilot studies - think of these as the equivalent of a phase 1 clinical trial - is a good way to narrow the project's focus.5 Whether the project is internal to the department or multidisciplinary, testing the project on 10% or less of the targeted population can identify problems before launching a full-scale assault, and ensure smooth sailing as the project expands.
Some failures occur because participants choose interventions poorly. According to Shojania and Grimshaw, "The decision to administer pills without any understanding of their active ingredients or their mode of action would be completely unsupportable. Yet comparatively unsupportable activities occur routinely in quality improvement research."5 CQI interventions are like medication - they have many shapes, forms, and even dosages. Conceptual clarity in design and implementation is key to success. CQI project planners should select interventions after root cause analysis.1 Many interventions are possible: actions that target very specific problems; guidelines; report cards; critical pathways; clinician or patient education; financial incentives; storyboarding; and others. Using a reminder system to help clinicians remember a preventive step amid competing tasks is sensible; having clinicians repeat disease management training is overkill and unlikely to be effective. Root cause analysis reveals that clinical competence is generally not the problem.
To address complex problems effectively, multiple interventions delivered concurrently are usually necessary; this is why disease and case management systems tend to be the most successful (but still ill-defined) of interventions.5 Ultimately, this requires employee and patient buy-in, which will only be achieved if participants know more than the simple answer to "Why are we doing this?" Managers and leaders need to explain why the project has been developed and what end point they hope to achieve.
For example, medication adherence improves significantly if heart failure patients start on complex regimens promulgated by guidelines before leaving the hospital. Employee interest in the guidelines will vary. Few will relish reading long, technically complex guidelines; a short, 1-page version or flowchart often suffices.
Reporting results is a frequent problem.5 Often, data are recorded slowly or incompletely; clinicians have been known to save information in curious places (inside matchbook covers, on old envelopes, on napkins) and enter it into databases or record it manually in batches later. Understandably, omissions and transcription errors occur. Recording convenience is critical.
When possible, organizations should use existing automated clinical records or carefully crafted software. If automation is new to the organization, all users should understand that automating patient charts will not re-create paper processes. Automation always changes work flow. Once the data are collected, measurement and analysis must be fastidious; many if not most sites will need a trained statistician to help interpret results.
Differences and Similarities
Sometimes, CQI interventions fail even though they are modeled on interventions proven successful elsewhere. Organizations may implement the interventions without considering how their populations may differ from those of the innovating organization, or consider themselves quite atypical and resist using key parts of the intervention, or not apply or deliver the intervention exactly as the developing organization did.
We have rallied around the CQI campfire, believing the theory quite sound. And it is sound, if individuals, groups, and organizations are willing to commit to the CQI process; it is the practice of CQI that can be troublesome.