Comparative Effectiveness Research: Is "Synchronicity" a Valid Outcome?
Efforts to compare strategies for healthcare delivery must evaluate effectiveness in terms of the outcomes that we care about most.
In our highly fragmented healthcare system, patients deal with untold complexity when managing their chronic conditions. Patients frequently take multiple medications at several different times during the day. Prescriptions for these drugs often are written by many different prescribers, and patients may fill these prescriptions at different pharmacies, often on different days.1 This complexity of care contributes to nonadherence to prescribed therapy2 and to the inefficiency and high costs of care, and may meaningfully affect health outcomes. As a result, efforts to improve the quality of care and reduce healthcare costs must account for complexity, and we must strive to create simpler systems to streamline care and to make it easier for patients to adhere to therapy.
But first we need to understand the problem. In this issue of The American Journal of Pharmacy Benefits, Nordstrom and colleagues compared the levels of synchronicity of care for 2 different preparations of erythropoiesis-stimulating agents (ESAs) and the chemotherapeutic agents that cause the anemia that these therapies ameliorate.3 The authors found that most ESAs are administered on the same visit as chemotherapy, but that one agent (darbepoetin alfa) was more likely to be synchronized with chemotherapy than the other (epoetin alfa). The authors suggest that this synchronicity of therapy offers potential benefits in terms of patient convenience and healthcare costs.
Nordstrom and colleagues’ study is of particular interest because it is, in essence, a novel type of comparative effectiveness analysis. Comparative effectiveness research is intended to provide a better understanding of how different treatment regimens affect outcomes by comparing their impacts on health benefits (eg, length or quality of life) and safety. However, comparative effectiveness studies also should evaluate different models of care, because how care is delivered may be just as important as what is delivered. This factor is especially important because of substantial deficits in healthcare quality and the massive variability in the care that patients receive in the United States. As a result, we must look to comparative effectiveness evaluations to compare how different healthcare systems, policies, and structures influence outcomes.
In the specific case of cancer, we need to better understand how to coordinate complex care, because we must treat the underlying disease while also managing a myriad of symptoms and side effects. We also need to move toward a shared understanding about reasonable goals, expectations for therapy, and futility, so that we do not needlessly administer expensive care that can do little to improve outcomes. Comparative effectiveness studies that compare different approaches to cancer management will unquestionably move us in this direction.
But comparative effectiveness studies, regardless of whether they are evaluating specific therapies or their mode of delivery, are of value only when they provide truly important guidance about outcomes. In theory, improving the synchronization of complex treatment regimens sounds appealing, especially given the existing literature documenting the relationship between synchronization and improved medication adherence. But in the case of cancer and ESAs, do we know whether patients whose care was more synchronized are any more likely to adhere to the therapy that was advised by their physician? More importantly, do we know whether greater adherence to ESAs and synchronization with chemotherapy administration lead to improvements in clinical outcomes?
As the authors note, during the period that this study was conducted, evidence surfaced indicating that ESAs may cause harm when administered to patients with hemoglobin levels greater than 10 g/dL. As a result, increased synchronization may well have led to the overuse of ESAs in patients for whom the medication results in more harms than benefits. Thus, comparative effectiveness research can help us establish whether it is preferable to carefully follow a patient’s blood counts and only administer ESAs when hemoglobin falls to a specified level or to administer them on a regular basis.
Studies that compare ways in which we deliver care are every bit as essential as studies that compare the effects of different medications. For comparative effectiveness studies comparing medication effects, we have a limited set of potential acceptable outcomes. Similarly, efforts to compare strategies for healthcare delivery must evaluate effectiveness in terms of the outcomes that we care about most. We must not blindly assume that greater synchronization of care will benefit patients and save the healthcare system money. Such an assumption would be no different from assumptions made by pay-for-performance programs when they create incentives for changes in surrogate markers that turn out to be associated with worse, not better, healthcare outcomes.4
We must be very thoughtful about the outcomes on which we focus in comparative effectiveness research and be sure that these outcomes correspond to real benefits for our patients. Nordstrom and colleagues have described patterns in the complexity of cancer management. This study should be interpreted as just the beginning of a line of inquiry to better understand the effects of simplification on patient outcomes and to improve the systems we use to deliver that care.