Establishing Usability Criteria to Maximize Value of CER

AJPB® Translating Evidence-Based Research Into Value-Based Decisions®, January/February 2013, Volume 5, Issue 1

Usability criteria will help investigators structure CER to answer the questions that are most important to patients, their family caregivers, healthcare providers, and policy makers.

The search for effective medications for a child with epilepsy is a tough balancing act. His healthcare providers use trial and error involving dozens of medications over several years to figure out what controls his seizures. Many children with severe epilepsy have to wear bulky, protective headgear in case they fall, and are unable to play with their friends or go to a neighborhood birthday party like other children. Imagine how hard it must be to watch your child grow up under such circumstances.

Much of the research conducted today adds new layers of information to our existing body of knowledge. But little of the research is directed at answering the challenging questions that many patients face. A thoughtful new approach is under way to establish rigorous methodological standards for conducting comparative effectiveness research designed to help patients, their family caregivers, healthcare providers, and policy makers make more-informed decisions. For people with chronic conditions and their family caregivers, this creates an exciting opportunity. To achieve this important outcome, however, additional questions need to be answered. When do the results of comparative effectiveness research become useful enough to meaningfully inform a patient’s healthcare decisions? Who evaluates the end product and makes that determination?

The Affordable Care Act, which created the Patient-Centered Outcomes Research Institute, clearly lays out a process for developing and updating methodological standards to produce high-quality comparative effectiveness research.1 However, the law fails to identify a process to evaluate the usability of the research findings. Even the best methodological standards do not guarantee that the research will be useful in making more-informed decisions.

Focus group research conducted by the National Health Council demonstrates that people with chronic conditions and their family caregivers often do not know what comparative effectiveness research is.2 When it is explained to them and they are informed that it is already occurring, they express concern that such research could be used to deny them treatments that work for them.

This fear is especially real for people who have conditions with a high degree of variability in response to treatments, such as mental health disorders, neurologic conditions, autoimmune diseases, and many rare conditions. For these individuals, their experiences with step therapy, specialty tiers, or a simple lack of coverage lead them to believe that comparative effectiveness research will become an additional barrier to obtaining treatments that work for them.

Care decisions are already being made that are based in part on comparative effectiveness research findings released without a transparent evaluation of the results. For example, a study in 2011 found that there was low to insufficient evidence that switching the medicine of a patient with epilepsy would increase short-term risks.3 Although it is unclear whether the conclusions led to formulary restrictions, anecdotal reports indicate that some people are experiencing coverage denials causing them to switch their medications and in some cases experience significant side effects.4

As a result, patients, family caregivers, and providers have growing concerns about the credibility and usefulness of comparative effectiveness research. Such concerns threaten to undermine the entire comparative effectiveness research enterprise and the potential of high-quality research to improve health outcomes.

Having usability criteria would fill this void. The purpose of such criteria is to help decision makers understand whether research findings are conclusive and determine what their significance is in the context of other evidence and current medical practice. Such criteria would identify up front the potential uses of comparative effectiveness research results and the thresholds the findings would need to meet for each use. Usability criteria would be created by an entity knowledgeable about research methodology and would address uncertainty, relevance, heterogeneity, and other issues pertinent to evaluating research findings. More importantly, the criteria would help investigators structure their research to answer the questions that are most important to patients, their family caregivers, healthcare providers, and policy makers.

After the research is conducted, an independent and transparent evaluation of the results would be conducted against the predetermined usability criteria. This evaluation would place the comparative effectiveness research findings in the context of existing evidence and their usefulness in making different kinds of health decisions. In other words, did it answer the questions it was intended to answer or simply contribute to the general body of knowledge?

Without predetermined usability criteria, and a transparent process for evaluating comparative effectiveness research findings, the intended audiences are left without a roadmap to identify high-quality, useful information. Study results become simply more, but not necessarily helpful, information. As a result, inappropriate decisions may be made, and the credibility of comparative effectiveness research diminished.

People with chronic conditions want scientific research that helps them make decisions that are aligned with their individual goals for health outcomes and quality of life. Done well, comparative effectiveness research will lead to more-informed health decisions and, ultimately, to better health outcomes for patients, the true and rightful focus of our healthcare system.
