Understanding Predictors of Opioid Abuse: Predictive Model Development and Validation

Publication
Article
AJPB® Translating Evidence-Based Research Into Value-Based Decisions® September/October 2014
Volume 6
Issue 5

This study developed and validated predictive models of opioid abuse using Humana commercial membership data; the models were subsequently tested in the Truven commercial data set.

Studies have reported a dramatic increase in sales of opioids, which has paralleled the rise in opioid abuse and mortality associated with these drugs.1-9 Estimates from 2010 and 2011 national survey data found the rate of nonmedical use of prescription pain relievers among those 12 years or older was 4.6% nationally.10 Further evidence of the rise in opioid abuse comes from the 2000-2010 Treatment Episodes Data Set,11 which reported that admissions to treatment facilities for nonheroin opioid abuse rose steadily from 9% to 38% over the 10-year period.

The impact of the increase in opioid abuse on the cost of healthcare in the United States has been substantial.1,4,9,12,13 Direct healthcare costs have been documented to be as much as 8 times greater for abusers than for nonabusers ($15,884 vs $1830).12 Furthermore, the estimated cost to society from the use of nonprescription opioids has been reported to exceed $55 billion annually, with healthcare accounting for 45% of the total.13

To help curtail these trends, several attempts have been made to develop models predicting the abuse of opioids used for chronic pain.7,14-17 Rice and others17 specifically focused on the use of healthcare claims data to build their predictive model, confirming and expanding on the model developed by White and others.15 Such models are important as a means of early identification of potential abusers, allowing prevention of this outcome rather than intervention after addiction has been diagnosed. However, to date, no one has documented the testing of a validated model in more than 1 national health plan to ensure applicability and generalizability across the United States.

The purpose of this study was to develop, validate, and test predictive models of diagnosed opioid abuse in the commercial member population of a national health insurance provider, Humana Inc, and to test model stability using commercially available data. Demonstrating consistent model performance across commercial health plan memberships would support generalizability and applicability to other US health plans.

METHODS

Study Data

This study utilized data from the Humana Research Database (Humana, Louisville, Kentucky) containing enrollment, medical, and pharmacy claims from 2009 to 2010 for model development and validation. All data sources were merged using de-identified member identifiers. Model testing was conducted in the same database using 2011 data, and subsequently in the Truven Health MarketScan Commercial Claims and Encounters database using 2011 data (Truven Health Analytics, Ann Arbor, Michigan). The finalized protocol was approved by an independent institutional review board.

Study Design

Two predictive models of opioid abuse were developed, validated, and tested: (1) one for the overall population of members with an opioid abuse diagnosis regardless of opioid use, and (2) another limited to the subset of members with a record of prescription opioid use before the opioid abuse diagnosis. Stepwise logistic regression was used to determine significant risk factors for opioid abuse using the 2010 data. The models were validated using the original 2010 Humana data and tested against the 2011 Humana and Truven data.

Study Population

Two cohorts of members newly diagnosed with opioid abuse (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] codes 304.0X, opioid type dependence; 304.7X, combinations of opioid type drug dependence with any other; 305.5X, opioid abuse; and/or 965.0X, poisoning by opiates and related narcotics, excluding 965.01, poisoning by heroin) in 2010 and 2011 were identified from the commercial population at Humana. The earliest date of diagnosed opioid abuse constituted the index date. In addition, cases were required to have 210 days of continuous preindex enrollment, no prior diagnoses of opioid abuse or opioid poisoning, no residence in a skilled nursing facility, and no claims for pregnancy (Figure 1). These 2 cohorts constituted the population for modeling purposes. A total of 3567 cases were used to develop and test the predictive models.
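As an illustration of the case-identification logic described above, the following minimal pandas sketch finds members' index dates from a claims extract. The column names (member_id, service_date, diagnosis_code) and the decimal-free storage of ICD-9-CM codes are assumptions for illustration; this is not the study's actual implementation.

```python
import pandas as pd

# Hypothetical claims extract: one row per diagnosis code on a medical claim.
claims = pd.DataFrame({
    "member_id": ["A", "A", "B", "C"],
    "service_date": pd.to_datetime(
        ["2010-03-01", "2010-05-10", "2010-07-15", "2010-02-20"]),
    "diagnosis_code": ["30400", "30550", "96501", "30470"],
})

# Qualifying ICD-9-CM prefixes from the text: 304.0X, 304.7X, 305.5X, 965.0X,
# with 965.01 (poisoning by heroin) excluded. Codes are stored without decimals.
ABUSE_PREFIXES = ["3040", "3047", "3055", "9650"]

is_abuse = (
    claims["diagnosis_code"].str[:4].isin(ABUSE_PREFIXES)
    & (claims["diagnosis_code"] != "96501")
)

# Index date = earliest diagnosed opioid abuse claim per member; the exclusion
# criteria (continuous enrollment, no prior abuse/poisoning, etc.) would be
# applied afterward against enrollment and history tables.
index_dates = (
    claims.loc[is_abuse]
    .groupby("member_id")["service_date"]
    .min()
    .rename("index_date")
    .reset_index()
)
print(index_dates)  # members A and C qualify; B has only the excluded 965.01
```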

To create a control group for model development, a random sample of commercial members without an opioid abuse diagnosis in their claims history was used. Control group members were assigned an index date based on the first date of service in calendar year 2010 that allowed for 210 days of continuous enrollment before that service date. A ratio of 5:1 (controls to cases) was used in developing the models. The exclusion criteria applied to the cases were also applied to the controls. Diagnoses of opioid abuse are of low prevalence (approximately 1 in 1000),18 which makes predictive modeling of their occurrence especially difficult. By increasing the relative prevalence of cases (ie, limiting controls to 5 for each case), the intent was to make the pattern of variables associated with cases more salient relative to their controls and thus make variable selection more evident.
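A minimal sketch of the 5:1 control sampling is shown below. The DataFrames `cases` and `eligible_controls` are hypothetical; `eligible_controls` is assumed to already contain members who pass the same exclusion criteria and to carry a precomputed 2010 index date.

```python
import pandas as pd

def sample_controls(cases: pd.DataFrame,
                    eligible_controls: pd.DataFrame,
                    ratio: int = 5,
                    seed: int = 2010) -> pd.DataFrame:
    """Draw `ratio` controls per case, as in the 5:1 design described above.

    `eligible_controls` is assumed to carry an `index_date` column, ie, the
    first 2010 service date with 210 days of continuous enrollment before it,
    and to satisfy the same exclusion criteria as the cases.
    """
    return eligible_controls.sample(n=ratio * len(cases), random_state=seed)

# Toy usage: 3 cases yield 15 sampled controls.
cases = pd.DataFrame({"member_id": ["A", "B", "C"]})
eligible_controls = pd.DataFrame({
    "member_id": [f"M{i}" for i in range(100)],
    "index_date": pd.Timestamp("2010-06-01"),
})
controls = sample_controls(cases, eligible_controls)
print(len(controls))  # 15
```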

The 2011 population (noncases) included in testing the models also had an index date based on the first date of service in 2011 that allowed for 210 days of continuous enrollment before that service date. The exclusion criteria applied to the cases were also applied to the 2011 population. The look-back period was not limited to the calendar year of the index event.

Statistical Analysis

Variables. For model development, a total of 20 clinical and utilization measures were identified as variables that may be associated with being at risk for a diagnosis of opioid abuse. They were identified from published research13,14,16 and prior studies conducted by this research group.18 The variables (and variable type) are listed below; the ICD-9-CM codes used to identify diagnoses and the definitions of the inefficiency measures are listed in the eAppendix (available at www.ajmc.com).

Pharmacy-related variables were the following:

• Opioid prescription

• Number of total pain medication prescriptions

• Number of opioid prescribers

• RxRisk-V score

Diagnosis-related variables were the following:

• Low back pain/back pain

• Neuropathic pain

• Other chronic pain

• Nonopioid poisoning

• Substance abuse

• Psychiatric diagnoses

• Hepatitis A, B, or C

Medical utilization variables were the following:

• One or more visits to a mental health specialist

• One or more mental health inpatient admissions

• One or more emergency department visits

Variables showing evidence of inefficiencies in the treatment of pain were the following19:

• Uncoordinated opioid use

• Multiple opioid trials

• Early opioid refills

• Excessive postsurgical opioid use

• Concomitant long-acting opioid use

• Morphine-equivalent dosing (opioids prescribed by >3 prescribers for >90 days at >120 mg of morphine-equivalent dosing).

These variables were complemented with members' sex, age (at index), race, and geographic region of residence. Race/ethnicity was derived using geocoding software.
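The sketch below shows one way the pharmacy-related variables listed above could be assembled over the 210-day preindex window. The column names (fill_date, is_opioid, is_pain_med, prescriber_id) and the precomputed flags are assumptions for illustration; the diagnosis, utilization, and inefficiency variables would be built analogously from medical claims using the eAppendix definitions.

```python
import pandas as pd

def build_pharmacy_features(index_dates: pd.DataFrame,
                            rx_claims: pd.DataFrame) -> pd.DataFrame:
    """index_dates: columns [member_id, index_date]; rx_claims: one row per fill."""
    # Keep only fills inside the 210-day preindex window for each member.
    rx = rx_claims.merge(index_dates, on="member_id")
    in_window = rx["fill_date"].between(
        rx["index_date"] - pd.Timedelta(days=210),
        rx["index_date"] - pd.Timedelta(days=1),
    )
    rx = rx.loc[in_window]

    opioid_rx = rx.loc[rx["is_opioid"]]
    features = index_dates.set_index("member_id")

    # Opioid prescription (indicator), total pain medication prescriptions,
    # and number of distinct opioid prescribers, per the variable list above.
    features["any_opioid_rx"] = features.index.isin(opioid_rx["member_id"]).astype(int)
    features["n_pain_rx"] = rx.loc[rx["is_pain_med"]].groupby("member_id").size()
    features["n_opioid_prescribers"] = (
        opioid_rx.groupby("member_id")["prescriber_id"].nunique()
    )

    # Members with no qualifying fills get zero counts.
    count_cols = ["any_opioid_rx", "n_pain_rx", "n_opioid_prescribers"]
    features[count_cols] = features[count_cols].fillna(0)
    return features
```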

Model Development. Two logistic regression models, one for the entire sample and one for the subset with at least 1 filled prescription for an opioid, were developed using all identified variables from the 2010 cases and controls. The stepwise variable selection technique was applied, and the significance level for variable entry and for a variable to remain in the model was set to P <.10. Once the final sets of variables were identified, the final models were created and run using the data from the entire sample to generate the final parameter values. The resulting models were then tested using data from the 2011 Humana commercial plan membership that met the requirements for inclusion. Finally, the models were applied to a subset of the Truven data set.
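The study's stepwise selection was run in SAS; the sketch below is a simplified Python/statsmodels stand-in that mimics the P <.10 entry and stay criterion, offered only to make the procedure concrete. The function and the layout of the predictor matrix are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

def stepwise_logit(X: pd.DataFrame, y: pd.Series, alpha: float = 0.10) -> list:
    """Forward selection with backward elimination; entry/stay criterion P < alpha.
    A simplified stand-in for stepwise selection in SAS PROC LOGISTIC."""
    selected = []
    while True:
        changed = False

        # Forward step: try each remaining candidate; add the one with the
        # smallest Wald P value if it falls below the entry threshold.
        candidates = [c for c in X.columns if c not in selected]
        entry_p = {}
        for c in candidates:
            fit = sm.Logit(y, sm.add_constant(X[selected + [c]])).fit(disp=0)
            entry_p[c] = fit.pvalues[c]
        if entry_p:
            best = min(entry_p, key=entry_p.get)
            if entry_p[best] < alpha:
                selected.append(best)
                changed = True

        # Backward step: refit the current model and drop the worst variable
        # if its P value no longer meets the stay threshold.
        if selected:
            fit = sm.Logit(y, sm.add_constant(X[selected])).fit(disp=0)
            worst = fit.pvalues.drop("const").idxmax()
            if fit.pvalues[worst] >= alpha:
                selected.remove(worst)
                changed = True

        if not changed:
            return selected
```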

The analyses focused on characteristics of the models and their performance in predicting new cases of diagnosed opioid abuse. Although many cutoff levels were considered, the accuracy of the models was defined using a probability value of .90 as the cutoff, corresponding to an estimated probability that the member would receive a diagnosis of opioid abuse within the next 210 days. Sensitivity, specificity, and positive and negative predictive values of the predictive models were generated for this cutoff value. Receiver operating characteristic (ROC) analyses of the logistic models were also conducted. Statistical calculations were performed using SAS version 9.3 for the Humana data and SAS version 9.22 for the Truven data (SAS Institute Inc, Cary, North Carolina).
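For clarity, the performance measures reported in the next section (accuracy, sensitivity, specificity, predictive values, and the C statistic) can be computed from predicted probabilities at the .90 cutoff roughly as follows. The scikit-learn-based helper is an illustration under assumed inputs, not the study's SAS code.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def classification_summary(y_true, y_prob, cutoff: float = 0.90) -> dict:
    """Accuracy, sensitivity, specificity, PPV, and NPV at a probability cutoff,
    plus the C statistic (area under the ROC curve)."""
    y_pred = (np.asarray(y_prob) > cutoff).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),
        "npv": tn / (tn + fn) if (tn + fn) else float("nan"),
        "c_statistic": roc_auc_score(y_true, y_prob),
    }

# Toy usage: 3 diagnosed members and 7 without a diagnosis.
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
y_prob = [0.95, 0.40, 0.92, 0.10, 0.05, 0.30, 0.91, 0.02, 0.08, 0.20]
print(classification_summary(y_true, y_prob))
```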

RESULTS

Factors Associated With Risk of Receiving a Diagnosis of Opioid Abuse

Table 1 compares the demographic and clinical characteristics of members with and without a prescription opioid abuse diagnosis. For the overall 2010 sample, a total of 10 variables were identified as being associated with being at risk for a diagnosis of opioid abuse (Table 2, subhead A). For the subset of members with an opioid prescription, a total of 9 variables were identified (Table 2, subhead B). For the overall sample, indicator variables for a substance abuse diagnosis, psychological diagnoses, and hepatitis were strong predictors of diagnosed opioid abuse (adjusted odds ratios [ORs] 5.32 [P <.0001], 2.36 [P <.0001], and 3.29 [P = .0084], respectively). Filling 1 or more prescriptions for an opioid in and of itself increased the risk of diagnosed opioid abuse by nearly 3 times (P <.0001). Having a low back pain diagnosis increased the odds of diagnosed opioid abuse by 1.62 (P <.0001). A record of a visit with a mental health specialist increased the risk for diagnosed opioid abuse by 1.45 (P = .0187). The total number of opioid prescriptions and total number of pain medication prescriptions elevated the risk of diagnosed opioid abuse by 1.19 and 1.06, respectively (P <.0001 in both cases). For the subset of members with prescription opioid use, results were similar (Table 2, subhead B).

Predictive Model for Risk of Receiving a Diagnosis of Opioid Abuse in the Original Cohort

The resulting overall model was applied to the original cohort (1319 cases) and had an accuracy (efficiency) of 88.4% at the cutoff level of >.90 (Table 3). The resulting model for the subcohort of opioid users (821 cases) had an efficiency of 62.8% at the cutoff of >.90 (Table 3). Given the sensitivity of providers in diagnosing patients with opioid abuse, a key performance metric was a low false-positive rate, ie, scoring a member as at risk for a diagnosis of opioid abuse when no such diagnosis was found. A probability level of >.90 was deemed acceptable, though it meant sacrificing sensitivity. The models' performance and fit statistics are reported in Table 3 and ROC curves in Figure 2.

Predictive Model for Risk of Receiving a Diagnosis of Opioid Abuse in the 2011 Plan Membership

The overall model was applied to the 2011 commercial plan membership that qualified for inclusion in the study (n = 831,149; 2248 cases) and had an efficiency level of 99.4% at >.90 (Table 3, subhead C). The model for opioid users was applied to the 2011 commercial plan membership with a prescription for an opioid that qualified for inclusion in the study (n = 103,790; 1044 cases) and had an accuracy of 97.1% at >.90 (Table 3, subhead D). False-positive rates observed at test were very low; however, the sensitivity rate of each model was also low. The C statistics at test were slightly lower than those obtained during model development.

Predictive Model for Risk of Receiving a Diagnosis of Opioid Abuse in the Truven Commercial Data Set

To determine whether the models developed utilizing the data from one health plan would generalize to data from other health plans, the models were applied to a random sample of 300,000 commercial members obtained from the Truven data set, consisting of individuals from more than 100 health plans. The variables identified using the Humana data were used with the Truven data set, and local coefficients for these variables were calculated using logistic regression. The resulting parameters approximated those described in Table 2 for the Humana cohorts and are displayed in Table 4. The data were scored using the local coefficients. For the overall sample, the model achieved 99.7% accuracy (efficiency) at >.90. Applying the model for opioid users to the subset of prescription opioid users from the Truven data set (n = 47,825) achieved a 98.9% accuracy level at >.90. The models' performance and fit statistics are reported in Table 5.
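A minimal sketch of this "local coefficients" step is shown below, assuming the final variable set from Table 2 is available as a list of column names in a member-level feature table (the names below are illustrative placeholders) and using statsmodels rather than the SAS procedures actually employed.

```python
import pandas as pd
import statsmodels.api as sm

# Illustrative placeholder for the final variable list in Table 2.
HUMANA_VARIABLES = ["any_opioid_rx", "n_pain_rx", "substance_abuse_dx",
                    "psych_dx", "hepatitis_dx", "low_back_pain_dx"]

def fit_local_model(features: pd.DataFrame, outcome: pd.Series):
    """Re-estimate coefficients for the fixed Humana variable set on another
    plan's data, yielding the 'local' coefficients used for scoring."""
    X = sm.add_constant(features[HUMANA_VARIABLES])
    return sm.Logit(outcome, X).fit(disp=0)

def score_members(model, features: pd.DataFrame) -> pd.Series:
    """Predicted probability of a diagnosed-opioid-abuse outcome per member."""
    X = sm.add_constant(features[HUMANA_VARIABLES])
    return model.predict(X)
```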

DISCUSSION

This study demonstrated that predictive models of opioid abuse developed and tested using Humana member data can be applied successfully to other health plans without loss of model performance. The risk factors identified are consistent with the published literature examining the predictors of opioid abuse and/or misuse.13,14,16 In particular, this study supports the research by White and colleagues15 and Rice and colleagues17, who found that healthcare claims data can add significantly to the tools available to screen for potential opioid abuse. For example, in the model for all members, a history of 1 or more visits with a mental health specialist was shown to be a significant predictor (adjusted OR 1.45, P = .0187), as well as a history of several types of diagnoses: psychological (adjusted OR 2.36), substance abuse (adjusted OR 5.32), and hepatitis (adjusted OR 3.29) (P <.01 for all). These factors were consistent with the adjusted ORs reported by Rice and colleagues for mental illness (2.45) and for hepatitis (2.36), and those reported by White and colleagues for mental health-related outpatient visits (1.99) and for hepatitis (2.57). Of note, uncoordinated opioid use was an inefficiency measure that predicted future diagnosed opioid abuse, but missed statistical significance (adjusted OR 1.51, P = .074) in the current study's model for the subset of prescription opioid users, whereas the coefficient on a similar variable measuring the number of pharmacies where opioid prescriptions were filled was statistically significant in the study by White and colleagues (adjusted OR 1.96, P <.001).

Interestingly, a low back pain diagnosis was predictive of prescription opioid abuse, whereas neuropathic pain and other chronic pain diagnoses were not (Tables 2 and 4). This may be a statistical artifact resulting from the larger number of members diagnosed with low back pain relative to neuropathic or other chronic pain. Additionally, it may be because neuropathic and other types of pain have clearer pathophysiologies for which nonopioid therapies have been shown to be effective, whereas there is less definitive evidence for the effectiveness of nonopioid analgesics or other treatments for low back pain.16,20

One important limitation of the studies by White and colleagues15 and Rice and colleagues17 was that the timing of the opioid abuse diagnosis appeared to be irrelevant to when risk factor data for their predictive models were collected; ie, individuals identified with diagnosed opioid abuse could have been identified before the time when the associated risk factor data were collected. The use of buprenorphine or methadone as risk factors may be indicative of this shortcoming: although these characteristics may be associated with diagnosed opioid abuse, they may become observable only after diagnosis.

The collection of risk factor data after, as well as immediately before, the diagnosis of opioid abuse may explain why the C statistics for the studies by White and colleagues15 and Rice and colleagues17 were above .90. In comparison, the C statistics for the tested models from the current study were lower: .80 for the overall Humana member population, .81 for the subset of Humana members with opioid use, .82 for the overall Truven Health population, and .89 for the subset of the Truven Health population with opioid use. Discarding the 30 days of data immediately before the index date was an additional deliberate decision based on the practical assumption that a health plan has an already tight window of opportunity to take preventive action, whether notifying or providing educational assistance to the physician or offering preventive services to the individual member. Any intervention designed and implemented with the goal of prevention in mind would need to rely on risk factor identification well before the potential diagnosis. It may be that health plans are flagging inappropriate drug use via drug utilization review programs; however, they may not be identifying members at risk early enough to change the trajectory of behaviors related to the diagnosis of opioid abuse.

Additionally, the trade-off between specificity and sensitivity was continually examined during model validation and testing. One consequence of opting to minimize the rate of false positives was that the models did not flag as many cases as hoped. Given this trade-off, further examination of a larger number of false-positive cases is needed to determine whether their pattern of utilization warrants closer attention and monitoring, even in the absence of a diagnosis of opioid abuse in their observable future.

To confirm the economic value of implementing any intervention programs intended to address the underlying causes that increase the likelihood of diagnosed opioid abuse, the following steps are suggested when applying the predictive model to health plans generally:

1. Conduct a multivariate logistic regression analysis for the specific health plan of interest, using the final list of variables from this study (Table 2).

2. Compare parameter estimates from the specific health plan of interest with parameter estimates from Humana’s model reported in Table 2.

3. When testing the model, utilize plan-specific coefficients to predict the risk of diagnosed opioid abuse. In health plans with large sample sizes and low rates of prevalence of diagnosed opioid abuse, first limit the sample by restricting it to members with RxRisk-V scores greater than 2 and with more than 2 pain medication prescriptions.

4. Determine whether members accurately predicted to be diagnosed with opioid abuse were associated with a higher cost to the plan than members not so identified.

Taking steps 3 and 4 would ensure that model predictions would identify individuals both at high risk of diagnosed opioid abuse and at higher risk for increased healthcare resource utilization from the perspective of the health plan. Health plans should note, however, that regardless of any economic value, there is clinical value in identifying individuals at risk of opioid abuse before any diagnoses.
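A compact sketch of what steps 3 and 4 might look like against a member-level table is shown below. The column names (rxrisk_v, n_pain_rx, postindex_cost), the locally refit statsmodels model, and the variable list from step 1 are assumptions for illustration.

```python
import pandas as pd
import statsmodels.api as sm

def screen_score_and_compare(members: pd.DataFrame, model, variables: list,
                             cutoff: float = 0.90) -> pd.DataFrame:
    """Steps 3 and 4: pre-screen, score with plan-specific coefficients,
    and compare plan costs for flagged vs unflagged members."""
    # Step 3 pre-screen for large, low-prevalence plans:
    # RxRisk-V score > 2 and more than 2 pain medication prescriptions.
    screened = members.loc[
        (members["rxrisk_v"] > 2) & (members["n_pain_rx"] > 2)
    ].copy()

    # Score with the plan-specific (locally refit) coefficients.
    screened["p_abuse"] = model.predict(sm.add_constant(screened[variables]))
    screened["flagged"] = screened["p_abuse"] > cutoff

    # Step 4: compare downstream cost to the plan by flag status.
    print(screened.groupby("flagged")["postindex_cost"].mean())
    return screened
```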

Limitations common to studies using administrative claims data may be applicable to the current study, including the lack of certain information in the database (eg, laboratory results, weight, and health behavior information) and errors in claims coding. No causal inference can be drawn from this study because it was an observational study that used retrospective claims data. Although multivariate regression modeling was used to reduce selection bias and strengthen causal inference, this approach can only reduce bias caused by measured covariates; it cannot reduce bias caused by unmeasured covariates.

CONCLUSIONS

This study presents predictive models that can easily be applied to any US health plan to identify members at risk for diagnosed opioid abuse. Once these members are identified, health plans can implement targeted intervention programs to reduce abuse behaviors, ultimately curbing the rise in the rate of diagnosed opioid abuse across the United States.

Acknowledgment

Editorial support was provided by Mary Costantino, PhD, who is an employee of Comprehensive Health Insights, a wholly owned subsidiary of Humana Inc, and was funded by Pfizer Inc.
