Hospital's Risk Assessment Tool Pinpoints High-Risk Patients
MAY 14, 2014
Eileen Oldfield, Associate Editor
At Middlemore Hospital, pharmacists can spend more time counseling high-risk patients and reconciling their medications, thanks to a tool that predicts which patients are at the highest risk for medication errors or adverse drug events.
Since its launch in October 2011, the tool has allowed pharmacists in the public health system in Auckland, New Zealand, to move from a centralized pharmacy model to one that assigns pharmacists to specialized medical teams. According to a study on the tool published in the February 15, 2014, edition of the American Journal of Health-System Pharmacy, it has also allowed pharmacists to identify patients for discharge support services, such as medication reconciliation, and discharge counseling.
According to the study, the tool allowed the health system to prioritize 765 patients over an 8-month period and to identify 526 medication errors while doubling the number of patients receiving electronic medication reconciliation and medication review.
The tool was developed in-house, through a collaboration among the hospital’s pharmacy staff, Centre for Quality Improvement, and information technology (IT) department.
Clinical pharmacists isolated 4 categories associated with adverse drug events: high-risk patient groups, including patients taking multiple medications or older patients; high-risk medications and medication classes; high-risk hospital settings; and high-risk social circumstances, such as economic disadvantage or limited proficiency in the local language. Prior to instituting the review technology, pharmacists handled medication review manually, a task taking up to 1.5 hours per 8-hour day. Pharmacists worked in multiple wards and usually served between 30 and 60 patients per day.
The resulting time constraints meant patient monitoring and intervention activities were less intense than the staff preferred. Pharmacists would screen medication charts for most patients, but few received medication reconciliation or pharmacist review services, the study noted. In addition, the reconciliation or review interventions were recorded infrequently.
After medication error and adverse drug event risk factors were captured from historical data, the shortcomings in the existing technology became apparent. The system did not allow real-time data transmission and could only offer patient status at a single point in time, rather than track outcomes continuously. Receiving the report also depended on the pharmacy department's central communication link, since a departmental computer e-mailed it each morning.
In addition, the sheer volume of data and the complexity of certain queries made report reliability and performance persistent problems.
Remedying the system’s shortcomings required an entirely new software platform, one that would integrate with several sources of electronic patient data. Integration of the system with the hospital’s other systems would allow pharmacists to access necessary data on an as-needed basis.
On that platform, the health system's IT staff created the patient prioritization tool. The application prioritized patients based on the clinical criteria established from the historical data and included 38 risk flags spread among 5 groups. Senior clinical pharmacists derived appropriate risk scores for each flag. The system then summed the scores for all of a patient's flags to determine whether the patient was at high, medium, or low risk for medication errors or adverse drug events. The high-, medium-, and low-risk designations were color coded to allow easy identification of patient risk.
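The additive scoring scheme described above can be sketched in a few lines. The flag names, weights, and tier cutoffs below are hypothetical stand-ins; the study does not publish the 38 flags or the scores the senior clinical pharmacists assigned to them.

```python
# Illustrative sketch of an additive risk-scoring tool like the one the
# article describes. All flag names, weights, and thresholds here are
# hypothetical; the real system used 38 flags across 5 groups with
# scores set by senior clinical pharmacists.

# Hypothetical risk flags with pharmacist-assigned weights.
FLAG_SCORES = {
    "polypharmacy": 3,          # patient taking multiple medications
    "older_patient": 2,         # high-risk patient group
    "high_risk_medication": 4,  # high-risk drug or drug class
    "high_risk_ward": 2,        # high-risk hospital setting
    "language_barrier": 1,      # high-risk social circumstance
}

def risk_tier(total_score):
    """Map a summed score to a color-coded tier (hypothetical cutoffs)."""
    if total_score >= 7:
        return ("high", "red")
    if total_score >= 4:
        return ("medium", "orange")
    return ("low", "green")

def assess_patient(active_flags):
    """Sum the scores of a patient's active flags and classify the total."""
    total = sum(FLAG_SCORES[flag] for flag in active_flags)
    tier, color = risk_tier(total)
    return total, tier, color

print(assess_patient(["polypharmacy", "high_risk_medication"]))
```

A patient flagged for both polypharmacy and a high-risk medication would score 3 + 4 = 7 under these illustrative weights and land in the high-risk (red) tier, putting that patient at the top of the pharmacists' work list.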
Data from the hospital’s systems that would go into calculating the risk scores were extracted 3 times daily, and updates to the risk scores occurred at 6:00 am, 10:00 am, and 1:00 pm, the study noted.
Clinicians tested the tool with virtual patients before deployment to ensure that the risk factors triggered the appropriate flags. Testing continued after deployment to confirm that the tool flagged actual patients correctly.
Programming the tool took approximately 2000 hours and was done in-house.