Friday, January 14, 2011

SO, ARE EHRs A WASTE OF TIME AND MONEY?

The 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act authorized incentive payments, potentially totaling some $27 billion over ten years, to clinicians and hospitals that implement electronic health records (EHRs) in a way that achieves “meaningful use,” defined in terms of advances in health care processes and outcomes.

But, are EHRs really “meaningfully useful” or are they more likely to be costly and ineffective?

The latter seems to be one possible interpretation of a recent RAND study of EHR adoption in US hospitals.

The RAND study’s scope is impressive: the five authors tallied 17 “quality measures” for three medical conditions against three possible levels of EHR capability (no EHR, basic EHR, advanced EHR) for more than two thousand hospitals in each of 2003 and 2007. They then related changes in quality over the four-year timeframe to changes in EHR status (for example, from no EHR to an advanced EHR).
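To make that design concrete, here is a minimal sketch (in Python, using entirely hypothetical data and column names, and not the RAND authors’ actual analysis) of the kind of comparison described: classify each hospital by its EHR transition between 2003 and 2007, then compare average changes in a composite quality score across those transition groups.

```python
# Hypothetical illustration of the study's comparison, not the RAND code.
import pandas as pd

# Made-up per-hospital data: EHR level in each year (none/basic/advanced)
# and a composite quality score (e.g., a mean of process measures) per year.
hospitals = pd.DataFrame({
    "hospital_id": [1, 2, 3, 4],
    "ehr_2003": ["none", "none", "basic", "advanced"],
    "ehr_2007": ["none", "basic", "advanced", "advanced"],
    "quality_2003": [0.78, 0.74, 0.83, 0.88],
    "quality_2007": [0.85, 0.79, 0.86, 0.90],
})

# Change in the composite quality score over the four-year window.
hospitals["quality_change"] = hospitals["quality_2007"] - hospitals["quality_2003"]

# Label each hospital by its EHR transition, e.g. "none->basic".
hospitals["transition"] = hospitals["ehr_2003"] + "->" + hospitals["ehr_2007"]

# The study's core question: did upgrading groups improve more than
# groups whose EHR status stayed the same?
print(hospitals.groupby("transition")["quality_change"].agg(["mean", "count"]))
```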

The reported results were disappointing to EHR proponents. Among the hospitals whose EHR capability remained unchanged over the four years, there was no statistically measurable difference in quality improvement between hospitals with EHR capability and those without. For hospitals that upgraded their EHR capability, the performance improvement was generally less than for those that didn’t change, including those with no EHR at all.

So, should we forget about EHRs? Maybe defund HITECH?

Not necessarily.

As the study’s authors point out, there are several possible explanations for their results other than ineffectiveness of EHRs. Implementation of an EHR—a very demanding effort—might temporarily disrupt other quality improvement efforts. Hospitals with EHRs typically had higher quality measures to begin with, and—like trying to catch up with the speed of light—would likely find further improvement more challenging as 100 percent quality is approached. Results might have been different for other medical conditions. And the study’s timeframe may have been too short to measure the impact of new EHRs, some of which may have been implemented just before the end of the period.

It can also be argued that the measurement methodology was flawed. Using simplistic indicators of quality like whether or not aspirin was dispensed on arrival or discharge instructions were provided is a little like judging the quality of a meal by whether or not there was a caterpillar in the salad. Presence of a caterpillar definitely indicates a problem, but its absence says nothing about other aspects of the meal. The study authors indicate their awareness of this limitation in stating “we are concerned that the standard methods for measuring hospital quality will not be appropriate for measuring the clinical effects of EHR adoption.”
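For illustration, here is a toy example (with hypothetical cases and indicator names, not the actual measure set used in the study) of how such pass/fail process measures are scored: a hospital’s result is simply the share of eligible cases in which the indicated action was documented, which says nothing about the rest of the care episode.

```python
# Toy example of scoring binary process-of-care indicators (hypothetical data).
eligible_cases = [
    {"aspirin_on_arrival": True,  "discharge_instructions": True},
    {"aspirin_on_arrival": True,  "discharge_instructions": False},
    {"aspirin_on_arrival": False, "discharge_instructions": True},
]

def process_score(cases, indicator):
    """Fraction of eligible cases where the binary indicator was met."""
    return sum(case[indicator] for case in cases) / len(cases)

for indicator in ("aspirin_on_arrival", "discharge_instructions"):
    print(indicator, round(process_score(eligible_cases, indicator), 2))
```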

Perhaps most importantly, as with other IT systems, EHR success depends on the competence of the implementers and the willingness of the users to accept change, with poorly managed projects more likely to foul up existing processes than improve them. The RAND authors praise programs initiated by the Office of the National Coordinator for Health Information Technology to improve EHR implementation, and comment—in spite of the inconclusive results of their study—that “We believe that these programs are well conceived and anticipate that they will lead to more effective use of EHRs, which will in turn lead to improved quality in US hospitals.”

EHR systems are no panacea, and clearly there have been both successful and troubled EHR implementations. What is needed now is a closer look at what works and what doesn’t, how well EHRs perform over a longer timeframe than the RAND study, and a much less simplistic look at what is really happening to clinical quality as a result.
