The claim that measuring and publishing quality indicators will always improve health care quality has become an article of faith in health policy circles. The passage of ObamaCare made it dogma. Contrary to widespread belief, evidence-based research suggests that devotion to poorly designed quality measures reduces quality. The literature on this was reviewed by David Dranove and Ginger Zhe Jin in the December 2010 issue of The Journal of Economic Literature.
Imprecise measurements are a major obstacle. For example, hospital report cards are generally based on easily observable outcome measures like mortality. But mortality is a relatively rare event, so quality measures based on it carry large statistical errors. In one Medicare study, only 3 percent of hospitals could be identified as having either high or low quality. Another source of imprecision is that measures of observable outcomes may indirectly capture the characteristics of a particular patient population rather than the quality of care it receives. Some insurers include immunization rates in physician report cards, for example, even though it is well known that immunization rates are strongly affected by parental education and income.
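The noise problem can be made concrete with a small simulation. This is an illustrative sketch only: the 4 percent mortality rate, 500-patient caseload, and 100-hospital panel are assumed round numbers, not figures from the Medicare study.

```python
import math
import random

random.seed(0)

TRUE_RATE = 0.04    # assumed true mortality rate, identical at every hospital
N_PATIENTS = 500    # assumed annual caseload per hospital
N_HOSPITALS = 100   # assumed number of hospitals on the report card

# Simulate one year of observed mortality at each hospital.
observed = []
for _ in range(N_HOSPITALS):
    deaths = sum(random.random() < TRUE_RATE for _ in range(N_PATIENTS))
    observed.append(deaths / N_PATIENTS)

# Even though every hospital has identical underlying quality, observed
# rates scatter widely: the binomial standard error sqrt(p(1-p)/n) is
# roughly 0.9 percentage points here, so a 95% interval for a single
# hospital spans roughly 2.3% to 5.7% -- wider than many plausible
# real differences between hospitals.
se = math.sqrt(TRUE_RATE * (1 - TRUE_RATE) / N_PATIENTS)
lo, hi = TRUE_RATE - 1.96 * se, TRUE_RATE + 1.96 * se
print(f"standard error: {se:.4f}")
print(f"95% interval for one hospital: {lo:.3f} to {hi:.3f}")
print(f"observed spread across hospitals: {min(observed):.3f} to {max(observed):.3f}")
```

With mortality this rare and caseloads this small, the spread produced by chance alone dwarfs the differences a report card is trying to detect, which is consistent with so few hospitals being distinguishable as high or low quality.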
Attempts to adjust even for the known characteristics of the patient population with measures of its underlying riskiness (“risk adjustment”) are often unsatisfactory. The range of potential risk adjusters is “vast,” and their “predictive power” is “low.” The problem is orders of magnitude worse when quality is measured along more than one dimension. While it might be straightforward to rank drugs for hypertension by the number of points they lower blood pressure, developing a meaningful ranking is far more difficult if it must also account for the rates at which patients on the drugs develop diabetes, suffer myocardial infarctions, experience side effects that reduce their quality of life, or die of some other cause while taking the drug.
Quality measures also degrade quality by distorting behavior. Just as teachers who are rewarded or punished based on student test scores will often “teach to the test,” doctors and hospitals pressured to improve their performance on reported measures may allow their performance to deteriorate in ways that are not measured. Nursing homes' overall responses to federal quality indicators, for example, suggest that patients saw few net benefits: the quality of care along reported dimensions showed insignificant improvement, the quality of care along unreported dimensions declined, and there was no evidence that nursing homes increased quality-related inputs. Another example is provided by the mandatory cardiovascular mortality report cards developed in New York and Pennsylvania. They increased resource use while degrading patient care because, in an effort to protect their mortality rankings, surgeons stopped operating on sicker patients who were more likely to die.
There is also bad news for those who believe that government quality certifiers act in the best interests of consumers. Contrary to the articles of faith in some policy circles, government inspectors often rely on subjective measures in making their judgments, and their personal preferences may differ from those of the consumers they are supposed to protect. The article cites the wide variability in FDA and Nuclear Regulatory Commission inspections as examples.