Our health care system is notoriously inefficient. Spending is too high, while quality is too low. Some patients undergo expensive treatments that provide little or no benefit. At the same time, other patients don’t receive some inexpensive treatments that could materially improve their health.
When I was CFO of a medical software start-up back in 2000, we diagnosed this problem quite simply: actual medical practice falls far short of best practices. Good treatment regimens are often well-known, yet are overlooked by a large fraction of practicing physicians. (The classic example at the time was that doctors were substantially under-prescribing beta blockers, which can help many patients after a heart attack; I would welcome comments about whether that's still true.)
The implied treatment for our health care system is also simple: find ways to get patients, physicians, and other providers to adopt best practices. We were focused on information technology as one potential way to do this, but many others have also gotten attention, including:
- Changing provider payments to reward healthy outcomes rather than just paying more for more procedures;
- Changing patients' incentives to encourage more efficient decisions about what services to use;
- Investing in comparative effectiveness research (i.e., studying what works and what doesn't) and sharing that information (and, perhaps, using it as a basis for payment rates);
- Encouraging preventative care that may avoid future illnesses.
Done right, each of these approaches could undoubtedly increase the value we get from our health system. Unfortunately, that potential has sometimes been oversold, with advocates arguing that policies to implement such “silver bullets” would dramatically reduce the cost of health care.
My view is more cautious. We know that there are substantial — some would say embarrassing — inefficiencies in the system. And we have reason to believe that various steps — greater adoption of health IT, comparative effectiveness, better incentives for providers and patients, etc. — might be able to reduce those inefficiencies. But we don’t know whether actual policy actions, with all their warts and blemishes, can actually tap into that potential and, if so, to what degree.
Policy should therefore focus on figuring out which policy interventions might work and learning how to calibrate them for maximum benefit. In short, policymakers should view health spending as an R&D problem. The goal is not to select the optimal policy once-and-for-all, but to set us on a path where we will learn what we need to know to make fundamental reforms down the road.
> [M]any of the specific changes that might ultimately prove most important [for reducing health spending] cannot be foreseen today and could be developed only over time through experimentation and learning. Modest versions of such efforts — which would have the desirable effect of allowing policymakers to gauge their impact — would probably yield only modest results in the short term.