Health Care’s Broke: Physician Rating and Quality Indicators
A number of services like HealthGrades and RateMDs allow patients to rate their doctors. Most of these ratings are based on things like “ease of scheduling an office visit,” “wait time before seeing the physician,” or “helps patients understand their medical conditions.” Fair enough things to grade a physician on.
A number of services like Medicare’s Hospital Compare or Cal Hospital Compare allow patients to compare hospitals based on a set of standard “quality indicators” for things that we know will help patients do well — things like making sure patients with heart attacks get the right medicines or treatments in the right amount of time, or giving older patients pneumonia vaccines. Not only are hospitals publicly evaluated on these indicators, but Medicare is considering paying hospitals (and doctors) based on standard indicators. This is known as “P4P” or “Pay for Performance.”
However, physicians and hospitals rightfully argue several things:
- You’re only going to get the extreme patients to evaluate you (and often the unhappy ones)
- The unhappy patients will have the opportunity to publicly say how terrible the doctor is, but because of privacy rules, the doctor or hospital cannot comment or defend themselves
- There is too much emphasis on bedside manner and convenience, and not enough information on outcomes — “How Good A Doctor Am I?”
- Even if there is information on outcomes (as in the hospitals’ case), if a bad outcomes rating determines payment or discourages future patients, the physician will be much less likely to risk treating a very sick patient, who will likely have a bad outcome no matter what
- These systems also ignore where a hospital operates or where a doctor works — hospitals serving large populations of poor patients are likely to see sicker patients than hospitals in affluent areas, and academic hospitals, which often care for many incredibly complex, sick patients, might be compared to a hospital down the street with relatively straightforward, simple patients
Medicare has run a trial of this P4P stuff, and it’s been written up in the New England Journal as “Pay for Performance at the Tipping Point.” Let’s look at some of the outcomes — they took hospitals and either paid the hospitals for their quality improvements, or told the hospitals their outcomes would be publicly available online:

[Chart from the article comparing quality improvements in the paid hospitals versus the public-reporting hospitals.]
Now, some might say, “Wow, look at that, if you pay people for doing better on quality, they get better!” But the keen observer would point something else out: “Wow, even if you don’t pay people, but make results publicly available, people do better, too!”
Folks (and by folks, I mean doctors and hospitals), this stuff isn’t going away. If you don’t have patients blogging about their encounters with you by name, it’s only because they’re in their 20s or 30s and are young and healthy. (I happen to be friends with one Dr. Gilbert, who is mentioned on the Stanford Hospital Yelp page as the Stanford ER’s McDreamy.) Seriously. The personal evaluations about you are coming, whether you like them or not. It’s in our best interests to argue for the best, most objective and accurate standards. If you’re one of the people who say “Medicine is a business above all” (I’m not), then fine, but look at every other business in the US: it’s getting revolutionized, criticized, and evaluated online.
We don’t need less transparency; we need more. And the only way we’re going to get there is by having more data.
Physicians and hospitals should certainly be judged by whether they’re taking good care of their patients. That means a lot of things: patient rapport, having a “good experience” (whatever that means), but more importantly, outcomes and guidelines. These are, however, guidelines, not rules. It should be simple and straightforward for a hospital or physician to not follow a guideline when it doesn’t apply: if a patient’s heart rate is already 40, for example, they shouldn’t be getting a beta blocker, which slows the heart rate further.
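To show what “guidelines, not rules” could look like inside an indicator, here’s a minimal sketch in which a documented contraindication pulls a patient out of the denominator instead of counting as a miss. The field names and the heart-rate cutoff are invented for illustration, not clinical thresholds:

```python
# Toy beta-blocker indicator that honors guideline exceptions: a patient
# whose heart rate is already dangerously slow is excluded from the
# denominator rather than scored as a "miss." Cutoff and field names
# are illustrative only.

patients = [
    {"heart_rate_bpm": 40, "got_beta_blocker": False},  # valid exception: too slow
    {"heart_rate_bpm": 80, "got_beta_blocker": True},   # guideline followed
    {"heart_rate_bpm": 85, "got_beta_blocker": False},  # a genuine miss
]

# Exclude contraindicated patients (hypothetical bradycardia cutoff of 50 bpm).
eligible = [p for p in patients if p["heart_rate_bpm"] >= 50]

score = sum(p["got_beta_blocker"] for p in eligible) / len(eligible)
print(f"beta-blocker indicator: {score:.0%} of eligible patients")  # 50%
```

The point is that the hospital isn’t penalized for the patient with a heart rate of 40; that patient simply never enters the calculation.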
For physicians: do patients want the gruff surgeon who’s the best, or the one who’s pretty good but kind and nurturing (I swear, there are nurturing surgeons out there)? This data will soon be out there, but in subjective form: “I saw Dr. Green and two weeks later, my cancer had metastasized!” We need accurate, fair standards to examine how we’re doing as doctors — standards that take into consideration things like patient complexity and compare apples to apples.
For hospitals: I think a lot of the P4P and outcomes and quality indicators stuff for hospitals is worthless for the public. Hospitals get patients primarily for two reasons:
- The patient comes to the hospital’s ER and gets admitted.
- The patient has a particular doctor who has admission privileges at Hospital X, so the patient gets admitted to Hospital X.
Patients, when sick, do not launch a web browser and see which hospital was more likely to give an ACE inhibitor to its diabetics. They go where their doctor tells them, or barring that, wherever is closest or where they’ve had a good experience before.
P4P and Incentives
One of the big concerns in P4P is “how do we define good, and how do we reward that?” Do we pay the doctors who are really crappy but then start to improve, while ignoring the docs who are already outstanding? Both? Neither?
For physicians: It’s a little frustrating that we would have to pay physicians to practice appropriate medicine, instead of expecting them to simply keep up with modern medicine, no? Isn’t that what CME requirements are for? I’m not talking about the cutting-edge, latest-issue-of-NEJM stuff, but stuff we’ve known about for 10 years: ACE inhibitors, beta blockers, aspirin.
For hospitals: It’s a bit more complex to improve indicators in hospitals, because there are so many different steps involved in coordinating success, so hospitals should be rewarded handsomely. But hospitals shouldn’t be competing for patients. They should be competing with other hospitals. Perhaps we create “reward funds” for 3 hospital types: community, academic, and county. Hospitals would then compete with each other for “most improved,” “best indicators,” and so on — again, apples-to-apples hospital comparisons. Every 6 months, the top hospital in each category gets a reward that’s split between the hospital and its staff; this would spur innovation among hospitals, with everyone trying to do a better job. And let’s say the winning hospitals have to give away their secrets to “best pneumonia vaccine rate” to everyone else.
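To make that apples-to-apples competition concrete, here’s a toy sketch of picking the “most improved” hospital within each category every 6 months. The hospital names, categories, and composite scores are entirely made up:

```python
from collections import defaultdict

# Hypothetical records: (name, category, score 6 months ago, score now),
# where the scores stand in for some composite of quality indicators.
hospitals = [
    ("General A",    "community", 0.72, 0.81),
    ("University B", "academic",  0.85, 0.88),
    ("County C",     "county",    0.60, 0.74),
    ("General D",    "community", 0.90, 0.89),
]

def most_improved_by_category(hospitals):
    """Rank hospitals only against their own category (apples to apples)."""
    by_category = defaultdict(list)
    for name, category, prior, current in hospitals:
        by_category[category].append((current - prior, name))
    # The winner in each category is the largest 6-month improvement.
    return {cat: max(entries)[1] for cat, entries in by_category.items()}

print(most_improved_by_category(hospitals))
# {'community': 'General A', 'academic': 'University B', 'county': 'County C'}
```

The grouping step is the whole point: a county hospital never gets ranked against the academic center down the street.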
Perhaps for the next 5 years, we start with carrots: if you’re either improving or continuing to do a good job (as compared to your peers), you get a bonus. Five years after that, if you’re not improving, or you’re doing significantly worse, in comes the stick, with some sort of punishment.
(Look, I’m well aware that the indicators aren’t always practical or the best, and they certainly need to be improved drastically, taking into account the differences between community and academic medical centers, and so on — but the evaluations and ratings are coming. I’d much rather set up a system that is created by health care providers and reasonably fair than be evaluated by the subjective masses, whose opinions are often muddied by sad, tragic bad outcomes.)
Update: Case in point: just last night I got a group email from a close friend telling us to “check out this new site that shows hospital ratings on Google maps.” How are they determining who’s red and who’s green? The data I discuss above.