Journalism in the Public Interest

Should Hospital Ratings Be Embraced — or Despised?

Can patients trust the many websites that rate hospitals? ProPublica’s Charles Ornstein talks to health-care reporters and editors to find out. 


(Joe Raedle/Getty Images)

Editor’s Note: As we reported last month, preventable harm in hospitals is now the third-leading cause of death in the U.S. That makes it more important than ever to know if your hospital is doing a good job. But can patients trust the many websites that rate hospitals? ProPublica’s Charles Ornstein compiled the post below to help journalists interpret the ratings, but the advice is just as critical for health care consumers. The bottom line: Use the ratings with caution.

Few things in health journalism make me cringe more than news releases touting hospital ratings and awards. They’re everywhere. Along with the traditional U.S. News & World Report rankings, we now have scores and ratings from the Leapfrog Group, Consumers Union, HealthGrades and others.

I typically urge reporters to avoid writing about them if they can. If their editors mandate it, I suggest they focus on data released by their state health department or on the federal Hospital Compare website. I also tell reporters to check whether a hospital has had recent violations or deficiencies identified during government inspections. That’s easy to do on a website run by the Association of Health Care Journalists. (Disclosure: I was a driving force behind the site.)

Last week, I got an email from Cindy Uken, a diligent health reporter from the Billings (Mont.) Gazette. She was seeking my thoughts on covering hospital ratings. I sent her a story written by Jordan Rau of Kaiser Health News about the proliferation of ratings. Two of every three hospitals in Washington, D.C., Rau reported, had won an award of some kind from a major rating group or company. He pointed out how hospitals that were best-in-class in one award program were sometimes rated poorly in another.

This got me thinking: What should reporters tell their editors about hospital rankings, ratings and awards?

I sought advice from Rau, along with ProPublica’s Marshall Allen, Steve Sternberg of U.S. News & World Report, and John Santa of Consumers Union. Here’s what they told me:

* * *

Steve Sternberg, Deputy Health Rankings Editor at U.S. News & World Report

Reporters should cover hospital rankings and ratings, going deeper than the knee-jerk stories you often see suggesting that proliferating rankings and ratings confuse consumers. Hospital rankings and ratings shouldn’t be expected to tell one story; they provide different perspectives on hospital care. U.S. News Best Hospitals and Best Children’s Hospital rankings evaluate the extent to which hospitals can provide sophisticated care for really sick adults and children. We’re now in the early stages of figuring out how to evaluate how well U.S. hospitals perform routine procedures for patients who need routine care. Other rankings and ratings evaluate other dimensions of care, such as patient safety.

This is fertile ground for reporting and an important public service. Most people wouldn’t think of buying a house without checking into the sale price of the houses next door, local transportation, the quality of local schools and shopping nearby. Many don’t realize that they can also check into the performance of the local hospital, starting with the hospital’s website, the government’s Hospital Compare website and other data, including the rich hospital data on the U.S. News website. Rankings and ratings can provide useful information, as long as you do your homework, recognize their limitations and, of course, talk to your doctor. Good reporting can guide people to the information they need to make critical healthcare choices.

* * *

Marshall Allen, Reporter, ProPublica

Reporters should be cautious about hospital rating methods for many reasons: They use different types of data, different collection methods and different quality measures, and they index, weight and aggregate the data in different ways. With so many variables in play, it’s hard to draw broad conclusions.

For example, Medicare measures what are called Hospital-Acquired Conditions (certain types of infections, falls that result in injury, etc.), which are derived from billing records that can be inconsistently coded. That inconsistency could make some hospitals appear to have more HACs than others. But the differences in these numbers may just show which hospitals code HACs most carefully and accurately, not which hospitals actually have more patients suffer from them. Making this worse, the hospitals that most carefully and honestly code HACs would appear to have the worst rates, while hospitals that simply don’t code them would appear to deliver the highest quality.

Medicare obviously uses the HAC data, and the Leapfrog Group also makes them part of its Patient Safety Score measure. Personally, I think that when HACs are coded they are likely to have happened, but the inconsistency in coding makes them something I would be wary about using for any comparison between hospitals. 

The U.S. News & World Report rankings depend in part on how other doctors feel about a hospital. But this reputation measure is determined by U.S. News asking doctors to write down the hospitals that they think are best for each specialty. 

It’s very important for reporters to read the methodology of each scoring system. You have to understand the source of each type of data that makes up a measure, how it’s gathered, processed and presented. That’s the only way to really understand each quality measure.

* * *

John Santa, M.D., Director, Consumer Reports Health Ratings Center

There’s a lot happening with hospital report cards. There’s more, and better, information, especially around safety, and more (and maybe better) organizations wading in. Some raters are challenging hospitals directly: If you disagree with the rating, make the data that supports your criticism public.

It is important to inform readers about the bias publishers of report cards may have. I recently saw a piece where the writer was surprised to find that HealthGrades does consulting work for hospitals, including many of the hospitals it grades well. Likewise, many folks do not connect the dots and realize that U.S. News supports its hospital ratings efforts in part by selling the ratings to hospitals to use in their ads.

Realize that the most powerful “raters” are hospitals themselves. They are spending hundreds of millions of dollars on ads that often “rate” themselves; the ads use terms like “best,” “most advanced” and “leading,” usually with little if any transparent validation. I recently saw an ad in the Southwest Airlines magazine from the University of Texas Medical Branch in Galveston that said its 30 minimally invasive surgeons were the “best” in the country. Really? Where did that data come from? It’s amazing that every one of its surgeons is in the best group; that seems unlikely. But this stuff makes a difference because consumers see it over and over again.

(Sternberg later wrote me to say, “Hospitals that perform well may license our logo for advertising or promotional purposes. In no way, however, do we ‘sell’ the rankings to anybody or allow hospitals to influence our results.”)

* * *

Jordan Rau, Kaiser Health News

Many of the judgments these groups make about a hospital — whether to put it on a top 10 list or give it an A or a B — are based on statistically insignificant comparisons. In the Joint Commission’s rankings of top performers, 583 hospitals missed out on making the list because they fell short on just one of 45 different measures. Often the difference between two hospitals’ scores is just 1 or 2 percent, and the higher one gets a better grade or makes a best-hospitals list even though that difference doesn’t matter.

That’s not just the case with private report cards. On Medicare’s Hospital Compare website, measures of patient satisfaction, timely and effective care, medical imaging use and hospital-acquired conditions are not presented in a way that lets you tell whether a difference between two hospitals — or from the national or state rate — is significant. Hospital Compare data is a common source for most of these private ratings, so the problem carries over as the data are aggregated into composite measures like grades or list rankings.

As a rule of thumb, reporters should be wary if you are presented with two numbers and no way to know at what point the difference matters. Think of it like a public opinion poll: Until you know the margin of error, you don’t know if the results matter.
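The margin-of-error analogy can be made concrete. The sketch below (the hospital numbers are hypothetical, not drawn from any rating site) applies a standard two-proportion z-test to ask whether a one-point gap between two complication rates is statistically meaningful:

```python
import math

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Two-sided z-test for the difference between two event rates."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical hospitals: 25 complications in 500 cases (5.0 percent)
# vs. 20 in 500 (4.0 percent), the kind of one-point gap a list ranks on
p_a, p_b, p_value = two_proportion_z(25, 500, 20, 500)
# At these sample sizes the gap is well within the noise (p is about 0.45),
# so the data do not justify ranking one hospital above the other
```

Only when the difference is large relative to the sample sizes does a test like this flag it as meaningful, which is exactly the information most rating displays omit.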

Another reason to be skeptical is that, as we noted in one of our stories, “Hospital Ratings Are In The Eye Of the Beholder”, the economic models behind most of these report cards provide little incentive to downplay differences. These rating groups actually have a financial incentive to be liberal in handing out accolades, because they often make their money by licensing their awards or ratings to the hospitals that get the nod. It’s hard to sell an analysis that says 90 percent of hospitals are average.

A broader problem is that these report cards mostly measure a small subset of procedures and conditions, so reporters need to be careful about extrapolating from them to a hospital’s overall quality. If a hospital does well or poorly with pneumonia, heart attack and heart failure patients — the group most commonly measured — that doesn’t necessarily mean it does a great job with knee surgery or a heart transplant. What often matters there is the surgeon and team, not the hospital.

It’s probably safer to extrapolate about things like hospital acquired infections, because those reflect the safety culture of a hospital, but as Marshall notes, the places that are more faithful in reporting problems can look worse than those that are cavalier about reporting. A good maxim to keep in mind is this one from (sociologist) William Bruce Cameron: “Not everything that can be counted counts. Not everything that counts can be counted.”

All that said, the measures that identify hospitals that are statistically better or worse — mortality is probably the best example — are worth using. And I think it’s fine to use a rating as one piece of a broader assessment, one that might include regulatory fines or reports or evidence in lawsuits.

There are two reasons why consumers should be highly suspicious of hospital ratings: 1) Some ratings are, in reality, pseudo-accolades bestowed upon hospitals that must then pay a marketing fee to promote them; 2) tertiary-care hospitals are horrifically complex organizations with innumerable “moving parts” and a diverse cast of characters delivering care, so global ratings for such organizations may merely scratch the surface of their true performance in treating any given patient.

Do we need to move toward reliable, standardized ratings? Of course, but the task is a Herculean one. For now, we remain largely unaware, except in some relative sense, of the trials and tribulations that plague most hospitals in America.

This is helpful counsel that makes sense. But I wonder whether there is any evidence that good ratings improve hospital occupancy and profit rates. Most people I know go to the facility their doctor recommends, and a gold star from some website is seen as marginal icing on the cake. So I guess the question is whether patients are using the ratings to change their behavior and, if so, whether they’re doing it well. I’m told there is some evidence that hospitals actually do make improvements based on the ratings and the quest for better ones, and more than one insider has suggested to me that’s the real value here: a goad to providers to do better.

Jared Mishbuccha

Oct. 28, 2013, 4:01 p.m.

Yes, hospital rankings are subject to extensive manipulation and can often mislead. Journalism awards are one notch better, but the same may happen.
The federal site you cite is also subject to serious manipulation. Look at its sources of data, especially the awful questionnaire given to recent patients. I have one of those from recent major surgery at one of the purported top three hospitals in the country. It asks nothing about infections or sloppiness leading to medical errors. It looks like the AHA must have had a crack at this questionnaire. The results have some use, but not much.
What is an ironclad way to compare hospitals regarding medical error and infection rates?
And would you agree that these two problems have a lot to do with the quality of medical training and supervision of staff? Even at top hospitals, you can still suffer the consequences of both serious medical errors and infections, including death. Doctors have an endearing habit of listing death as a side effect of their procedures and other therapies.

Trine Tsouderos

Oct. 28, 2013, 4:01 p.m.

We (PwC’s Health Research Institute) did some consumer research not long ago looking into who uses these rankings. We found that about half of our survey respondents reported that they had looked at health-care ratings sites; a third said they had based medical decisions on them. For more detail (breakdowns by age, etc.), a link to our report:

Morris Foutch

Oct. 28, 2013, 5:27 p.m.

I worked as a clinical lab manager/director many years ago and, over about five years, actively participated in inspections of other labs and my own under the auspices of the College of American Pathologists. We sent the lab to be inspected a file of things to be inspected, and if we found that they met all of the criteria, they would be approved to do work and bill for procedures. There were three levels of approval, and the highest required near-zero failure levels in any procedure. The lowest level required immediate remedy and proof by signature in order to continue the process and bill for it. Any institution using this service had to pay for it and could not in any way influence the outcome. The inspection became increasingly difficult for any participating lab, and we had the temerity to downgrade labs to the point that it was economically painful for their institution. I don’t know if this system is ongoing, but I believe it worked quite well for the era of some 40 years ago. It involved six or seven clinical lab disciplines.

Betsy Jacobson

Oct. 28, 2013, 6:12 p.m.

Recently, a hospital which I avoid like the plague (you should excuse the expression), due to its many mistakes on MY behalf and its shoddy work on behalf of many patients with whom I’ve spoken, was rated the No. 1 hospital in my state. “Use the ratings with caution?” I should say so.

Don’t forget that every car is the best in its class. Everything is special in its own way, and there’s no reason hospitals would be different.

The question isn’t whether there are bad ratings. The question is where the objective ratings are. Who looks at triage time, throughput, survival rates and so forth? Those are a bit more important than the number of MDs with awards.

Dina J. Padilla

Oct. 29, 2013, 1:55 p.m.

Kaiser Permanente, a nonprofit, touts itself as No. 1, and so now does CMS, which runs Medicare. Kaiser receives blank checks from Medicare, so the two cover for each other. As the patient above said, “use the ratings with caution,” but also look at who is doing the rating! Five stars, really? By the way, Kaiser just let go of 160,000 newer members. Was it because they didn’t offer mental health care? The idea for their mental patients is to give them large amounts of pills and blow in a bag for decades, and their employees know about that too!

As someone who works in hospital management, I understand the need for a mechanism that educates patients on how their choice of hospital stacks up against others. The problem is that the data used to rank a hospital is often not anywhere near current, and is sometimes even years old. Also, when presented as percentages, the data can make a hospital look very bad when it is not. For instance, if a rural hospital has only one to three patients of the type in the rating report and one has a negative outcome, its positive score can land anywhere from 0 to 67 percent, often without taking the whole state of that patient (co-morbidity, age and other factors) into consideration. At the same time, a large hospital with 1,000 patients a year of the same type and 100 negative outcomes will see a positive score of 90 percent, yet it likely has no better practices or procedures than the rural entity.
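The small-sample arithmetic above can be illustrated with a confidence interval. This is only a sketch (the case counts are hypothetical, echoing the example in this comment), using the Wilson score interval, which behaves sensibly at very small sample sizes:

```python
import math

def wilson_interval(events, n, z=1.96):
    """95% Wilson score interval for an event rate; stable at small n."""
    p = events / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

# Rural hospital: 1 bad outcome in 3 cases; large hospital: 100 in 1,000
small = wilson_interval(1, 3)       # roughly (0.06, 0.79): almost no information
large = wilson_interval(100, 1000)  # roughly (0.08, 0.12): a tight estimate
```

The large hospital’s 10 percent rate sits comfortably inside the rural hospital’s interval, so the two records are statistically indistinguishable even though a raw percentage score would make the rural hospital look far worse.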

Additionally, quality can be very subjective. A nursing interaction can be interpreted as good by one patient and bad by another simply based on their perception of the encounter. For instance, one nurse got a bad score because she did not immediately bring a patient a second Jell-O when it was requested. Some patients are more demanding than others, and negative scores affect smaller hospitals much more than larger ones.

Another problem around quality is that patients never get to see a hospital’s accreditation findings. I, for one, would see the number of negative findings as an indicator of quality relative to other hospitals, but such data is seldom ever published.

And what about the patient population? Where patients live, their age, co-morbidities, work patterns and a host of other factors can affect outcomes, which are really all that any of the so-called quality sites measure. In the end, it is easy to challenge any of the findings, and it is the responsibility of the patient to question anything they read or see on the web.

Maribeth Shannon

Oct. 31, 2013, 12:38 p.m.

Charlie, thank you for this well-considered piece. As an entity that engages in the production of quality-rating sites, the California HealthCare Foundation recognizes many of the limitations you note, and that no report card is a perfect predictor of quality. However, we believe that sites like ours, which will be integrated with long-term care facility ratings and medical group ratings and re-launched later this year, encourage providers to focus on quality (what gets measured gets managed), which is a good thing. They also serve to educate consumers about the enormous variation in quality that has long existed in American health care. But standardization of measures and transparency about how the ratings are calculated are vital.

Nov. 3, 2013, 5:33 a.m.

A very well-written article. There are many ways to implement hospital ratings. One can look at “never events” such as falling out of bed, iatrogenic sepsis, bedsores, etc. One can look at 30-day readmission rates. And the voices of patients, nurses, doctors and other hospital staff can be heard and used to rank and determine the best hospitals in the United States.

Robert Holman

Nov. 3, 2013, 7:16 p.m.

I would think that some of these rating sources simply regurgitate Medicare data, and some are pay-to-play. Nearly everywhere I see the advertising term “Top 100 Hospital.” What exactly does that mean? Top 100 in the county, the state or the nation?

Dina J Padilla

Nov. 4, 2013, 2:47 p.m.

Just as Kaiser rates itself No. 1 and adds five stars, Medicare says the same thing in Kaiser’s federally paid-for ads. Medicare has been ripped off for decades of federal money by the likes of Kaiser, with blank checks and cost shifting; their workers’ claims go straight to Medicare and Social Security, and Kaiser paved the way for others to do the same. But hopefully the billions of federal tax dollars to a “non profit” ripoff will stop, as well as the self-imposed ratings. Now that’s what you call the end of fraud and abuse!
