In January, Uber reported that cities using its ridesharing service have seen a reduction in drunk driving accidents, particularly among young people. But when ProPublica data reporter Ryann Grochowski Jones took a hard look at the numbers, she found the company's claim that it had "likely prevented" 1,800 crashes over the past 2.5 years to be lacking.
She explains on this week's podcast that the first red flag was that Uber didn't include a methodology with its report. A methodology is crucial because it shows how the statistician did the analysis and notes any caveats in the data, such as the time of year, that may affect the results, Grochowski Jones says.
Uber eventually sent her a copy of the methodology separately, which showed that drunk-driving accidents involving drivers under 30 dropped in California after Uber's launch. The math itself is fine, Grochowski Jones says, but Uber offers no proof that those under 30 and Uber users are actually the same population.
This seems like one of those famous moments in intro statistics courses where we talk about correlation and causality, ProPublica Editor-in-Chief Steve Engelberg says. Grochowski Jones agrees, noting that drowning rates are higher in the summer, as are ice cream sales, but clearly one doesn't cause the other.
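The ice-cream-and-drowning point can be made concrete with a short simulation. This sketch uses invented numbers purely for illustration: a hidden seasonal factor (temperature) drives both variables, so they correlate strongly even though neither causes the other.

```python
import math
import random

random.seed(0)
n = 365

# The confounder: daily temperature, higher in summer (invented values).
temp = [20 + 10 * math.sin(2 * math.pi * d / n) + random.gauss(0, 2) for d in range(n)]

# Ice cream sales rise with temperature, plus noise.
sales = [5 * t + random.gauss(0, 10) for t in temp]

# Drownings also rise with temperature, plus noise -- no link to sales at all.
drownings = [0.3 * t + random.gauss(0, 2) for t in temp]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Strong positive correlation, despite zero causal connection.
print(round(pearson(sales, drownings), 2))
```

The same trap applies to Uber's numbers: a third factor (age, neighborhood, broader drunk-driving trends) could drive both Uber adoption and the drop in crashes.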
Engelberg also asks how we know that the people using Uber after a night of drinking aren't the same people who used to take taxis. If riders simply shifted from one service to another, the study has a problem, he says.
"Uber could definitely have an effect on the drunk driving rate but if they're trying to claim that they are the cause of this with this study, the study is lacking," Grochowski Jones says.