
When Big Data Becomes Bad Data

Corporations are increasingly relying on algorithms to make business decisions, and that raises new legal questions.

A recent ProPublica analysis of The Princeton Review’s prices for online SAT tutoring shows that customers in areas with a high density of Asian residents are often charged more. When presented with this finding, The Princeton Review called it an “incidental” result of its geographic pricing scheme. The case illustrates how even a seemingly neutral pricing model can lead to inadvertent bias — bias that’s hard for consumers to detect and even harder to challenge or prove.

Over the past several decades, an important tool for assessing and addressing discrimination has been the “disparate impact” theory. Attorneys have used this idea to successfully challenge policies that have a discriminatory effect on certain groups of people, whether or not the entity that crafted the policy was motivated by an intent to discriminate. It’s been deployed in lawsuits involving employment decisions, housing and credit. Going forward, the question is whether the theory can be applied to bias that results from new technologies that use algorithms.

Asians Are Nearly Twice as Likely to Get a Higher Price from The Princeton Review

One unexpected effect of the company's geographic approach to pricing is that Asians are almost twice as likely to be offered a higher price than non-Asians, an analysis by ProPublica shows. Read the story.

The term “disparate impact” was first used in the 1971 Supreme Court case Griggs v. Duke Power Company. The Court ruled that, under Title VII of the Civil Rights Act, it was illegal for the company to use intelligence test scores and high school diplomas — factors that were shown to disproportionately favor white applicants and substantially disqualify people of color — to make hiring or promotion decisions, whether or not the company intended the tests to discriminate. A key aspect of the Griggs decision was that the power company couldn’t prove that its intelligence tests or diploma requirements were actually relevant to the jobs it was hiring for.

In the years since, several disparate impact cases have made their way to the Supreme Court and lower courts, most having to do with employment discrimination. This June, the Supreme Court’s decision in Texas Dept. of Housing and Community Affairs v. Inclusive Communities Project, Inc. affirmed the use of the disparate impact theory to fight housing discrimination. The Inclusive Communities Project had used a statistical analysis of housing patterns to show that a tax credit program effectively segregated Texans by race. Sorelle Friedler, a computer science researcher at Haverford College and a fellow at Data & Society, called the Court’s decision “huge,” both “in favor of civil rights…and in favor of statistics.”

So how will the courts address algorithmic bias? From retail to real estate, from employment to criminal justice, the use of data mining, scoring software and predictive analytics programs is proliferating rapidly. Software that makes decisions based on data like a person’s ZIP code can reflect, or even amplify, the results of historical or institutional discrimination. “[A]n algorithm is only as good as the data it works with,” Solon Barocas and Andrew Selbst write in their article “Big Data’s Disparate Impact,” forthcoming in the California Law Review. “Even in situations where data miners are extremely careful, they can still effect discriminatory results with models that, quite unintentionally, pick out proxy variables for protected classes.”
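To make the proxy effect concrete, here is a minimal, hypothetical sketch in Python. It is not The Princeton Review’s pricing model or ProPublica’s analysis: the population, the assumed correlation between group membership and ZIP code, and the two price points are all invented for illustration. The pricing rule never sees the protected attribute, yet the quoted prices still split along group lines because ZIP code stands in for it.

```python
import random

random.seed(0)

def make_person():
    """Synthetic resident: a protected-group label plus a correlated ZIP prefix."""
    group = random.choice(["A", "B"])
    # Assumed correlation: group B is concentrated in the "200" ZIP prefix.
    in_200 = random.random() < (0.8 if group == "B" else 0.2)
    return group, "200" if in_200 else "100"

def price_quote(zip_prefix):
    """A 'neutral' geographic rule: the price depends only on ZIP, never on group."""
    return 7200 if zip_prefix == "200" else 6600

population = [make_person() for _ in range(10_000)]

for g in ("A", "B"):
    quotes = [price_quote(z) for grp, z in population if grp == g]
    high_rate = sum(q == 7200 for q in quotes) / len(quotes)
    print(f"group {g}: {high_rate:.0%} offered the higher price")
```

Running this prints roughly 20 percent for group A and 80 percent for group B, even though group membership never enters the pricing rule — the disparity rides in entirely on the correlated ZIP code.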

It’s troubling enough when Flickr’s auto-tagging of online photos labels pictures of black men as “animal” or “ape,” or when researchers determine that Google search results for black-sounding names are more likely to be accompanied by ads about criminal activity than search results for white-sounding names. But what about when big data is used to determine a person’s credit score, ability to get hired, or even the length of a prison sentence?

Because disparate impact theory is results-oriented, it would seem to be a good way to challenge algorithmic bias in court. A plaintiff would only need to demonstrate bias in the results, without having to prove that a program was conceived with bias as its goal. But there is little legal precedent. Barocas and Selbst argue in their article that expanding disparate impact theory to challenge discriminatory data-mining in court “will be difficult technically, difficult legally, and difficult politically.”
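One way to picture what “demonstrating bias in the results” can look like in practice: regulators and plaintiffs’ experts often compare favorable-outcome rates across groups, and the EEOC’s Uniform Guidelines treat a ratio below four-fifths as a rough flag for adverse impact. The sketch below, with invented counts, computes that ratio; it is an informal screening heuristic, not the legal standard a court would apply.

```python
def disparate_impact_ratio(favorable_a, total_a, favorable_b, total_b):
    """Ratio of the lower group's favorable-outcome rate to the higher's."""
    rate_a = favorable_a / total_a
    rate_b = favorable_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Invented example: 480 of 600 applicants in group A hired vs. 240 of 400 in group B.
ratio = disparate_impact_ratio(480, 600, 240, 400)
print(f"impact ratio: {ratio:.2f}")  # 0.75
print("below four-fifths guideline" if ratio < 0.8 else "within guideline")
```

A result like 0.75 would not by itself win a lawsuit, but it is the kind of outcome-based statistic that disparate impact claims are built on.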

Some researchers argue that it makes more sense to design systems from the start in a more considered and discrimination-conscious way. Barocas and Moritz Hardt established a traveling workshop called Fairness, Accountability and Transparency in Machine Learning to encourage other computer scientists to do just that. Some of their fellow organizers are also developing tools they hope companies and government agencies could use to test whether their algorithms yield discriminatory results and to fix them when necessary. Some legal scholars (including the University of Maryland’s Danielle Keats Citron and Frank Pasquale) argue for the creation of new regulations or even regulatory bodies to govern the algorithms that make increasingly important decisions in our lives.

There still exists “a large legal difference between whether there is explicit legal discrimination or implicit discrimination,” said Friedler, the computer science researcher. “My opinion is that, because more decisions are being made by algorithms, that these distinctions are being blurred.”
