Sept. 22, 2017: In a post on Facebook, Chief Operating Officer Sheryl Sandberg detailed the steps that Facebook will take to police offensive ad categories: adding more human reviewers; creating a way for people to report abusive ad categories; and stepping up enforcement of the company’s rules against hateful targeting.

In the wake of ProPublica’s report Thursday that Facebook advertisers could have directed pitches to almost 2,300 people interested in “Jew hater” and other anti-Semitic topics, the world’s largest social network said it would no longer allow advertisers to target groups identified by self-reported information.

“As people fill in their education or employer on their profile, we have found a small percentage of people who have entered offensive responses,” the company said in a statement. “…We are removing these self-reported targeting fields until we have the right processes in place to prevent this issue.”

Facebook had already removed the anti-Semitic categories — which also included “How to burn jews” and “History of ‘why jews ruin the world’” — after we asked the company about them earlier this week. Then, after our article was published, Slate reported that Facebook advertisers could target people interested in other topics such as “Kill Muslim Radicals” and “Ku-Klux-Klan.” Facebook’s algorithm automatically transforms people’s self-reported interests, employers and fields of study into advertising categories.

Because audiences in the hateful categories were “incredibly low,” the ad campaigns targeting them reached “an extremely small number of people,” Facebook said. Its statement didn’t identify the advertisers. Conceivably, those who might find it helpful to target anti-Semites could range from recruiters for far-right groups to marketers of Nazi memorabilia.

ProPublica documented that the anti-Semitic ad categories were real by paying $30 to target those groups with three “promoted posts” — in which a ProPublica article or post was displayed in the news feeds of the targeted users. Facebook approved all three ads within 15 minutes.

Facebook’s advertising has become a focus of national attention since it disclosed last week that it had discovered $100,000 worth of ads placed during the 2016 presidential election season by “inauthentic” accounts that appeared to be affiliated with Russia.

Like many tech companies, Facebook has long taken a hands-off approach to its advertising business. Unlike traditional media companies, which select the audiences they offer advertisers, Facebook generates its ad categories automatically, based both on what users explicitly share with Facebook and on what they implicitly convey through their online activity.

Traditionally, tech companies have contended that it’s not their role to censor the internet or to discourage legitimate political expression. In the wake of the violent protests in Charlottesville by right-wing groups that included self-described Nazis, Facebook and other tech companies vowed to strengthen their monitoring of hate speech.

Facebook CEO Mark Zuckerberg wrote at the time that “there is no place for hate in our community,” and pledged to keep a closer eye on hateful posts and threats of violence on Facebook. “It’s a disgrace that we still need to say that neo-Nazis and white supremacists are wrong — as if this is somehow not obvious,” he wrote.