Like many of you, I’ve spent a lot of time during the past few months frustrated by the spread of disinformation on social media. As 2020 crawls to a conclusion at the speed of a Georgia vote count, I thought it might be useful to ask an expert to explain what passes for rules on the major platforms. Fortunately, we have just the right person on staff at ProPublica’s virtual headquarters:
I’m Kengo Tsutsumi, ProPublica’s platform editor and a member of our audience team. I spend my time thinking about how best to get our journalism in front of people and where we should focus to make that happen. There are lots of ways to do this, and social media is one of them. I’m as exhausted as the rest of you after the past few weeks, but to make things a little less confusing, I thought it would be helpful to share some things I know to be true about social platforms.
It might seem like social media is the Wild West, with hardly any sheriffs and a set of rules that seemingly change from day to day. That’s not quite true, though it has certainly looked that way over the past four years. In fact, the biggest platforms — Twitter, Facebook, Instagram — do have rules that are enforced ... extremely selectively. And therein lies the problem.
Twitter, for example, has “terms of use” that, among other things, bar people from threatening violence or engaging in “hateful conduct.” In November, Steve Bannon was permanently banned from Twitter (but not Facebook!) for calling for Anthony Fauci’s beheading on his podcast. Rapper Talib Kweli’s account was likewise permanently suspended over the summer for targeted harassment. In October, Charlie Kirk, a conservative activist, was temporarily suspended from Twitter for spreading false voter information when he shared a ProPublica and Philadelphia Inquirer story and incorrectly claimed it showed Pennsylvania had rejected 372,000 mail-in ballots. (We reported that 372,000 ballot requests, most of which were duplicates, were rejected, not ballots themselves.)
Up to this point, it all probably sounds pretty logical. But even setting aside the fact that lots of tweets flagged for breaking the rules stay posted, the system really collapses when someone with, say, 85 million followers who happens to be president of the United States starts tweeting outright lies. By any of Twitter’s definitions (even under the public service exceptions Twitter outlines for world leaders), President Donald Trump could have been kicked off the platform many months ago: for example, when he called for shooting Black Lives Matter protesters (the tweet was flagged but remained on the platform), when he explicitly threatened other countries and their leaders, or any of the many times he has insulted or attacked individuals.
There was clearly little stomach for removing the nation’s leader from social media. Instead, by the 2020 election, both Facebook and Twitter chose to label presidential posts that spread clearly untrue election information as “disputed.” At times, the platforms appended factual information completely at odds with the president’s version of reality. That left us with a number of posts that looked like this: