After tragedies like the Parkland school shooting or the Boston Marathon bombing in 2013, we’ve seen false news stories and rumors spread on social media at a frightening speed — often outpacing the truth.
In 2013, people on Reddit and Twitter circulated completely false allegations of various individuals purported to be the “suspects” in the bombing.
In the wake of Parkland, a conspiracy theory about the shooting survivors being actors trying to gin up support for gun control reached hundreds of thousands of people before the social media platforms could step in and shut it down.
Today, the journal Science has published a study validating this pattern — at least when it comes to the spread of misinformation on Twitter. It’s a huge analysis that brings data to bear on the suspicion many have that social media, as a platform for news, has a bias for the sensational, unverified, emotional, and false. And it’s concerning, considering how social media has become a dominant force in news distribution.
The study analyzed millions of tweets sent between 2006 and 2017 and came to this chilling conclusion:
Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information.
But perhaps even more important is what the study reveals about what’s responsible for fueling the momentum of false news stories. It’s not influential Twitter accounts with millions of followers. Or Russian bots designed to automatically tweet misinformation. It’s ordinary Twitter users with meager followings, most likely just sharing the false news stories with their friends.
What’s clearer than ever now is that the spread of false news is a consequence of flawed human psychology — and platforms like Twitter simply amplify it. But it’s unclear if it’s a problem that the platforms can truly ever solve.
How to track a lie throughout all of English-speaking Twitter
The new Science paper comes from a trio of computer science researchers at MIT, who started this project two years ago before “fake news” was a common household term.
The goal was to study something that sounds simple but involved an enormous amount of data. We’ve all seen the chain reaction of a viral tweet. One person tweets something, another person retweets it, a third starts a totally separate thread on the same story. Tweets scatter and replicate in a chain reaction reminiscent of nuclear fission.
The MIT researchers wanted to see if they could capture the entirety of that chain — to see how fast, deep, and to how many people false rumors spread — and then compare it to how true stories spread. (Twitter allows people to see where tweets originate and spread. It would be much tougher to do that sort of analysis on a more algorithmically driven network like Facebook.)
They started by going to fact-checking websites like Snopes, Politifact, and factcheck.org to find verifiably true and false stories from 2006 to 2017. “Then we went to the Twitter data, and worked backwards,” finding the very first time each false news story was mentioned on Twitter, says Sinan Aral, the senior author on the paper.
They then found all the times those false news stories were either retweeted from the original tweet or asserted again by a separate user. The analysis didn’t include quote retweets (where users can add a comment on top of a tweet), which meant the researchers couldn’t tell whether quote retweets were “actually spreading the misinformation or correcting it,” explains Soroush Vosoughi, a co-author.
But all in all, the researchers ended up with a sample of 126,000 “cascades” of tweets (meaning chains of retweets), spread by 3 million people.
It’s an amazing amount of data, which Twitter provided special access to. “This is rare for any of the platforms to [allow],” says David Lazer, a Northeastern University political and computer scientist, adding that Twitter deserves some credit for letting the researchers query any public tweets they wanted to include in the analysis. More commonly, Twitter caps the amount of data researchers can pull from its servers.
False news on Twitter spread faster, deeper, and more widely than true news
Once the researchers had their data, they went about measuring the impact of the false news stories in three ways.
1) Speed: how quickly the story spread to a given number of people
2) Depth: how many successive layers of retweets each false news story accumulated (a retweet of a retweet adds a layer)
3) And breadth: how many users were spreading the misinformation at any given depth
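These three measures can be made concrete with a small sketch. Assuming a cascade is stored as parent-to-child retweet pairs (all names and the toy data here are hypothetical, not the study’s actual code), size, depth, and breadth fall out of a simple breadth-first walk from the original tweeter:

```python
from collections import defaultdict

def cascade_metrics(edges, root):
    """Compute (size, depth, breadth) of a retweet cascade.

    edges: list of (parent_user, child_user) retweet pairs
    root: the user who posted the original tweet
    """
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    depth_of = {root: 0}
    frontier = [root]
    while frontier:
        nxt = []
        for node in frontier:
            for c in children[node]:
                if c not in depth_of:          # count each user at first exposure
                    depth_of[c] = depth_of[node] + 1
                    nxt.append(c)
        frontier = nxt

    size = len(depth_of)                        # unique users reached
    depth = max(depth_of.values())              # longest chain of retweets
    per_level = defaultdict(int)
    for d in depth_of.values():
        per_level[d] += 1
    breadth = max(per_level.values())           # widest level of the tree
    return size, depth, breadth

# Toy cascade: A is retweeted by B and C; C is retweeted by D.
print(cascade_metrics([("A", "B"), ("A", "C"), ("C", "D")], "A"))
# → (4, 2, 2)
```

The key finding of the paper is that false stories scored higher than true ones on all three of these axes.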
The spread of the false stories was compared to that of true stories verified by the fact-checking sites, as well as a few news stories verified by research assistants.
And this was the very troubling result: On each measure, false news beat out true news. “False news spread further, faster, deeper, and more broadly than the truth in every category of information,” Aral says, with the effect being particularly pronounced when it came to political news.
Overall, the analysis found “it took the truth about six times as long as falsehood to reach 1,500 people.”
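The speed measure behind that statistic amounts to asking: at what point did the cascade reach its Nth unique user? A minimal, hypothetical sketch (the timestamps below are invented to illustrate the roughly sixfold gap, not taken from the study’s data):

```python
def time_to_reach(timestamps, n):
    """Given timestamps (e.g., minutes since the original tweet) at which
    each new unique user was reached, return how long the cascade took
    to reach its n-th user, or None if it never got that far."""
    ts = sorted(timestamps)
    return ts[n - 1] if len(ts) >= n else None

# Hypothetical cascades: the false story reaches 5 users six times faster.
false_ts = [1, 2, 4, 7, 10]
true_ts = [5, 15, 28, 44, 60]
print(time_to_reach(false_ts, 5), time_to_reach(true_ts, 5))
# → 10 60
```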
A big limitation: Is the comparison between the false news stories and the true news stories fair?
The biggest limitation of the study is the comparison sample of true news stories. The researchers pulled stories that were verified as true by the fact-checking websites, and similarly traced their paths through the Twitter ecosystem. They also added a few other true stories selected by research assistants.
But still, it may be comparing apples to oranges. Fact-checking websites ascertain the veracity of false stories once they are already pervasive and widely disseminated. The true stories may not have been as viral.
It’s also the case that we get true news through a wide array of outlets, whereas many rumors originate on Twitter.
When there’s an active shooter situation on a school campus, for example, we might hear about it from several sources: television, radio, text messages from friends. It’s qualitatively different from a Twitter rumor that can be traced back to a handful of people. “There’s a million important true stories that got a lot of attention,” David Rand, a Yale psychologist who studies the spread of misinformation, says.
That’s why it’s hard to know what the right comparison group should be in this research, says Rand, who was not involved in the Science paper. Is it right to compare the virality of conspiracy theories about students acting as “crisis actors” to news about the weather, or stock market numbers? It’s hard to say.
Regardless, the paper shows that false news stories do have an unnerving momentum on Twitter
But even if we forget about the comparison to true news stories, this Science paper does demonstrate something true and frustrating about news on Twitter: False stories can spread quickly, and deeply.
“Falsehood reached more people at every depth … than the truth, meaning that many more people retweeted falsehood than they did the truth,” the study found.
And it’s important to know why, and who is behind it.
To this question, the researchers looked at whether false news stories were spreading peer-to-peer, or being broadcast by huge accounts with millions of followers.
And they found that “people who spread false news had significantly fewer followers, were less often verified, and were less active on Twitter,” Aral says. In other words, it’s the rank and file who spread false news.
The analysis doesn’t have much to say about popular accounts that commonly spread mistruths, like InfoWars. “We are not saying that big influential accounts didn’t have a role in the spread of false news,” Vosoughi says. They are saying that, in sum, the accounts spreading false news reached more people than the accounts spreading true news.
The analysis also accounted for Twitter bots (meaning the accounts that are set up to automatically tweet without human input). Curiously, overall, the bots seemed to spread false stories and true stories at equal rates. Still, it was apparent that the speed and depth at which false news disseminates is attributable to humans spreading it.
False news is more novel, and more emotional than true news. That’s always going to make it more clicky.
The paper wasn’t designed to understand the motivations of people spreading false news. Were these users willfully spreading misinformation? Were they trolling? Were they retweeting sarcastically? The study can’t say.
But the researchers were able to tease out one possibility: The false news stories were more novel, more surprising than the true stories. This is where the huge dataset came in handy. The researchers could figure out whether users tweeting false stories had previously seen those stories in their feeds. If they hadn’t, the false stories were considered more novel.
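One simple way to operationalize that kind of novelty check (a rough sketch with invented data, not the study’s actual method, which used more sophisticated content models) is to compare a tweet’s words against everything a user has recently seen and treat low overlap as high novelty:

```python
from collections import Counter
from math import sqrt

def novelty(tweet, recent_feed):
    """Score a tweet's novelty relative to a user's recent feed:
    1 minus the cosine similarity between the tweet's word counts
    and the pooled word counts of the recently seen tweets."""
    a = Counter(tweet.lower().split())
    b = Counter()
    for seen in recent_feed:
        b.update(seen.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return 1.0 if norm == 0 else 1 - dot / norm

# A story unlike anything in the feed scores as more novel.
feed = ["markets flat today", "weather mild this weekend"]
print(novelty("markets flat today", feed) < novelty("aliens land in ohio", feed))
# → True
```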
“False news was significantly more novel than true news,” Aral says. “And that makes sense, when you are unconstrained by reality, you can come up with much more novel information.” A sentiment analysis in the Science paper revealed that replies to false-news tweets contained more expressions of surprise or disgust than true news. And perhaps that’s why fake celebrity deaths so often pervade Twitter: They’re surprising, emotional, irresistible to share.
And this “novelty” hypothesis has support in other studies. In a 2017 paper that Rand coauthored, participants who saw headlines repeated were more likely to believe them (a consequence of what’s known as the illusory truth effect). Other research has found that the more morally or emotionally charged a tweet, the more likely it is to spread within a particular ideological group.
Can Twitter actually solve this problem?
This is the problem with getting news from Twitter. So often it arrives in our feeds filtered through the human emotional system. The most viral tweets are the ones that tug on our hearts. And fake news is often designed with this in mind.
“Fake news is perfect for spreadability: It’s going to be shocking, it’s going to be surprising, and it’s going to be playing on people’s emotions, and that’s a recipe for how to spread misinformation,” Miriam Metzger, a UC Santa Barbara communications researcher who was not involved in the Science study, tells me. The flaws in human psychology mean many of us are going to be attracted to the false content, and want to spread it.
Rand sees “an unhealthy synergy between the trolls and the platforms. It’s good for the platforms for people to engage with the content, and the trolls want to create engaging content.”
This is the tough line Twitter has to walk. Twitter wants to be a go-to source for breaking news. It also wants to provide its users with an engaging, validating experience. Those two goals might always be in conflict. Meanwhile, it profits (in part) from people engaging with and spreading fake content.
I wonder if this is a problem that social networks can ever really fix. If they want the experiences to be user-driven (and if algorithms take simple cues from users to determine what’s important), false news stories will always have this greater momentum in spreading through their platforms.
Twitter has been making strides: kicking off bots, and calling for a reevaluation of the health of the conversation on its platform. It’s inviting researchers like Aral to investigate misinformation on the platform. It can do better at discovering conspiracy-theory-laden tweets and suppressing them. It could kick off more users who are willfully spreading misinformation.
But fake news is not just attractive to those who want to see the world burn. It’s attractive to a lot of us. And we’re on Twitter.