Three new studies—based on internal user data shared by Meta—show that social media may not be responsible for political polarisation. The reality is way more complicated.
Background on the studies
Why they matter: These are the first results of a unique partnership between researchers from leading universities and from Meta. All previous research on social media has been based on publicly available data—or on small numbers of users whose information was ‘scraped’ or downloaded from the internet. This is the first time a social media company has shared its internal data with researchers. There are 16 such peer-reviewed papers planned—including studies of data from around the time of the Capitol Hill riots. These three studies were published last week in the leading journals Nature and Science.
The aim: They set out to study the effects of social media on political polarisation—the divide between conservatives and liberals that has sharpened considerably over the past decade. They looked at whether what you see on your social media feeds affects your understanding of and opinions about news, government and democracy.
The methodology: Researchers changed the feeds of tens of thousands of Facebook and Instagram users between late September and December 2020—the period around the US presidential election. Some of the studies also looked at the data of at least 208 million American users on Facebook.
Study #1: Changing the algorithm
Researchers replaced the Meta algorithm with a reverse-chronological feed (most recent post first)—for 23,000 Facebook and 21,000 Instagram users. They wanted to test claims by critics—such as whistleblower Frances Haugen—that Meta’s algorithm “amplifies and rewards hateful, divisive and false posts by raising them to the top of users’ feeds.” Many have argued that a simple reverse-chronological feed would minimise the divisive content seen by users.
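To make the contrast concrete, here is a minimal sketch of the difference between an engagement-ranked feed and a reverse-chronological one. This is not Meta’s actual ranking code; the Post fields and the engagement_score value are assumptions made purely for illustration.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    engagement_score: float  # hypothetical predicted-engagement value, not a real Meta metric

def ranked_feed(posts: list[Post]) -> list[Post]:
    # Algorithmic feed: posts with the highest predicted engagement come first.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Reverse-chronological feed: the most recent post comes first, with no ranking model.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```

In effect, users in the treatment group saw something like the second ordering instead of the first.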
What they found: The reverse-chronological feed was more boring—and users spent far less time on the platform. More notably, it exposed users to more untrustworthy content—because Meta’s algorithm downgrades sources that repeatedly share misinformation. And it had very little effect on “levels of polarisation, political knowledge or offline political behaviour.”
The big ‘but’: The reverse-chronological feed also cut the amount of hateful content in half. And the amount of content from moderate friends and sources with ideologically mixed audiences increased on Facebook.
The interesting bit: Changing the logic of the feed made no difference to actual political participation. So people were not more likely to vote or sign a petition etc. But it did affect online engagement: people were far less likely to post or like political content, share that they had voted, or mention politicians and candidates running for office.
What it tells us: According to experts, it shows that you can’t change people’s beliefs by simply changing an algorithm: “What this tells us is that you can change people’s information diet, but you’re not going to immediately move the needle on these other things. I was a little surprised by that.”
OTOH: Others like Haugen argue that the results don’t capture the damage already done—and criticise the timing of the study:
She argued that by the time the researchers evaluated the chronological approach during the fall of 2020, thousands of users already had joined mega groups that would have flooded their feeds with potentially problematic content. She noted that during the months leading up to the 2020 election, Meta already had implemented some of its most aggressive election protection measures to address the most extreme posts. The company rolled back many of those measures after the election, she said.
Others offer this climate change analogy:
It’s a bit like cutting carbon emissions in a handful of towns, says David Garcia, a social scientist at the University of Konstanz: This is unlikely to alleviate the effects of climate change locally, because even though carbon emissions drive climate change, they do so at the global level.
The main takeaway: The study primarily establishes that changing the algorithm cannot, by itself, end political polarisation: “The results of these experiments do not show that the platforms are not the problem, but they show that they are not the solution.”
Study #2: Analysing the news feed
This study was a lot more straightforward. Researchers looked for correlations between the news stories on a user’s feed and how liberal or conservative that user is—according to Facebook’s own internal classifier.
The results: These are less surprising. Liberals and conservatives do indeed live in their own echo chambers—and “read and engaged with different sets of political news stories.” This self-insulation was more pronounced on the right. Conservatives also saw a lot more misinformation:
Conservatives tended to read far more political news links that were also read almost exclusively by other conservatives, according to the research. Of the news articles marked by third-party fact checkers as false, more than 97% were viewed by more conservatives than liberals. Facebook Pages and Groups, which let users follow topics of interest to them, shared more links to hyperpartisan articles than users’ friends.
But, but, but: One of the authors notes: “Only a small fraction of all content was rated false, however, and much misinformation may be going under the radar, some of which could be concentrated on the left.”
Study #3: The ‘friends’ effect
This study limited a user’s exposure to content shared by “like-minded” friends—cutting it by a third. The researchers also increased exposure to content from ideologically opposed viewpoints. Here’s what they found:
- For starters, the majority of the content seen by Facebook users comes from like-minded friends, pages and groups—which isn’t exactly news.
- Reducing exposure also reduced engagement—users did not click on or react to the posts quite as much.
- But it also decreased exposure to “uncivil” content from “sources that repeatedly post misinformation.”
- However, none of this had any effect on a person’s beliefs or level of polarisation.
The big debate: Meta’s end game
A number of experts are sceptical about the value of collaborating with Meta—and wary of the company’s ability to influence the results.
Were the researchers truly ‘independent’? Meta had no control over the published research:
Meta collaborated with 17 outside scientists who were not paid by the company, were free to decide what analyses to run, and were given final say over the content of the research papers.
In fact, Meta is already arguing with some of the scientists over the interpretation of their findings. But it is also important to note that these researchers were not allowed to handle raw data due to privacy concerns. And Meta had the power to veto any request that violated its privacy policy.
But, but, but: Some experts say that the partnership is just a way to buy time—and stave off regulation. In a commentary accompanying the papers in Science—titled ‘Independence by permission’—Michael W Wagner concludes the researchers conducted “rigorous, carefully checked, transparent, ethical, and path-breaking studies.” But he offers this caveat:
But Meta set the agenda in ways that affected the overall independence of the researchers. Meta collaborated over workflow choices with the outside academics, but framed these choices in ways that drove which studies you are reading about in this issue of Science…
And there’s only so much researchers understand about “how complicated the data architecture and programming code are at corporations such as Meta. Simply put, researchers don't know what they don't know, and the incentives are not clear for industry partners to reveal everything they know about their platforms.”
Point to note: Meta’s PR release claims that the studies clearly show that its platforms do not “play a central role in polarising users and inflaming political discourse”—a conclusion that the researchers strongly disagree with.
The bottomline: Meta certainly proved Wagner right. He predicted that Meta pushed to get these papers out first for a good reason: “You could read it as the big splash is going to be that there aren’t huge effects that are so deleterious to democracy that we need to have a bunch of new regulations on our platform.”