Splainer

Monday, October 25, 2021


Dive In

 

Why should we stop delimitation? Nothing is going to stop. After delimitation, there will be elections and then restoration of statehood.

That’s Home Minister Amit Shah promising a path to statehood on his trip to Jammu & Kashmir over the weekend. FYI: Delimitation is the redrawing of the boundaries of Lok Sabha and state Assembly constituencies to reflect changes in the population. But J&K was exempted from the process until Article 370 was revoked—a decision that Shah described as “irreversible.”

 
Big Story

A series of scoops on Facebook India

The TLDR: A consortium of 17 news publications has come together to roll out a flood of stories about Facebook. These are based on tens of thousands of internal documents shared by whistleblower Frances Haugen—a former Facebook product manager. The so-called Facebook Papers also offer a number of significant revelations about India. The big picture: the platform’s algorithm is rigged to promote hate and disinformation—and the company’s reluctance to crack down on groups associated with the RSS, Bajrang Dal etc. ensures that anti-Muslim content spreads unchecked.

 

First, some background

This isn’t the first big scoop on Facebook’s activities in India. Two previous Wall Street Journal investigations revealed the following:

 

  • In August 2020, WSJ reported that Facebook India’s Public Policy Director, Ankhi Das, intervened to prevent hate-speech rules from being applied to Hindu nationalist BJP leaders. The reason: “punishing violations by politicians from Mr. Modi’s party would damage the company’s business prospects in the country, Facebook’s biggest global market by number of users.”
  • In December 2020, WSJ revealed that the company refused to ban the Bajrang Dal—even though it qualified as a ‘dangerous organisation’ under its own rules. The reason: Doing so risked “infuriating India’s ruling Hindu nationalist politicians” and “might precipitate physical attacks against Facebook personnel or facilities.”

 

Please note: WSJ is behind a paywall, but we explained each of these investigations here and here.

 

Ok, so what’s new in these papers?

#1 A hate-promoting algorithm: In February 2019, a Facebook employee created a dummy account as a person from Kerala. The aim was to answer a key question: what would a new user see on their feed if they followed all the recommendations generated by the platform’s algorithm to join groups, watch videos and explore pages? The experiment ran for three weeks—during which the Pulwama terror attack occurred in Kashmir. Here’s what happened next:

 

“[T]he employee… said they were ‘shocked’ by the content flooding the news feed which ‘has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.’

 

Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech, unverified rumors and viral content ran rampant. The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of the content was extremely graphic… ‘Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,’ the researcher wrote.”

 

#2 ‘Gateway groups’: A similar experiment with a test user was conducted in the United States—and produced almost identical results. More importantly, a related internal report found that the algorithm repeatedly pushed users toward extremist groups:

 

"The body of research consistently found Facebook pushed some users into 'rabbit holes,' increasingly narrow echo chambers where violent conspiracy theories thrived. People radicalized through these rabbit holes make up a small slice of total users, but at Facebook’s scale, that can mean millions of individuals."

 

A different report concluded: “Group joins can be an important signal and pathway for people going towards harmful and disruptive communities. Disrupting this path can prevent further harm.”

 

Why this matters in India: Many of the groups joined by the test user in India had tens of thousands of users. And an earlier 2019 report found Indian Facebook users tended to join large groups, with the country’s median group size at 140,000 members.

 

#3 The protected groups: Earlier this year, an internal report found that much of the violent anti-Muslim material was circulated by RSS-affiliated groups and users. But it also noted that Facebook hasn’t flagged the RSS as dangerous or recommended its removal “given political sensitivities.” 

 

Another report called out the Bajrang Dal for using WhatsApp to “organize and incite violence”—and also noted that the group is linked to the BJP. Employees considered designating the Bajrang Dal a dangerous group, and listed it under a recommendation: “TAKEDOWN.” However, it is still active on Facebook.

 

Response to note: When contacted by the Wall Street Journal, the RSS said, “Facebook can approach the RSS anytime”—but noted that the company had never done so. The Bajrang Dal’s response was blunter and more revealing: “If they say we have broken the rules, why haven’t they removed us?”

 

#4 Lack of attention/resources: One big reason Facebook has failed to stem the tide of hate is that it simply hasn’t invested enough resources. According to the documents, Facebook saw India as one of the most “at risk countries” in the world and identified both Hindi and Bengali as priorities for “automation on violating hostile speech.” And yet most of the anti-Muslim content was “never flagged or actioned” because Facebook lacked “classifiers” (hate-detecting algorithms) and “moderators” in those languages.

 

Big data point to note: Responding to the reporting, Facebook claims that it has since “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali”—which, it says, has “reduced the amount of hate speech that people see by half” in 2021.

 

But, but, but, as the New York Times notes, 87% of the company’s global budget for time spent on classifying misinformation is allocated to the United States—and only 13% is set aside for the rest of the world. That’s despite the fact that India is Facebook’s biggest market, with over 340 million users on the platform and nearly 400 million on WhatsApp. In comparison, American users make up only 10% of the social network’s daily active users.

 

Why isn’t Facebook doing more?

One: The algorithm Facebook rolled out in 2018 is designed to maximise engagement—anything that keeps you spending hours on the platform. Quite simply, fear and hate are more powerful hooks than other emotions. And when a product tweak to crack down on hate decreases engagement, the company has chosen not to implement it.

 

Two: The algorithms developed in 2019 to detect hateful content—called ‘classifiers’—only work in certain languages. Despite Facebook’s claims, reports from this year show that its controls for Hindi and Bengali remain weak and ineffective. And classifiers for many other Indian languages are entirely missing.

 

Three: The company’s first priority is to protect its relationships with those who wield power. A 2020 internal document noted that “Facebook routinely makes exceptions for powerful actors when enforcing content policy.” It also quotes a former Facebook chief security officer, who says this bias is even stronger outside the United States. In countries like India, “local policy heads are generally pulled from the ruling political party and are rarely drawn from disadvantaged ethnic groups, religious creeds or castes”, which “naturally bends decision-making towards the powerful.”

 

The bottomline: As we’ve seen, Facebook is only too happy to kowtow to the government’s demands for stricter social media rules. But it will continue to optimise for hate in India unless and until the government tells it not to.

 

Reading list

All the latest reporting is in the Associated Press, New York Times and Wall Street Journal (paywall). NBC News looks at how ‘gateway groups’ function. MIT Technology Review has a very good deep dive into how Facebook’s algorithms work—and what we learned from whistleblower Frances Haugen. We highly recommend checking out our previous explainers for more context. The first puts Facebook’s actions in the context of its close relationship with Modi and the BJP. The second connects the dots with its big investment in Jio Platforms.

 

 
Headlines that matter

A setback for space tourism

The recent flights by Jeff Bezos, Richard Branson and Elon Musk’s SpaceX generated a lot of media hype. But an upcoming SpaceX flight had to be canceled due to a lack of takers—because “the mix of price, timing and experience wasn’t right.” Translation: We couldn’t find enough filthy rich billionaires with money to waste. Why this matters: space tourism may still be a very distant dream. (Futurism)

 

A ‘Squid Game’ ripoff

In case you think only Bollywood is capable of shameless plagiarism, the Chinese streaming service Youku outdid everyone by unveiling a game show titled ‘Squid Victory’—billed as a “game and social networking show” featuring “children’s games.” Even the posters looked, umm, familiar. The backlash was immediate. Youku’s solution: It has changed the name to ‘Game Victory’. (South China Morning Post)

 

An increase in tuskless elephants

Elephants in Mozambique’s Gorongosa National Park have been relentlessly slaughtered by poachers for their ivory—and the population fell by 90% during the civil war from 1977 to 1992. But the poaching had an unexpected long-term effect: a rise in the number of females born without tusks—which now account for 50.9% of the reserve’s elephants. Scientists say such a quick evolutionary response in such a large animal is truly “remarkable.” The downside: This puts the tusked males in greater danger than ever. (Popular Science)

 

In today’s edition

Sanity Break

  • Sage Sohier’s unusual collection of photos taken at animal rescue centres

 

Smart & Curious

  • The foreign roots of the greatly beloved vada pav
  • Do we talk too much, too often and to too many people?
  • The culture war that pits Urdu against Hindi
  • Why the concept of ‘laziness’ is useless in economic theory