How Facebook got it wrong on India, again: An explainer

Facebook’s products and services have been used to spread hate in India (REUTERS)
Updated on Oct 24, 2021 08:10 PM IST

Facebook’s products and services have been used to spread hate in India and the company has dragged its feet on remedies, multiple media reports said, publishing details from a trove of leaked internal company reports and memos.

The reports, published by the Wall Street Journal, The New York Times and the Associated Press, are based on disclosures made to the United States (US) Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions were obtained by a consortium of news organisations, including CNN, Le Monde, Reuters and the Fox Business network.

These documents are drawn from dozens of studies and memos written by Facebook employees grappling with the effects of the platform, and many of them deal with hate and misinformation in India.

The following are some of the specific internal documents that have been cited and the problems they detail.

Internal note: Indian Test User’s Descent into a Sea of Polarising, Nationalistic Messages

According to AP, a Facebook employee wanted to understand what a new user in the country saw on their news feed if all they did was follow pages and groups recommended solely by the platform itself.

The employee created a test user account and kept it live for three weeks, a period during which the Pulwama terror attack killed 40 CRPF jawans. The employee, whose name is redacted, said the feed had “become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore”.

Seemingly innocuous groups recommended by Facebook quickly morphed into something else altogether, the AP report cited the note as saying, with hate speech, unverified rumours and viral content running rampant. A “Popular Across Facebook” feature at one point showed a slew of unverified content related to the retaliatory Indian strikes into Pakistan after the bombing, including an image of a napalm bomb from a video game clip debunked by one of Facebook’s fact-check partners.

“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote.

Internal document: Lotus Mahal

The AP report went on to cite a document in which Facebook researchers found that members with links to the Bharatiya Janata Party (BJP) had created multiple Facebook accounts to amplify anti-Muslim content, ranging from “calls to oust Muslim populations from India” to “Love Jihad,” a conspiracy theory that accuses Muslim men of using interfaith marriages to coerce Hindu women to change their religion. The research found that much of this content was “never flagged or actioned” since Facebook lacked “classifiers” and “moderators” in Hindi and Bengali, the report added.

Internal report: Communal Conflict in India Part I

According to the WSJ, this report was prepared after Facebook dispatched researchers to interview dozens of users. One of the respondents, a Hindu man in Delhi, said he received frequent messages on Facebook and WhatsApp “that are all very dangerous,” such as “Hindus are in danger, Muslims are about to kill us,” the researchers reported. A Muslim man in Mumbai told the researchers he feared for his life: “It’s scary, it’s really scary.”

In this report, the researchers recommended that Facebook invest in building technical systems that can “detect and enforce on inflammatory content in India” the way human reviewers might, and create a “bank” of inflammatory material to study what people were posting.

Internal report: Adversarial Harmful Networks: India Case Study

The authors of this report, the WSJ added, said that much of the content posted by users, groups and pages linked to the Rashtriya Swayamsevak Sangh (RSS) is never flagged because Facebook lacks the systems to detect it in Hindi and Bengali. This is despite people in RSS groups and pages posting a high volume of content about “Love Jihad” and making “dehumanising posts comparing Muslims to pigs and dogs”.

WSJ said the RSS and the Prime Minister’s Office did not respond to requests for comment.

Unidentified report on Bajrang Dal

The WSJ cited a report, not identified by a title, as saying that researchers found Bajrang Dal had previously used WhatsApp to “organize and incite violence.” The group had been considered for designation as a dangerous organisation, which would result in a permanent ban, and was listed under a recommendation: “TAKEDOWN.” Bajrang Dal remains active on Facebook, the Journal report added.

Internal report: Indian Election Case Study

According to the NYT, after India’s general elections had begun, Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country, as detailed in an internal document called Indian Election Case Study. The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners and increasing the amount of misinformation it removed. But it also noted that Facebook had created a “political white list to limit P.R. risk,” a list of politicians who received a special exemption from fact-checking.

Facebook’s responses

Facebook spokesman Andy Stone declined to comment on specific questions, the WSJ report cited above said, but said the company bans groups or individuals “after following a careful, rigorous, and multidisciplinary process.”

He said some of the reports were working documents containing investigative leads for discussion, rather than complete investigations, and didn’t contain individual policy recommendations.

In a separate response, Stone said the company has invested significantly in technology to find hate speech across languages, and that globally such content on the platform has been declining. He said Facebook has technical systems in place to catch offending material in five languages spoken in India, including Hindi and Bengali, and has human language expertise in many more. He said the company continues to work to improve its systems.

On the test user experiment described first above, the Facebook spokesperson said the study “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them”.

ABOUT THE AUTHOR

    Binayak reports on information security, privacy and scientific research in health and environment with explanatory pieces. He also edits the news sections of the newspaper.

Close Story
SHARE
Story Saved
OPEN APP
×
Saved Articles
Following
My Reads
Sign out
New Delhi 0C
Wednesday, December 08, 2021