Facebook struggles with hate speech, celebrations of violence in India: Report
Facebook’s internal documents show “a struggle with misinformation, hate speech and celebrations of violence” in India, with researchers at the social media giant pointing out that there are groups and pages “replete with inflammatory and misleading anti-Muslim content” on its platform, a US media report has said.
In February 2019, a Facebook researcher created an account to see what the social media website would look like for a person living in Kerala, the New York Times reported on Saturday.
“For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site. The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month,” the US newspaper said in its report.
“Internal documents show a struggle with misinformation, hate speech and celebrations of violence in the country, the company’s biggest market,” said the report based on disclosures obtained by a consortium of news organisations, including the New York Times and the Associated Press news agency.
The documents are part of the material collected by data engineer and whistleblower Frances Haugen, a former Facebook employee who recently testified before the US Senate about the company and its social media platforms.
The internal documents include details on how bots and fake accounts tied to the “country’s ruling party and opposition figures” were wreaking havoc on India’s national elections, the report said.
In a separate report produced after the 2019 national elections, Facebook found that “over 40 per cent of top views, or impressions, in the Indian state of West Bengal were fake/inauthentic”, the newspaper reported. One inauthentic account had amassed more than 30 million impressions.
In an internal document – titled Adversarial Harmful Networks: India Case Study – Facebook researchers wrote that there were groups and pages “replete with inflammatory and misleading anti-Muslim content” on the social media platform.
The documents also detail how a plan “championed” by Facebook founder Mark Zuckerberg to focus on “meaningful social interactions” was leading to more misinformation in India, particularly during the pandemic.
Another Facebook report detailed efforts by Bajrang Dal to publish posts containing anti-Muslim narratives on the platform. “Facebook is considering designating the group as a dangerous organisation because it is ‘inciting religious violence’ on the platform, the document showed. But it has not yet done so,” the New York Times report said.
The documents showed that Facebook did not have enough resources in India and was not able to grapple with the problems it had introduced there, including anti-Muslim posts.
“This exploratory effort of one hypothetical test account inspired deeper, more rigorous analysis of our recommendation systems, and contributed to product changes to improve them,” a Facebook India spokesperson said. “Product changes from subsequent, more rigorous research included things like the removal of borderline content and civic and political groups from our recommendation systems.”
The company has strengthened its hate classifiers to include four Indian languages, the spokesperson said.
“We’ve invested significantly in technology to find hate speech in various languages, including Hindi and Bengali. As a result, we’ve reduced the amount of hate speech that people see by half this year. Today, it’s down to 0.05%. Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” the spokesperson said.
After India’s national elections had begun, “Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country,” according to an internal document called Indian Election Case Study, the New York Times reported.
“The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners — the third-party network of outlets with which Facebook works to outsource fact-checking — and increasing the amount of misinformation it removed,” the report said.
“The study did not note the immense problem the company faced with bots in India, nor issues like voter suppression. During the election, Facebook saw a spike in bots – or fake accounts – linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.”
(With PTI inputs)