
40 teams, 30,000 people: Facebook’s army against fake news ahead of LS polls

Teams across operational centres in Menlo Park, India, Dublin and Singapore scan videos, audio and the written word while India’s elections get underway.

lok-sabha-elections Updated: Apr 08, 2019 22:21 IST
Smriti Kak Ramachandran
Hindustan Times, New Delhi/California
Facebook has put in place 40 teams totalling 30,000 people across the globe to track and take down hate speech and fake news on its platform. (Facebook)

As political discourse edges past cat videos and family photos on timelines, thousands of people are tracking posts shared on Facebook to check whether any violate the platform's policies against hate speech and fake news, or have the potential to affect the upcoming general elections in India.


A little over 900 million people are registered to vote in the seven-phase general election beginning April 11, which is being fought as much on social media as in the constituencies.

This operations centre workforce, 40 teams totalling 30,000 people across the globe and including experts from sectors such as cyber security and engineering, was put in place after the social media giant faced heat from governments and privacy watchdogs in the aftermath of the Cambridge Analytica controversy, which exposed privacy lapses on the platform.

Typically, the forensic exercise of earmarking a post or an account and handing it over to the operations teams, which run 24x7 and investigate whether it qualifies for a takedown, is completed within hours.

According to Kaushik Iyer, engineering manager for civic integrity at Facebook, these teams determine what content is acceptable and does not breach the code of ethics. In India, doctored videos and audio intended to misrepresent are among the biggest challenges for the company, says his colleague Rita Aquino, the Indian elections lead for civic integrity.

To assuage the concerns of the Election Commission, the government and the political parties, all of which have expressed concern over the possible misuse of the platform, the company says it has tweaked its operations to add more checkpoints against the flow of misinformation.

“Over the last 18 months, we built systems that allowed us a picture of trends that are building in India, so that we can find content harmful for community and take action,” said Aquino. Part of the preparations for the Indian elections was planning scenarios and running through simulations so that the teams could test the systems for handling a crisis, she explained.

Katie Harbath, public policy director for global elections at Facebook, said there are five main pillars on which the work is carried out. “...The first is cracking down on fake accounts; when we look at bad activity that could be happening on the platform, most people are not using their real identity. We use a combination of automatic systems and humans to find these accounts. We have increased our safety and security team from 10,000 to 30,000 people over the last year-and-a-half. Most of these accounts are detected even before they are reported to us.”

Facebook, she said, is good at detecting the automated creation of accounts, while enforcing transparency in advertising has been the other big step.

“The third is reducing the distribution of false news. Here, we look at it in three ways: what content to remove because it violates our policies, where is the problematic content so that we can reduce its reach and how can we give the community additional context on who it is that is sharing this information,” she said.

Governments and civil society were up in arms, and alarm bells were set off, when news reports in 2018 revealed that Cambridge Analytica, a political consulting firm, had harvested the personal data of millions of Facebook users without their consent and used it for political purposes.

While the US Congress summoned Facebook CEO Mark Zuckerberg to testify, last November a joint hearing of lawmakers from nine countries (the UK, Argentina, Belgium, Brazil, Canada, France, Ireland, Latvia and Singapore) had some tough questions for the company on its handling of misinformation online.

The social media giant also faced flak for not doing enough to limit foreign interference, allegedly out of Russia, during elections in the United States and the European elections.

In India, where the Election Commission has already flagged the potential of social media as a spoiler, a parliamentary committee in March told Facebook to ensure that its platform, its photo-sharing app Instagram and its messaging service WhatsApp are not misused to create divisions, incite violence, pose a threat to India’s security, or let foreign powers meddle in the general election.

While the social media giant is sharpening its response and defences ahead of the upcoming European Union (EU) elections and the polls in Australia, the sheer scale of the Indian elections, with specific challenges such as the diversity of languages and dialects, will be the litmus test of its safeguards.

“In India, for instance, we recognised the linguistic diversity and the need to prepare for it. We did some of this work during the last state elections and the one core technique that we have invested in is in improving our ability to use machine translation. We also need to understand the ways in which people use our platform,” said Iyer.

He said in India specifically, the company invested in improving coverage of photos and videos. “We have invested in better translation so that we can match the ways in which people speak and understand concepts and so that we can listen to the voice of the community,” said Iyer.

Learning from past elections

Though the company says it is continually learning and improving on the ground, its handling of issues during the Brazilian and US elections has helped it create a plan for India as well.

“In Brazil, we saw the value of taking quick decisions to protect against violative content. In the US, we found that coordinating with government and civil society, and building proactive measures, was very valuable. In those elections, we were able to detect harmful content and took down 45,000 pieces of voter-suppression content before anybody reported it to us,” said Aquino.

On the lessons learnt during the assembly elections in five Indian states last year, she said: “...We found hate speech against specific castes and religious groups was on the rise. So what we did was improve our ability to understand languages outside of the Hindi belt, and we also bolstered our understanding of the social and political landscape of each state. I can say now that we can detect hate speech on the platform and are able to act on it.”

Ajit Mohan, managing director and vice president of Facebook, said in a separate blog post on Monday that the company is relying on artificial intelligence and machine learning to fight interference. “For example, these tools help us block or remove approximately one million accounts a day. They also help us, at a large scale, identify abusive or violating content, quickly locate it across the platform and remove it in bulk. This dramatically reduces its ability to spread,” he said.

On insulating the elections from interference, he said the process started 18 months ago with detailed planning and risk assessment across platforms. “...The findings allowed us to concentrate our work on key areas, including blocking and removing fake accounts; fighting the spread of misinformation; stopping abuse by domestic actors; spotting attempts at foreign meddling; and taking action against inauthentic coordinated campaigns,” Mohan said.

Working with govt

Earlier this month, the company took down over 700 pages linked to individuals that it said were associated with the Congress and the Bharatiya Janata Party for spamming other users. Has the platform received requests from the government to take down accounts or content that could impact national security or elections? Harbath said it hasn’t.

She said that even though the company receives inputs from the government, civil society and other third-party organisations that monitor the elections, this does not guarantee that Facebook will take action. “We will look at every single report that comes to us to make our own determination: does it violate our community standards, does it violate local law, and what is the right action that we can take. We do that to make sure that we are not letting biases creep into the decisions we make,” she said.

Monitoring social media

Zuckerberg recently called for legislation in the areas of harmful content, election integrity, privacy and data portability. Is the company prepared for monitoring of its platform?

Harbath said that while the company is doing its bit to tackle the problems, it cannot do so alone.

“These are unprecedented times we find ourselves in...one area I have been thinking about is regulation of political communication space,” she said.

WhatsApp and virality of fake news

WhatsApp, with an estimated 200 million users in India, has emerged as an easy platform for the quick dissemination of posts. It found itself in hot water after a spate of fake news shared on the platform led to dozens of violent episodes that ended in fatalities. The platform has already introduced features such as limiting the number of forwards and partnering with the ‘Checkpoint Tipline’ for fact-checking shared posts. It is also beta-testing a feature that will label frequently forwarded messages: once a message has been forwarded four times, a user will be allowed to forward it further only once. The features that require users’ consent before they are added to a group, and the limit on sharing messages, were rolled out in India first.
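The forward-limit rule described above can be sketched as a simple threshold check. This is a hypothetical illustration, not WhatsApp’s actual implementation; the function names, the default allowance of five chats (the limit reported for India) and the threshold of four prior forwards are assumptions drawn from the description in this article.

```python
FREQUENT_FORWARD_THRESHOLD = 4  # forwards before a message counts as "frequently forwarded"
DEFAULT_FORWARD_LIMIT = 5       # normal per-user forward allowance (India's reported limit)
RESTRICTED_FORWARD_LIMIT = 1    # allowance once the frequently-forwarded label applies

def is_frequently_forwarded(times_forwarded: int) -> bool:
    """Whether the client should label this message as frequently forwarded."""
    return times_forwarded >= FREQUENT_FORWARD_THRESHOLD

def allowed_forwards(times_forwarded: int) -> int:
    """How many more chats a user may forward this message to."""
    if is_frequently_forwarded(times_forwarded):
        return RESTRICTED_FORWARD_LIMIT
    return DEFAULT_FORWARD_LIMIT
```

For example, a message forwarded three times could still be sent on to five chats, while one forwarded four or more times could be passed on only once.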

(Smriti Kak was in the US as a guest of Facebook.)

First Published: Apr 08, 2019 14:14 IST
