Monitor to check abuse

Updated: Jul 01, 2016, 07:38 IST

The shocking suicide of a 21-year-old woman in Tamil Nadu after she saw her morphed semi-nude pictures on a Facebook page is only the latest in a series of incidents showing the darker side of social media sites. While the cybercrime cell of the state’s police appears to have been lax, there is also a telling irony in the way justice was delayed: four days passed between the first complaint against the abuser and a repeat of the crime by the man who has now been arrested. If law enforcers cannot act as fast as those who abuse the medium, something is terribly wrong. A lingering question remains: could Facebook itself have moved to prevent the suicide? Indications are that it can, and that it must, if such incidents are not to recur.

The social network has a feature called ‘Facebook Safety’, which offers resources for people who may be experiencing thoughts of self-injury or suicide. Developed in collaboration with mental health organisations, the feature helps people support friends in need and includes a tool to counsel those struggling with negative feelings. Facebook has also joined Google in using automation to remove extremist content from its site. Aimed at pre-emptively removing propaganda that promotes violence, the technology looks for digital fingerprints on suspicious videos and removes malicious ones. The software also catches attempts to repost objectionable content.

What we need is a combination of such technologies, along with management processes and community monitoring, to prevent abuses of the kind witnessed in Tamil Nadu. Alert friends or social activists could work with a company-led command centre and local control rooms, ones more sensitive to cultures outside Facebook’s native US, to take down objectionable content and crack down on wrongdoers. There should also be efforts to check the upload of objectionable content unless the user verifies it as a personal choice. An extension of such technologies, combined with social monitoring and company efforts, can prevent tragic incidents in the future. What is needed is a systematic process.
