Creating a new legal regime for ‘platforms’ | Analysis
The idea of safe harbours was meant to protect digital intermediaries. It is now time to find a new balance
In 2010, Professor Tarleton Gillespie called out the astuteness behind YouTube’s appropriation of the term “platform” to describe its activities. As Gillespie rightly noted in an article in New Media & Society, the word’s multiple meanings conveniently allowed such companies to assume the status of champions of user expression while, in parallel, escaping liability for that expression.
In its oldest sense, the term stood for an architectural feature – a raised, level surface on which people could stand. Over time, this feature also lent the term a more figurative meaning, symbolising reliance on something to achieve a higher goal. The term has acquired a computational meaning as well: a neutral technical infrastructure that supports various applications. In polar opposition to this are the political connotations of the term when it is used to signify the issues endorsed by a political leader or party.
Gillespie’s unravelling of the multiple meanings of the term “platform” is not only of exceptional academic rigour but also of deep practical relevance. Technology companies have used these multiple meanings to lobby for reduced liability for activities on the “platform.” In the online context, policymakers in both the United States and the European Union bought into this metaphor. Thus, “safe harbours” were crafted to protect digital intermediaries against defamation and other torts and crimes. They were, after all, the engines of free expression and its unhindered transmitters, rather than curators of opinion and content. This placed them in direct contrast with the traditional media industry, which invested, and continues to invest, significant resources in its editorial function and which, ironically, attracts tortious and criminal liability for it.
The legal position in India, though, has evolved in very different ways. Starting with the Avnish Bajaj case, where a senior official at Baazee.com faced criminal liability for an unfortunate video clip that went up for sale, there was considerable confusion about the extent to which the Information Technology Act, 2000 would rescue the intermediary model. This was later put to the test when questions of liability for copyright infringement came up before the Delhi High Court.
In a litigation initiated by T-Series against MySpace, a single judge of the high court found the latter liable for facilitating the upload of the former’s copyrighted content by primary infringers who were subscribers of the digital platform.
The division bench of the Delhi High Court subsequently overturned this verdict and, while doing so, offered policy reasons in support of a more relaxed liability regime. Echoing the policy choices made in the United States when “safe harbours” were originally introduced, the bench observed that imposing such great liability on intermediaries would “not only discourage investment, research and development in the Internet sector but also in turn harm the digital economy.” Though the ruling is considered a progressive verdict that supports the platform model, its effect is largely confined to the intellectual property rights context.
Indian courts, including the Madurai bench of the Madras High Court and multiple benches of the Supreme Court of India, have intervened in situations ranging from TikTok to prenatal diagnostic technique advertisements, imposing strictures and positive obligations on social media platforms and search engines. In December 2018, the Ministry of Electronics and Information Technology also proposed amending the intermediary guidelines and rules under the IT Act to mandate automated tools for filtering undesirable content. It is worth noting here that copyright rules in the EU have similarly embraced proactive filtering. Subsequently, the Indian debate has taken the undesirable turn of questioning the need for encryption technologies, with experts arrayed on either side to attack or defend the anonymity of private conversations.
But the drive to regulate social media platforms generally for the content they carry, whether through judicial or executive action, misses the forest for the trees. Moreover, these interventions have rightly attracted criticism on the ground that they are overbroad and represent a form of impermissible State-sanctioned restriction on free speech and expression.
What is needed instead is a compensatory regime that addresses the grievances of individuals whose reputations are permanently damaged by content going “viral and trending.” Here, platforms are not neutral observers; rather, they lend the power of the algorithm to multiply manifold the damage of a one-off slur. If traditional media, with its exacting editorial standards, is held to a low threshold for liability, there is no reason to exempt advertising companies that deploy algorithms to gain maximum eyeballs, merely on the strength of their reliance on the “platform” metaphor.
New legal tests must be devised to hold these platforms liable, including an assessment of how news spreads and of the role each platform plays in assisting that spread, regardless of their facial neutrality and their distancing from the actual content. This must necessarily happen on a case-by-case basis. However, we must bear in mind that the business and technology models of today are a far cry, at least in terms of scale, from what the “safe harbours” were meant to protect.
The occasion is therefore ripe to create new harbours for the models that exist today wherein individuals can anchor their rights and be suitably compensated.
Ananth Padmanabhan is a visiting fellow with the Centre for Policy Research, New Delhi
The views expressed are personal