Legal protections for Big Tech need review
While Joe Biden’s intent is positive, it remains devoid of specifics of how any of this will happen, especially given the partisan divide in American politics on the issue
This month, the White House announced new core principles to guide attempts to reform American laws to better hold tech platforms to account. Outlined in a set of six “principles for enhancing competition and tech platform accountability” was the United States’ (US) strongest reiteration yet that it wants to “remove special legal protections for large tech platforms”.
The legal immunity is contained in Section 230 of the Communications Decency Act (CDA-230), which is widely regarded as having laid the foundation for the internet as it is today. The law bakes in two specific protections: Interactive web services cannot be treated as publishers of their users’ content and, therefore, are not liable for posts; and they will not face action if they voluntarily remove objectionable user content “in good faith”. These protections are why a service such as Tripadvisor can host negative user reviews without being sued for defamation, and why Wikipedia can compile a breadth of information without being taken to court over contested or disputed facts (the digital encyclopaedia enforces a citation rule through its volunteers).
All regions of the world now have some version of CDA-230 immunities. The Electronic Frontier Foundation (EFF), in a May analysis, showed that globally, such laws fall on a spectrum, with strict liability and wide immunity at either extreme and the middle made up of approaches where countries lay down conditions. For instance, in India, social media companies have safe harbour if they carry out due diligence to ensure certain illegal speech is not allowed (what the EFF defines as fault-based conditionality), while in the European Union (EU), companies become liable if they do not act once illegal content is flagged (knowledge-based conditionality).
But it is CDA-230 that largely sets the paradigm for how Big Tech behaves. Drawn up in 1996, Section 230 was also known as the “Good Samaritan” provision. It came at a time when the understanding of the scope and cost of the harms from online speech was different. Imagined harms were based on illegalities seen in the printed word, such as defamation. The technological paradigm, too, was different. CDA-230 protections amounted to extending online the same distinction between the liability of a bookstore that sells an illegal book and that of a publisher that prints it. The bookstore is not liable; the publisher (and the author) are.
The second part of CDA-230’s protective clause, allowing companies to moderate content on their own, in fact stemmed from a court case at the time: Prodigy, a message forum, was held liable as a publisher because it exercised editorial control by deleting user posts. CDA-230 overturned this precedent, likening Prodigy to the bookstore from the above analogy by giving it the “good faith” prerogative to remove objectionable content. Had it not done so, the evolution of the Web 2.0 services that gave birth to social media might not have been possible.
But today, almost two decades after Web 2.0 took off, these distinctions are increasingly irrelevant, which brings us to the first reason why CDA-230 must be reformed.
Social media behemoths Meta (Instagram, Facebook), Twitter and Google (YouTube) — all incorporated on American soil — are hardly mere bookstores. They operate technologies that go beyond the work of an intermediary, determining as they do the amplification or suppression of certain speech. The motive, most often, is only to maximise user engagement, a key metric for their advertising-supported revenue model.
If today’s social media companies were indeed like bookstores, their shelves would be stacked ceiling-high with provocative titles, eliciting emotions such as anger that are now known to maximise engagement. Only in faraway corners would more nuanced, sedate works be found. Most of these firms are also accused of applying their policies — the “good faith” prerogative earned via CDA-230 — unevenly. Often this is simply because they do not have the tools to detect violations of their own policies across a wide variety of languages, cultures and methods of expression. Take, for instance, the anti-vaxxer strategy of using the carrot emoji to refer to a vaccine when sharing misinformation about life-saving doses (most automated moderation filters parse words, not emojis). This uneven and flawed application of the content moderation prerogative has allowed hate speech, cyber-bullying, misinformation and disinformation to become widespread.
The second reason is the nature of the economic incentives and counterbalances that exist today. Implicit during the drafting of CDA-230 was a laissez-faire approach, an assumption that services with bad content would simply lose market share as people flocked to a better moderated, healthier information ecosystem. Today, the opposite is true. Network effects (you are more likely to be on a platform that your friends and family are on) and calculated corporate deal-making (most notably, Facebook’s acquisition of Instagram and WhatsApp) have given rise to monopolies. A handful of services have become today’s new town square.
Many regions, most significantly the EU with the Digital Services Act, are now creating influence-linked obligations that give these companies a stronger incentive to create a healthier town square, while hoping to ensure that such rules do not stifle innovation. CDA-230 should be reformed similarly, increasing pressure on larger platforms to commit resources more commensurate with the influence, and therefore the true social cost, of the harms their platforms can perpetuate. The scope to do more became clear in 2019, when Mark Zuckerberg announced his company would spend north of $3.7 billion on content moderation. The sum seemed staggering but amounted to only 5% of the company’s revenues the previous year.
Both Biden and former US President Donald Trump agree on reforming CDA-230, but for very different reasons: The Democrats say the companies are not doing enough to combat unlawful speech, while the Republicans allege too much is being moderated, to the point of Right-wing voices being censored. This is itself a reflection of how sticky the problem of policing speech online is. And while Biden’s intent is positive, his principles remain devoid of specifics on how any of this will happen, especially given that partisan divide.
The views expressed are personal