Regulating the marketplace of ideas

TORSHA SARKAR, ARINDRAJIT BASU and KARTHIK NACHIAPPAN


SOCIAL media is one of the most important tools shaping political discourse all over the world. Over the last decade, we have seen online spaces being increasingly used to archive instances of humanitarian violence, carve out space for historically marginalized communities, and coordinate and communicate entire political movements. At the beginning of the last decade, therefore, optimists like Jack Balkin saw the internet and digital technologies, with their ‘[...] widespread distribution, their scope and their power’, as having the potential to promote the ‘possibility of democratic culture’.

However, if the beginning of the decade heralded social media platforms as improving the participative nature of democracies for good, the latter half has cast doubt on this rosy narrative. In the last few years, both governments and civil society have accused these companies of manipulating elections, facilitating genocides and censoring the voices of historically oppressed communities. As Lawrence Lessig warned us, in celebrating cyberspace as achieving ‘liberty from the government’ we overlooked the impact of code as the hidden regulator of the terms of our online liberty.

The changing nature of these online spaces and the ever-expanding range of services offered by social media have led governments to enact stricter, more interventionist liability regimes for these platforms, often trampling upon the right to freedom of speech and expression. India is no exception.

In this article, we explore, first, the domestic interventions India has opted for in regulating social media; second, the global, multilateral efforts undertaken by both state and non-state actors; and third, key questions on the applicability of global governance mechanisms to regulating free speech on social media in the Indian context. Regulating social media is a continuous battle between competing values – preserving free speech and restricting speech that harms public order. The heterogeneity – social, economic and linguistic – of India’s vast population could mean that India’s regulatory manoeuvres have the potential to shape free speech norms vis-a-vis social media governance globally.

 

India regulates social media under the broad ambit of intermediary liability. The term intermediary is defined under the Information Technology (IT) Act, 2000 as, ‘with respect to any particular electronic records, [...] any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record’. This broad definition encompasses social media platforms.

One of the most critical aspects of regulating social media is the issue of regulating user-generated content and navigating the liability regime for allegedly unlawful acts of users online. Article 19, in its report ‘Internet Intermediaries: Dilemma of Liability’, identifies three legal models for the liability of intermediaries for content posted by their users. On the spectrum of holding intermediaries liable for third-party content, the strict liability model (followed in China and Thailand) and the broad immunity model (followed in the USA under the Communications Decency Act) occupy the two ends, while safe harbour falls somewhere in the middle, though the exact contours of the model vary across jurisdictions.

India has traditionally followed the safe harbour model of liability. As provided for in Section 79 of the Information Technology Act, an intermediary is granted a ‘safe harbour’ from liability accruing from third-party content, provided it follows certain legal obligations. While this system has not been perfect, the draft amendments introduced to the existing rules seem to overhaul the structural norms of the liability model. Specifically, Rule 3(9) of the draft amendments introduced in December 2018 requires intermediaries to deploy ‘technology based automated tools’ to filter out ‘unlawful’ content. At first glance, this obligation runs into the constitutional ‘void for vagueness’ doctrine: the use of ambiguous and broad terms raises questions about the provision’s constitutionality.

Additionally, the rule ignores the considerable criticism of using automated tools to filter speech. YouTube’s Content ID is a classic example of the faults of automated content detection. Content ID is an automated tool used to detect copyright-infringing content on the platform; despite the staggering investment behind its development, it continues to yield false positives and is far from infallible.
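To illustrate why such tools misfire, consider a minimal, purely hypothetical sketch of the kind of context-blind filtering an automated ‘unlawful content’ obligation would in practice mandate. The blocklist, sample posts and function below are invented for illustration; real systems rely on machine-learning classifiers rather than keyword matching, but they share the underlying problem of judging text without context.

```python
# Hypothetical sketch: a naive, context-blind content filter.
# It flags any post containing a blocklisted term, regardless of intent,
# so counter-speech, news reporting and research are caught along with incitement.

BLOCKLIST = {"riot", "lynching", "extremist"}  # illustrative terms only


def flag_unlawful(post: str) -> bool:
    """Return True if the post contains any blocklisted term (no context considered)."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKLIST)


posts = [
    "Join the riot at the town square tonight",               # intended target
    "Police arrest five men accused in the lynching case",    # news report
    "New study on how extremist content spreads online",      # research commentary
]

for p in posts:
    print(flag_unlawful(p), "-", p)

# All three posts are flagged: the filter cannot distinguish incitement from
# reporting or analysis, which is precisely the false-positive problem.
```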

 

For every post under scrutiny, numerous nuances and contextual cues act as mitigating factors, none of which, at this point, a machine can reliably understand. Further, owing to the algorithmic ‘black box’, decisions made by automated filtering tools often cannot be fully explained to human beings, even to the developers who trained the algorithm. This has crucial ramifications for due process and accountability should a decision need to be scrutinized in a court of law.

Perhaps most importantly, this obligation leaves it to the social media platform to arbitrate what constitutes ‘unlawful’ speech, a decision that should be taken by a legitimate law-making body. In India, with its diversity of cultures and languages, what constitutes unlawful speech varies and requires contextual analysis. It is unrealistic to expect such judgment from social media companies whose content moderation norms have always been acontextual and agnostic to the socio-political realities of the terrain in which they operate.

One of the core conceptions of a liberal democracy is holding elected institutions accountable for the enforcement of our constitutional rights, including the right to life and the right to free expression. If India’s political institutions shirk this duty in favour of corporations, the common citizen is left without recourse should they believe their fundamental right to expression has been violated online. This development is alarming.

 

Apart from this, Section 69A of the IT Act and its allied rules provide regulators an alternate framework to effectuate content takedowns. The procedures under this framework must be carried out under a strict confidentiality clause, as mandated by the law. As a result, this framework has traditionally allowed the Indian government to carry out censorship in a completely opaque manner and to evade Right to Information (RTI) requests time and again.

An oft-overlooked aspect of regulating social media is the issue of holding platforms accountable for actions which may not be unlawful but which nevertheless limit the public participation of users. As the US Supreme Court held in Packingham v North Carolina, participation in social media is equivalent to ‘speaking and listening in the modern public square’ and is protected under the First Amendment.

A similar legal question is currently being discussed in India, where the suspension of Supreme Court advocate Sanjay Hegde’s Twitter account and his subsequent litigation against Twitter have raised several important questions. One is, of course, Twitter’s arbitrary enforcement of its internal moderation norms, where perfectly legal speech (like the content shared by Hegde) is censored while troves of hateful narratives remain online. The second, and possibly more important, question concerns the creation of an enforceable constitutional remedy for the fundamental right to free expression against social media platforms.

Why is this important? We have to remember that traditionally, the government’s role in regulating speech is limited to speech that clearly violates a legal framework, while ensuring that any restrictions do not violate the fundamental right to free expression. This is a negative obligation on the part of the government – to not violate our rights. However, should legal jurisprudence develop to a point where certain acts of social media companies are subsumed within the constitutional framework, our fundamental rights may cast a different duty upon the government: to regulate these private actors on our behalf – a positive obligation. Navigating this constitutional conundrum will be critical for effective social media regulation in India.

 

While multilateral discussions on the regulation of social media are new territory for India, they are becoming a fixture of political discourse in other parts of the world. Of late, several countries have begun seriously discussing whether the time has come to regulate and constrain the growing clout of social media companies like Facebook and Twitter. Some of these discussions cover issues related to privacy and copyright, particularly in the European Union, where some tech companies have been sanctioned for violating EU laws. Others have been prompted by events, most notably the heinous Christchurch terrorist attack that was livestreamed on Facebook. This senseless attack precipitated an eponymous global initiative to curb the deleterious impacts of social media platforms – the Christchurch Call to Eliminate Terrorist and Violent Extremist Content Online.

The Christchurch Call outlines ‘collective, voluntary commitments’ from both governments and internet service providers to reduce violent extremist and terrorist content online and, more broadly, to prevent the internet from being used for such nihilistic purposes. Through the Call, eight tech giants, including Facebook and Twitter, the European Commission and 48 countries, including India, signed a voluntary pledge to prevent social media platforms from being used for extremist purposes. The Christchurch pledge includes three sets of specific commitments: for governments, for online service providers, and for both acting together.

Specifically, the Call impels signatory governments to promote social cohesion that resists and eradicates hate, enforce laws prohibiting the production of violent content online, encourage media organizations to behave ethically online, and tighten domestic standards to strengthen the reporting of such malevolent acts. Yet, a bugbear of the initiative has been its innate tension with freedom of speech protections, which has prevented some countries, notably the United States, from supporting it, even as Washington has expressed support for the spirit of the Call.

 

The Christchurch Call aside, tech companies have banded together to combat the use of their platforms for nefarious purposes. Twitter, Facebook and Microsoft announced the formation of the Global Internet Forum to Counter Terrorism (GIFCT) to prevent terrorists and extremists from exploiting their services to promote terrorism, disseminate extremist propaganda and glorify violence. The GIFCT functions as a private initiative between its members to address the adverse public effects of their platforms, largely through information and knowledge sharing, technical collaboration and shared research that could blunt the ability of terrorists to abuse digital platforms. What remains patently unclear, even now, is how the GIFCT interacts with public authorities as it works to fulfil its mandate; moreover, concerns over transparency linger, particularly as these firms tout information sharing as a key priority under the GIFCT.

It also remains to be seen how the GIFCT will collaborate to reduce online extremist content while balancing the needs of users who seek to express themselves freely on these platforms; GIFCT members, which now include YouTube, Pinterest, Dropbox, Amazon, LinkedIn and WhatsApp, also retain the right to resist governmental calls to remove online content from their platforms. Self-governance exhibited by tech firms through the GIFCT appears to be a promising development, but one that must work in concert with other, more stringent public efforts to root out online extremism.

 

European institutions and countries have been spearheading the global effort to cleanse social media platforms of hateful content. In March 2000, the European Commission established the European Internet Forum (EIF) to generate greater awareness among Members of the European Parliament (MEPs) of questions of internet governance and of how the internet could affect European countries broadly. The EIF has evolved into a forum that helps European parliamentary representatives respond to the social and economic effects of the ‘digital transformation’ sweeping Europe; it has since become a serious advocate for international cooperation to contain the ‘viral spread’ of terrorist and violent extremist content online.

The need to facilitate this objective also featured prominently during France’s 2019 G-7 presidency, where discussions raised the issue of extremist online content and the responsibility of platforms like Facebook to eliminate it. French President Macron initially hoped to get social media companies to sign a ‘Charter for an Open, Free, and Safe Internet’ that aimed to forge a ‘collective movement’ to ensure the internet remains a safe, positive space for all. The French initiative, however, was scuppered by the Trump administration, which pressured American social media companies to demur.

 

Besides these focused initiatives, other intergovernmental organizations have joined the fray, offering proposals and ideas to coordinate and regulate social media governance across borders. In 2018, the UN issued its first report on the regulation of online content, which examined the role of both states and social media companies in ensuring that an ‘enabling environment’ exists for the expression of information and ideas. Going further, the report urged states to reconsider speech-based restrictions and, instead, adopt targeted regulation that helps the public make educated choices about online engagement.

Recently, UNESCO also issued a report calling for the modernization of electoral frameworks to contain the spread of disinformation and ‘fake news’ on various platforms. These diffuse efforts, though somewhat ineffective, are gradually raising awareness and prompting debate on whether the UN or the OECD should lead efforts to devise clear rules – through an international convention or binding standards – that point the way for countries to regulate their social media companies.

The growing landscape of actors and governance approaches to reduce hateful and violent online content also includes two multi-stakeholder networks – the Internet and Jurisdiction Policy Network and the Freedom Online Coalition (FOC) Advocacy Network. The FOC consists of 31 member states that have committed to supporting internet freedom and protecting the fundamental human rights of freedom of expression, association, assembly and privacy online. Its member states work to achieve internet freedom by aligning diplomatic efforts and policies, sharing information and raising concerns when freedom is abridged online.

The FOC also partners with civil society and the private sector through working groups to advance internet freedom and digital rights. Compared to other groups, the FOC has a clear and avowed political mission – to keep the internet open and democratic for all individuals, even as repression on the internet appears to be rising. Finally, the Internet and Jurisdiction Policy Network functions more as an arbiter, working to resolve conflicts that surface when the effects of the internet cross jurisdictions. The network’s role and responsibilities embody the multi-stakeholder nature of the internet and of internet governance, cutting across multiple silos and jurisdictions and informing the public of the risks of dealing with internet problems narrowly.

 

The above discussion raises two important questions that complicate the application of traditional global governance mechanisms to regulating social media. First, it is clear that the appropriate regulation of free speech online can only be devised and implemented in conjunction with private actors. The proliferation of multilateral bodies that include private actors, along with the growing jurisprudence of holding social media platforms accountable for discharging a ‘public function’, supports that observation. Private sector actors are not new to discharging public functions, of course. Microsoft, for example, along with the French government, championed the Paris Call for Trust and Security in Cyberspace. While some scholars like Duncan Hollis argue that this sort of public-private partnership is vital to effective global governance, questions remain about the public values at stake when private companies create the norms that regulate them.

 

Further, arriving at a balance between the two core competing values of free speech and public order should not be left to a handful of firms, largely based in the Bay Area. Private sector companies do not have the resources to frame regulations in a manner that withstands scrutiny under differing constitutional standards across the globe. Therefore, the only solution is for governments to devise regulations themselves, while adopting an open consultative process to understand the concerns and needs of social media companies. After the Intermediary Liability Guidelines were released in late 2018, the Indian government did make it a point to put out a request for public consultation. Until the revised draft is made public, however, we will not know the extent to which the concerns put forth by social media companies were taken into account by the government.

Second, a daunting challenge for global governance in this arena is that one-size-fits-all strategies rarely work. Cultural contexts and constitutional protections of free speech differ across the world. The impact of online speech plays out differently in the Global South, with its unique social cleavages. In 2017-18, there were fifty-six cases of lynchings of alleged child abductors in India, catalysed by videos spread through the encrypted private messaging platform WhatsApp. On the other hand, concerns in the United States and the UK have centred on the spread of misinformation and hate speech on platforms like Facebook and Twitter – as was laid bare in the aftermath of the Cambridge Analytica scandal.

One of the best-known gambits to combat fake news, by Indian Police Service (IPS) officer Rema Rajeswari, utilized folk songs and ballads to spread awareness about misinformation in rural areas of Telangana. This method was customized and designed for governing misinformation at the grassroots level in a very specific context. Despite the noble intentions of global governance efforts like the Christchurch Call or the rigorous work of the Freedom Online Coalition, their impact on domestic regulations will likely be limited. High-level regulatory guidelines and commitments to principles matter only insofar as those principles are shaped and implemented on the ground across the world – something that depends on the actions of the federal and, as we saw in Telangana, district administrations.

This heterogeneity differentiates global governance of social media from other topics discussed in this issue, such as 5G, data or autonomous weapons systems. Those are transnational problems that require well-defined, robust and implementable transnational solutions. There is no transnational solution for governing free speech on social media. Any solution necessarily needs to be bottom-up and context-specific, and social media companies need to comply with the sovereign writ of any jurisdiction in which they operate.

Global governance arrangements can help by encouraging countries and social media companies to outline their commitments, share best practices and facilitate dialogue. Regulatory success, however, depends on local efforts – in India as anywhere in the world.
