While few would disagree with the broad proposition that granting Internet intermediaries immunity from legal liability for content posted on their platforms is critical to the maintenance of an open and free Internet, determining the precise scope of this immunity has always been a vexed issue.
As information gatekeepers, intermediaries play an indispensable role in facilitating the robust and uninhibited dissemination of content in cyberspace. As a result, judges and lawmakers have long grappled with the difficulty of finding the appropriate balance between granting intermediaries freedom from the threat of unwarranted legal liability on the one hand and ensuring that intermediaries are not able to actively facilitate the contravention of extant laws under the garb of this freedom on the other.
Against this backdrop, one hopes that the Supreme Court’s recent decision to solicit the assistance of the Attorney General, Mukul Rohatgi, in construing the meaning and full import of the statutory scheme governing intermediary liability under the Information Technology Act (“IT Act”) will result in the emergence of clear jurisprudential guidance on this issue.
Facts and High Court judgment
This case essentially relates to the posting of allegedly defamatory content on an online platform owned by Google. More specifically, the accused, Gopala Krishna, is a co-ordinator of a group called Ban Asbestos Network of India (BANI), whose web portal, hosted by Google, can be found here. According to the information provided by the group on the aforementioned platform, it has been working towards creating an asbestos-free India since 2002, seeking to ensure that manufacturers of asbestos are held criminally liable while also providing medical assistance to asbestos victims.
On 31.07.2008 and 21.11.2008, Krishna posted articles on the BANI Portal that contained material which, according to Visaka Industries, a manufacturer of asbestos cement products, is of a defamatory character. Consequently, Visaka instituted criminal proceedings against Krishna and Google for criminal conspiracy and defamation.
Google filed a petition under Section 482 of the Criminal Procedure Code, 1973, in the Andhra Pradesh High Court, arguing that it could not be held criminally liable in light of the safe harbor provisions statutorily engrafted in Section 79 of the IT Act. The High Court rejected the petition for two main reasons.
First, under Section 79 of the IT Act, in order for an intermediary to be shielded from liability under any law, it must expeditiously remove or disable access to objectionable material as soon as it obtains actual knowledge of the existence of such material. This being the case, since Google did not remove the content in question despite receiving a notice from Visaka, the Court held that it could not seek refuge under Section 79 of the IT Act.
Second, prior to its amendment, which came into force on 27.10.2009, Section 79 did not shield an intermediary from any liability flowing from any other law; its scope was confined to the offences delineated in the IT Act itself. Since the allegedly defamatory material in this case was posted in 2008 and the criminal proceedings were instituted in January 2009, the court held that Google was not eligible to enjoy the benefit of the amended Section 79 and, consequently, the said provision could not operate as an embargo to the criminal proceedings going forward.
As a result, Google filed an SLP before the Supreme Court against the High Court judgment.
Arguments before Supreme Court
According to a news report, the gravamen of Google’s argument before the Supreme Court seems to be that it is both technically infeasible and legally undesirable for Google to sit in judgment over which content posted on platforms hosted by it is defamatory. Put differently, in light of the fact that any finding as to whether or not particular content is defamatory would rest on a subjective assessment, Google argues that there exist no objective, straitjacket criteria by which it could conclusively determine what content is defamatory and what is not. Regular readers of this Blog will recall that, in his post on this subject, Kartik had similarly argued that if private intermediaries are held liable for the content posted on their platforms, they will be forced to make judicial determinations with far-reaching implications on the basis of inadequate information.
Further, removing such content is also a difficult proposition from a technical standpoint, inasmuch as there are no concrete attributes common to all kinds of objectionable content. Google therefore cannot simply devise an algorithm to blindly remove content containing certain keywords or attributes.
This argument also seems to have found favour with the Court, which noted that taking down objectionable content, in contradistinction to auto-blocking it, appears to be the only practical solution.
At the outset, it would be pertinent to note that under the Intermediary Guidelines of 2011, intermediaries, vide Rule 3(4), are mandated to remove content that is, inter alia, of a defamatory character as soon as they acquire knowledge of the existence of such content on their platform.
As Thomas had noted in his analysis of a defamation notice being sent by Anil Ambani’s lawyers to Google for the removal of defamatory content, such requests essentially transform neutral gatekeepers into private censors, inasmuch as they are obligated to decide whether or not the content in question is defamatory and remove it expeditiously in order to continue enjoying the benefits of the safe harbor provisions.
In its landmark 2015 decision in Shreya Singhal v. Union of India, the Supreme Court read down Section 79(3)(b) of the IT Act and Rule 3(4) of the Intermediary Guidelines to mean that an intermediary would be liable only if it fails to remove objectionable content after being directed to do so by a court order. Noting that intermediaries cannot be expected to assess the veracity of millions of takedown requests and determine which requests are legitimate and which are not, the Court held that a failure to comply with a request originating from a private party would not be sufficient to trigger the liability of an intermediary. [Rupali had analyzed the general implications of the judgment here and its effect on intermediary liability here.]
Despite this much-needed clarification, an intermediary can still find itself embroiled in criminal litigation if it fails to remove objectionable content on acquiring actual knowledge of its existence. As Chinmayi Arun and Sarvjeet Singh note, Rule 3 of the Intermediary Guidelines does not define terms like ‘defamatory’ or ‘obscene’, thereby implying that the definitions of these terms delineated in other legislation will apply. This being the case, while Internet giants such as Google and Facebook may be able to hire the services of legal experts to construe these terms in a meaningful fashion and remove content falling within their ken, it is extremely difficult for small start-ups to comply with the vague and amorphous standards whose observance is the sine qua non for remaining within the zone of immunity carved out by Section 79.
While the flaws in the notice and takedown procedure that Chaitanya Ramachandran had outlined in this guest post have been significantly addressed by the Shreya Singhal judgment, the Supreme Court would do well to use this opportunity to secure the rights of Internet intermediaries on firmer legal moorings. More specifically, while it may be true that the amended version of Section 79 cannot come to Google’s aid in this case, the Supreme Court can nonetheless use this opportunity to delineate, in a concrete and precise way, (a) the principles in accordance with which an intermediary can determine what content is objectionable and what is not; and (b) what technically feasible solutions intermediaries can adopt in order to remain within the zone of immunity.
The Supreme Court is currently adopting a heavy-handed approach towards intermediaries whose platforms carry advertisements relating to sex determination kits, an approach that has significantly eroded the immunity such intermediaries enjoy. The moment is therefore ripe for the Court to recognize the unexceptionable proposition that the harm to free speech this approach is likely to cause clearly outweighs whatever negligible benefits it may give rise to.