Rahul Gandhi, the leader of India’s main opposition party, tweeted a photo of himself with the parents of a girl who had allegedly been raped and killed in New Delhi. Since Indian law prohibits disclosing the identity of victims of sexual assault, the National Commission for Protection of Child Rights issued a notice to Twitter asking for the removal of Gandhi’s tweet. In response, Twitter hid the tweet and temporarily locked Gandhi’s account. The same happened to Congress members who tweeted the photo. Mr. Gandhi stated that, by doing this, Twitter was interfering in Indian politics.
Twitter, like other platforms such as WhatsApp, has been at odds with the Indian government in recent years. With around 24 million users in the country, Twitter has become a crucial online space for public debate in India. Against this backdrop, the government has frequently required Twitter and other social media platforms to remove content and block accounts in order to protect “public order.” For instance, in February 2021, Twitter suspended more than 500 accounts accused by the government of making inflammatory remarks about the Prime Minister, Narendra Modi. The company took action to comply with the orders from the Indian government. In May 2021, Delhi Police’s Special Cell raided Twitter’s offices after the company labeled tweets by politicians from the ruling party “manipulated media.”
Relatedly, in February 2021, the Indian government introduced the Intermediary Guidelines and Digital Media Ethics Code (2021 IT Rules) to regulate social media firms, streaming services, and digital news outlets. These rules gave the government the power to demand the removal of content that it considers defamatory, misleading, or a threat to “the unity, integrity, defense, security or sovereignty of India.” Online intermediaries in the country are required to comply with strict removal timeframes. The rules also mandate that companies appoint an in-house grievance officer based in the country to address complaints.
The 2021 IT Rules were introduced after Twitter declined to comply with a legal order by the government to block accounts of protesters during the “farmers protests” in 2020 and 2021. The accounts included those of activists, opposition politicians, and journalists who had criticized the country’s ruling party. In March 2021, ten international non-profit organizations published an open letter “calling on government authorities and web firms operating in India to cease a crusade of censorship and surveillance across the nation targeting critics speaking online in response to the ongoing #FarmersProtests.” The letter stated that “tech companies hold immense power, and must defend privacy, and ensure free speech by pushing back on government orders that infringe on rights,” and asked the Indian government to stop its campaign to silence criticism and censor information related to protests and democratic opposition. In addition, three United Nations Special Rapporteurs published a joint letter expressing their concerns about the Rules and stating that they did not conform with international human rights norms. In 2022, after the government made a few amendments to the 2021 IT Rules, the Electronic Frontier Foundation (EFF) — together with other organizations — requested that the Indian government suspend the implementation of the 2021 IT Rules, withdraw the new amendments, and hold inclusive public consultations.
Finally, in July 2022, after receiving a letter from the government warning of “serious consequences” of non-compliance with the 2021 IT Rules, Twitter filed a petition with the high court in Karnataka against the Indian government. Twitter argued that the government’s orders to remove several tweets were arbitrary and amounted to a “disproportionate use of power.” Moreover, Twitter added that some blocking orders “pertain to political content that is posted by official handles of political parties,” and that “blocking of such information is a violation of the freedom of speech guaranteed to citizen-users of the platform. Further, the content at issue does not have any apparent proximate relationship to the grounds under Section 69A [of the Information Technology Act, 2000].” WhatsApp has also sued the Indian government, in its case over the requirement in the 2021 IT Rules that platforms devise a way to trace the originator of messages, even when they operate encrypted messaging services.
In August 2021, Rahul Gandhi, the leader of India’s current main opposition party, tweeted a photo of himself with the parents of a nine-year-old girl who had allegedly been raped and killed in New Delhi. The child’s death sparked protests and outrage in India.
The post by Mr. Gandhi, who has more than 20 million followers on Twitter, included the photo accompanied by Hindi text that translates to “Parents’ tears are saying only one thing — their daughter, the daughter of this country, deserves justice. And I am with them on this path to justice.”
After Mr. Gandhi published the image, the tweet was hidden behind a notice and Mr. Gandhi’s Twitter account was temporarily locked. The National Commission for Protection of Child Rights (NCPCR) asked the Delhi Police and Twitter to take action over the tweet since it revealed the identity of the victim, which is forbidden under the Juvenile Justice Act and the Protection of Children from Sexual Offences (POCSO) Act. Moreover, the official account of the Congress party stated that several of its leaders and “about 5,000 volunteers” who had retweeted Mr. Gandhi also had their accounts locked.
With his account locked, Mr. Gandhi posted his response on Instagram, saying that “They can lock us out on a platform. But they can’t lock out our voice for the sake of the people” and that “If showing compassion and empathy is a crime. I am guilty.” Congress spokesperson Randeep Singh Surjewala stated that “the law only says that one cannot put out the photo of the victim or give details of the family and that we’ve done neither.”* In addition, Congress general secretary Priyanka Gandhi accused Twitter of colluding with the government to stifle democracy, and the head of the Congress social media department Rohan Gupta said that “Twitter is clearly acting under the government’s pressure as it did not remove the same pictures shared by the Twitter account of the National Commission for Scheduled Castes for a few days.”
Twitter argued that the image violated its privacy rules. According to the company, all the features of the account could be restored in 12 hours, once the tweet violating privacy rules was deleted by Mr. Gandhi. A Twitter spokesperson said that “we have taken proactive action on several hundred tweets that posted an image that violated our rules and may continue to do so in line with our range of enforcement options. Certain types of private information carry higher risks than others and our aim is always to protect individual privacy and safety.”
While this case is shaped by the Indian political context and a conflict of more than two years between the government and Twitter, it also illustrates a relevant content moderation challenge. Several characteristics make the case worth analyzing: it is the family rather than the affected person who is portrayed; the affected person is a minor; local laws are involved; it is a public official who posted the picture; and at least part of the information posted may be of public interest.
- What particular steps and precautions should companies take when moderating sensitive content that may affect the rights of victims of crime and is potentially illegal?
- How can companies ensure that decisions on this kind of sensitive content are being made in a consistent manner at a global scale?
- What alternatives could be employed to retain information that could be of public interest while also protecting the right to privacy?
- How can companies provide users and the general public with more details about the reasoning behind these content decisions, so as to avoid speculation from different sectors?
- How can companies work hand-in-hand with government agencies—for example, the National Commission for Protection of Child Rights—in order to better identify sensitive content that could be potentially illegal (such as content that violates the POCSO Act)? Are there any guidelines or documents that can be jointly developed?
- How can companies respect and comply with local regulations (e.g., the 2021 IT Rules) and—at the same time—ensure the protection of their users’ human rights?
- What are more effective and scalable ways of identifying this kind of sensitive content?
- Are there—or should there be—any special considerations given that the author of the post is a public official?
- How might information about the meeting between a public official and the victim’s family be of public interest? If it is of public interest, how might the rift between the protection of access to information of public interest and the right to privacy be resolved in this case?
Written by Maia Levy Daniel, December 2022. Special thanks to Siva Raghava and Wazbir Hazarika for their feedback!
* Section 23(2) of India’s POCSO Act states that “No reports in any media shall disclose the identity of a child including his name, address, photograph, family details, school, neighborhood or any other particulars which may lead to disclosure of identity of the child. Provided that for reasons to be recorded in writing, the Special Court, competent to try the case under the Act, may permit such disclosure, if in its opinion such disclosure is in the interest of the child.”