Meta’s Oversight Board has upheld the social media giant’s decision to restore an Instagram video showing a Dalit woman being sexually assaulted by a group of men, citing the larger public interest and the need to raise awareness about discrimination against Dalits and Adivasis in India, especially women. However, the board also ruled that the “newsworthiness allowance” does not provide a clear, standardised process to resolve such cases, and that there must instead be a clearly defined exception to the platform’s adult sexual exploitation policy.
The board observed that the woman’s face was not visible in the video, nor did it show any nudity. However, a minority within the board believed that the risk of harm from restoring the content was not low enough.
The video was posted in March by an Instagram account that posts content on issues linked to marginalised communities. A user reported the post, and after human review, Meta removed it for violating its adult sexual exploitation policy, which prohibits content that “depicts, threatens or promotes sexual violence, sexual assault or sexual exploitation”.
However, the removal was flagged internally by a Meta employee from the company’s global operations team who saw it being discussed on their personal Instagram account. After internal teams reviewed the content, they restored the video behind a warning screen that restricted viewing by minors. Meta cited the “newsworthiness allowance”, which allows violating content to remain on its platforms for public interest and newsworthiness reasons.
The board had taken up the case for consideration in September at Meta’s request since it demonstrated the challenge of striking “the appropriate balance between allowing content that condemns sexual exploitation and the harm in allowing visual depictions of sexual harassment to remain on [its] platform”.
The board had asked Meta 15 questions and the company answered all but one. It had also asked Meta to share its human rights impact assessment report for India, but the company declined, citing security risks, a reason the board did not buy.
Meta had released a four-page summary of the report in July 2022 but did not release the complete report. Facebook had commissioned the assessment to determine the platform’s role in spreading hate speech online. According to Time magazine, the independent law firm Foley Hoag interviewed more than 40 civil society stakeholders, activists and journalists for the assessment. The summary’s release had drawn significant criticism in India.
Board points to ‘public interest’
While the board acknowledged that depicting non-consensual sexual touching can lead to a significant risk of harm, both to the victim and in terms of emboldening perpetrators and increasing acceptance of violence, it noted that the post documented violence and discrimination against Dalit and Adivasi communities and was shared to raise awareness.
“The post therefore has significant public interest value and enjoys a high degree of protection under international human rights standards,” the board observed.
As per the board, since the woman is not identifiable in the video and the post was restored behind a warning screen, the public interest in the post “outweigh[s] the risk of harm”.
Is the ‘newsworthiness allowance’ inadequate?
The board said the “newsworthiness allowance” is “inadequate” for dealing with cases of sexual abuse at scale. It noted that the exception is rarely used – between June 2021 and June this year, it was applied only 68 times globally, and “only a small portion” of these were issued in relation to the adult sexual exploitation community standard.
It said the allowance is vague, allows for too much discretion, cannot ensure consistent application at scale, and offers no clear criteria for assessing the potential harm caused by content that violates the adult sexual exploitation policy.
The board wants Meta to provide clearer standards under the adult sexual exploitation policy, indicating how posts shared to raise awareness can be distinguished from those perpetuating sexual violence.
Under this exception, Meta should consider the context of a post and allow it to remain on its platforms if it judges the risk of harm to the victim to be minimal – taking into account whether the victim is identifiable, whether the content involves nudity, and whether it has been shared in a sensationalised context. This exception, as per the board, should be applied at “escalation” only – for restoring user-reported or algorithmically-flagged posts, not for letting them remain in the first instance.
Earlier this year, the board had upheld the company’s decision to restore a Facebook post depicting violence against a civilian in Sudan for similar reasons. At the time, it recommended that the company similarly amend its violent and graphic content community standard to clearly define how content meant to document human rights abuses can be distinguished from similar content meant to provoke.