A new human rights report accuses Facebook of aiding the 2017 genocide against Rohingya Muslims in Myanmar, while other reports allege the company continues to allow its platform to spread dangerous disinformation and hate speech in other countries.
In the months and years leading up to what the U.S. has called a genocide by Myanmar’s military against Rohingya Muslims — a campaign in which 9,000 people were killed and hundreds of thousands were forced from the country — Facebook “became an echo chamber of virulent anti-Rohingya content,” Amnesty International alleged in a report last week.
“Actors linked to the Myanmar military and radical Buddhist nationalist groups systematically flooded the Facebook platform with incitement targeting the Rohingya, sowing disinformation regarding an impending Muslim takeover of the country and seeking to portray the Rohingya as sub-human invaders,” the report said, attacking the social media firm’s “wholly inadequate staffing of its Myanmar operations prior to 2017.”
Amnesty called those purported failures “symptomatic” of Facebook’s “broader failure” to adequately invest in content moderation in the developing world. Critics have for years lambasted Facebook, which according to the company’s most recent quarterly report had nearly 3 billion users worldwide, for its inability to effectively police content and enforce its own policies. In the U.S., Facebook has been criticized for allowing false news reports and disinformation to spread in the run-up to the 2016 presidential election, which Donald Trump won.
Facebook, now overseen by parent company Meta, said it has made “voluntary, lawful data disclosures” for the United Nations’ investigation into the Myanmar atrocities and for The Gambia’s case against Myanmar in the International Court of Justice.
“Our safety and integrity work in Myanmar remains guided by feedback from local civil society organizations and international institutions … as well as our ongoing human rights risk management,” said Rafael Frankel, a public policy director for Meta.
In three recent reports, Global Witness, a rights group headquartered in London, has taken aim at Facebook’s operations in other countries. Facebook did not immediately provide answers to questions about the Global Witness allegations.
In August, Global Witness alleged that Facebook “appallingly failed to detect election-related disinformation in ads” as the Oct. 2 Brazilian presidential election approached. The group said it submitted 10 ads to Facebook: half containing false information about matters such as when and where to vote, and half “aiming to delegitimize the electoral process” by means such as casting doubt on electronic voting machines.
The group said it deliberately violated several of Facebook’s election-integrity safeguards, including by not verifying the account it used to place the ads. Facebook approved all of the ads, Global Witness said, though the group canceled them before they were published.
The election gave 48% of the vote to center-left former President Luiz Inácio Lula da Silva and 43% to far-right incumbent President Jair Bolsonaro, who had publicly questioned the country’s electoral system and the integrity of its voting machines. A runoff is scheduled for Oct. 30.
In a June report on Ethiopia, Global Witness said it had found a dozen of the “worst examples” of hate speech posted on Facebook in Amharic, the country’s dominant language. All had previously been reported to Facebook as violating its policies, and the company had removed most of them, the group said. Global Witness resubmitted the 12 examples as ads, four targeting each of the country’s three main ethnic groups.
“The sentences used included violent speech that directly calls for people to be killed, starved or ‘cleansed’ from an area,” the group said. “Several of them amount to a call for genocide.” All 12 ads were approved, Global Witness alleged, though it canceled them so they never appeared.
Global Witness said it presented its findings to Facebook, which responded “that the ads shouldn’t have been approved and that they’ve invested heavily in safety measures in Ethiopia, adding more staff with local expertise and building their capacity to catch hateful and inflammatory content.” The group then submitted another two hate-speech ads a week later. Both, according to the group, were accepted for publication “within a matter of hours.”
Since the conflict started in northern Ethiopia in November 2020, hundreds of thousands of people have died, millions have been displaced, and all sides have been accused of rape and torture.
Global Witness also ran an experiment this summer in Kenya, which has seen deadly violence around several elections, including in 2007, when more than 1,000 people died in election-related clashes, much of the violence ethnically driven. The group found 10 examples of hate speech and calls to ethnic violence that have been used in Kenya since then. With national elections approaching in early August, Global Witness submitted the hate speech as 20 ads, half in English and half in Swahili, including material “comparing specific tribal groups to animals and calling for rape, slaughter and beheading,” the group said. The Swahili ads were approved promptly, but the English ads were initially rejected for failing to comply with Facebook’s grammar and profanity policy, Global Witness said.
“Facebook invited us to update the ads, and after making minor corrections they were similarly accepted,” the group alleged.
Global Witness informed Facebook of its findings, and the company put out a statement highlighting its work to remove harmful content ahead of the election. The group said it then submitted two more hate-speech ads, and Facebook approved those as well.
Roseline Odede, chair of Kenya’s human rights commission, said last week that there had been fewer human rights violations around the August elections, though she cited four deaths and 49 cases of assault, harassment and intimidation.