The semi-independent Oversight Board, which handles appeals of social media company Meta's content moderation decisions, decided Tuesday to reverse Meta's automated removal of two posts about the ongoing Israel-Hamas War, one on Instagram and one on Facebook, after an expedited review process. Meta restored both posts once the review was announced.

The Instagram post, which included a video taken in the aftermath of the Al Shifa Hospital bombing in Gaza, was automatically removed because it allegedly violated the company's Violent and Graphic Content policy, which was expanded within weeks of the war's outset. While the board agreed that the post included gory visuals that may be disturbing, it found that the post's largely political language placed it firmly within the current content guidelines, provided it carries a warning for graphic imagery and is age-restricted. The board concluded, writing:

"This case further illustrates that insufficient human oversight of automated moderation in the context of a crisis response can lead to erroneous removal of speech that may be of significant public interest. Both the initial decision to remove this content as well as the rejection of the user's appeal were taken automatically based on a classifier score, without any human review."

The Facebook post, which included a video allegedly showing an Israeli hostage being taken by Hamas, was automatically removed for violating Meta's Dangerous Organizations and Individuals policy. That policy prohibits posting videos of terrorist activity that show the moment an individual is attacked, even if the post is made for the purpose of raising awareness. The policy was strengthened in the wake of the October 7 Hamas attacks on Israel, with Hamas designated a terrorist organization by Meta for the purposes of the policy. The board found that Meta should have kept the post up with a content warning and an age restriction. It also found that, after the post was reinstated, Meta should not have excluded it from being algorithmically recommended. The board concluded by criticizing Meta's rollout of the new content moderation policies after October 7, stating:

"The Board is also concerned that Meta's rapidly changing approach to content moderation during the conflict has been accompanied by an ongoing lack of transparency that undermines effective evaluation of its policies and practices, and that can give it the outward appearance of arbitrariness. For example, Meta confirmed that the exception permitting the sharing of imagery depicting visible victims of a designated attack for informational or awareness-raising purposes is a temporary measure. However, it is unclear whether this measure is part of the company's Crisis Policy Protocol or was improvised by Meta's teams as events unfolded."

Meta responded to both decisions in a statement to Engadget, writing, "We welcome the Oversight Board's decision today on this case. Both expression and safety are important to us and the people who use our services."

This is not the first time Meta has come under fire for its approach to content moderation during times of conflict. In November, Amnesty International alleged that Meta directly contributed to numerous human rights abuses in Ethiopia by allowing hate speech against the minority Tigrayan community. In June, the board recommended that Meta temporarily suspend former Cambodian Prime Minister Hun Sen's account after he allegedly made several violent threats against his political opponents, but Meta refused.
In 2021, several UK- and US-based Rohingya refugees from Myanmar sued Meta for allowing hate speech against the Rohingya to proliferate on its platform Facebook, allegedly leading to multiple human rights abuses against them backed by the then-government of Myanmar.
