The Oversight Board, established by Meta, recently issued a decision that has drawn attention to the company’s manipulated media policy. While the board upheld Meta’s decision to allow a video of President Joe Biden to circulate on the platform, it criticized the incoherence and lack of clarity in the policy. In this article, we will analyze the board’s findings and recommendations, highlighting the need for revisions to address the potential harm caused by manipulated media.
The video in question features real footage of President Joe Biden from October 2022, in which he places an “I Voted” sticker above his adult granddaughter’s chest. The altered clip, posted as early as January 2023, loops the moment Biden’s hand reaches her chest to create the false impression of inappropriate touching. One version, posted in May 2023, even refers to Biden as a “sick pedophile” in its caption.
Despite the obvious manipulation, the Oversight Board agreed with Meta’s assessment that the video did not violate the company’s manipulated media policy. The current rules prohibit only videos that make it appear someone said something they did not, not videos that falsely depict someone’s actions. The rules also apply only to videos created with artificial intelligence (AI), not to content altered through simpler means such as looping or basic editing tricks.
While the Oversight Board acknowledged that Meta correctly applied its existing rules in this specific case, it raised significant concerns about the overall policy. The board characterized the policy as lacking persuasive justification, incoherent, and confusing to users. It also highlighted the failure of the policy to clearly specify the harms it aims to prevent.
In light of the upcoming 2024 elections, the Oversight Board urged Meta to reconsider its manipulated media policy. The board suggested expanding the policy to cover videos or audio edited to make it appear that someone did something they didn’t, not only cases involving fabricated speech. It also expressed skepticism about basing content decisions on how a post was edited, whether through AI or more basic editing techniques.
Through consultation with experts and public comments, the board concluded that non-AI-altered content can also be misleading and contribute to the spread of false information. However, this does not necessarily mean that Meta should remove all altered posts. Instead, the board suggested implementing less restrictive measures, such as applying labels to notify users that a video has been significantly edited.
The Oversight Board, created by Meta, serves as a reviewing body for content moderation decisions and has the authority to issue binding judgments. Additionally, the board can make policy recommendations to Meta, which the company can choose to implement. In response to the board’s findings, a Meta spokesperson noted that the company is currently reviewing the recommendations and will provide a public response within 60 days, as required.
The Oversight Board’s recent decision has shed light on the flaws in Meta’s manipulated media policy. While the board upheld Meta’s application of the policy in the specific case involving the altered video of President Joe Biden, it emphasized the need for significant revisions. The board recommended that the policy cover a broader range of manipulated media, including non-AI-altered content, and proposed less restrictive measures to address the potential harm such content can cause. It remains to be seen how Meta will respond to these recommendations and whether it will strengthen its policy in line with the board’s suggestions.