The advocacy groups are calling on the parent company of Facebook, Instagram and WhatsApp to address what they describe as long-standing content moderation failures.
The "Meta: Let Palestine Speak" petition accuses the tech giant of unfairly removing content and suspending or "shadow banning" accounts from Palestinians, while failing to adequately address "incendiary Hebrew-language content."
The complaints about Meta's content moderation policies stretch back several years, said Nadim Nashif, the executive director and co-founder of the Palestinian digital rights group 7amleh-The Arab Center for the Advancement of Social Media.
Following an earlier outbreak of violence in May 2021 that prompted similar accusations of unfair treatment of Palestinians on Meta's platforms, the tech giant commissioned an independent due diligence report.
The report found that Meta's actions during the period of unrest "appear to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination."
Meta agreed to implement many of the recommendations from the report, including developing and deploying "classifiers" for Hebrew "hostile speech." The classifiers, which use machine learning to detect violating content, had previously existed for Arabic but not Hebrew.
However, following the Oct. 7 attack on Israel by the Palestinian militant group Hamas and Israel's subsequent airstrikes and ground invasion of Gaza, Nashif said Meta's new Hebrew classifiers appear to have fallen short.
Meta acknowledged the shortcomings of its Hebrew classifiers internally last month, noting that the classifiers were not being used on Instagram comments because the machine learning-based tool did not have enough data to function properly, The Wall Street Journal reported.
The tech giant also lowered the certainty threshold for an automated system that hides comments potentially violating its policies on hostile speech, dropping it from 80 percent to 25 percent within the Palestinian territories, according to the Journal.
Meta, which reportedly lowered the thresholds to lesser extents for several other countries in the region, sought to address a surge in hateful content after the Oct. 7 attack.
However, Nashif argued that lowering the threshold produces a "very aggressive content moderation approach," which results in "lots of false positives" and content "being taken down that should not be taken down."
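To see why a lower threshold sweeps up more legitimate content, consider a minimal, hypothetical Python sketch. Meta's actual system is not public; the comment texts, scores, and `hidden_comments` function below are invented for illustration of the general mechanism described in the Journal's reporting, in which a classifier assigns each comment a confidence score and comments above the threshold are hidden automatically.

```python
# Hypothetical scores a trained classifier might assign (all values invented);
# higher means more likely to violate the hostile speech policy.
comment_scores = {
    "clearly hostile comment": 0.92,
    "heated but permissible argument": 0.55,
    "news commentary on the conflict": 0.30,
    "benign everyday comment": 0.05,
}

def hidden_comments(scores: dict[str, float], threshold: float) -> list[str]:
    """Return the comments an automated system would hide at a given threshold."""
    return [text for text, score in scores.items() if score >= threshold]

# At an 80 percent threshold, only the high-confidence case is hidden.
print(hidden_comments(comment_scores, 0.80))
# ['clearly hostile comment']

# At 25 percent, borderline and benign comments are swept up as well --
# the "false positives" Nashif describes.
print(hidden_comments(comment_scores, 0.25))
# ['clearly hostile comment', 'heated but permissible argument',
#  'news commentary on the conflict']
```

In this toy setup, dropping the threshold from 0.80 to 0.25 triples the number of hidden comments, which is the trade-off at issue: the system catches more genuinely hostile content at the cost of suppressing posts that break no rules.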
Read more in a full report at TheHill.com.