X has failed to act on antisemitic and Islamophobic hate, a new report shows

Amid the escalating conflict in the Middle East, X has failed to moderate hate speech on its platform that promotes antisemitic conspiracies, praises Hitler and denigrates Muslims and Palestinians.

In new research, the Center for Countering Digital Hate (CCDH), a nonprofit that studies online hate and extremism, collected a sample of 200 X posts from 101 accounts displaying hate speech. Each post either “directly addressed the ongoing conflict, or appeared to be aware of it,” and each was reported on the platform on October 31 using X’s reporting tools.

That tool invites users to flag content and provide information on what category of behavior it falls into, including an option for hate speech. Those reporting options include “Slurs, Racist or sexist stereotypes, Dehumanization, Incitement of fear or discrimination, Hateful references, Hateful symbols and logos.”

According to CCDH, 196 of the 200 posts remain online, while one account was suspended after being reported and two were “locked.” A sample of posts reviewed by TechCrunch showed that X continued to host content that depicted antisemitic caricatures, called Palestinians “animals” and invited others to “enjoy the performance of those Jews and Muslims killing each other.”

All of the example X posts reviewed by TechCrunch remain online at the time of writing. Of the 101 accounts represented in the sample posts, 82 are paid verified accounts with a blue check.

View counts on the posts vary, but some have been viewed more than 100,000 times, including posts denying the Holocaust, and an animated GIF depicting a man in a yarmulke being strangled, which has been viewed almost a million times. In total, the posts that have not been removed have collected more than 24 million views.

While a sample of 200 posts represents only a fraction of X’s content at any given time, many of the posts are notable for their overt racism, their open embrace of violence and the fact that they remain online, even now. Social media companies often fail to remove content that violates their rules, but they tend to remove posts quickly once researchers or journalists highlight them. Of the sample posts included in the CCDH report, some now carry a label that reads “Limited visibility: this Post may violate X’s rules against Hateful Conduct.”

“X wants to reassure advertisers and the public that they have control over hate speech – but our research shows that these are nothing but empty words,” said Imran Ahmed, CEO of the Center for Countering Digital Hate. “Our ‘mystery shopper’ test of X’s content moderation systems – to see whether they have the capacity or will to take down 200 instances of clear, unambiguous hate speech – reveals that hate actors appear to have free rein to post viciously antisemitic and hateful rhetoric on Elon Musk’s platform.”

In its safety guidelines, X states that users “may not attack other people on the basis of race, ethnicity, national origin, caste, sexual orientation, sex, gender identity, religion, age, disability, or serious illness.” Under the leadership of Elon Musk, the company formerly known as Twitter has reduced its content moderation workforce, rolled back safety policies that protected marginalized groups and invited waves of previously banned users back to the platform.

Earlier this year, X filed a lawsuit against the CCDH, claiming the nonprofit used the platform’s data without permission and intentionally harmed the company’s advertising business. The CCDH maintains that X is using legal threats to silence its research, which has informed many reports about the decline of content moderation on X under Elon Musk.

On the same day the CCDH released its new report, X published a blog post outlining its content moderation efforts during the ongoing conflict in Israel and Gaza. The company says it has taken action on more than 325,000 pieces of content that violate its Terms of Service, with actions that can include “restricting the reach of a post, removing the post or account suspension.”

“In times of uncertainty such as the Israel-Hamas conflict, our responsibility to protect public conversation is heightened,” X’s Safety team wrote.