X says it has ‘taken action’ on 325K posts and 375K accounts over Israel-Hamas war violations

X, the social platform formerly known as Twitter, has faced waves of criticism over how it has grappled with issues of trust and safety under owner Elon Musk. Today, the company, which says it now has over 500 million monthly visitors, published some figures and updates on how it has coped with the biggest test case of them all: the Israel-Hamas war.

To be clear, the numbers posted by X cannot be externally verified, and X does not provide internal comparison figures that would indicate the overall size of the problem. Still, it is notable that X, now a private company, feels compelled to publish anything at all. That speaks to the company’s continued efforts to play nice as it courts advertisers.

And as others have reported, that is an ongoing challenge for X. Research published in October (via Reuters), covering the period before the first Hamas attacks in Israel, found that US advertising revenue (X’s largest market) declined by 55% or more in each of the previous 10 months.

Here are the highlights from the update:

X Safety says it has “taken action” on more than 325,000 pieces of content that violate the company’s Terms of Service, including its rules on hate speech and hateful conduct. “Taking action” includes removing a post, suspending an account or restricting the reach of a post. X also previously announced that it would remove monetization options for posts, using corrections via Community Notes as part of that effort.

X said 3,000 accounts had been deleted, including accounts connected to Hamas.

X added that it was working on “automatic remediation against antisemitic content” and was “giving our agents around the world a refresher course on antisemitism.” It does not specify who these agents are, how many there are, or where they are located, nor who is delivering the refresher course or what it covers.

X says an “escalations team” has worked on more than 25,000 pieces of content that fall under the company’s synthetic and manipulated media policy, meaning fake news or content created using AI and bots.

It has also targeted specific accounts: more than 375,000 have been suspended or otherwise actioned, it said, as a result of investigations into conversations about the conflict. This covers coordinated or fake engagement, fake accounts, duplicate content and trending topic/hashtag spam, it added. That work continues, although there is no clarity on the methods involved. Meanwhile, X said it is also working to disrupt “coordinated campaigns to manipulate conflict-related conversations.”

Graphic content, according to X, is still allowed if it is newsworthy and placed behind a sensitive media interstitial warning, but the company will remove images that meet its “Gratuitous Gore” definition. (You can see more about this and other definitions of sensitive content here.) The company did not disclose how many images or videos were flagged under these two categories.

Community Notes, X’s Wikipedia-style crowdsourced moderation program, came under scrutiny from critics of the platform last month. With most of the company’s in-house trust and safety team gone, no external scrutiny of how things work, and plenty of evidence of abuse on the platform, Community Notes in many ways feels like X’s first line of defense against misleading and manipulative content.

But if that’s the case, it’s an uneven match. Compared with the ease of posting and sharing on the platform itself, it can take weeks to be approved as a Community Notes contributor, and the notes themselves can then take hours or even days to be published.

Now X has given some updates on how the program is going. It said that in the first month of the conflict, notes related to posts about it were viewed more than 100 million times. The program now has more than 200,000 contributors across 44 countries, with 40,000 added since the start of the conflict.

It added that it is trying to speed up the process. “They can now be seen 1.5 to 3.5 hours faster than a month ago,” it said. Notes written for a given video or photo are now also automatically shown on other posts containing the same media. And, in an effort to repair some of the damage from fake and manipulated news spreading on the platform, an alert is now sent out when a post gets a Community Note attached to it. X says that up to 1,000 of these alerts are sent every second, which speaks to the scale of the problem of harmful content being spread on the platform.

If there’s a motivation for why X is publishing all of this now, my guess is: money. Indeed, the last data points it outlines have to do with “Brand Safety,” that is, where advertisers and would-be advertisers stand in all of this, and the risk of ads running against content that violates policies.

X says it has proactively removed more than 100 publisher videos that were “not suitable for monetization” and that its keyword blocklists have captured more than 1,000 additional terms related to the conflict, which in turn restricts ad targeting and adjacency placements in the Timeline or Search. “With the many conversations happening at X today, we’re also sharing guidance on how to manage brand activity this time through our suite of brand safety and fairness safeguards and through tighter targeting of appropriate brand content such as sports, music, business and games,” it added.