Social media is awash in misinformation about the Israel-Gaza war, but Musk’s X is the most egregious

While Twitter has always struggled to combat misinformation about major news events, it was still the go-to place to find out what’s happening in the world. But the Israel-Hamas war has underscored how the platform, since its transformation into X, has become not only unreliable but an active promoter of falsehoods.

Experts say that under Elon Musk the platform has deteriorated to the point that it’s not just failing to clamp down on misinformation but is favoring posts by accounts that pay for its blue-check subscription service, regardless of who runs them.

If such posts go viral, their blue-checked creators can be eligible for payments from X, creating a financial incentive to post whatever gets the most reaction — including misinformation.

Ian Bremmer, a prominent foreign policy expert, posted on X that the level of disinformation on the Israel-Hamas war “being algorithmically promoted” on the platform “is unlike anything I’ve ever been exposed to in my career as a political scientist.”

And the European Union’s digital enforcer wrote to Musk about misinformation and “potentially illegal content” on X, in what’s shaping up to be one of the first major tests for the 27-nation bloc’s new digital rules aimed at cleaning up social media platforms. He later sent a similar, though toned-down, version of the letter to CEO Mark Zuckerberg of Meta, which owns Facebook and Instagram.

While Musk’s social media site is awash in chaos, rivals such as TikTok, YouTube and Facebook are also coping with a flood of unsubstantiated rumors and falsehoods about the conflict, playing the usual game of whack-a-mole that emerges every time a news event captivates the world’s attention.

“People are desperate for information and social media context may actively interfere with people’s ability to distinguish fact from fiction,” said Gordon Pennycook, an associate professor of psychology at Cornell University who studies misinformation.

For instance, instead of asking whether something is true, people might focus on whether something is surprising, interesting or even likely to make people angry — the sorts of posts more likely to elicit strong reactions and go viral.

The liberal advocacy group Media Matters found that since Saturday, subscribers to X’s premium service have shared at least six misleading videos about the war, including out-of-context footage and old videos purporting to be recent, which earned millions of views.

TikTok, meanwhile, is “almost as bad” as X, said Kolina Koltai, a researcher at the investigative collective Bellingcat. She previously worked at Twitter on Community Notes, its crowd-sourced fact-checking service.

But unlike X, TikTok has never been known as the No. 1 source for real-time information about current events.

“I think everyone knows to take TikTok with a grain of salt,” Koltai said. But on X “you see people actively profiteering off of misinformation because of the incentives they have to spread the content that goes viral — and misinformation tends to go viral.”

Emerging platforms, meanwhile, are still finding their footing in the global information ecosystem, so while they might not yet be targets for large-scale disinformation campaigns, they also don’t have the sway of larger, more established rivals.

Meta’s Threads, for instance, is gaining traction among users fleeing X, but the company has so far tried to de-emphasize news and politics in favor of more “friendly” topics.

“One of the reasons why you’re not hearing a lot about Facebook is because they have something called demotions,” said Alexis Crews, a resident fellow at the Integrity Institute who worked at Meta until this spring. If something is labeled as misinformation, the system will demote it and send it to independent fact-checkers for assessment.

Crews cautioned that if Meta, which has been cutting costs and has laid off thousands of workers, deprioritizes its fact-checking program, misinformation could flood its platforms once again. The Associated Press is part of Meta’s fact-checking program.

Meta and X did not immediately respond to AP requests for comment. TikTok said in a statement that it has dedicated resources to help prevent violent, hateful or misleading content, “including increased moderation resources in Hebrew and Arabic.” The company said it also works with independent fact-checkers to help assess the accuracy of material posted to its platform.

A post late Monday from X’s safety team said: “In the past couple of days, we’ve seen an increase in daily active users on @X in the conflict area, plus there have been more than 50 million posts globally focusing on the weekend’s terrorist attack on Israel by Hamas. As the events continue to unfold rapidly, a cross-company leadership group has assessed this moment as a crisis requiring the highest level of response.”

While plenty of real imagery and accounts of the carnage have emerged, they have been intermingled with social media users pushing false claims and misrepresenting videos from other events.

Among the fabrications are false claims that a top Israeli commander was kidnapped, a doctored White House memo purporting to show U.S. President Joe Biden announcing billions in aid for Israel, and old unrelated videos of Russian President Vladimir Putin with inaccurate English captions. Even a clip from a video game was passed on as footage from the conflict.

“Every time there is some major event and information is at a premium, we see misinformation spread like wildfire,” Pennycook said. “There is now a very consistent pattern, but every time it happens there’s a sudden surge of concern about misinformation that tends to fade away once the moment passes.”

“We need tools that help build resistance toward misinformation prior to events such as this,” he said.

For now, those looking for a central hub of reliable, real-time information online might be out of luck. Imperfect as Twitter was, there’s no clear replacement for it, which means anyone seeking accurate information online needs to exercise vigilance.

In times of big breaking news such as the current conflict, Koltai recommended “going to your traditional name brands and news media outlets like AP, Reuters, who are doing things like fact checking” and active reporting on the ground.

Meanwhile, in Europe, major social media platforms are facing stricter scrutiny over the war.

Britain’s Technology Secretary Michelle Donelan summoned the U.K. bosses of X, TikTok, Snapchat, Google and Meta for a meeting Wednesday to discuss “the proliferation of antisemitism and extremely violent content” following the Hamas attack.

She demanded they outline the actions they’re taking to quickly remove content that breaches the U.K.’s online safety law or their terms and conditions.

European Commissioner Thierry Breton warned in his letter to Musk of penalties for not complying with the EU’s new Digital Services Act, which puts the biggest online platforms, such as X, under extra scrutiny and requires them to make it easier for users to flag illegal content and to take steps to reduce disinformation, or face fines of up to 6% of annual global revenue.

Musk responded by touting the platform’s approach of crowdsourced fact-checking labels, an apparent reference to Community Notes.

“Our policy is that everything is open source and transparent, an approach that I know the EU supports,” Musk wrote on X. “Please list the violations you allude to on X, so that the public can see them.”

Breton replied that Musk is “well aware” of the reports on “fake content and glorification of violence.”

“Up to you to demonstrate that you walk the talk,” he said.

___

Kelvin Chan in London contributed to this report.
