
AI could be useful in fighting antisemitism, tech expert says, but it's not without risks

Artificial intelligence could help curb antisemitic and terrorist social media content, one tech expert says. But generative AI also runs the risk of being misused.

Artificial intelligence could help flag antisemitic and terrorist content online, one tech expert said, but only if social media companies prioritize fighting Jew hatred.

"Social media platforms are capable of investing in technologies when it affects their bottom line," CyberWell founder and CEO Tal-Or Cohen Montemayor said. "It's high time that we started demanding that they do it when it comes to violent content and to antisemitism online."


CyberWell is an Israeli nonprofit that created the first open database of online antisemitic content. It uses a host of open-source intelligence techniques and tools, including an AI dictionary that Montemayor developed over three years to monitor antisemitism in real time across social media platforms. Humans then review and vet the data to identify trends in online antisemitism, Montemayor told Fox News.

After Hamas launched its attack on Israel in October, CyberWell reported an 86% increase in Jew hatred on social media. Montemayor was "shocked at just how poorly" companies performed when it came to screening out violent content from Hamas.

"I was shocked and horrified," she said, "as a Jew, as a woman, as an Israeli, and as an American citizen, after the horrors that I witnessed online. I saw things that I unfortunately can never unsee."


Companies can do more, she said, noting that they already have automated technology that detects child pornography. Platforms also regularly scan uploaded videos against an audio and visual database to make sure they don't violate copyright law.

Artificial intelligence is a "wonderful additional tool to scale up the ability to content moderate in times of crisis," Montemayor said.

"A very effective use of generative AI is, we should take the videos and the data and the hate content that we saw pour out over these social media platforms following the attacks on Oct. 7th … and we should implement the most advanced AI tools to more effectively automatically detect this content and remove it at scale," she said.


But generative AI — technology that can create text, images or other media — has also been harnessed to create antisemitic memes and "misinformation and disinformation about the conflict during the active war itself," Montemayor said.

The "jury is still out" in terms of the risks of AI chatbot-like tools "creating" and spreading antisemitism in their results, Montemayor said.

"If social media is any kind of yardstick in terms of the way that algorithmic tools can spread to hatred, we better watch out when it comes to generative AI," she said.

CyberWell works with several social media companies to flag antisemitic content, Montemayor said, adding that results have been "relatively" good.

TikTok removes 98% of flagged content, and Facebook and Instagram remove about 91%, she said. X, which has the highest volume of antisemitic content according to CyberWell, removes only about 10%.


Data & News supplied by www.cloudquote.io
Stock quotes supplied by Barchart
Quotes delayed at least 20 minutes.
By accessing this page, you agree to the following
Privacy Policy and Terms and Conditions.