Social media giant Twitter says a new system to prevent the spread of child sexual abuse material (CSAM) on its platform was “deployed seamlessly,” as it tests technology developed by the non-profit group Thorn.

The Twitter Safety account announced on Tuesday that it participated in a beta test of the group's AI-powered Safer solution to proactively detect, delete, and report text-based material containing child sexual exploitation.


“Through our ongoing partnership with Thorn, we’re doing more to create a safe platform,” the Twitter Safety account wrote. “This work builds on our relentless efforts to combat child sexual exploitation online, with the specific goal of expanding our capabilities in fighting high-harm content where a child is at imminent risk.”

“This self-hosted solution was deployed seamlessly into our detection mechanisms, allowing us to hone in on high-risk accounts,” it continued.

Launched in 2012 by actors Demi Moore and Ashton Kutcher, Thorn develops tools and resources focused on defending children from sexual abuse and exploitation. In April, Google, Meta, and OpenAI signed on to a pledge issued by Thorn and fellow non-profit organization All Tech Is Human, vowing to enforce guardrails around their AI models.

“We’ve learned a lot from our beta testing,” Thorn's VP of data science, Rebecca Portnoff, told Decrypt. “While we knew going in that child sexual abuse manifests in all types of content, including text, we saw concretely in this beta testing how machine learning/AI for text can have real-life impact at scale.”

As Portnoff explained, the Safer AI model comprises a language model trained on child safety-related texts and a classification system that generates multi-label predictions for text sequences. Prediction scores range from 0 to 1, indicating the model's confidence in the text's relevance to various child safety categories.
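To illustrate the general pattern Portnoff describes, here is a minimal sketch in Python. It is not Thorn's actual code, label set, or model: the category names, embedding size, and architecture below are placeholder assumptions. A language model would first embed the text, and a multi-label classification head would then score each child safety category independently on a 0-to-1 scale.

```python
import torch
import torch.nn as nn

# Hypothetical label set for illustration only; Thorn has not
# published its actual child safety categories.
CATEGORIES = ["category_a", "category_b", "category_c"]

class MultiLabelHead(nn.Module):
    """Toy multi-label classifier: maps a text embedding to one
    independent confidence score per category."""
    def __init__(self, embedding_dim: int, num_labels: int):
        super().__init__()
        self.linear = nn.Linear(embedding_dim, num_labels)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # Sigmoid (not softmax), so each category is scored
        # independently on a 0-to-1 scale.
        return torch.sigmoid(self.linear(embeddings))

# Example: a batch of two pre-computed 768-dim text embeddings
# (in practice these would come from the upstream language model).
model = MultiLabelHead(embedding_dim=768, num_labels=len(CATEGORIES))
scores = model(torch.randn(2, 768))
for row in scores:
    print({c: round(s.item(), 3) for c, s in zip(CATEGORIES, row)})
```

Scoring each label with a sigmoid rather than a softmax across labels is what allows a single text sequence to receive independent confidence scores in several categories at once, matching the multi-label behavior Portnoff describes.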


While Portnoff could not disclose which other social media platforms were participating in the beta test of the Safer suite of products, she said the response from other companies has been positive.

“Some partners shared that the model is particularly useful for identifying harmful child sexual abuse activity, prioritizing reported messages, and supporting investigations of known bad actors,” Portnoff said.

Amid the proliferation of generative AI tools since the launch of ChatGPT in 2022, internet watchdog groups like the UK-based Internet Watch Foundation have sounded the alarm over a flood of AI-generated CSAM circulating on dark web forums, warning that the illicit material could overwhelm the internet.

The announcement by the Twitter Safety team came hours before the European Union demanded that the company explain reports of “decreasing content moderation resources.”

The latest transparency report Twitter submitted to EU regulators said Elon Musk’s cost-cutting measures had reduced the size of the platform’s content moderation team by almost 20% since October 2023 and cut the number of languages monitored from 11 to 7.

“The commission is also seeking further details on the risk assessments and mitigation measures linked to the impact of generative AI tools on electoral processes, dissemination of illegal content, and protection of fundamental rights,” the demand added.

The EU opened formal proceedings against Twitter in December 2023 over concerns that the company violated the Digital Services Act across several areas, including risk management, content moderation, “dark patterns,” and data access for researchers.

The commission said Twitter must provide the requested information by May 17 and address additional questions by May 27.


Edited by Ryan Ozawa.
