US Investigators Turn AI Against AI to Spot Fake Child Sexual Abuse Images
Background
Generative AI has made it easier and cheaper to produce realistic images, including child sexual abuse material (CSAM). Law enforcement agencies say the volume of AI-generated content has surged, complicating efforts to identify and protect real victims.
The contract and the actors involved
The Department of Homeland Security’s Cyber Crimes Center, which handles cross-border child exploitation investigations, awarded a $150,000 contract to San Francisco–based Hive AI. The heavily redacted government filing, posted on September 19, confirms the center will experiment with Hive’s AI tools to determine whether images were generated by AI or depict real people.
Hive cofounder and CEO Kevin Guo told MIT Technology Review he could not discuss contract details but confirmed the company’s AI-detection algorithms will be applied to CSAM cases. The filing cites data from the National Center for Missing and Exploited Children showing a 1,325% increase in incidents involving generative AI in 2024.
How the detection tools work
Hive provides a range of AI tools, including generative models and content moderation systems that can flag violence, spam, and sexual material and identify public figures. For CSAM specifically, Hive offers a tool built with the child safety nonprofit Thorn that uses a hashing system to assign unique IDs to known CSAM and block it from being uploaded. That approach is widely used by tech platforms as a first line of defense.
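To make the hash-matching idea concrete, here is a minimal sketch of the general technique described above. It is not Hive's or Thorn's implementation: the hash set and file paths are hypothetical placeholders, and production systems typically combine cryptographic hashes like this with perceptual hashes maintained in vetted databases so that re-encoded copies still match.

```python
# Minimal sketch of hash-based matching against a database of known material.
# KNOWN_HASHES is a hypothetical placeholder; real systems query curated,
# access-controlled hash databases rather than an in-memory set.
import hashlib

KNOWN_HASHES = {"<sha256 of previously identified material>"}  # hypothetical

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block(path: str) -> bool:
    """Block an upload if its hash matches a known identifier."""
    return sha256_of_file(path) in KNOWN_HASHES
```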
Separately, Hive has developed an AI model that detects whether images were AI-generated. According to Guo, this model is not trained specifically on CSAM but looks for pixel-level patterns and artifacts that tend to indicate synthetic creation. He says the model can generalize across different types of images and that Hive benchmarks its detectors for each use case.
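As a rough illustration of what "pixel-level patterns and artifacts" can mean, the toy sketch below computes one simple frequency-domain statistic of an image. It is not Hive's model or method: the cutoff and threshold are arbitrary placeholders, and a real detector learns its features and decision boundary from training data rather than using a hand-set rule.

```python
# Toy illustration only: some synthetic-image detectors look for statistical
# fingerprints in pixel data, for example unusual energy in high spatial
# frequencies. The cutoff and threshold below are placeholders, not a
# validated detector.
import numpy as np
from PIL import Image

def high_frequency_score(path: str) -> float:
    """Return the fraction of spectral energy above a radial frequency cutoff."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    cutoff = 0.25 * min(h, w)  # placeholder cutoff
    high_energy = spectrum[radius > cutoff].sum()
    return float(high_energy / spectrum.sum())

score = high_frequency_score("image.png")  # hypothetical input file
print("suspicious" if score > 0.35 else "inconclusive")  # placeholder threshold
```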
Why this matters for investigators
The immediate priority for child exploitation teams is finding victims who are currently at risk. AI-generated CSAM adds noise that can obscure cases involving real children. A reliable detector that flags images of actual victims would help investigators prioritize limited resources and accelerate interventions. The government filing highlights that distinguishing AI-generated content from material showing real victims helps focus investigative efforts and safeguard vulnerable individuals.
Prior research and context
The filing references two supporting points. One is a 2024 University of Chicago study that reportedly found Hive's AI detection tool outperformed four other detectors at identifying AI-generated art. The other is Hive's existing work with the Pentagon on deepfake identification. In December, MIT Technology Review reported that Hive had sold deepfake-detection technology to the US military.
Limitations, transparency, and the trial
Key details in the DHS filing are redacted, and independent researchers and nonprofits such as the National Center for Missing and Exploited Children had not assessed the tool's effectiveness in time for publication. Hive acknowledges that detection models must be benchmarked for each specific use case, and the company says its general detector can be applied to CSAM even though it was not trained on that material.
The award bypassed a competitive bidding process, with the government justifying the sole-source decision by referencing the study and the company’s existing contracts. The trial of Hive’s AI-generation detector by the Cyber Crimes Center is set to last three months.