

AI-generated images of child sexual abuse are spreading

A child psychiatrist who altered a first-day-of-school photo he saw on Facebook to make a group of girls appear nude. A US Army soldier accused of creating images of children he knew being sexually abused. A software engineer charged with generating hyperrealistic sexually explicit images of children.

US law enforcement agencies are cracking down on the disturbing spread of images of child sexual abuse created with artificial intelligence technology — from manipulated photographs of real children to computer-generated graphic depictions of children. Justice Department officials say they are aggressively pursuing offenders who exploit AI tools, while states race to ensure that people who generate deepfakes and other harmful images of children can be prosecuted under their laws.

“We need to signal early and often that this is a crime, that it will be investigated and prosecuted when the evidence supports it,” Steven Grocki, who heads the Justice Department’s Child Exploitation and Obscenity Section, said in an interview with The Associated Press. “And if you’re sitting there thinking otherwise, you’re dead wrong. And it’s only a matter of time before someone holds you accountable.”

The Justice Department says existing federal laws clearly apply to such content, and it recently brought what is believed to be the first federal case involving purely AI-generated images — meaning the children depicted are not real but virtual. In another case, federal authorities in August arrested a US soldier stationed in Alaska, accused of running innocent images of real children he knew through an AI chatbot to make the images sexually explicit.

Trying to catch up with technology

The prosecutions come as child advocates work urgently to curb misuse of the technology and head off a flood of disturbing images that officials fear could make it harder to rescue real victims. Law enforcement officials worry that investigators will waste time and resources trying to identify and track down exploited children who do not actually exist.

Meanwhile, lawmakers are passing legislation to ensure local prosecutors can bring charges under state law for AI-generated “deepfakes” and other sexually explicit images of children. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered images of child sexual abuse, according to an analysis by the National Center for Missing and Exploited Children.

“We’re playing catch-up as law enforcement with a technology that, frankly, is moving much faster than we are,” said Ventura County, California, District Attorney Erik Nasarenko.

Nasarenko pushed for the legislation, signed last month by Gov. Gavin Newsom, which makes clear that AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office could not pursue eight cases involving AI-generated content between last December and mid-September because California law had required prosecutors to prove the images depicted a real child.

AI-generated images of child sex abuse can be used to groom children, law enforcement officials say. And even if they are not physically abused, children can be deeply affected when their image is made to appear sexually explicit.

“I felt like a part of me was taken away, even though I wasn’t physically raped,” said Kaylin Hayman, 17, who starred in the Disney Channel series “Just Roll with It” and helped promote the California bill after she became a victim of “deepfake” images.

Hayman testified last year at the federal trial of the man who digitally superimposed her face and those of other child actors onto bodies performing sex acts. He was sentenced in May to more than 14 years in prison.

Open-source AI models that users can download to their computers are known to be favored by criminals, who can train or modify the tools to produce explicit representations of children, experts say. Hackers exchange tips in dark web communities on how to manipulate AI tools to create such content, officials say.

A report last year from the Stanford Internet Observatory found that a research dataset used to train major AI image generators such as Stable Diffusion contained links to sexually explicit images of children, contributing to the ease with which some tools could produce harmful images. The dataset was taken down, and the researchers later said they deleted more than 2,000 web links to suspected child sexual abuse imagery.

Top tech companies, including Google, OpenAI and Stability AI, have agreed to work with the anti-child-sexual-abuse organization Thorn to combat the spread of child sexual abuse images.

But experts say more should have been done early on to prevent misuse before the technology became widely available. And steps companies are taking now to make it harder to abuse future versions of AI tools “will do little to prevent” criminals from running older versions of the models on their computers “without detection,” a Justice Department prosecutor said in recent court documents.

“No time has been spent on making products secure, as opposed to effective, and it’s very hard to do after the fact — as we’ve seen,” said David Thiel, chief technologist at the Stanford Internet Observatory.

AI images become more realistic

The CyberTipline run by the National Center for Missing and Exploited Children received approximately 4,700 reports of content involving AI technology last year — a small fraction of the more than 36 million total reports of suspected child sexual exploitation. By October of this year, the group was fielding about 450 reports a month of AI-involved content, said Yiota Souras, the group’s chief legal officer.

Those numbers could be undercounts, however, because the images are so realistic that it’s often difficult to tell whether they were generated by AI, experts say.

“Investigators spend hours just trying to determine if an image actually depicts a real minor or if it’s generated by artificial intelligence,” said Ventura County Deputy District Attorney Rikole Kelly, who helped draft the California bill. “There used to be some really clear indicators … with the advances in AI technology, that’s just not the case anymore.”

Justice Department officials say they already have the tools under federal law to pursue criminals for such images.

The US Supreme Court in 2002 struck down a federal ban on virtual child sexual abuse material. But a federal law signed the following year prohibits the production of visual depictions, including drawings, of children engaged in sexually explicit conduct that is deemed “obscene.” That law, which the Justice Department says has been used in the past to charge cartoon images of child sexual abuse, specifically notes there is no requirement “that the minor depicted actually exist.”

The Justice Department brought that charge in May against a Wisconsin software engineer accused of using the AI tool Stable Diffusion to create photorealistic images of children engaged in sexually explicit conduct; he was caught after he sent some to a 15-year-old boy through a direct message on Instagram, authorities say. The man’s attorney, who is seeking to have the charges dismissed on First Amendment grounds, declined further comment on the allegations in an email to the AP.

A spokesperson for Stability AI said the man is accused of using an earlier version of the tool that was released by another company, Runway ML. Stability AI says it has “invested in proactive features to prevent the misuse of AI to produce harmful content” since taking over exclusive development of the models. A spokesperson for Runway ML did not immediately respond to a request for comment from the AP.

In cases involving “deepfakes,” in which a photo of a real child has been digitally altered to make it sexually explicit, the Justice Department is bringing charges under the federal “child pornography” law. In one case, a North Carolina child psychiatrist who used an artificial intelligence app to digitally “undress” girls posing on the first day of school in a decades-old photo shared on Facebook was convicted of federal charges last year.

“These laws exist. They will be used. We have the will. We have the resources,” Grocki said. “This is not going to be a low priority that we ignore because there is no real child involved.”