A U.K.-based watchdog has called for tight controls on artificial intelligence tools to prevent their rampant use in creating deepfake child sex abuse material. The Internet Watch Foundation has warned governments and technology providers that there is already an alarming proliferation of child sexual abuse material on the internet, a situation that could worsen if left unaddressed, a news agency report said.
The advocacy group has urged authorities around the world to act quickly before a flood of AI-generated images and videos of child sexual abuse overwhelms law enforcement investigators, the Associated Press report said.
“We're not talking about the harm it might do,” the report quoted Dan Sexton, the watchdog group's chief technology officer, as saying. “This is happening right now and it needs to be addressed right now.”
In a first-of-its-kind case in South Korea, a man was sentenced in September to 2 1/2 years in prison for using artificial intelligence to create 360 virtual child abuse images, according to the Busan District Court in the country’s southeast.
This disturbing trend goes beyond the actions of criminals and extends to children themselves. In some cases, kids are using these AI tools on each other. At a school in southwestern Spain, police have been investigating teens’ alleged use of a phone app to make their schoolmates appear nude in photos.
The report exposes a dark side of the race to build generative AI systems that let users describe in words what they want to produce — from emails to novel artwork or videos — and have the system generate it. While AI has shown tremendous potential in creative and productivity applications, it is also being exploited for heinous purposes.
If this surge in AI-generated child sexual abuse images isn’t stopped, it could have dire consequences. Investigators may waste valuable resources trying to rescue children who turn out to be virtual characters. Furthermore, perpetrators could use the fabricated images to groom and coerce new victims, making the fight against child exploitation even more difficult.
The AP report adds:
Sexton said IWF analysts discovered faces of famous children online as well as a “massive demand for the creation of more images of children who’ve already been abused, possibly years ago.”
“They’re taking existing real content and using that to create new content of these victims,” he said. “That is just incredibly shocking.”
Sexton said his charity organization, which is focused on combating online child sexual abuse and working with others to remove it, first began fielding reports about abusive AI-generated imagery earlier this year. That led to an investigation into forums on the so-called dark web, a part of the internet hosted within an encrypted network and accessible only through tools that provide anonymity.
What IWF analysts found were abusers sharing tips and marveling at how easy it was to turn their home computers into factories for generating sexually explicit images of children of all ages. Some are also trading, and attempting to profit from, such images, which appear increasingly lifelike.
“What we’re starting to see is this explosion of content,” Sexton said.
While the IWF's report is meant to flag a growing problem rather than offer prescriptions, it urges governments to strengthen laws to make it easier to combat AI-generated abuse. It particularly targets the European Union, where there is a debate over surveillance measures that would automatically scan messaging apps for suspected images of child sexual abuse, even images not previously known to law enforcement.
A big focus of the group’s work is to prevent previous sex abuse victims from being abused again through the redistribution of their photos.
The report says technology providers could do more to make it harder for the products they've built to be used this way, though the task is complicated by the fact that some of the tools are difficult to put back in the bottle.
A crop of new AI image-generators was introduced last year and wowed the public with their ability to conjure up whimsical or photorealistic images on command. But most of them are not favored by producers of child sex abuse material because they contain mechanisms to block it.
Technology providers with closed AI models — those over which they retain full control of training and use, such as OpenAI's image-generator DALL-E — appear to have been more successful at blocking misuse, Sexton said.
By contrast, a tool favored by producers of child sex abuse imagery is the open-source Stable Diffusion, developed by London-based startup Stability AI. When Stable Diffusion burst on the scene in the summer of 2022, a subset of users quickly learned how to use it to generate nudity and pornography. While most of that material depicted adults, it was often nonconsensual, such as when it was used to create celebrity-inspired nude pictures.
Stability later rolled out new filters that block unsafe and inappropriate content, and a license to use Stability's software also comes with a ban on illegal uses.
In a statement released Tuesday, the company said it “strictly prohibits any misuse for illegal or immoral purposes” across its platforms. “We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes,” the statement reads.
Users can still access unfiltered older versions of Stable Diffusion, however, which are “overwhelmingly the software of choice ... for people creating explicit content involving children,” said David Thiel, chief technologist of the Stanford Internet Observatory, another watchdog group studying the problem.
“You can’t regulate what people are doing on their computers, in their bedrooms. It’s not possible,” Sexton added. “So how do you get to the point where they can’t use openly available software to create harmful content like this?”
Multiple countries, including the U.S. and U.K., have laws banning the production and possession of such images, but it remains to be seen how they will enforce them.
The IWF's report is timed ahead of a global AI safety gathering next week hosted by the British government that will include high-profile attendees including U.S. Vice President Kamala Harris and tech leaders.