The Australian Federal Police (AFP) has warned that the fight against child sex abuse has become more challenging due to the emergence of artificial intelligence (AI) technology.
During a speech at the National Press Club of Australia on April 24, AFP Commissioner Reece Kershaw described how criminals were using AI to generate fake child sex abuse images that were becoming increasingly difficult to distinguish from authentic ones.
The commissioner cited a report by the Internet Watch Foundation that revealed just how realistic the images created by AI had become over time.
“When our analysts saw the first renderings of AI-generated child sexual abuse material in spring of … 2023, there were clear tells that this material was artificially generated,” he said, quoting the report.
“Half a year on, we’re now in a position where the imagery is so lifelike that it’s presenting real difficulties for even our highly trained analysts to distinguish.”
“We’re seeing AI (child abuse material) images using the faces of known, real victims. We’re seeing how technology is nudifying children whose clothed images have been uploaded online for perfectly legitimate reasons.”
Mr. Kershaw also warned that AI-generated abuse material could consume law enforcement resources that would be better spent elsewhere.
The commissioner described a scenario in which AFP investigators identify apparent online child sexual abuse and invest substantial time and effort in the case, only to discover that the depicted abuse was fabricated.
“It is determined this is a priority case, but after weeks, months, or maybe years, investigators determine there is no child to save because the perpetrator used AI to create an image of the sexual abuse,” he said.
“It is an offence to create, possess or share this material, and it is a serious crime. But the reality for investigators is they could have been using capability, resources and time to find real victims.”
No Silver Bullet for AI-Generated Abuse Materials
While Mr. Kershaw acknowledged that social media platforms had measures in place to curb the rise of AI-generated child abuse material, he noted that there was no “silver bullet” and that offenders were constantly looking for ways to defeat technological countermeasures.
Mr. Kershaw called on tech companies to play a bigger role in the fight against child sex abuse by cooperating with authorities.
“We continue to talk to social media companies about how to help law enforcement identify a tsunami of AI-generated child abuse material we know is coming,” he said.
“Numerous law enforcement agencies, including the AFP, have appealed to social media companies and other electronic service providers to work with us to keep our kids safe.”
The commissioner also raised the alarm about the sharp increase in cases of online child sexual exploitation in Australia.
In the 2022-23 financial year, the Australian Centre to Counter Child Exploitation received more than 40,000 reports of online child sexual exploitation, up from 14,000 reports in 2018-19.
“Nearing the end of this financial year, we have already exceeded the past financial year’s figures,” Mr. Kershaw said.