According to the Internet Watch Foundation (IWF), the use of artificial intelligence (AI) to generate child sexual abuse imagery is reaching a "tipping point".
The watchdog said it has seen more of this AI-generated illegal content in the past six months than in the whole of the previous year.
Most of this content was found on the open internet, not the dark web.
The increasing sophistication of these images suggests that producers are using AI tools trained on real victims' photos and videos.
The IWF also reported a significant number of “deepfake” videos and AI-generated images of real-life victims.
The IWF took action against 74 reports of AI-generated child sexual abuse material between April and September, compared with 70 in the previous 12 months. Most of the flagged content was hosted on servers in Russia and the US.
Photos of clothed children are also being 'nudified' using AI tools. The IWF shares the web addresses of pages containing the illicit imagery with the tech industry so they can be blocked.
In light of these developments, social media platform Instagram has announced new measures to prevent sextortion by blurring any nude images sent in direct messages.
- CyberBeat
CyberBeat is a grassroots initiative from a team of producers and subject matter experts, driven by frustration at the lack of media coverage and responding to an urgent need for a clear, concise, informative and educational approach to the growing fields of Cybersecurity and Digital Privacy.
If you have a story of interest, a comment, a concern, or if you'd just like to say Hi, please contact us.
We couldn't do this without the support of our sponsors and contributors.