
Thousands of such images have been found on forums on the dark web.

According to The Washington Post, the use of generative methods to create content depicting the sexual exploitation of children will complicate the search for real victims and the fight against abuse in the real world.

From a single prompt, AI programs known as diffusion models can create virtually any image, a capability now being exploited by pedophiles. Investigators have discovered tens of thousands of sexually explicit images of children on “specialized” dark web forums, some of which may depict real victims.

“Children’s photographs (including real victims) are used to make terrible videos. It’s almost impossible to identify a child who is in danger,” says Rebecca Portnoff of a nonprofit child safety organization.

Although there is not yet a single real example of prosecution, some of the images depict children who do not exist in reality.

Tools such as DALL-E, Midjourney, and Stable Diffusion are trained on billions of images collected from the internet to generate their own, and some of those images include photographs of real children taken from photo sites or personal blogs.

Advances in generative techniques, such as superimposing children’s faces onto adults’ bodies with “deepfake” methods, have increased the speed and scale at which pedophiles can produce new explicit images, and the tools can be run indefinitely and without oversight.

Stability AI says that it prohibits the creation of child sexual abuse images, assists law enforcement in investigating “illegal or malicious” use, and has removed explicit content from its training data, thereby reducing the “possibility to create indecent content.”

The license for the open source tool states that it must not be used to exploit or harm minors in any way, but the most basic safety features, such as the explicit image filter, can easily be bypassed by users who add a few lines of code to the program. At the same time, the company defends its open source approach as important to users’ creative freedom.

DALL-E and Midjourney, two of Stable Diffusion’s main competitors, prohibit sexual content and are not open source, which means all image generation can be monitored by their developers. OpenAI, for example, has removed explicit content from the training data of its DALL-E image generator.

Users on darknet forums share strategies for creating explicit images and ways to bypass pornography filters, including using non-English prompts, which they believe are less likely to be detected, according to a recent internal survey.

The National Center for Missing and Exploited Children, which manages a database used by businesses to identify and block child pornography, has reported a sharp rise in reports of AI-generated images in the last few months, as well as reports of people uploading images of child sexual abuse into AI tools in the hope of generating more.

The FBI says it has seen an increase in reports of images of minors being altered into “sex images that look real,” a trend that will be discussed at a national training session this month.

Several groups of scientists are already working on technical countermeasures to this trend, including identification algorithms that would link generated images back to their creators.

Written by John Smith

