AI Undressing: Deepfake Nude Services Skyrocket in Popularity

The scourge of malicious deepfake creation has spread well beyond the realm of celebrities and public figures, and a new report on non-consensual intimate imagery (NCII) finds the practice only growing as image generators evolve and proliferate.

“AI undressing” is on the rise, social media analytics firm Graphika said in a report on Friday, describing the practice as the use of generative AI tools fine-tuned to remove clothing from images uploaded by users.

The gaming and Twitch streaming community grappled with the issue earlier this year when prominent broadcaster Brandon ‘Atrioc’ Ewing accidentally revealed that he had been viewing AI-generated deepfake porn of female streamers he called his friends, according to a report by Kotaku.

Ewing returned to the platform in March, contrite and reporting on weeks of work he’d undertaken to mitigate the damage he’d done. But the incident threw open the floodgates for an entire online community.

Graphika’s report shows the incident was just a drop in the bucket.

“Using data provided by Meltwater, we measured the number of comments and posts on Reddit and X containing referral links to 34 websites and 52 Telegram channels providing synthetic NCII services,” Graphika intelligence analyst Santiago Lakatos wrote. “These totaled 1,280 in 2022 compared to over 32,100 so far this year, representing a 2,408% increase in volume year-on-year.”

New York-based Graphika says the explosion in NCII shows the tools have moved from niche discussion boards to a cottage industry.

“These models allow a larger number of providers to easily and cheaply create photorealistic NCII at scale,” Graphika said. “Without such providers, their customers would need to host, maintain, and run their own custom image diffusion models—a time-consuming and sometimes expensive process.”

Graphika warns that the increase in popularity of AI undressing tools could lead to not only fake pornographic material but also targeted harassment, sextortion, and the generation of child sexual abuse material (CSAM).

According to the Graphika report, developers of AI undressing tools advertise on social media to lead potential users to their websites, private Telegram chats, or Discord servers where the tools can be found.

“Some providers are overt in their activities, stating that they provide ‘undressing’ services and posting photos of people they claim have been ‘undressed’ as proof,” Graphika wrote. “Others are less explicit and present themselves as AI art services or Web3 photo galleries while including key terms associated with synthetic NCII in their profiles and posts.”

While AI undressing tools typically focus on still images, AI has also been used to create video deepfakes using the likenesses of celebrities, including YouTube personality Mr. Beast and iconic Hollywood actor Tom Hanks.

Some performers, including Scarlett Johansson and Indian actor Anil Kapoor, are turning to the legal system to combat the ongoing threat of AI deepfakes. Still, while mainstream entertainers can command media attention, adult entertainers say their voices are rarely heard.

“It’s really difficult,” legendary adult performer and head of Star Factory PR, Tanya Tate, told Decrypt earlier. “If someone is in the mainstream, I’m sure it’s much easier.”

Even without the rise of AI and deepfake technology, Tate explained, social media is already filled with fake accounts using her likeness and content. Not helping matters is the ongoing stigma sex workers face, which forces them and their fans to stay in the shadows.

In October, UK-based internet watchdog the Internet Watch Foundation (IWF) noted in a separate report that 20,254 AI-generated images of child abuse were found on a single dark web forum in just one month. The IWF warned that AI-generated CSAM could “overwhelm” the internet.

The IWF warns that generative AI imaging has advanced to the point where telling the difference between AI-generated and authentic images has become increasingly difficult, leaving law enforcement pursuing online phantoms instead of actual abuse victims.

“So there’s that ongoing thing of you can’t trust whether things are real or not,” Internet Watch Foundation CTO Dan Sexton told Decrypt. “The things that will tell us whether things are real or not are not 100%, and therefore, you can’t trust them either.”

As for Ewing, Kotaku reported that the streamer returned saying he had been working with reporters, technologists, researchers, and women affected by the incident since his transgression in January. Ewing also said he sent funds to Ryan Morrison’s Los Angeles-based law firm, Morrison Cooper, to provide legal services to any woman on Twitch who needs help issuing takedown notices to sites publishing images of them.

Ewing added that he received research on the depth of the deepfake issue from independent deepfake researcher Genevieve Oh.

“I tried to find the ‘bright spots’ in the fight against this type of content,” Ewing said.

Edited by Ryan Ozawa.
