
Unmasking the Dark Side of Deepfake Technology: The Rising Threat of Nonconsensual ‘Nudify’ Videos

Deepfake ‘Nudify’ Technology Is Becoming Darker and More Dangerous

By Matt Burgess | WIRED | January 26, 2026

The rise of AI-driven “nudify” deepfake technology is leading to increasingly sophisticated, accessible, and harmful sexual deepfakes that are putting millions of women at risk. What began as crude synthetic manipulations has evolved into a dark and dangerous ecosystem that facilitates widespread digital sexual abuse.

The Emerging Menace of Explicit Deepfakes

Visiting an explicit deepfake generator’s website reveals the alarming scope of this technology. Users can upload a single photo and transform it into an eight-second graphic sexual video clip with just a few clicks, depicting women in highly realistic and explicit sexual scenarios. These sites offer around 65 different video templates, including “undressing” videos and graphic scenes such as “fuck machine deepthroat” and various “semen” themed videos. Each video requires a small payment, with additional fees for AI-generated audio.

Although some websites display warnings advising users to upload only photos they have consent to alter, there is no clear enforcement mechanism to prevent abuse. This glaring loophole fuels concerns about rampant nonconsensual use.

Industrializing Sexual Deepfake Abuse

Platforms like Grok, the chatbot developed by Elon Musk’s xAI, have been used on a massive scale to produce nonconsensual “nudify” images, effectively normalizing digital sexual harassment. But Grok represents only the visible tip of a far larger and more explicit deepfake industry. Over the past several years, a growing ecosystem of websites, bots, and apps has emerged to automate and expand the production of image-based sexual abuse—including the creation of nonconsensual and illegal content such as child sexual abuse material (CSAM).

Henry Ajder, a deepfake expert who has monitored the technology for more than five years, describes the evolution: “It’s no longer a very crude synthetic strip… We’re talking about a much higher degree of realism and a much broader range of functionality.” He estimates these combined services generate millions of dollars annually. “It’s a societal scourge, one of the worst, darkest parts of this AI revolution and synthetic media revolution we’re seeing.”

Expanding Features and Widespread Availability

Over the past year, WIRED’s research revealed that many explicit deepfake sites have expanded their capabilities. From a single uploaded photo, these tools can now generate short videos depicting a wide variety of sexual scenarios. Nearly all of the approximately 50 deepfake websites reviewed offer explicit, high-quality video generation in numerous sexual contexts.

Telegram, the encrypted messaging app, has become a hub hosting dozens of sexual deepfake channels and bots that regularly release updates with new functionalities, such as varied sexual poses, clothing options, and age settings. In mid-2025, one service promoted a “sex-mode” with customizable prompts enabling users to create exactly the videos they envision. Some functionalities even include making the subject appear pregnant.

A WIRED investigation found over 1.4 million Telegram accounts subscribed to 39 deepfake bots and channels. Following inquiries from WIRED, Telegram deleted at least 32 such services. A Telegram spokesperson confirmed: “Nonconsensual pornography—including deepfakes and the tools used to create them—is strictly prohibited under Telegram’s terms of service,” and stated that 44 million pieces of violating content were removed last year.

Market Consolidation and Infrastructure Providers

Independent analyst Santiago Lakatos explains that larger deepfake providers have consolidated their positions by buying smaller sites and offering APIs to facilitate third-party creation of nonconsensual deepfake content. This infrastructure enables rapid growth in availability and diversity of sexual deepfake tools.

Technology’s Dark History and Rapid Maturation

Sexual deepfakes first appeared around 2017 but initially required technical expertise to create. However, the generative AI boom over the last three years, with accessible open-source photo and video generators, has dramatically lowered barriers, making the production of realistic sexual deepfakes easy and widespread.

While deepfakes have also been used politically to spread misinformation, the damage caused by sexual deepfakes against women and girls is arguably far more severe and personal. Legal protections and policy responses to address these harms have lagged significantly.

Use of Open-Source Models and the Scale of Abuse

Stephen Casper, a researcher at MIT focusing on AI safeguards, emphasizes that much of the deepfake ecosystem relies on open-source AI models repurposed into apps for image and video synthesis. This open-source foundation facilitates the rapid spread and abuse of sexual deepfake technology.

Victims of nonconsensual intimate imagery (NCII), including deepfakes, are overwhelmingly women. Such abuses cause profound emotional and psychological harm, from harassment and humiliation to feelings of dehumanization. Abuse impacts not only public figures like politicians and celebrities but also regular women targeted by coworkers, acquaintances, or even schoolchildren fabricating nonconsensual images of classmates.

The Broader Societal Impact

Pani Farvid, associate professor at The New School and founder of The SexTech Lab, underscores that victims are often women, children, and sexual or gender minorities. “We as a society do not take violence against women seriously, no matter the form it comes in,” she cautions. Deepfake abuse can range from opportunistic misuse to organized abuse within violent or exploitative rings, including child abuse.

Studies Highlight Motivations and Normalization

Research led by Australian scholar Asher Flynn, involving interviews with 25 deepfake creators and victims, identified key factors fueling the problem: the ease of using deepfake tools, the growing normalization of nonconsensual sexual images, and the minimization of the harms caused. Unlike the widely publicized cases of public sharing on platforms like X, explicit deepfakes tend to circulate privately among victims’ friends and family, often via personal channels such as WhatsApp groups.

Key motivations among perpetrators—primarily men—included sextortion, desire to cause harm, peer bonding, and curiosity about the capabilities of the technology. Many developers and users treat the tools casually, often disregarding the significant damage inflicted.

A Tool for Power and Control

For some abusers, creating deepfakes is about asserting power. “You just want to see what’s possible,” said one perpetrator interviewed in Flynn’s study. “Then you have a little godlike buzz of seeing that you’re capable of creating something like that.”

Calls for Attention and Action

Experts warn that the “nudify” deepfake ecosystem represents one of the darkest outcomes of AI-powered media synthesis. Despite the magnitude of the harm and the technological advancements enabling it, society, policymakers, and platforms have yet to mount a sufficient response to curb this growing threat.

As artificial intelligence continues to advance rapidly, the urgent need to address the harms caused by sexual deepfakes—and to protect victims from digital sexual violence—becomes increasingly critical.


Matt Burgess is a senior writer at WIRED focused on information security, privacy, and data regulation in Europe. He is based in London and has a degree in journalism from the University of Sheffield.
