As AI-Generated Porn Advances, Ethical Challenges Intensify
The evolution of AI-powered porn generators has brought significant technical advances, but the ethical dilemmas surrounding them have only grown more complex. As generative AI goes mainstream, AI-created adult content is following suit, improving in both quality and accessibility.
In its early days, AI porn generation was rudimentary, with few tools available, and the results were far from convincing. These early models struggled with anatomy, often creating unsettling images of distorted figures that resembled scenes from a Cronenberg movie, complete with extra limbs or misplaced body parts.
Today, a quick search for “AI porn generator” yields numerous results, many of which are free to use. While the generated images are far from perfect, some are polished enough to pass for professional artwork, raising profound ethical questions.
No Easy Answers
The commodification of AI porn and its tools is having troubling real-world consequences.
In a notable incident, Twitch streamer Brandon Ewing, better known as Atrioc, was caught viewing nonconsensual deepfake images of female streamers. Although the creator of the images eventually deleted them under public pressure, the damage was done: the targeted streamers continue to face harassment, including DMs containing copies of the images. The vast majority of deepfake pornography online targets women, and it is frequently wielded as a tool of harassment.
The ramifications extend beyond online harassment. A Washington Post article detailed how a school teacher lost her job after AI-generated pornographic images of her circulated without her consent. In another case, a 22-year-old was jailed for creating explicit deepfakes using photos of underage girls found online. Even more disturbingly, photorealistic AI-generated child sexual abuse material has been reported on the dark web, with one case involving a minor being blackmailed using AI-edited images.
The technology has also been exploited for scams: Reddit users have been duped into paying for explicit images of AI-generated people who do not exist. Meanwhile, workers in the adult entertainment industry worry that such advances could undermine their livelihoods.
Unstable Diffusion: A Case Study
When Stability AI’s open-source text-to-image model, Stable Diffusion, launched, it wasn’t long before the internet began repurposing it for generating porn. A group called Unstable Diffusion emerged, leveraging Stable Diffusion to create their own adult-content-generating tools, supported by datasets largely curated by volunteers.
Despite setbacks, including bans on platforms like Kickstarter and Patreon, Unstable Diffusion raised over $26,000, trained its AI models on over 30 million images, and launched a platform now serving over 350,000 users, who generate half a million images daily. According to co-founder Arman Chaudhry, the group aims to uphold “freedom of expression” while providing a space for creativity without censorship.
Improving Technology, Persistent Bias
Unstable Diffusion’s models have made notable strides in generating anatomically plausible artwork, particularly in anime and digital styles. However, technical limitations remain, especially with complex prompts, multi-person scenarios, or photorealistic outputs.
Bias is also evident in the models’ outputs. Prompts for “men” and “women” frequently return white or Asian individuals, and certain prompts surface stereotypes, such as depicting secretaries as submissive Asian women. Both patterns reflect imbalances and biases in the underlying training data.
Content Moderation and Legal Challenges
Despite its stance on “uncensored” creativity, Unstable Diffusion enforces content moderation policies to avoid legal trouble. The platform bans explicit deepfakes of celebrities as well as content involving minors or fictional characters who appear underage. It relies on an AI-driven moderation system to flag and delete problematic content, though these filters have failed at times, temporarily allowing restricted material to be generated.
Legal frameworks around AI porn are still evolving. Several U.S. states have passed laws targeting nonconsensual AI-generated pornography, and federal legislation is under consideration. Platforms like FurAffinity and Newgrounds have banned AI-generated adult content altogether, while Reddit has only partially lifted its ban, permitting such content solely for fictional depictions.
Future Directions
Unstable Diffusion is transitioning to a subscription-based model to fund improvements and support. The group is focusing on customization, offering users control over color palettes, art styles, and other features. However, questions about artist consent and dataset ethics loom large. Like many generative AI models, Unstable Diffusion relies on datasets sourced from the internet without creators’ permission, leading to lawsuits and backlash from the artistic community.
While Unstable Diffusion’s founders express interest in supporting artists, tangible action on this front remains limited. The platform finds itself in a precarious position, striving for mainstream acceptance while managing controversy and balancing its diverse community’s expectations.
For now, the ethical and legal complexities surrounding AI-generated porn seem far from resolution. Unstable Diffusion, like the broader industry, appears caught in a holding pattern, grappling with the challenges of growth, regulation, and accountability.