Abstract
Nonconsensual, sexually explicit deepfakes represent a new and insidious form of gender-based violence, merging advanced artificial intelligence with entrenched misogyny to exploit women’s images at an unprecedented scale. While existing legislative and platform-based interventions primarily frame the issue as an extension of “revenge porn,” there is limited empirical research examining the motivations behind the creation and dissemination of sexually explicit deepfakes. This study addresses that gap through an observational analysis of deepfake enthusiast web forums, where pseudonymous users discuss, create, and exchange AI-generated intimate images. Drawing on theoretical frameworks from gender-based violence studies, masculinities literature, and feminist theory, this research uncovers the primary motivations and cultural norms underlying deepfake abuse. Contrary to the assumption that deepfake pornography is created chiefly out of malice toward its subjects, findings indicate that perpetrators often justify their actions as expressions of admiration, technical experimentation, or a means of gaining validation within online communities. The analysis further reveals how anonymity, peer validation, and shared technical interests contribute to the normalization and proliferation of deepfake abuse. By situating this emerging phenomenon within broader discussions of image-based sexual abuse (IBSA) and networked misogyny, this study provides critical insights for policymakers, platform regulators, and gender-based violence researchers seeking to develop more effective intervention strategies.
Presenters
Kaylee Williams, PhD Student, Graduate School of Journalism, Columbia University, United States
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
2025 Special Focus—From Democratic Aesthetics to Digital Culture
KEYWORDS
Deepfakes, AI, Images, Gender-based Violence, Image-based Sexual Abuse