When justice fails: Why women can't get protection from AI deepfake abuse
The UN Women article “When justice fails: why women can’t get protection from AI deepfake abuse” explains how AI-generated deepfakes are increasingly used to abuse women and why legal systems often fail to protect victims.
Deepfakes are realistic images, videos, or audio created with artificial intelligence that make it appear as if a person said or did something they never actually did. One of the most common abuses is the creation of non-consensual sexual deepfakes, where a woman’s face is inserted into explicit material. These images can spread quickly online and cause severe reputational, emotional, and professional harm.
Women are overwhelmingly targeted by this form of abuse. Research shows that most deepfake pornography online depicts women, highlighting the gendered nature of the problem and how digital technologies can amplify existing forms of violence against women.
Despite the serious harm caused, victims often struggle to obtain justice. In many countries, laws are outdated and do not clearly cover AI-generated content: statutes against image-based abuse may apply only to authentic images, leaving synthetic material outside their scope. These gaps allow perpetrators to act with little risk of consequences, and victims may be unable to prosecute offenders or have the material taken down quickly.
Another challenge is platform accountability. Deepfake content spreads rapidly through social media and websites, but moderation systems and reporting processes are often slow, confusing, or ineffective. This allows abusive content to circulate widely before action is taken.
The article also highlights how AI has expanded digital abuse more broadly. The technology makes harassment easier, faster, and more anonymous, enabling tactics such as impersonation, sextortion, doxing, and coordinated harassment campaigns targeting women and girls.
UN Women argues that stronger responses are needed. These include updating laws to recognize AI-generated abuse, improving law enforcement capacity, requiring technology companies to design safer systems, and strengthening international cooperation. Protecting women online also requires addressing the underlying gender inequalities and misogyny that drive such attacks.
Overall, the article concludes that while AI tools create new opportunities, without stronger regulation and accountability they risk becoming powerful instruments of gender-based violence in digital spaces.