The Digital War on Women: Sexualized Deepfakes, Weaponized Data and Stalkerware That Monitors Victims Online

The 2024 U.S. election is over, but the online abuse of women in politics is set to intensify. When Vice President Kamala Harris encountered a wave of misogynistic attacks similar to the hostile backlash that overshadowed Hillary Clinton’s campaign eight years ago, she brushed it off as the “same old tired playbook.” Still, digital sexism found her: A deepfake video circulated online that depicted Harris as a sex worker and claimed to show real footage.

Harris is one of many female candidates targeted by manipulated explicit content, and while the harm escalates fast, legal recourse is slow. Italian Prime Minister Giorgia Meloni is still seeking compensation from two alleged perpetrators who face defamation charges for uploading pornographic videos bearing her likeness to an American adult website back in 2020.

Manipulated image data undermines a candidate’s credibility and reinforces the trope that women are “unfit for office.” If sexualized defamation dominates the conversation, it can silence and isolate women in a society that already imposes harsher standards on their work and private lives than it does on those of their male counterparts.


Discriminatory discourse is often rooted in and deeply intertwined with democratic backsliding. The harm it inflicts goes beyond individual impacts: it is an assault on the fundamental right to participate in public life on equal footing, and in effect it creates a democratic deficit.

Sexualized deepfakes are a new form of gender-based violence. A 2023 study showed 98 percent of deepfake videos online are pornographic, and a staggering 99 percent of those target women. Ninety-four percent of individuals featured in deepfake pornography videos work in the entertainment industry, with celebrities in the U.S. being among the prime targets. Synthetic media capabilities are advancing rapidly, and existing laws fall short of providing an effective remedy. In the absence of federal legislation that would specifically prohibit the creation or distribution of sexualized deepfakes, some states have amended existing laws on revenge porn (image-based sexual abuse) or non-consensual distribution of intimate images to include AI-manipulated content. 

The absence of a unified approach leaves significant gaps in protecting affected individuals, and implementation presents a challenge. New provisions would need to require platforms to remove explicit deepfakes without waiting for a court order, preventing their circulation and reducing repeated victimization.

President Joe Biden shakes hands with former Fox News anchor Gretchen Carlson before signing into law H.R. 4445, the Ending Forced Arbitration of Sexual Assault and Sexual Harassment Act of 2021, with Vice President Kamala Harris at the White House on March 3, 2022. (Jim Watson / AFP via Getty Images)

At the federal level, Section 230 of the Communications Decency Act grants platforms immunity from liability for user-generated content, allowing them broad discretion in setting content moderation standards. Recent proposals to amend the provision are gaining traction, partly due to bipartisan efforts like the Intimate Privacy Protection Act, which seeks to curb protections for tech companies that fail to remove intimate deepfakes. Trump has also repeatedly called for the repeal of Section 230. In this context, a potential overhaul would likely aim to limit social media platforms’ ability to “censor” certain political speech, since Republican candidates have alleged that social media companies crack down on right-wing views. Carving that kind of protection into Section 230 could inadvertently increase misogynistic and sexist content online.

The current disconnect between inadequate legal protections and the many ways data is weaponized to harm women is at the heart of a new report published by New America. Through the cases of sexualized deepfakes and non-consensual intimate images, defamation campaigns riding on misogyny, surveillance software and Internet of Things (IoT) devices deployed to exert control, and data breaches in which abortion records become a bargaining chip in ransom negotiations, the paper sends a clear message: Data is neither neutral nor equal. How it is collected, capitalized on and used to coerce and harm victims intertwines with existing power structures. Technology that was meant to serve and protect us is becoming a tool that undermines progress on gender equality.


It is telling that reports of stalkerware use surged by 780 percent during the coronavirus pandemic, when people were confined to their homes. Stalkerware enabled abusers to monitor victims more easily and contributed to a disturbing rise in domestic violence. From purpose-built apps marketed as child protection tools to everyday IoT devices like doorbell cameras, fitness trackers and item-finding software that share location data, dual-use technologies and applications are increasingly repurposed for surveillance. Nearly any system that collects and shares location data can now be weaponized against its users, including vehicles that gather information about drivers, such as their routes and driving patterns. Digital surveillance can have a devastating impact on women, especially given the lack of robust legal or social protections against gender-based violence.

Indiscriminate attacks can also have a more devastating effect on women. Consider the lessons of the pandemic, when widespread disruptions to essential services hit women particularly hard. In the context of cyber operations that paralyze services, such as ransomware, a similar pattern could emerge in even more destabilizing ways. If a hospital’s network is hacked, delaying medical care or disrupting critical services, women may be especially affected. Not only do they make up a large portion of healthcare workers who bear the stress of keeping things running, but as caregivers to children, elderly or sick family members, they are often the ones who absorb the fallout from interruptions.

In sectors like energy, disruptions from a cyber incident might also harm women’s health outcomes more severely. If an attack were to cause widespread blackouts, hospitals and home healthcare equipment dependent on continuous power would be put at risk, hitting hardest those who rely on them, those caring for vulnerable populations, and those who already face barriers to access. Such cascading impacts define the cybersecurity landscape, and women would feel their compounding effects.

We still lack the gender-disaggregated data sets and testimonies needed to fully illustrate the breadth and depth of how data exploitation affects people differently based on gender. However, recent research shows that the stakes for women in the digital age are high. Women’s experiences online are in many respects different from men’s, and our responses to technology and the law that governs it must reflect this reality.

In a world where data is increasingly used as a weapon, digital safety is not limited to protecting privacy; it is a fight for gender equality, dignity and security. If we do not act now, the cycle of harm will only deepen, trapping more women in its path.

About

Pavlina Pavlova is a #ShareTheMicInCyber fellow at New America in Washington, D.C., and a cybercrime expert at the U.N. Office on Drugs and Crime (UNODC) in Vienna.