Meta What? Meta Who: The Need for a Feminist Analysis of the Facebook Metaverse

It’s no surprise that Meta, the company behind Facebook and Instagram, isn’t protecting women and people of color from virtual abuse.


In December 2021, Nina Jane Patel, a vice president of metaverse research, wrote a Medium post about being gang raped in the metaverse. Patel said her avatar was assaulted by others with male voices who yelled “don’t pretend you didn’t love it” as she tried to get away:

“Within 60 seconds of joining — I was verbally and sexually harassed — 3–4 male avatars, with male voices essentially, but virtually gang raped my avatar and took photos. … A horrible experience that happened so fast and before I could even think about putting the safety barrier in place. I froze.”

The term “metaverse” was coined by science fiction writer Neal Stephenson in his 1992 novel Snow Crash. In it, Stephenson described a world in which people connect through virtual reality—a multi-sensory technological environment in which light photons simulate sight, acoustic inputs simulate sound, and tactile or haptic simulators make people feel like they are “seeing, hearing and touching” other avatars in the metaverse. In other words, every interaction feels virtually real. 

Mark Zuckerberg’s Meta responded last Friday, Feb. 4, almost three months after Patel’s post, by adding personal protection measures. Even so, I cannot help but draw parallels between how Patel was treated in the metaverse and the treatment of feminists during the Gamergate era.

Gamergate and Metaverse: Different Platforms, Same Old Misogyny

Gamergate was a year-long campaign during which male gamers attacked women and people of color seeking to diversify the gaming industry. It started in 2014, after a former boyfriend of game developer Zoë Quinn posted personal details about their relationship to 4chan. The posts quickly expanded to include nude pictures and details of her alleged sexual impropriety. Soon, an internet frenzy of largely white men began calling for Quinn to kill herself. The vitriol quickly spread to Anita Sarkeesian, a popular feminist critic of the representation of women in gaming, and eventually to any feminist who dared to defend the two women.

This period uncovered deep misogyny within the gaming industry—which is also found in the tech sector at large. This is the same through-line that led to the objectification of Nina Patel.

It’s No Surprise Meta Isn’t Protecting Women and People of Color From Virtual Abuse

A gang rape in the metaverse should not be a shock to those who have been paying attention—including its creators. Meta, the company behind the metaverse, ignored internal research last year on how Instagram, one of the companies in its portfolio, harms teenage girls. Before this, Bloomberg reported that women made up just 25 percent of Meta’s computing workforce. In 2020, a USA Today analysis showed Black women made up just 1.75 percent of Facebook’s U.S. workforce—the exact employees most likely to bring an intersectional lens to product design, with a personal stake in making the metaverse safe for women and nonbinary femmes.

Facebook Meta virtual reality
In 2020, Black women made up just 1.75 percent of Facebook’s U.S. workforce. (Creative Commons)

In 2019, I published a report with sociologist Jessie Daniels and computer scientist Darakhshan Mir, called Advancing Racial Literacy in Tech. Developers must have a cognitive understanding of how technical systems express racial bias, we argued, along with the emotional capacity to have effective discussions about how to dismantle racism. Once these two concepts are in conversation, developers can use the insights to create action plans to design anti-racist technologies.

This has already been proven at Meta, the company formerly known as Facebook: In 2019, the U.S. Department of Housing and Urban Development sued Facebook for enabling racial discrimination in housing. Investigators found Facebook algorithms stopped housing ads from appearing on the feeds of Black users—a result of so-called targeted advertising, in which advertisers could use filters that stopped users who checked the box next to “African American affinity” from seeing their ads. The fact that no one on the Facebook policy team ensured this practice complied with the 1968 Fair Housing Act suggests the company was not cognizant of the link between racism and housing. Following this, Meta removed its “ethnic affinity” targeting option as part of a settlement with civil rights groups.

However, in 2020 an investigation by The Markup found the company still ran discriminatory ads. And in 2021, a Facebook whistleblower provided evidence that Facebook’s race-blind content moderation policies harmed Black users. In his book Racism Without Racists, sociologist Eduardo Bonilla-Silva argues that institutions which refuse to acknowledge how racist attitudes shape their policies and practices allow racism to persist—as is the case with Meta. Patel’s case suggests the company may also be employing gender blindness—and it’s hurting women.

Making Meta Safer

In order for the metaverse to be safe for women, Meta has to design controls against nonconsensual touching, allow avatars to block citizens of the metaverse who seek to do them harm, and de-platform harmful users. Otherwise, the gender-based violence women experience in the analog world will become part of meta culture.

Power asymmetries do not fix themselves just because we put on headsets. In order to make the metaverse safe for women, engineers must actively set out to design anti-racist feminist virtual environments that protect the most vulnerable users, including poor women, Black women, nonbinary femmes, people living with disabilities, non-English speakers, immigrants, queer people and low-income people.

We can keep people from deeply marginalized communities safe in the metaverse—and only then will virtual environments be safe for us all. Ask the women who developed the Combahee River Collective Statement.



About

Mutale Nkonde (she/hers) is the founder of AI for the People, a communications firm that focuses on the future of racial justice by adding a technical analysis to how racial justice is discussed in journalism, television and film. She is also currently a master's candidate in the Department of American Studies at Columbia University, and she is an affiliate at the SAFE Lab at the Columbia Graduate School of Social Work.