While Meta has been trying to convince the general public to embrace virtual reality since 2016, the firm does not yet seem to have solved the problems of violence and toxicity already observable in today's online worlds, and tomorrow in the Metaverse. What are the existing moderation systems? Should they be applied to the Metaverse?
Article written following a professional thesis completed as part of the Master IDE Gobelins / CNAM-ENJMIN, in collaboration with Diversion cinema.
In a post on Meta’s blog published on September 27, 2021, Andrew Bosworth (Vice President of Meta’s VR and AR branch) and Nick Clegg announced the upcoming challenges of Meta’s Metaverse: the list includes “safety” as well as “equity and inclusion”. On November 12, 2021, the same Andrew Bosworth sent an internal note, revealed by the Financial Times, calling for “Disney-level safety” (that is, moderation and safety worthy of a Disney film) and warning that virtual spaces can be a toxic environment for women and so-called “minority” identities. According to Bosworth, this unresolved issue could be an existential threat to Meta’s ambitious plan, as it could keep the general public away from the Metaverse.
Having worked for five years in the XR industry, in particular at Diversion cinema where the inclusivity and accessibility of VR are central topics, I have real concerns about how the general public will experience the “Metaverse”. Virtual sexual harassment, inappropriate content, and abusive behavior are rampant online: are current moderation systems adequate for digital life in the Metaverse?
Why is online toxicity a problem?
Online toxicity, on social networks or in multiplayer games, is already widely denounced. Harassment, insults, doxxing: these so-called “disruptive” behaviors are legion online. According to the ADL survey, nearly 74% of gamers have experienced such behavior in their digital life, and a quarter of gamers have had their personal information published online without their consent (doxxing).
Disruptive behavior has a real impact: “23% of harassed gamers become less social and 15% feel isolated because of in-game harassment. One in ten gamers has depressive or suicidal thoughts because of harassment in online multiplayer games, and nearly one in ten (8%) takes steps to reduce the threat to their physical safety.”
As Andrew Bosworth argues, online toxicity creates a hostile environment for those it targets, even behind an avatar: this is the case for women, who cannot always hide their gender in voice chat. Added to this is embodiment (the incarnation of the user’s body in the virtual world): as early as 2016, Jordan Belamire recounted in a blog post the virtual groping she experienced in QuiVR, a multiplayer VR game. Beyond the harassment itself, which prevents victims from fully enjoying a VR experience, it was how real the groping felt that marked Belamire. Julia Carrie Wong, a journalist at The Guardian, asks in her article: “What will we do when virtual abuse feels as real as physical assault?” Simulation and embodiment in online worlds, amplified by new technologies, put users at even greater risk.
What are the existing moderation systems?
To address these risks, it is worth studying the current moderation systems in what can be considered the roots of the Metaverse: MMORPGs (massively multiplayer online games), social VR, and even social networks. What are these tools? Should we draw inspiration from them?
Following Yasaman Farazan’s work, here is the beginning of a typology of these systems, along with a critical analysis of their impact on the player experience:
1/ Mute the user
This action consists of no longer receiving audio from a player. It is a moderation system widely used by players. Riot, the publisher of League of Legends (a game known for its toxicity problems), has run experiments on the subject by muting cross-chat (the channel between opposing teams). Exchanges were then measured to be 33% less toxic. However, muting also pushes victims to isolate themselves, which in the long run compromises the game experience and its lifetime value (how long players keep playing).
I’m afraid that if we don’t make a bigger commitment here […] we will stop at providing “tools” to self-isolate, rather than promoting a fair and competitive experience for everyone. So I will make that commitment. […] We have learned to mute the harassment. We have learned to mute ourselves in order to keep the peace. As a result, we have a competitive experience that can feel compromised. We often find ourselves at a disadvantage.
Anna Donlon, producer on Valorant at Riot (via gamerant.com)
According to Anna Donlon, muting tends to disadvantage harassed players while having no impact on those who engage in disruptive behavior.
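To make that asymmetry concrete, here is a minimal sketch (in TypeScript, with hypothetical names) of how a client-side mute list typically works: the harassed player maintains the list and does all the work, while the muted player is never notified and changes nothing.

```typescript
// Minimal sketch of a client-side mute list (hypothetical API).
// Each client keeps a set of muted user IDs and drops their audio locally;
// the muted player is never notified, and their behavior is unchanged.
class MuteList {
  private muted = new Set<string>();

  mute(userId: string): void {
    this.muted.add(userId);
  }

  unmute(userId: string): void {
    this.muted.delete(userId);
  }

  // Called for every incoming voice packet: returns true if it should play.
  shouldPlay(senderId: string): boolean {
    return !this.muted.has(senderId);
  }
}

// Usage: the victim does the filtering; the harasser feels nothing.
const myMutes = new MuteList();
myMutes.mute("toxic-player-42");
console.log(myMutes.shouldPlay("toxic-player-42"));  // false
console.log(myMutes.shouldPlay("friendly-player-7")); // true
```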
2/ Activate your space bubble
First deployed by the QuiVR team following Jordan Belamire’s blog post mentioned above, the space bubble is a measure specific to virtual reality worlds: it prevents other users from crossing an intimate boundary (usually around 1 m). Adapted to embodiment, this measure has since been integrated into the latest update of Meta Horizon under the name Personal Boundary. It reduces the risk of physical aggression, but in the long run it could also reduce the opportunities for immersion that embodiment allows for the players concerned.
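A minimal sketch of the underlying check, in TypeScript (illustrative, not Meta’s actual Personal Boundary code): each frame, the client measures the distance to other avatars and hides or pushes back any avatar that crosses the intimate radius.

```typescript
// Minimal sketch of a space bubble check (hypothetical names).
// If another avatar crosses the ~1 m intimate radius, the client hides it
// or pushes it back, so unwanted contact cannot be embodied.
interface Vec3 { x: number; y: number; z: number; }

const BUBBLE_RADIUS_M = 1.0; // intimate distance, roughly one meter

function distance(a: Vec3, b: Vec3): number {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// True when the other avatar should be hidden or pushed outside the bubble.
function violatesBubble(me: Vec3, other: Vec3): boolean {
  return distance(me, other) < BUBBLE_RADIUS_M;
}

// Usage: an avatar at 0.5 m triggers the bubble; one at 2 m does not.
console.log(violatesBubble({ x: 0, y: 0, z: 0 }, { x: 0.5, y: 0, z: 0 })); // true
console.log(violatesBubble({ x: 0, y: 0, z: 0 }, { x: 2, y: 0, z: 0 }));   // false
```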
3/ Player rating/status system
What if we only interacted with people we trust? This is the proposal behind status or rating systems. VRChat has developed its trust system in this vein: users can compartmentalize their social interactions according to the status of other players (automatic mute, visible or invisible avatar).
It is a true “à la carte” moderation system, close to social networks, where you only see your friends’ interactions. This system, which has proven effective, also tends to reproduce the inequalities of the real world. Users only exchange with people similar to themselves in order to avoid friction, which can create what Eli Pariser calls filter bubbles. Far from a virtual city open to the world, the Metaverse would become an echo chamber reinforcing each user’s own views.
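Here is a minimal sketch of such a rank-based policy, in TypeScript; the rank names and effects are assumptions for illustration, as the real VRChat system is more granular.

```typescript
// Minimal sketch of a trust-rank interaction policy (hypothetical names).
// Each player chooses, per rank, what they are willing to receive.
type TrustRank = "visitor" | "new" | "user" | "known" | "trusted";

interface InteractionPolicy {
  hearVoice: boolean;  // automatic mute below a chosen threshold
  showAvatar: boolean; // invisible avatar for untrusted users
}

const myPolicy: Record<TrustRank, InteractionPolicy> = {
  visitor: { hearVoice: false, showAvatar: false },
  new:     { hearVoice: false, showAvatar: true },
  user:    { hearVoice: true,  showAvatar: true },
  known:   { hearVoice: true,  showAvatar: true },
  trusted: { hearVoice: true,  showAvatar: true },
};

function canHear(rank: TrustRank): boolean {
  return myPolicy[rank].hearVoice;
}

console.log(canHear("visitor")); // false: strangers are muted by default
console.log(canHear("trusted")); // true
```

The filter-bubble effect is literally encoded here: below a chosen rank, a stranger is never heard at all.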
4/ Report the user
With no direct effect of its own, reporting consists of notifying the “decision-makers” of disruptive behavior in the virtual world. A reason can sometimes be specified, but applying any measure is the sole responsibility of the decision-maker, and the person who reported gets little follow-up (a minimal sketch of what a report might carry follows the list below). This rather opaque system raises the question of who makes the law and dispenses justice:
- Is it a computer system, with an AI subject to algorithmic bias?
- Is it the developers or employees, at once judge and interested party, combining executive, legislative, and judicial powers?
- Or is it the community, capable of a more contextual and adapted judgement, but which sometimes tends to succumb to the logic of the guillotine – reproducing violence, without seeking to transform the defendant?
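As announced above, here is a minimal sketch of what a report might carry, in TypeScript with hypothetical field names; note what is missing: the reporter only gets an opaque ticket back, which is precisely the opacity problem described above.

```typescript
// Minimal sketch of a report payload (field names are assumptions).
interface AbuseReport {
  reporterId: string;
  targetId: string;
  reason: "harassment" | "hate_speech" | "doxxing" | "other";
  comment?: string;  // optional free-text context
  worldId: string;   // where the behavior occurred
  timestamp: number; // Unix ms
}

function submitReport(report: AbuseReport): string {
  // In a real platform this would be sent to a moderation backend;
  // here we only simulate receiving an opaque ticket ID, with no
  // visibility into who judges the report or what happens next.
  return `ticket-${report.targetId}-${report.timestamp}`;
}

const ticket = submitReport({
  reporterId: "player-7",
  targetId: "toxic-player-42",
  reason: "harassment",
  worldId: "plaza-01",
  timestamp: Date.now(),
});
console.log(ticket); // the reporter's only trace of the process
```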
5/ Forcibly remove / ban the user from the space
The ban prevents players with disruptive behavior from returning to the game space. It is a moderation measure, sometimes temporary, sometimes permanent. However, banning tends to break up communities.
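A minimal sketch of the mechanics, in TypeScript with hypothetical names: a ban is simply an entry with an optional expiry, and a missing expiry means permanent exclusion.

```typescript
// Minimal sketch of temporary vs. permanent bans (hypothetical names).
interface Ban {
  userId: string;
  expiresAt?: number; // Unix ms; undefined means permanent
}

const bans = new Map<string, Ban>();

function banUser(userId: string, durationMs?: number): void {
  bans.set(userId, {
    userId,
    expiresAt: durationMs === undefined ? undefined : Date.now() + durationMs,
  });
}

function isBanned(userId: string, now = Date.now()): boolean {
  const ban = bans.get(userId);
  if (!ban) return false;
  if (ban.expiresAt === undefined) return true; // permanent exclusion
  return now < ban.expiresAt;                   // temporary, may have lapsed
}

// Usage: a 7-day ban vs. a permanent one.
banUser("repeat-offender", 7 * 24 * 60 * 60 * 1000);
banUser("permanent-case");
console.log(isBanned("repeat-offender")); // true, until the expiry passes
console.log(isBanned("permanent-case"));  // true, forever
```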
Kicking harassers out of the community at large is a possible route, but with VR, where communities are so small, you might instead consider educating and rehabilitating them, encouraging reflection.
Renee Gittins, executive director of the IGDA (International Game Developers Association)
If we take a closer look at the figures on online toxicity (Free to Play, ADL, p. 23), we discover that some of the harassed are also harassers. Sometimes perpetrators, sometimes victims: not all users with disruptive behavior can be banned permanently without seeing virtual-world communities shrink little by little, all while doing nothing to reduce recidivism or transform their behavior in depth.
Rethinking the justice of our (virtual) worlds
Looking at the current moderation systems of virtual worlds, we observe measures that tend to isolate the victim more than the “executioner”. We also notice that the redemption of the accused is never encouraged: they can be banned or lose their status, but they will not be supported, listened to, or guided towards the practices and codes to adopt. Without the community, victim and perpetrator are left to their own devices: one is forced into isolation, while the other is ostracized.
It is time to invent other solutions. It would be interesting to draw on alternative forms of justice, such as restorative justice, which gives power back to the victim, lets the accused speak, involves the community, sets aside the desire for revenge, and avoids the “eliminative” aspect of justice.
Moderation systems must reinvent themselves, and that means reinventing our conception of justice: seeing everyone as a member of the community who cannot be abandoned.