It’s a tricky time for ethics in the 21st century. Unlike the Ancient Greeks, the fathers of Western ethics, we tend to treat rules of morality with great suspicion. Beyond the legally protected boundaries of public order, in Western liberal democracies each of us is entitled to define our own ‘right and wrong’ through a unique prism of individual experiences. The market economy follows the same logic: as long as the laws are respected and no consumer is physically hurt in the process, one’s path to profit is no one else’s business.
As digital technologies become an integral part of our daily existence, the need to account for the less obvious, long-term impact of innovation is gaining recognition across tech ecosystems, including AI and robotics. The VR industry has been relatively indifferent to the ethical debate, but a number of recent developments are changing that. For one, Facebook’s growing presence in the immersive tech ecosystem has alarmed many. As the controversial tech giant steadily paves its path to mass-market domination through both hardware (Oculus Quest 2, Project Aria) and platform (Horizon) breakthroughs, an uneasy suspicion creeps in: is our immersive future in good hands? The answer is far from simple. The key concern is not just Facebook, or the rest of GAFAM, or any other XR company per se. It is, first and foremost, the technology itself.
For decades now, immersive technologies have captivated our imagination with their utopian promise: to overcome worldly limitations by creating a whole new dimension of reality with infinite possibilities. The idea of rising above the human condition is as old as humanity itself, but here’s the twist: the ‘escape’ enabled by virtual worlds is unprecedentedly powerful. With the right visual and auditory stimulation alone, our brains can be tricked into experiencing something as real despite knowing that it’s not. This quality, which makes VR invaluable for applications like training, is also potentially dangerous. Beyond the usual ethical challenges associated with gaming (violence, addiction, bodily neglect, etc.), we are dealing with a groundbreaking technology that, by blurring the line between the medium and the user, makes the latter identify with their virtual life on a profound level. Owing to what Nick Yee and Jeremy Bailenson called ‘the Proteus Effect’, whereby users adapt their behaviour to their avatar’s appearance, our virtual selves may gradually merge with and reshape our ‘real’ identities. As researchers and ethicists Michael Madary and Thomas K. Metzinger explain: “It is not excluded that extended interactions with VR environments may lead to more fundamental changes, not only on a psychological, but also on a biological level.” This is a mixed blessing: the very same VR mechanisms that can help users overcome racial bias and become more altruistic are equally capable of triggering immoral or degrading behaviours (for more details and references, see Madary & Metzinger’s ‘Real Virtuality: A Code of Ethical Conduct […]’).
Once adopted at mass scale, VR will contribute to a larger trend: digital technologies redefining what it means to be human. If things go well, we can imagine a future where the ultimate dream is all but fulfilled: the metaverse will unleash our creativity, mitigate inequality and enable us to finally experience the thrill of ‘real’ social interactions online. At the same time, it is hard to ignore the dystopian scenario in which virtually mediated existence desensitizes users to the feelings of others and/or lures them into abandoning the physical world altogether. The actual future is likely to be a mix of both, which is enough to suggest that, as an industry, we might want to take ethics in VR more seriously. All the seemingly trivial decisions taken today about world design, storytelling techniques, avatar representation, virtual codes of conduct and consent mechanisms will most definitely matter.
If we apply Robert Nozick’s classic ‘experience machine’ thought experiment from 1974 to the contemporary context, the key existential question on the agenda is no longer ‘Would humans want to plug into the pleasure machine for eternity?’ (most would, free trial option provided), but rather ‘How could we program the machine so that it preserves our humanity?’.
Here’s where we encounter a real challenge. So far, the (virtually unlimited) power to craft immersive futures has been delegated to private actors, which is problematic for several reasons. First, because there’s money involved. As explained in Kent Bye’s XR Ethics Manifesto, the fiduciary duties of tech companies towards their shareholders are often in conflict with their obligations towards users. This corporate paradigm becomes especially questionable, from both legal and moral standpoints, when the experience sold is able to rewire human cognition and the key commodity traded is users’ intimate data. According to Silicon Valley ‘whistleblowers’ like Tristan Harris, featured in the recent documentary The Social Dilemma, social platforms have purposefully adopted manipulative design choices to maximise engagement, often at the cost of damaging mental health and increasing social polarization. We have little reason to believe that things would be much different for social VR; quite the opposite, in fact. With direct insight into our bodies and surroundings, immersive tech devices will not just amplify our digital footprint, but make it even more granular and, as a result, prone to manipulation. The fact that Facebook requires an account-linked login for the Oculus Quest 2 testifies to this new kind of data being a true gold mine for ads-reliant tech companies, and an equally big privacy threat to users. Indeed, according to a recent study from Stanford’s Virtual Human Interaction Lab, the kind of data collected via immersive experiences may be more difficult to anonymise than previously thought. Clearly, business models and privacy frameworks inherent to ‘surveillance capitalism’ will need to be challenged at least to some extent if we are to innovate responsibly in VR.
Beyond questioning the moral integrity of a growth-oriented business mentality, we may also want to examine the other values that drive the people who build tech products. If we assume, as the evidence tells us we should, that frontier technologies are ‘existential’ (i.e. able to redefine our way of life), then it is only reasonable to interrogate the ideology of their creators. Regrettable cases of AI bias alone should be enough to incentivise a serious inquiry. Seen from the outside, the dominant ideology inherited from Silicon Valley cannot be classified as outright ‘immoral’, but it certainly seems one-sided. Incubated away from humanitarian doctrines in self-reinforcing social bubbles, tech revolutionaries are so driven by outmanoeuvring competitors and delivering innovation at all costs that they end up ignoring some of the essential factors that need to be considered for the sake of society at large.
This ethical shortsightedness could, however, be remedied by an appropriate corporate culture. Today, most tech employees are simply not provided with enough resources and coaching to engage in meaningful deliberation around ethical challenges. They could, for instance, benefit from more exposure to outsiders with meaningful things to say, notably from academia and from other industries with more experience in dealing with existential dilemmas. Given its novelty and fast pace of development, VR innovation is a beast too complex to be tamed single-handedly by the industry heavyweights. Ethical standards should ultimately be shaped by a multitude of voices, both inside and outside the VR ecosystem, to ensure they are well-rounded and fair.
The digitalised world of today is more complex than anything in human history. Defining what kinds of worlds to build and stories to tell in virtual realities is an intricate task. Figuring out how to democratise Big Data in a way that makes sense financially and morally is equally complicated. But the challenge should not discourage us. Ethics gives us an occasion to better inform ourselves about the world, to start asking the right questions and to collectively develop new frameworks for thinking about morality. In this quest, we may need to depart from an individualistic mindset and remember that collective intelligence is, and always has been, our best bet at getting it right.
Anna Valeri is a founder at Ethical Framework for XR (EF4XR), whose mission is to provide organisations with the knowledge and resources to innovate responsibly in XR.