Designing for the Metaverse: Can UX Make it Less Awkward?

After Mark Zuckerberg announced expansion plans in summer 2022 for Meta’s virtual reality platform, Horizon Worlds, critics took to social media. They compared the platform’s graphics to those of Second Life, an early 2000s life simulation game: The avatars were legless. The Eiffel Tower sat at the base of a bubbly, pixelated hillside. There was no way the design would be accepted by gamers who had seen what was possible in 3D open-world games like Elden Ring, Cyberpunk 2077, and Grand Theft Auto.

It was an unfortunate moment in the company’s quest to build a metaverse comprising several similar virtual spaces, but Meta and competing VR development companies are facing bigger issues than lackluster graphics. Augmented and virtual reality—the 3D technologies used to experience and navigate the metaverse—are often awkward and unintuitive. Interactions that are second nature in 2D need to be explained or redesigned when translated to 3D, and AR/VR usability guidelines are still in their infancy.

Although the metaverse user experience is limited, designers are well-positioned to enhance it. By embracing diegetic design and rethinking personalization, usability, and accessibility for 3D environments, designers can build immersive AR/VR worlds that users want to explore.

Designing for Interoperability With Diegetics

One of the main problems with the metaverse is that users can’t travel between different spaces as they can in the real world. Author Neal Stephenson coined the term metaverse in his 1992 sci-fi novel Snow Crash, in which programmable avatars interact in a single three-dimensional virtual space. But the metaverse as we know it today doesn’t consist of a single space; rather, it is a collective of separate spaces. Examples include:

  • Meta’s Horizon Worlds, a social universe that comprises 10,000 worlds that host events, games, and social activities.
  • Fortnite’s video game-based metaverse, which allows players to create and explore their own worlds.
  • Decentraland, a blockchain-enabled digital space where users can explore, create, and trade assets.

Ideally, the metaverse would be completely interoperable. Users would be able to navigate seamlessly between spaces and trade with their e-wallets and smart objects across virtual worlds, just as we do with our credit cards and personal items in the real world. However, metaverse interoperability isn’t yet possible because of technical differences and inconsistent 3D rendering between platforms. Yugal Joshi, leader of Everest Group’s digital, cloud, and application services research practices, has said he believes some platforms will even promote vendor lock-in. This would prevent virtual assets, such as digital currency and non-fungible tokens (NFTs), from being transferred between spaces.

Solutions like more sophisticated NFTs that allow for revenue-sharing agreements are still in the works, but there are things that designers can do to help users move through the metaverse as it exists right now. For instance, Metropolis magazine outlines an eight-point metaverse design manifesto for creating consistent, legible portals to take users seamlessly across services and entertainment options. The manifesto’s authors, Lara Lesmes and Fredrik Hellberg, suggest that diegetics—sensory cues and graphics existing within the narrative of a virtual world—will play an outsize role in the metaverse. For the metaverse to feel natural and immersive, they write, designers will need to develop “a new grammar of material behaviors, graphics, and signs.” Blue hyperlinks and 2D graphics, such as navigational icons, will be replaced with familiar architectural symbols, such as doors or tunnels, that will lead users to new worlds.

A gray-scale desert landscape: Three-dimensional sand dunes fill the bottom half of the frame, and at its center an open door reveals a galaxy of blue hues dotted with twinkling stars.
Diegetics can immerse the user in the metaverse experience. Familiar architecture, such as doorways, makes interoperable metaverse design feel natural.

But this is no easy feat. By now, navigational cues to visit a homepage (home image), read a notification (bell image), or search a website (magnifying glass image) have become so ubiquitous across platforms that users read them almost unconsciously. But seeing these 2D cues in a 3D world may look clumsy and derivative. Instead, new visual, tactile, and auditory cues will need to be developed to help users anticipate where portals will lead, what actions they can perform with controller movements or hand gestures, and what responses these actions will have.

“[Diegetics] makes everything much more immersive because you feel you’re inside of the world,” says Hugo Barbera, a senior UI designer at Toptal and the founder and creative director of In Crime Content, a Barcelona-based visual accelerator. “Instead of a UI-heavy interface with hints of ‘how many bullets do I have,’ ‘how many likes do I have,’ and ‘which direction do I need to go,’ the sounds and objects will orient users.”
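
To make the doorway idea concrete, here is a minimal TypeScript sketch of a diegetic portal: a doorway object standing in the scene that, once the avatar steps through it, hands the user off to a destination world. Every type, name, and URL here is illustrative; no specific engine or platform API is assumed.

```typescript
// A minimal sketch of diegetic portal navigation: instead of a 2D menu or
// hyperlink, a doorway standing in the scene is the link to another world.
// All names and URLs are illustrative; no engine or platform API is assumed.

interface Vec3 { x: number; y: number; z: number; }

interface Portal {
  position: Vec3;         // where the doorway stands in the current world
  radius: number;         // how close the avatar must come to pass through
  destinationUrl: string; // the world this doorway leads to
  label: string;          // diegetic signage rendered above the door
}

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// Called once per frame with the avatar's position; returns the portal the
// avatar has stepped into, if any.
function checkPortals(avatar: Vec3, portals: Portal[]): Portal | undefined {
  return portals.find((p) => distance(avatar, p.position) < p.radius);
}

// Usage: a door leading to a hypothetical gallery world.
const portals: Portal[] = [
  { position: { x: 0, y: 0, z: -5 }, radius: 1.2,
    destinationUrl: "https://example.com/worlds/gallery", label: "Gallery" },
];

const entered = checkPortals({ x: 0.3, y: 0, z: -4.5 }, portals);
if (entered) {
  console.log(`Passing through the ${entered.label} door to ${entered.destinationUrl}`);
  // A real client would begin its world-handoff transition here.
}
```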

Personalized On-screen Identity

Designed to make life more exciting and convenient, AI avatars, bots, and assistants are gaining prevalence in society and the metaverse. In Japan, these creations are commonplace in urban areas, appearing as reporters and TV anchors, servers, subway guides—and even safety police programmed to recognize and aid distressed citizens. Daniel Nisttahuz, a Toptal product and motion designer based in Tokyo, says avatars represent a significant cultural shift that will lay the groundwork for their adoption across virtual spaces.

The emotional connection between people and their AI-driven characters is also getting stronger. Among users who are accustomed to fashioning online personas, the metaverse holds promise as a playful simulacrum to experiment with new identities. Nisttahuz highlights how the metaverse could even offer a new way for introverts and those with social insecurities to make a living, shielding them from the emotional obligations of appearing before people in the flesh: “There are a lot of people that are very shy, they don’t want to show their faces, but at the same time, they need to make their living,” he says. For instance, Mayu Iizuka, a Japanese VTuber with millions of fans on YouTube, is the chirpy voice behind the animated character Yume Kotobuki, whose virtual identity has inspired Iizuka to make changes in her own life. “I am becoming more like Yume,” she told the South China Morning Post. “I used to balk at speaking in public, but Yume is such an experienced live-streamer that my identity as her has been helping me speak more confidently as Mayu Iizuka too.”

Avatars like the animated character Yume Kotobuki are tools for self-expression, but when designing for the metaverse, designers must be mindful of the dangers an avatar can pose to the person who assumes its identity.

Despite these benefits, tech made with good intentions can be misused. Computer scientist and ethicist Divine Maloney has written about avatars’ potential ethical and psychological threats, such as creating identities dissimilar to reality, encouraging anonymity to a degree that stunts emotional and social development, and adopting identities that exhibit dangerous and violent tendencies. In a paper in the journal International Cybersecurity Law Review, Ben Chester Cheong notes the potential legal issues raised by metaverse avatars, emphasizing the challenge of protecting users’ rights and imposing liability using existing legal concepts: “If an avatar steals a digital Gucci handbag in the metaverse, this would involve issues relating to property rights, theft, and intellectual property law,” Cheong writes.

So what’s the solution? Product designer Nick Babich, editor-in-chief of UX Planet, has suggested that striking the right balance between reality and abstraction is the key to creating positive connected experiences that feel natural and relatable. In virtual spaces, avatars are constructed to visually communicate aspects of a user’s identity and how they want to be perceived. Babich emphasizes that users need to be able to customize their avatars’ attributes—such as skin tones, hairstyles, and clothing—to make avatars look more natural and encourage users to connect with them.
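
As a rough illustration of the kind of customization model Babich describes, the sketch below defines avatar attributes and merges a user’s choices over sensible defaults. The attribute names and values are hypothetical.

```typescript
// A sketch of an avatar customization model. Attribute names, asset ids,
// and default values are all hypothetical.

interface AvatarAppearance {
  skinTone: string;  // hex color drawn from an inclusive palette
  hairstyle: string; // id of a hairstyle asset
  outfit: string;    // id of a clothing asset
}

const DEFAULTS: AvatarAppearance = {
  skinTone: "#8d5524",
  hairstyle: "short-curly",
  outfit: "casual-01",
};

// Merging user choices over defaults means every attribute is always set,
// so an avatar never renders in a broken, half-customized state.
function customizeAvatar(choices: Partial<AvatarAppearance>): AvatarAppearance {
  return { ...DEFAULTS, ...choices };
}

const myAvatar = customizeAvatar({ hairstyle: "long-braided" });
console.log(myAvatar); // { skinTone: "#8d5524", hairstyle: "long-braided", outfit: "casual-01" }
```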

Additionally, avatars should reflect human emotions in their facial expressions to make conversations less stiff and robotic. Eye gaze, blinking, and lip movement that emulate lifelike human gestures are all aspects to consider when designing avatars for the metaverse.

To realize such ideas, Oculus created a library of modular components called Interaction SDK for the 3D game engine Unity. The tool allows metaverse designers to use digital hands, gesture detection, and raycasting to accurately represent a range of human interactions. Meanwhile, Oculus, Veeso, and Emteq are developing facial expression technologies that use VR headsets’ face-tracking sensors to translate users’ eye gaze and mouth movements to their avatars.
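
Interaction SDK itself is a C#/Unity library, so the following is only a language-agnostic TypeScript sketch of the raycasting idea it relies on: casting a ray from the user’s pointing hand and testing it against interactive objects’ bounding spheres.

```typescript
// A language-agnostic sketch of raycast selection: a ray from the pointing
// hand is tested against each object's bounding sphere. (The real Interaction
// SDK is C#/Unity; everything here is illustrative.)

interface Vec3 { x: number; y: number; z: number; }

interface Interactable { name: string; center: Vec3; radius: number; }

function sub(a: Vec3, b: Vec3): Vec3 { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function dot(a: Vec3, b: Vec3): number { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Does a ray from `origin` along (normalized) `dir` hit the object's sphere?
function rayHits(origin: Vec3, dir: Vec3, obj: Interactable): boolean {
  const toCenter = sub(obj.center, origin);
  const along = dot(toCenter, dir);                        // projection onto the ray
  if (along < 0) return false;                             // object is behind the hand
  const perpSq = dot(toCenter, toCenter) - along * along;  // squared perpendicular distance
  return perpSq <= obj.radius * obj.radius;
}

// Usage: a hand at the origin pointing straight ahead selects the button.
const button: Interactable = { name: "play-button", center: { x: 0, y: 0, z: -3 }, radius: 0.2 };
console.log(rayHits({ x: 0, y: 0, z: 0 }, { x: 0, y: 0, z: -1 }, button)); // true
```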

Usability Guidelines for 3D Worlds

As users familiarize themselves with new virtual realms, designers must establish a clear set of usability guidelines for AR/VR experiences. Companies like Apple and Google have articulated such guidelines for 2D experiences, including Apple’s Human Interface Guidelines, but comparable models for the metaverse are still limited.

Nielsen Norman Group has highlighted AR’s usability issues, including poor discoverability and findability, low-visibility instructions, and vague icons and signifiers. As for potential solutions, UX researcher Alita Joyce writes about Jakob Nielsen’s 10 usability heuristics applied to virtual reality: “The design should speak the users’ language. … Building on existing mental models helps users (correctly) predict interactions in a VR system.”

Applied to the metaverse, that could mean creating a movie theater experience whose virtual architecture mimics a real theater in 3D. Or it could mean enabling users in a virtual conference room to sketch on a virtual marker board, similar to what Meta’s Horizon Workrooms now offers—but with clearer controller feedback cues, so users don’t feel like mimes writing in thin air. The world’s interior elements should almost invisibly show users how to behave.

A video introducing Meta’s Horizon Workrooms. To increase usability, metaverse design should mimic real-life cues, such as writing on a marker board.
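
On the web stack, one way to supply the clearer controller feedback described above is the WebXR gamepad’s haptic actuators, pulsed briefly whenever the virtual marker contacts the board. Note that hapticActuators is a non-standard API whose support varies by browser and headset, and the collision wiring below is hypothetical.

```typescript
// Sketch: pulse the controller when the virtual marker touches the board, so
// writing doesn't feel like miming in thin air. hapticActuators is
// non-standard; support varies by browser and headset.

// Minimal structural type for the slice of WebXR's XRInputSource used here.
interface HapticInputSource {
  gamepad?: {
    hapticActuators?: Array<{ pulse(intensity: number, durationMs: number): Promise<boolean> }>;
  };
}

function pulseOnContact(input: HapticInputSource): void {
  const actuator = input.gamepad?.hapticActuators?.[0];
  // A short, light pulse reads as a "tap" against the board surface.
  actuator?.pulse(0.6, 50); // 60% intensity for 50 ms
}

// Hypothetical wiring: the collision system would call this each time the
// marker tip intersects the board plane.
// onMarkerTouch((input: HapticInputSource) => pulseOnContact(input));
```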

Accessibility in the Metaverse

Global accessibility adviser Iulia Brehuescu says that if the metaverse is to be truly immersive, it will need to become accessible to the more than one billion people who struggle to interact with digital content because of visual, auditory, and physical disabilities. Some progress has indeed been made: Early complaints of dizziness caused by the perceptual discord between a user’s simulated visual field and their actual body movements are mostly resolved. Today, most VR headsets have image refresh rates above 90Hz, shortened pixel fade times, and external and internal rotation tracking devices that help regulate sudden positional shifts. Hardware has caught up with the human eye.

Nevertheless, the metaverse is highly dependent on graphics, and designers must figure out how to portray this virtual world to users with vision impairments. Filipe Arantes Fernandes, a systems and computing engineering PhD student at the Federal University of Rio de Janeiro, proposes several design solutions for rethinking vision-related accessibility for the metaverse, such as text magnification, font enlargement, contrast adjustment, color inversion, and control-display ratio changes.
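
A sketch of how the vision-related adjustments Fernandes lists might be grouped into a settings model, assuming hypothetical field names and preset values:

```typescript
// Sketch of vision-related accessibility settings. Field names, defaults,
// and the preset values are illustrative, not from any real platform.

interface VisionSettings {
  textMagnification: number;   // multiplier applied to all in-world text
  fontScale: number;           // base UI font-size multiplier
  contrastBoost: number;       // 1.0 = unchanged; >1 raises contrast
  invertColors: boolean;       // switch to an inverted, high-contrast palette
  controlDisplayRatio: number; // virtual movement produced per unit of physical movement
}

const defaults: VisionSettings = {
  textMagnification: 1.0,
  fontScale: 1.0,
  contrastBoost: 1.0,
  invertColors: false,
  controlDisplayRatio: 1.0,
};

// A "low vision" preset: bigger text, stronger contrast, and a higher
// control-display ratio so small physical motions produce larger virtual ones.
const lowVisionPreset: VisionSettings = {
  ...defaults,
  textMagnification: 2.0,
  fontScale: 1.5,
  contrastBoost: 1.4,
  controlDisplayRatio: 1.6,
};
```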

But how can users with profound visual impairment successfully navigate virtual spaces? Some work has already been done to overcome these challenges with touch and haptic feedback. For example, Microsoft Research’s Canetroller is a haptic controller that simulates white cane interactions, enabling blind users to navigate a virtual environment by leveraging their cane skills.

Microsoft’s Canetroller enables users with visual impairments to navigate virtual reality with haptic and auditory cane simulation.

To lend realism to 3D experiences for users with slight to moderate auditory impairment, designers need to leverage spatial audio, says Barbera. In a crowded metaverse chat room, workspace, or virtual concert, speakers’ voices will need to be tracked directionally and by their distance from other users. For users with profound hearing loss, Fernandes suggests using closed-caption file formats that allow captions to be embedded in online videos.
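
In browser-based virtual spaces, the Web Audio API’s PannerNode provides exactly this kind of directional, distance-aware voice placement; engine-specific audio systems expose equivalent controls. A minimal sketch:

```typescript
// Sketch: positioning a speaker's voice in 3D with the Web Audio API.
// A PannerNode with the HRTF model makes a voice sound as if it comes from
// a particular direction and distance relative to the listener.

const ctx = new AudioContext();

function spatializeVoice(voice: MediaStream, x: number, y: number, z: number): PannerNode {
  const source = ctx.createMediaStreamSource(voice);
  const panner = new PannerNode(ctx, {
    panningModel: "HRTF",     // head-related transfer function: true 3D cueing
    distanceModel: "inverse", // voices fade naturally with distance
    positionX: x,
    positionY: y,
    positionZ: z,
  });
  source.connect(panner).connect(ctx.destination);
  return panner;
}

// As the speaking avatar moves, update the panner so the voice follows it:
// panner.positionX.value = avatar.x; (and likewise for y and z)
```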

Making the metaverse accessible for users with mobility and motor control issues remains a challenge. For instance, older adults and those with chronic pain might not be able to wear bulky devices and headsets for long periods. But there is progress. Fernandes mentions advances in VR software designed for users with physical disabilities, like WalkinVR, which allows users with various neurological diseases to operate their virtual avatar using controllers instead of physical locomotion.

Our digital paradigm is firmly established: Users expect instant access to information and intuitive experiences in which they can easily carry out their goals. It’s unclear whether the metaverse will mature beyond its current awkwardness and become an interoperable universe where users can freely travel and transact across varied spaces. For that to happen, the companies creating these spaces must prioritize metaverse UX design and employ designers who creatively incorporate diegetic design cues, personalization, and accessibility features.

Editor’s Note: Thanks to Toptal network member Radu Anghel for contributing design insights to this story.
