ECGC: Virtual Reality Gaming - Overcoming Usability & Input Hurdles

By Steve Burke. Published April 30, 2014 at 6:00 am

Behind technological and monetary hurdles, one of virtual reality's biggest impediments to market has been usability. We worked with Sixense and Oculus VR after PAX Prime on an article detailing the history and future of VR & AR technology, where some of these difficulties were discussed; since that posting, Oculus VR has engineered a 1080p, relatively low-latency version of its headset, and Facebook has acquired the company and its Rift. A lot has changed in a few months.

At ECGC 2014 -- the same place we filmed our interview with Morrowind's Ken Rolston -- we managed to catch a virtual reality panel hosted by NextGen Interactions' Jason Jerald. The panel discussed usability and input hurdles in virtual reality, information conveyance, fun VR experiments featuring virtual pits and scared players, and the future of VR. A video of the panel can be found below, but I've picked out a few key highlights for those who'd rather read a quick recap.

On the Issue of Information Conveyance 

"Information conveyance" is a critical component of the video gaming world, but it plays a special role in the realm of virtual & augmented reality gaming. As a general definition, information conveyance within gaming is the oft-subtle guidance given to players by means of cues planted in the game; these cues can include things like stairs to guide players upward in a level (players almost always go up given the opportunity), pillars and columns to guide them forward, or UI elements to very directly tell the player where to go. There are different applications and use cases for subtle and not-so-subtle guidance; which to use ultimately hinges on what the developer is attempting to achieve.

We have means to guide a player through levels in traditional games, but problems emerge in virtual reality. When wearing a head-mounted display (HMD), the player is suddenly more immersed in the atmosphere -- a feeling of "presence," as Oculus VR describes it -- and much of traditional UI design gets thrown away. This is coupled with a loss of sight of the physical input devices, splitting the player's mind between two realities. The UI must now reflect the 3-dimensional nature of the surrounding game world: text has to be displayed in proper 3D and can no longer be flatly overlaid atop the world. From an immersion standpoint, a fixed, hovering UI with no frame of reference (like a HUD would provide) is no longer a desirable option. UI element placement also changes: elements pushed too far into the corners make it difficult to keep tabs on player status without physically looking away from the action; pulling placement in toward the center makes sense, but there's a fine line beyond which the UI conflicts with gameplay -- not to mention compatibility with non-HMD output. HUDs resolve a lot of this while adding to immersion, but not every game should feel like a mech suit or cockpit encompassing the player.
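To make "proper 3D" UI placement a little more concrete, here's a minimal sketch (ours, not anything shown at the panel -- all names and values are illustrative) of anchoring a HUD panel in world space a fixed distance in front of the player's head, rather than flatly overlaying it on the view:

```python
import math

def hud_anchor(head_pos, head_yaw, distance=1.5, drop=0.3):
    """Place a HUD panel a fixed distance in front of the head and
    slightly below eye level, giving the UI a position in the 3D
    world instead of a flat 2D overlay.

    head_pos: (x, y, z) of the player's head in world space.
    head_yaw: heading in radians (0 = facing +x).
    """
    x = head_pos[0] + distance * math.cos(head_yaw)
    y = head_pos[1] + distance * math.sin(head_yaw)
    z = head_pos[2] - drop
    return (x, y, z)

# Head at eye height 1.7 m, facing +x: the panel sits 1.5 m ahead,
# 0.3 m below eye level.
panel = hud_anchor((0.0, 0.0, 1.7), 0.0)
```

A real implementation would typically also smooth the panel's motion so it lags slightly behind head turns, but the core idea -- the UI lives in the world, at a depth the eyes can comfortably converge on -- is captured above.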

Star Citizen's HUD from the PAX demo we covered.

Getting Lost in Virtual Space & Input Challenges

In his discussion panel, Jerald suggests that game developers can adopt new solutions that fit VR gaming. Jerald's brief demonstration showed that UI elements could stem from the player character's hands; in this example, the player physically looks down (while wearing the Rift) at his or her hands to view the control scheme. Jerald also experimented with a simple arrow element in the UI that I had a chance to test at NextGen Interactions' booth. When looking down at my feet in the demo, I could see a compass that helped with navigation and prevented the "lost in space" feeling that VR sometimes creates; this ultimately aids the player's passive ability to navigate a space that fills a significant portion of their field of view -- one that can be thought of as "wrapping around your head."
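The foot-compass idea boils down to a relative bearing: which way should the needle point, given where the player is facing? A rough sketch of that math (our illustration under assumed conventions -- yaw 0 facing +x, angles in radians -- not code from the demo):

```python
import math

def compass_bearing(player_pos, player_yaw, target_pos):
    """Angle from the player's current facing to the target,
    wrapped to [-pi, pi]. This is the angle a compass needle at
    the player's feet would display: 0 = dead ahead, positive =
    to the left under these conventions."""
    dx = target_pos[0] - player_pos[0]
    dy = target_pos[1] - player_pos[1]
    bearing = math.atan2(dy, dx) - player_yaw
    # Wrap into [-pi, pi] so the needle takes the short way around.
    return math.atan2(math.sin(bearing), math.cos(bearing))

# Facing +x with the objective directly to the left: needle reads +90 degrees.
needle = compass_bearing((0.0, 0.0), 0.0, (0.0, 1.0))
```

Rendering the needle at the player's feet (rather than pinned to the view) is what keeps it a passive, glanceable cue instead of a screen overlay.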

On a somewhat similar note, Jerald explored the issue of using look-based movement in conjunction with more standard analog or mouse input. The issue, he noted, is that game designers need to decide whether the player's head movement turns the entire body of the character or just the head; and if it's just the head, does the character continue running forward while looking sideways, or does the body turn to run in the direction of the character's gaze? Jerald noted that this is largely up to designers and doesn't necessarily have a "right" answer, but he did explain that offering both look-based and analog input can yield confusing directionality for players. Planting a couple of arrows on the UI tells the player which direction they're going to move (regardless of where the character's head is looking) and serves as another example of expanded information conveyance within virtual spaces.
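The two schemes Jerald contrasted -- body-relative movement with free head look versus gaze-directed movement -- can be sketched in a few lines (our illustration; the panel didn't show code, and the names here are hypothetical):

```python
import math

def move_direction(body_yaw, head_yaw, scheme):
    """Yaw (radians) the character actually moves in.

    scheme="body": the head looks around freely; forward input
                   follows the body's facing.
    scheme="head": forward input follows the player's gaze.
    """
    return head_yaw if scheme == "head" else body_yaw

def step(x, y, body_yaw, head_yaw, speed, scheme):
    """Advance the character one 'forward' input step."""
    yaw = move_direction(body_yaw, head_yaw, scheme)
    return x + speed * math.cos(yaw), y + speed * math.sin(yaw)

# Body faces +x (yaw 0) while the player looks 90 degrees to the left:
step(0.0, 0.0, 0.0, math.pi / 2, 1.0, "body")  # keeps moving along +x
step(0.0, 0.0, 0.0, math.pi / 2, 1.0, "head")  # veers off along +y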

All of these items are generally already present in some form when the game lends itself to a seated position. Racing games, mech combat, and space simulators (like Star Citizen - FPS interview here) have the benefit of naturally needing some sort of in-game HUD anyway, and these HUDs offer additional immersion alongside the usual UI information. The question now is how to bridge some of this to other first-person games without destroying the experience.

Some Fun: Testing How Users Respond to Tangible Objects & Virtual Cliffs 

Jerald recounted several virtual reality experiments conducted by research centers and universities. Some of these might be in the video, though we got into most of the detail in a post-presentation meeting.

Of the experiments, one introduced physical touch to the world of virtual reality, engaging more of our real senses within the "game" world. The test dates back to the 1990s -- well before the Rift was conceived -- and was conducted at UNC Chapel Hill.


Participants would don a wired HMD and enter a padded room. This sort of merges the concept of augmented reality with virtual reality, as the virtual room is modeled after the physical, real room wherein the experiment takes place. Players are free to walk around the room and interact with objects; as they move, their movement is tracked and mirrored in the virtual world -- the player has no visibility of anything outside the virtual world. UNC experimenters placed foam objects in the real environment in the same locations as objects within the virtual space; a couch, for instance, would be represented by a foam stand-in that participants could touch.

"It really blew their minds," Jerald told us, "when they were told to reach out and touch an object, and then they could feel it, it really made them feel that much more 'there.'"

Participants would next be guided to a doorway (it wasn't clear whether a door was present in the test) leading to a "pit room." Yes, it's as evil as it sounds: a virtual pit was carved into the floor of the virtual world, and because players had just been trained that virtual reality is reflected in reality (by nature of the foam objects), they suddenly felt real fear of the pit. Heart rate monitors and other biometric readers were attached to participants (reading sweat on palms, for instance), and researchers saw spikes in each of these measurements upon entering the pit room. Jerald noted that, for the most part, participants understood that logically there wouldn't be a pit there, but they still felt a fear response. Pushing this further, researchers built a "plywood cliff" leading into the room; participants stood atop this raised platform (1-2 inches above the virtual 'cliff') and could test the ledge.

"The researchers would say, 'OK, if you don't believe it, why don't you hang your toes over the edge?'" Jerald recounted, "and then they'd really freak out!" Jerald then told a story of one researcher who'd crawled on his hands and knees to the ledge.

This was conducted back in the 90s with some of the best technology available, but graphics and hardware weren't exactly where they are today. What could be accomplished with today's software and hardware advancements is staggering to imagine. That an HMD which likely cost tens of thousands of dollars -- if not more -- can be outclassed on every level by a $300 Rift development kit shows how far we've come in just twenty years.

If this interests you, we'd encourage you to check out our history of virtual reality article.

- Steve "Lelldorianx" Burke.

Steve Burke

Steve started GamersNexus back when it was just a cool name, and now it's grown into an expansive website with an overwhelming amount of features. He recalls his first difficult decision with GN's direction: "I didn't know whether or not I wanted 'Gamers' to have a possessive apostrophe -- I mean, grammatically it should, but I didn't like it in the name. It was ugly. I also had people who were typing apostrophes into the address bar - sigh. It made sense to just leave it as 'Gamers.'"

First world problems, Steve. First world problems.

