Connecting Virtual Worlds: Hyperlinks in WebVR

WebVR development continues to happen at a feverish pace. And while much of the work has been around refining the aspects of the API that define how content is rendered into headsets and how input devices are managed, one of the most exciting recent developments is the added ability to navigate between WebVR experiences.

The story of the Web is really the story of the hyperlink. Without the hyperlink, the Web would not exist: it is the elemental piece that allows web pages to link from one to another, enabling users to move from one interest to the next in an organic and exploratory fashion. It’s how you end up eyeballs deep in some arcane subject that you never knew you were interested in — often way too late into the evening, when you should be sound asleep. It’s why we enjoy the Web.

This is exciting stuff! WebVR content will no longer have to exist as its own siloed experience; it can expand into larger, interconnected experiences.

In this article, we’ll dig into how this navigation behavior works and see a basic working example.

It’s still super early

Make no mistake; we’re still in the very early stages of what this will look like. The specification accounts for only the mechanical aspects of how you move from one experience to the next. So there’s no <a href="…"></a> markup, blue underlined text, or outlines just yet, nor has the interaction model been finalized.

For now, this leaves WebVR content free to express what a link looks like and how users interact with it. But more about that later.

Link traversal experience

View demo!
View source code on GitHub

We’re not going to cover how the scene itself is built (for that, see this), but we’ll focus on the code responsible for how content navigates and how we manage the VR rendering during the navigation.

Built by developer Erica Layton, this experience uses A-Frame (built on three.js) with a simple gaze-based cursor which, when aimed at a sphere and clicked, navigates the user to another WebVR scene (by setting window.location.href from JavaScript).

Try the demo here!

(Only works with Firefox for the time being, due to current user-gesture requirements in experimental Chromium WebVR builds.)
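The navigation trigger itself needs no special API. As a rough sketch (the data-href attribute and the element lookup below are illustrative assumptions, not the demo’s actual code), a clicked object can simply change the page location:

```javascript
// Hedged sketch: navigate to another WebVR scene when a linked object
// is clicked by the gaze cursor. The `data-href` attribute is an
// illustrative convention, not part of any specification.
function linkDestination(el) {
  // Read the target URL from a data attribute on the linked entity.
  return el.dataset ? el.dataset.href || null : null;
}

// Wire up the handler only when running in a browser with a DOM.
if (typeof document !== 'undefined') {
  var portal = document.querySelector('[data-href]');
  if (portal) {
    portal.addEventListener('click', function () {
      var dest = linkDestination(portal);
      if (dest) {
        // Navigating to another world is just a regular page navigation.
        window.location.href = dest;
      }
    });
  }
}
```

Because it is an ordinary navigation, everything the browser already does for links (history, referrers, security checks) comes along for free.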

In-headset impressions

Moving from page to page takes about 1-3 seconds, during which the headset displays black until the next scene is loaded and begins rendering content into the headset. As an early-stage experiment, it’s a serviceable experience, considering that the content being loaded and the pieces needed to support it (i.e., webvr-polyfill, three.js, A-Frame, etc.) must all currently load synchronously before the scene is entirely constructed, the WebGL content is rendered to a canvas, and finally the WebVR API renders the content to the VR headset.

There is still room for improvement in how VR content is displayed in the headset and in the timing of page navigations.

We’re just starting to scratch the surface with progressive loading of content, Service Worker caching (using the Cache API), and so forth.
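As a hedged sketch of what Service Worker caching could look like here (the cache name and asset list are assumptions for illustration, not the demo’s actual files), a worker could pre-cache a scene’s assets so that follow-up navigations avoid the network:

```javascript
// Hedged sketch: a Service Worker that pre-caches WebVR scene assets
// with the Cache API so that navigating between worlds avoids
// re-fetching heavyweight dependencies. File names are illustrative.
const CACHE_NAME = 'webvr-scene-v1';
const ASSETS = ['/', '/skyisland.html', '/js/aframe.min.js'];

function shouldCache(url) {
  // Cache known scene assets; everything else passes through.
  return ASSETS.some(function (asset) { return url.endsWith(asset); });
}

// Register handlers only when running inside a Service Worker context.
if (typeof self !== 'undefined' && 'caches' in self) {
  self.addEventListener('install', function (event) {
    event.waitUntil(
      caches.open(CACHE_NAME).then(function (cache) {
        return cache.addAll(ASSETS);
      })
    );
  });
  self.addEventListener('fetch', function (event) {
    event.respondWith(
      caches.match(event.request).then(function (cached) {
        return cached || fetch(event.request);
      })
    );
  });
}
```

Served from the cache, the heavyweight dependencies (webvr-polyfill, three.js, A-Frame) wouldn’t need to be re-fetched on every navigation between worlds, which should shave down that black-screen gap.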

How it works

Let’s dig deeper and see how navigation works in WebVR:

Diagram for WebVR link navigation

It’s worth noting that, as of this writing, the WebVR interfaces and events, including the exact mechanisms for navigation, are in flux. But today, traversing links does work (i.e., VR presentation resumes automatically upon navigation) in Firefox Nightly, and navigation should soon work consistently across all WebVR-capable browsers.

Here’s a quick walkthrough with pseudo-code snippets:

  1. On initial page load, we check whether we have navigated from WebVR content by querying the active referring displays, and, if so, automatically present (without any user-gesture requirement):
    // If there was a VR display to which content was previously being rendered,
    // use that VR display for rendering content upon page navigation
    // (assuming the `canvas` and other dependencies have been loaded and are ready).
    var canvas = document.querySelector('canvas#vr-canvas');
    navigator.vr.getReferringDisplays().then(function (displays) {
      if (!displays.length || displays[0].isPresenting) {
        return;
      }
      return displays[0].requestPresent([
        {source: canvas}
      ]);
    });
  2. If there is no active VR display to render to, we enumerate the available displays, set up the VR content, and provide a button on the page for users to enter VR mode:
    // Detect whether a VR display is available and accessible (e.g., permissions) on the user’s system.
    var vrButton = document.querySelector('button#vr-button');
    vrButton.disabled = true;
    navigator.vr.getAvailability().then(function (isAvailable) {
      vrButton.disabled = !isAvailable;
    });
    navigator.vr.addEventListener('availabilitychange', function (event) {
      vrButton.disabled = !event.value;
    });

    // When the button is clicked, present content to the first available VR display.
    vrButton.addEventListener('click', function () {
      navigator.vr.requestDisplays().then(function (displays) {
        if (!displays.length || displays[0].isPresenting) {
          return;
        }
        return displays[0].requestPresent([
          {source: canvas}
        ]);
      });
    });
  3. Render the VR content:
    function enterVR (display) {
      // Render a single frame of VR data.
      var onVRFrame = function () {
        // Schedule the next frame’s callback.
        display.requestAnimationFrame(onVRFrame);

        // Poll the `VRDisplay` instance for the current frame’s matrices and pose.
        // …
        // Your render loop code goes here for rendering viewports for the left and right eyes.
        // …

        // Indicate that we are ready to render the frame to the `VRDisplay`.
        display.submitFrame();
      };

      // Kick off the render loop.
      display.requestAnimationFrame(onVRFrame);
    }
  4. Handle navigation to another WebVR page.
    // Content can navigate to another page by simply modifying `window.location`.
    window.location.href = 'skyisland.html';  

So, what’s next?

Now we can change locations whilst remaining within the headset. But there are many unanswered questions that are important to consider as we define what the VR link will look like:

  • Transitions between content
  • Performance
    • Improving the time to first paint (avoiding the Web’s infamous "flash of white")
    • Identifying content as VR capable using a <meta http-equiv="…"> tag (and a respective server-side response header)
    • Loading transitions while application loads
  • Responsiveness (and progressive enhancement)
    • Link behavior and scene navigation in WebVR pages ought to be designed to accommodate various viewing environments (adapting to the platform and headset capabilities available)
  • Accessibility (for both humans and machines)
    • Differences in VR headset capabilities (e.g., roomscale with positional tracking, sitting vs. standing, gamepads with motion-control support, etc.)
    • Defining the behavior of the link in describing relationships between documents (e.g., <a rel="next" href="…">, other link types, response headers) and metadata (e.g., for search bots and scrapers - potentially advocating for the use of microdata and web app manifests)
  • Discoverability
    • The visual treatment and affordances that will help users identify links within a scene (possible solutions include concepts already familiar to Web users, such as blue outlines, labels, and cursor interactions)
  • Interactivity
    • How users will trigger navigation behavior (i.e., should this always be a standard and consistent interaction?)
  • Navigation history
    • Directionality (e.g., back and forward)
    • Browser-chrome UI for listing and revisiting visited sites
  • Links within worlds (in-page DOM anchors)
    • Focusable areas of content within the page (e.g., #anchor could move the camera to and/or expose a <div id="anchor"> element, or an event listener could be written in JavaScript to handle when the page’s hash changes [i.e., when hashchange fires])
    • Moving from place to place within a world using common UX patterns in VR (e.g., teleportation, interacting with objects, walking to/through hotspots/portals, etc.)
  • Security
    • User-gesture requirements
    • Identifiability: trustworthy ways for the User Agent (browser) to know where users are going before following a link (you would never book a flight on a 2D page from an unknown or untrusted site, so why would you do that in VR?)
    • Browser-chrome UI to present to the user the destination URL
  • Sharing
    • One of the key benefits of the URL is the ability to share a location of a page in a way that is universally understood by Web browsers, regardless of the platform and device
  • Permanency
    • WebVR will, at least initially (and hopefully only temporarily), inherit one of the long-standing issues with links: the destination page can be taken offline or malfunction (potentially, we can work with efforts to distribute and decentralize the Web so there are no single points of failure in WebVR)
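For the in-page DOM anchors mentioned above, one plausible approach (the element IDs and the camera-moving step are illustrative assumptions, not settled behavior) is to react when the URL hash changes:

```javascript
// Hedged sketch: respond to in-page anchor navigation (e.g.,
// skyisland.html#tower) by listening for hash changes. Element IDs
// and the camera-moving step are illustrative assumptions.
function targetFromHash(hash) {
  // '#tower' -> 'tower'; an empty or bare '#' hash -> null.
  var id = (hash || '').replace(/^#/, '');
  return id.length ? id : null;
}

// Listen for hash changes only when running in a browser.
if (typeof window !== 'undefined') {
  window.addEventListener('hashchange', function () {
    var id = targetFromHash(window.location.hash);
    if (!id) { return; }
    var anchor = document.getElementById(id);
    if (anchor) {
      // In a real scene, this is where you might teleport the
      // camera rig to the anchored entity's position.
      console.log('Navigating to in-page anchor:', id);
    }
  });
}
```

This reuses the Web’s existing fragment-identifier semantics, so an in-world “teleport” could be shared and bookmarked like any other URL.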

Get excited!

As with any early-stage API development, this work falls squarely under the highly-experimental, subject-to-change bucket. But, this is an excellent opportunity for your ideas, thoughts, and criticism to be heard. Help us shape what the future VR hyperlink will look like. File an issue, experiment, join the WebVR Slack (#hyperlinks channel) and A-Frame Slack (#hyperlinks channel), and share your experiences.

No pressure. We’re mostly here to get you excited about moving between WebVR worlds!

It’s a team effort

Here at Mozilla, the VR team has been researching, prototyping, and thinking about VR-first browser experiences and navigating between experiences for years now. It’s a huge deal. We would like to first thank Erica Layton a ton for her work in putting together the demo scenes. We would also like to acknowledge the WebVR implementers, namely Kip Gilbert (Mozilla Firefox), Brandon Jones (Google Chrome), Justin Rogers (Oculus VR), Michael Blix (Samsung WebVR & Gear VR), and Laszlo Gombos (Samsung WebVR & Gear VR) for being part of making this possible.

Stay tuned for page navigation shipping to WebVR-capable browsers near you. Stay hyper!