- Rendering Canvas (WebGL or 2D) to VR output devices.
- Rendering 3D video to VR output devices (as directly as possible).
- Rendering HTML (DOM+CSS) content to VR output devices – taking advantage of existing CSS features such as 3D transforms (see the first sketch after this list).
- Mixing WebGL-rendered 3D content with DOM-rendered, 3D-transformed content in a single 3D space.
- Receiving input from orientation and position sensors, with a focus on reducing latency from input/render to final presentation (see the second sketch below).
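To make the HTML-rendering goal more concrete, here is a minimal TypeScript sketch using only the standard CSS 3D transform features (perspective, rotateY, translateZ) that browsers already ship; the element IDs and values are hypothetical illustrations, not taken from Mozilla's builds.

```typescript
// Illustrative only: positions an ordinary DOM element in 3D space using
// standard CSS 3D transforms, the existing mechanism the project intends
// to reuse for VR presentation. Element IDs and values are hypothetical.
const scene = document.getElementById("scene"); // hypothetical container
const panel = document.getElementById("panel"); // hypothetical child element

if (scene && panel) {
  // A perspective on the container gives its children visual depth.
  scene.style.perspective = "800px";

  // Rotate the panel 30 degrees around the Y axis and push it 200px
  // away from the viewer; the browser composites this in 3D.
  panel.style.transform = "rotateY(30deg) translateZ(-200px)";
}
```

Because the browser's compositor already understands these transforms, the appeal of the approach is that the same DOM content could, in principle, be projected to a headset without pages changing their markup.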
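For the sensor-input goal, the sketch below reads the standard DeviceOrientationEvent that browsers already expose; the experiment's aim is to deliver this class of data to the page with much lower latency. The logging is purely illustrative.

```typescript
// Illustrative only: listens for the standard DeviceOrientationEvent, the
// kind of orientation-sensor input the project wants to surface to pages
// with minimal input-to-render latency.
window.addEventListener("deviceorientation", (event: DeviceOrientationEvent) => {
  // alpha/beta/gamma are rotations (in degrees) around the Z, X, and Y
  // axes; any of them may be null if the sensor does not report that axis.
  const { alpha, beta, gamma } = event;
  if (alpha !== null && beta !== null && gamma !== null) {
    console.log(`orientation: z=${alpha} x=${beta} y=${gamma}`);
  }
});
```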
Early builds are available for those interested in experimenting, all linked from the official blog post. The project will enable web browsers and virtual reality interfaces to communicate with one another, potentially unlocking all manner of new input methods on the web. The technology's implications branch out far beyond gaming and could have a deep impact on web development, if both the experiment and VR itself prove successful.
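For readers curious what "communicating" with a headset looked like in code, the following is a hypothetical sketch of the experimental, pre-release API shape (navigator.getVRDevices, getState(), hardwareUnitId) as it was later documented; exact names and signatures varied between early builds, so treat this as an approximation rather than the actual interface.

```typescript
// Hypothetical sketch of the experimental, pre-release VR device API;
// it was absent from standard typings and varied between early builds.
async function findHeadset(): Promise<void> {
  const nav = navigator as any; // experimental API, not in standard typings
  if (typeof nav.getVRDevices !== "function") {
    console.log("No experimental VR support in this build.");
    return;
  }
  const devices: any[] = await nav.getVRDevices();

  // Early builds reportedly paired a head-mounted display device with a
  // separate position-sensor device sharing the same hardwareUnitId.
  const hmd = devices.find((d) => "getEyeParameters" in d);
  const sensor = devices.find(
    (d) => "getState" in d && (!hmd || d.hardwareUnitId === hmd.hardwareUnitId),
  );

  if (sensor) {
    // getState() returned a snapshot including orientation and position.
    const state = sensor.getState();
    console.log("orientation:", state.orientation, "position:", state.position);
  }
}
```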
- Steve "Lelldorianx" Burke.