flight simulators of the US Air Force or the Aspen Movie Map from MIT. VR is becoming increasingly interesting for a variety of use cases, journalism being just one of them right now. Technical developments currently set the pace, which is why this post provides an overview of tools, SDKs, deployment options and application scenarios for VR developers to come. So, let’s talk VR.
VR in a nutshell
VR places a user in three-dimensional virtual worlds separated from their real time and space. The degree of immersion is influenced by visual aspects, by audio and – as opposed to AR – by the occlusion of the real world the user is in while consuming VR content. VR devices are the presentation platforms that, by using one display per eye, create a three-dimensional perception of the provided content. Often, special audio hard- and software is used to also create a spatial impression of ambient audio sources within a scene.
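To make the “one display per eye” idea concrete, here is a minimal sketch in plain JavaScript of how a stereo renderer derives two camera positions from the interpupillary distance (IPD), and why nearer objects produce a stronger depth cue. All names and the 0.064 m default are my own illustrative assumptions, not taken from any particular SDK:

```javascript
// Each eye's camera is shifted half the interpupillary distance (IPD)
// along the head's x-axis; the two slightly different views are what the
// brain fuses into depth. 0.064 m is roughly the average human IPD.
const DEFAULT_IPD = 0.064;

// World-space positions of the left and right eye cameras for a head at
// `head = [x, y, z]` looking straight ahead (no rotation, for simplicity).
function eyePositions(head, ipd = DEFAULT_IPD) {
  const half = ipd / 2;
  return {
    left:  [head[0] - half, head[1], head[2]],
    right: [head[0] + half, head[1], head[2]],
  };
}

// Angular disparity (in radians) between the two eyes for a point straight
// ahead at `distance` metres. It shrinks with distance -- the depth cue.
function angularDisparity(distance, ipd = DEFAULT_IPD) {
  return 2 * Math.atan((ipd / 2) / distance);
}
```

An object one metre away yields a noticeably larger disparity than one ten metres away, which is why stereo depth perception flattens out at range.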
I feel dizzy
One particular challenge is that, due to the high degree of immersion, people may experience motion sickness and dizziness when what their eyes perceive contradicts what their body experiences.
Gravitational forces, head rotation and body movement have to be well aligned between a scene and the real world – or deliberately decoupled in order to make a clean break between the two. Alignment can be quite tricky when applications are meant to be experienced from the couch but allow users to roam freely in virtual worlds. Human-machine interaction is going to be a broad field of study for these devices, and research is widely ongoing.
I feel a thorough approach to good VR experiences requires an equal focus on device development, HMI and scene creation. Device development and HMI research will enhance the overall experience, tackle the issues of motion sickness and provide broad access through presentation platforms. Scene creation is all about content presentation. Much like in movies, the scene makes up a big part of the experience, and knowledge of virtual stage setting and virtual cinematography will enhance the overall quality. Knowing how to present which content within the limited Field of View (FoV) of current VR devices is going to be critical to success. Device development, improvements in HMI methods, and scene and content creation form the trinity of VR.
Virtual worlds at a finger’s tip
There is also a fourth thing I have just hinted at above: content platforms. These can be distributors, archives and developer-centric environments that publish content to provide “virtual worlds at a finger’s tip” to users. Media broadcasters and their platforms are a good start, but eventually encouraging users to become more than consumers (prosumers) will be a key success factor, just like it was for YouTube with video. Follow-me-arounds will take the next step – follow my dreams through truly crazy made-up environments, or, as a simpler start, 360-degree footage of a private Japanese garden. Good platforms will support and encourage creation and distribution even for those who are neither tech-savvy nor experienced in programming.
You work in VR device development? Cool, I don’t, so let’s get in touch! If not, you probably would like to know how to get started developing scenes, and that’s what I’m going to talk about here.
I have access to an Oculus DK2 to play around with, so my first shot at developing used Unity 3D 5 and the OVR integration packages, both standalone and mobile. For deploying natively on Android smartphones, I chose to work with the Cardboard integration package.
Cardboard, Three.js and a starter kit
With Unity 3D, creating content for native platforms like standalone PC, Android and iOS worked out pretty well. The web works on desktop browsers, but Unity 3D’s WebGL format currently does not run on mobile browsers. And we don’t want to force users to install bulky app packages, even if some larger experiences would actually need the better performance. But that’s a different topic. We also want easily accessible VR content right in our mobile browser. That’s where Google’s browser-based Cardboard SDK comes in.
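As a rough sketch of what a Cardboard-style renderer does under the hood: the viewer’s lenses pincushion-distort the image, so each eye’s frame is pre-warped with the inverse barrel distortion so the two cancel out. The coefficients below are made-up illustrative values; in practice they come from the individual viewer’s profile, and this is only the classic polynomial model, not Google’s actual implementation:

```javascript
// Barrel distortion: r' = r * (1 + k1 * r^2 + k2 * r^4).
// K1 and K2 are illustrative assumptions, not real viewer-profile values.
const K1 = 0.22;
const K2 = 0.26;

// Distorts a point (x, y) given in lens-centred, normalised coordinates.
// Points near the centre barely move (scale ~ 1); points near the edge
// are pushed outward, countering the lens's pincushion effect.
function barrelDistort(x, y, k1 = K1, k2 = K2) {
  const r2 = x * x + y * y;
  const scale = 1 + k1 * r2 + k2 * r2 * r2;
  return [x * scale, y * scale];
}
```

Applying this warp per eye as a final render pass is what lets a flat phone screen look correct through cheap lenses – the heavy lifting is lens optics, not GPU power.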
So, what we see here is how complex the technical approaches are: from developing virtual scenes, to testing various devices, all the way to taking gravitational forces into consideration, developing for VR is not an easy task. So far, it is difficult to know where the road will lead, but I’d like to invite you into this conversation. Contact us via Twitter or write a comment if you have any tips, cool links or a question.
Update 2015/7/10: Leap Motion
I recently bought a Leap Motion controller, which is still waiting to be shipped from Frankfurt to Berlin. Curious to get my hands dirty inside VR. 😉