Creating “Stadtmitte in VR” – A Technical Perspective

Creating the virtual reality experience of Berlin’s Stadtmitte subway station was a valuable project for us. With a little more than six weeks, we had a relatively short time to develop this gamified, immersive experience, and it posed quite a few unique challenges. Our Marcus Bösch has already talked about how to design a playful VR experience; this post focuses on the technical issues we faced.

Since we were targeting multiple platforms and also had to keep in mind how the experience would be presented at the “Lange Nacht der Wissenschaften,” these constraints guided the decisions below.

Target Platforms

The main focus was developing the experience for the OSVR headset on a high-end gaming system. Sounds great! This allowed us to concentrate on the design of the station first and put everything into it that we thought added to the experience and atmosphere. Our second target platform, however, was the Samsung Gear VR headset driven by a Samsung Galaxy S6 smartphone. It offers far less performance, so we had to account for that early in the project as well. As a non-VR option, we also wanted to provide a browser-based WebGL version, so users without HMDs could view the experience too.

Development Platform

With six weeks of project time, we wanted to reuse as much as possible. Most of our experiences are developed with Unity3D, so naturally we chose it as the development platform. Add in the excellent VR Samples asset, and you’re pretty much ready to go. We had to make a few tweaks so that actions could be triggered by gaze timers instead of relying only on input buttons, such as those offered by the Gear VR headset. We adapted the VRInteractiveItem class to provide the functionality needed for navigation spots, buttons, elevators, and trains. We also made sure that all camera-related scripts facilitating gaze-based VR interaction were mounted to the proper game object and followed the camera’s changing orientation.
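A gaze-timer adaptation of this kind could look roughly like the following sketch. It builds on the `OnOver`/`OnOut` events that Unity’s VR Samples `VRInteractiveItem` class exposes; the class name `GazeTimedItem`, the `OnGazeTriggered` event, and the timer value are our own illustrative assumptions, not the project’s actual code.

```csharp
using System;
using UnityEngine;

// Sketch: fire an action after the reticle has rested on a
// VRInteractiveItem (from Unity's VR Samples asset) for a
// configurable amount of time, instead of waiting for a
// button press. Names other than VRInteractiveItem and its
// OnOver/OnOut events are hypothetical.
public class GazeTimedItem : MonoBehaviour
{
    public VRStandardAssets.Utils.VRInteractiveItem interactiveItem;
    public float gazeTime = 2f;          // seconds of gaze required
    public event Action OnGazeTriggered; // e.g. set a navigation target

    private float gazeTimer;
    private bool gazedAt;
    private bool triggered;

    private void OnEnable()
    {
        interactiveItem.OnOver += HandleOver;
        interactiveItem.OnOut += HandleOut;
    }

    private void OnDisable()
    {
        interactiveItem.OnOver -= HandleOver;
        interactiveItem.OnOut -= HandleOut;
    }

    private void HandleOver() { gazedAt = true; }

    private void HandleOut()
    {
        // Reset when the gaze leaves the item.
        gazedAt = false;
        gazeTimer = 0f;
        triggered = false;
    }

    private void Update()
    {
        if (!gazedAt || triggered) return;
        gazeTimer += Time.deltaTime;
        if (gazeTimer >= gazeTime)
        {
            triggered = true;
            if (OnGazeTriggered != null) OnGazeTriggered();
        }
    }
}
```

Because the same event fires regardless of input method, navigation spots, buttons, elevators, and trains can share one interaction path across button-equipped and gaze-only platforms.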

Configuration of the player object and mount point of the VR cameras for OSVR (VRDisplayTracked) and WebGL/Gear VR (MainCamera).


Improving Performance

More than anything, high fps is key to a successful VR experience. While we were not that concerned about the OSVR/PC combination, the WebGL and especially the Gear VR versions needed careful optimization.

Bake, Bake, Bake

Real-time lighting is nice and beautiful, but better forget about it on mobile platforms – at least for now. Turning it off and baking all lighting brings really noticeable relief if you run into low fps.
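In practice this is mostly a one-time setting in Unity’s Lighting window, but the effect can be sketched as a small runtime script: once everything is baked into lightmaps, the realtime `Light` components only add cost, so the mobile build can simply switch them off. This script is illustrative, not the project’s actual code.

```csharp
using UnityEngine;

// Sketch: with all lighting baked into lightmaps, realtime
// Light components no longer contribute anything we need,
// so on the Android (Gear VR) build we disable them and let
// the lightmaps carry the scene.
public class DisableRealtimeLights : MonoBehaviour
{
    private void Awake()
    {
#if UNITY_ANDROID
        foreach (Light sceneLight in FindObjectsOfType<Light>())
        {
            sceneLight.enabled = false; // lightmaps already contain the result
        }
#endif
    }
}
```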

Culling

While we had all textures in an atlas, culling was even more important, to cleverly hide all meshes that are not in view. This is usually one of the major performance boosts. With the architecture of a train station, however, it was not that easy: the station is long, does not offer many corners, and has few occluders available. Viewing it end to end, our triangle count got dangerously close to the magic 100k mark that is considered the upper limit on Gear VR. With complex textures and baked lighting on top, the practical limit should actually be much lower. We figured our limit of visible triangles should be no more than 50k on an S6 smartphone, so we applied the next tweak.

Far Plane To The Rescue

A station is usually much longer than it is wide, which means we could not reduce the far clip plane too much. However, it actually looked fine going down as low as 50 units, combined with a fitting background color at the clipping plane. I’d recommend not using full black in dark environments but rather an RGB value of #101010, as this seems more natural.
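The tweak boils down to two camera settings, which could be applied with a sketch like this (the component name and the 50-unit value follow the text; the script itself is an illustrative assumption):

```csharp
using UnityEngine;

// Sketch: pull the camera's far clip plane in to ~50 units
// and clear to a near-black background instead of pure black,
// so the cutoff blends into dark station surroundings.
[RequireComponent(typeof(Camera))]
public class FarPlaneTweak : MonoBehaviour
{
    public float farClipDistance = 50f;

    private void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.farClipPlane = farClipDistance;
        cam.clearFlags = CameraClearFlags.SolidColor;
        // #101010 reads more naturally than full black in dark scenes.
        cam.backgroundColor = new Color32(0x10, 0x10, 0x10, 0xFF);
    }
}
```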

Culling Collider

We still had many interactive objects in the scene, which for simplicity were mostly canvas-based sprite and text objects. To reduce their count and make effective use of them, we wanted them hidden when they are too far away from the player. We attached a collider to the player with a diameter of 25 units (basically 25 meters) that made sure we only show interactive objects within this range. This also brought a nice feature for free, because this is exactly what indoor location and navigation are about: providing information and enabling actions based on where you are in the virtual space.
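One way to implement such a culling collider is a sphere trigger on the player that toggles interactive canvases as they enter and leave range. The `"Interactive"` tag and the canvas-toggling strategy below are our assumptions for this example, not necessarily how the project wired it up:

```csharp
using UnityEngine;

// Sketch: a sphere trigger on the player (radius 12.5 units,
// i.e. a 25 m diameter) enables interactive canvas objects
// only while they are in range. The player object also needs
// a (kinematic) Rigidbody for trigger events to fire.
public class ProximityActivator : MonoBehaviour
{
    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Interactive"))
            SetCanvasesEnabled(other, true);
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Interactive"))
            SetCanvasesEnabled(other, false);
    }

    private static void SetCanvasesEnabled(Collider item, bool enabled)
    {
        // Toggle all canvases under the interactive item,
        // including currently inactive ones.
        foreach (Canvas canvas in item.GetComponentsInChildren<Canvas>(true))
            canvas.enabled = enabled;
    }
}
```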

The collider used to enable only close interactive items.


Doing all this, we managed to get decent results on Samsung Gear VR and the WebGL version of “Stadtmitte in VR.” Yay!

Navigating Through

To move through the experience, we employed the standard NavMeshAgent component. Gazing at a navigation item sets a new target destination for the agent to move to – walking up to the elevators, for example. This way we could plan a layout of accessible spots that both keeps the flow of movement and restricts areas users should not run into.
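The hand-off from gaze to movement is a single NavMeshAgent call. A minimal sketch, assuming a hypothetical `NavigationSpot` component whose trigger method is invoked when the gaze interaction on that spot completes:

```csharp
using UnityEngine;
using UnityEngine.AI; // NavMeshAgent's namespace in current Unity versions

// Sketch: a gaze-activated navigation spot hands its own
// position to the player's NavMeshAgent, which then walks
// the baked NavMesh to it. Because destinations are limited
// to authored spots, players cannot stray into restricted areas.
public class NavigationSpot : MonoBehaviour
{
    public NavMeshAgent playerAgent;

    // Called when the gaze interaction on this spot completes.
    public void OnGazeTriggered()
    {
        playerAgent.SetDestination(transform.position);
    }
}
```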

TL;DR

Know your target platforms. Less real-time, more performance. Care about what you show in your scenes.
