Creating Stories in VR

Much has been written about virtual reality (VR) technologies and how journalists can use computer programming, video and audio effects, computer graphics, input and display technologies, and the software that controls it all to create immersive scenes for users. Creating empathy and letting concerned citizens feel present at a particular event have been among the driving motivations for media organizations to experiment with VR. At the heart of VR technologies is the human experience, and by now we are all aware of projects like “Hunger in LA” or “Hong Kong Unrest”.

Journalists in VR environments

So far, VR in journalism has mostly been discussed from the users’ point of view. How can journalists create environments in which empathy can be expected to arise? What kinds of devices are suitable for users?

I’d like to turn this around and suggest we look at journalists as users in VR. Specifically, how can journalists use VR environments to tell their stories more effectively? Instead of users experiencing new worlds, why not put journalists into responsive environments for creating stories?

Fader

We are happy to announce that VRagments and The Center for Investigative Reporting will begin collaborating on a project called Fader, a VR dashboard that will empower journalists to track shaming and ridicule online. Fader will let journalists set filters for search terms in social media related to journalism and public discourse. It will enable mapping the content that evolves from memes, identifying key influencers, and surfacing opportunities for sentiment and trend analysis.

A first example can be seen below: a screenshot of tweets that have been imported into a Unity scene via the Twitter API, based on the search term #hatespeech.

Screenshot by Stephan Gensch, VR developer and founder of VRagments
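The more retweets, the bigger the bubble. As a very rough illustration of that retweets-to-size mapping, here is a minimal Python sketch (not Fader code; only the retweet_count and text fields follow the standard Twitter JSON, the scaling constants are made up for illustration):

```python
# Hypothetical sketch (not Fader code): map a tweet's retweet count to a
# bubble scale before handing the values to a Unity scene.
import math

def bubble_scale(retweet_count, base=0.2, factor=0.1):
    # Square-root scaling keeps extremely popular tweets from dwarfing the
    # scene while still making retweet differences visible.
    return base + factor * math.sqrt(retweet_count)

tweets = [
    {"text": "Example tweet A", "retweet_count": 4},
    {"text": "Example tweet B", "retweet_count": 250},
]

for tweet in tweets:
    print(tweet["text"], round(bubble_scale(tweet["retweet_count"]), 2))
```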

Now, this is the fun part: asking loads of questions, experimenting, and testing various iterations throughout the project:

  • How can this be helpful for journalists?
  • Does it make sense to use the x-axis as a depth axis for the visualization (e.g. the closer the bubble, the more influential the person who tweeted it)? See the sketch after this list.
  • Which navigation methods should be used? Gaze-based? Motion tracking? Does it make sense to use the Leap Motion controller to re-organize the bubbles according to individual preferences (e.g. good tweets to the left, bad tweets to the right)?
  • How can journalists then change search terms?
  • What about collaborating with other journalists in real time in this VR scene?
  • When do the users come in? Do they see the same scene, or should there be a separate, more public mode for them?
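To make the depth question from the list above a bit more concrete, here is another hypothetical sketch (again not Fader code): it maps an author’s follower count onto a z-range so that more influential authors end up closer to the viewer. The follower numbers and the z-range are invented; in an actual scene these values would presumably be written to Unity transforms rather than printed.

```python
# Hypothetical sketch: map author influence (here: follower count) to depth,
# so that more influential authors end up closer to the viewer.
def depth_for_influence(followers, max_followers, z_near=1.0, z_far=10.0):
    # Normalize influence to [0, 1] and invert it: the most influential
    # author lands at z_near (closest), the least influential at z_far.
    influence = followers / max_followers if max_followers else 0.0
    return z_far - influence * (z_far - z_near)

# Invented example data for illustration only.
authors = {"alice": 120, "bob": 45000, "carol": 900}
top = max(authors.values())

for name, followers in authors.items():
    print(name, round(depth_for_influence(followers, top), 2))
```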

This is just the beginning of an awesome exploratory experiment in which we regard the journalist as the user, working in a VR environment.
We’re looking forward to hearing your comments and thoughts about it.
