Audio & VR Storytelling

To celebrate Google’s decision to open source the spatial audio technology behind their Resonance Audio SDK, we want to talk a little about the impact of audio on VR storytelling.

Audio is often underestimated as a tool for immersive storytelling. It’s a powerful sensory stimulant, but its impact isn’t easy to describe. There isn’t really an accurate vocabulary for how good audio can make you feel. Plus, as audience members, we’re not always on the ‘lookout’ for good or bad sound work, so we don’t always recognise it when we hear it. More than that, we just don’t talk about sound enough.

We want to help out with that.

First things first, let’s define spatial audio. In VeeR VR’s words, spatial audio is ‘a sonic experience where the audio changes with the movement of the viewer’s head. […] This involves the simulated placement of virtual sound sources anywhere in a simulated three-dimensional space, including behind, above or below the listener.’ Simply put, it refers to sound which works like real-life human hearing (in 360 degrees).
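To make the definition concrete, here’s a toy Python sketch of the basic idea: a sound source gets quieter with distance and pans between the ears depending on where it sits relative to the listener’s head. This is only an illustration of the principle (equal-power panning in a flat 2D plane), not how Resonance Audio or any production spatializer actually works internally; real engines use HRTFs to simulate elevation and sounds behind you.

```python
import math

def spatialize(source, listener, yaw):
    """Toy spatializer: inverse-distance attenuation plus a simple
    left/right equal-power pan based on the source's azimuth relative
    to the listener's head yaw (radians). Illustrative only."""
    dx = source[0] - listener[0]
    dz = source[1] - listener[1]
    dist = math.hypot(dx, dz)
    # Inverse-distance rolloff, clamped so very close sounds don't blow up.
    gain = 1.0 / max(dist, 1.0)
    # Azimuth of the source relative to where the head is facing
    # (z is "forward", x is "right" in this toy coordinate system).
    azimuth = math.atan2(dx, dz) - yaw
    # Equal-power pan: +90 degrees -> all right ear, -90 -> all left.
    pan = math.sin(azimuth)
    left = gain * math.sqrt((1 - pan) / 2)
    right = gain * math.sqrt((1 + pan) / 2)
    return left, right

# A source 1 m directly to the listener's right: heard in the right ear.
l, r = spatialize((1.0, 0.0), (0.0, 0.0), yaw=0.0)

# Turn the head 90 degrees toward it and the same source is now
# straight ahead, so both ears hear it equally -- the audio changes
# with the movement of the listener's head, as in the definition above.
l2, r2 = spatialize((1.0, 0.0), (0.0, 0.0), yaw=math.pi / 2)
```

The key point the sketch shows is that the per-ear levels are recomputed from head orientation every frame, which is exactly what lets a listener localise a virtual source without looking at it.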

Spatial audio (or 3D audio) is one of the KEY ways a VR experience can create a sense of presence. In fact, according to Jaunt VR, audio accounts for ‘50 percent of the immersive experience.’ Sound isn’t just an accessory to visuals, it’s a storytelling tool in its own right.

[Image: Resonance Audio SDK]

My own personal experiences with VR have confirmed this idea. When I was building a quick VR test scene in Unity, I made it so you could throw objects around the room. But when I showed my friends, they all said the object didn’t feel right. It somehow wasn’t really hitting the floor. Like it didn’t have any weight. I quickly realised it was because, although a fun visual particle effect appeared when the object hit the floor, there was no sound effect.

As soon as I added spatial sound to the scene, my friends immediately found the object more realistic (even though they couldn’t tell me why or how it had changed). Even more interestingly, they started to adopt a different play-style. Before, they’d only throw the object where they could see it – directly in front of them. Afterwards, because they could hear the THUD even when they weren’t looking at it, they started to throw it much more freely – over their shoulders, behind themselves etc. Sound became a KEY element of my test scene, affecting everything from presence to gameplay.

In fact, according to Ramani Duraiswami (Professor of Computer Science at the University of Maryland and co-founder of VisiSonics), sound is one of the most important ways in which our brains compute space and layout. He says, ‘There’s a little map in your brain even when you’re not seeing the objects. If the sound is consistent with geometry, you’ll know automatically where things are even if they’re not in your view field.’

[Image: Ramani Duraiswami, Professor of Computer Science at the University of Maryland and co-founder of VisiSonics]

Immersive sound, however, is still very much a work in progress. VR sound design is an emergent art form, and not one with any hard-and-fast rules. Companies small and massive alike are still experimenting with different techniques for using sound to enhance virtual experiences. As VR sound mixing and development software becomes more advanced, spatial audio will become more accessible to new developers – hopefully bringing even more great ideas to the table. We can’t wait to see what the future brings!