Posted on: February 7, 2019
Most of my focus has been on Augmented Sound Reality/Augmented Audio Reality (ASR/AAR) and its application within a cultural context, namely a museum or gallery. I've found some literature on and around this subject, had a look at some existing and potentially usable solutions, and looked at some of the cultural and HCI-related issues that might help me form some useful research questions.
Whereas traditional AR environments are primarily concerned with overlaying visual graphics on the real world, I'm interested in how audio enhancements and spatialised sound can combine with real-world art objects, cultural artefacts, and locations to improve their ability to communicate information about themselves.
Spatial sound has been shown to play an important role in AR applications. Its use has been explored in very different areas, from pure entertainment (Stampfl, 2003a) to video conferencing and remote collaboration (Billinghurst, 2001; Regenbrecht, 2004), as well as perceptual studies (Bormann, 2005).
Can we, then, examine the possibilities and implications of sound as the sole augmenting medium, without combining it with visual AR?
In terms of best practices, and studies of the navigational potential of audio in AR, there seems to be work on the relationship between virtual audio sources and virtual objects, but not much on the relationship between virtual audio sources and real-world objects; Sodnik (2006) is a case in point.
I think a gallery or museum, where we are concerned with the display of visual artefacts, presents a useful context within which to explore this, as audio augmentation gives us the ability to augment without visual distraction or interference.
The following application demo of an art gallery AR solution demonstrates great granularity of information (the ability to pick out individual characters from a group painting), but it also demonstrates the problem with overlaying visuals on visual material: interference with the primacy of the visual object.
mobile.twitter.com/nathangitter/status/1012328934581719041?s=11
Of course, the gallery audio guide is nothing new. But implementing an ASR audio guide with all the benefits of AR, such as object tracking and recognition, yet without the visual distraction, could make for interesting studies in aesthetically sensitive environments.
Such an approach also leads us to some clearer interactional questions regarding the use of such a system in this context, and perhaps to more focused research questions about what sound on its own is capable of as an augmenting medium. Possible research questions around Augmented Audio Reality (AAR) in cultural spaces could include:
In relation to all of the above, it would be interesting to find out to what extent visual cues are required for the listener to localise the sound source, and thus the object of interest.
So we could investigate audio as the micro-orientation tool and the related (visual) object as the macro-orientation tool: audio localisation gives you a general idea, and relating a real-world object to the sound seals the deal.
Some of the interactional questions centre on the idea that augmenting objects with audio through such a system increases their ability to communicate, or advertise, themselves beyond their line of sight.
This also leads us to think about visual objects being confined by line of sight (Kelly, 2017, p. 20), whereas sound can spread throughout the gallery and become a navigational aid, or instrumental in providing a different or new experience. It also ties in with Attali's ideas around noise and sound as greater representations of the truth.
Such an approach could be construed as an attempt to introduce context to the art object within the gallery space. This is not to suggest that it would be a better experience, just possibly a different one: a way to reframe existing collections.
Additionally, this also raises some technical considerations that I believe could pose interesting opportunities as well as hurdles. If we consider the fact that sound can travel around a corner to a listener, how does a camera-based object recognition system deal with this if the listener is wearing or holding the camera?
But one can also see how this could be used to create sonic zones, or to audibly theme collections of objects together as part of the narrative structure.
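One very simple way to picture this (purely my own sketch, with made-up zone names, positions, and a radius-based trigger, rather than anything drawn from an existing system): each sonic zone is a centre point, a radius, and an audio theme, and the system checks the listener's position against them as they move.

```python
import math

# Hypothetical sonic zones: (centre_x, centre_y, radius_m, audio_theme)
SONIC_ZONES = [
    (4.0, 6.0, 3.0, "renaissance_room_drone"),
    (12.0, 2.5, 2.5, "maritime_collection_ambience"),
]

def active_zones(listener_x, listener_y):
    """Return the audio themes whose zones currently contain the listener."""
    return [theme for cx, cy, radius, theme in SONIC_ZONES
            if math.hypot(listener_x - cx, listener_y - cy) <= radius]

# A listener at (5.0, 7.0) is inside the first zone only.
print(active_zones(5.0, 7.0))  # -> ['renaissance_room_drone']
```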
Also, in terms of navigation and the design of experiential trajectories: if we can determine the location of the listener in relation to a stationary art object (proximity and bearing, via the AR system's object recognition), and we have a map of the room, albeit a virtual one that exists as part of the system, perhaps similar to a computer game level, then it would appear possible to determine, with reasonably good accuracy, the location of the listener within the gallery space.
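As a rough illustration of the geometry involved (the artwork names, coordinates, and flat 2D map are my own assumptions, not part of any existing system): given the known map position of a recognised artwork, plus the range and relative bearing to it reported by the AR tracking, the listener's position on the map follows from basic trigonometry.

```python
import math

# Hypothetical 2D gallery map (metres): known positions of tracked artworks.
GALLERY_MAP = {
    "portrait_of_a_lady": (3.0, 7.5),
    "bronze_figure": (9.0, 2.0),
}

def locate_listener(artwork_id, distance_m, rel_bearing_rad, heading_rad):
    """Estimate the listener's position on the gallery map.

    distance_m      -- range to the recognised artwork (from AR tracking)
    rel_bearing_rad -- angle to the artwork relative to the listener's heading
                       (0 = straight ahead, positive = to the listener's left)
    heading_rad     -- listener's heading in map coordinates, anticlockwise
                       from the +x axis
    """
    ax, ay = GALLERY_MAP[artwork_id]
    # Absolute direction from listener towards the artwork, in map coordinates.
    theta = heading_rad + rel_bearing_rad
    # The artwork lies distance_m along that direction, so step back from it.
    return (ax - distance_m * math.cos(theta),
            ay - distance_m * math.sin(theta))

# Example: facing "up" the map (+y) and seeing the portrait 2 m straight ahead
# puts the listener at roughly (3.0, 5.5).
print(locate_listener("portrait_of_a_lady", 2.0, 0.0, math.pi / 2))
```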
Interesting and related mobile applications
Fields: Spatial Sound in AR
Fields allows the user to record, place and visualise sound in three-dimensional space and create large-scale sonic AR environments. This approach offers a possible insight into a curatorial authoring strategy, a bit like AR/DJ, where the DJ can place virtual sound sources on a virtual map of the venue (Kjeldskov, 2006).
“Next” generation sonic augmented reality player
This app features two built-in AR audio experiences that demonstrate some of the ideas I've mentioned above really nicely. The first is an AR music player that allows you to walk around the musicians in a virtual jazz band, mixing the levels and panning the positions of the different virtual instruments with your position in real space. The second is a virtual soundscape of Istanbul at prayer time, allowing you to experience a 3D soundscape of all the different mosques in their spatially correct positions relative to the listener.
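To make the positional-mixing idea concrete, here is a minimal sketch of how level and pan could be derived from the listener's position; the instrument layout, the 1/r attenuation law and the sine-based pan rule are all my own assumptions, not a description of how the app itself works.

```python
import math

# Hypothetical layout of the virtual jazz band (metres, flat floor plan).
INSTRUMENTS = {
    "double_bass": (-1.5, 2.0),
    "piano": (0.0, 3.0),
    "trumpet": (1.5, 2.0),
}

def mix_for_listener(listener_pos, heading_rad):
    """Return {instrument: (gain, pan)} for a listener at listener_pos.

    gain -- simple 1/r attenuation, clamped so very close sources stay finite
    pan  -- -1.0 (hard left) .. +1.0 (hard right), derived from the angle
            between the listener's heading and the direction to the source
    """
    lx, ly = listener_pos
    mix = {}
    for name, (ix, iy) in INSTRUMENTS.items():
        dx, dy = ix - lx, iy - ly
        gain = 1.0 / max(math.hypot(dx, dy), 0.5)
        # Angle of the source relative to the listener's heading
        # (positive = to the listener's left, in map coordinates).
        rel = math.atan2(dy, dx) - heading_rad
        rel = (rel + math.pi) % (2 * math.pi) - math.pi
        mix[name] = (gain, -math.sin(rel))  # left of heading -> negative pan
    return mix

# Standing at (-1.0, 3.0) facing along +x puts the piano dead ahead
# and the trumpet ahead-right, so the trumpet's pan comes out positive.
print(mix_for_listener((-1.0, 3.0), 0.0))
```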
Microsoft Soundscape
Microsoft's Soundscape app uses ASR as a navigational tool aimed at the visually impaired, allowing users to set geolocated sonic beacons whose direction is played back to the listener in the stereo field relative to the listener's bearing.
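The underlying geometry is easy to sketch. The following is only my illustration of the general technique (initial great-circle bearing from listener to beacon, compared against the listener's compass heading to get a pan position), not Microsoft's actual implementation.

```python
import math

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def beacon_pan(lis_lat, lis_lon, heading_deg, bcn_lat, bcn_lon):
    """Pan value -1.0 (left) .. +1.0 (right) for a geolocated beacon,
    relative to the listener's current compass heading."""
    bearing = initial_bearing_deg(lis_lat, lis_lon, bcn_lat, bcn_lon)
    rel = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # wrap to -180..180
    return math.sin(math.radians(rel))  # beacon to the right -> positive pan

# Example: listener facing north, beacon due east -> pan ~ +1.0 (hard right).
print(beacon_pan(51.5007, -0.1246, 0.0, 51.5007, -0.1100))
```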