Posted on: March 7, 2018
The availability of locative and biophysical sensors, and the personal data they capture, offers new opportunities for personal soundtrack generation. Existing approaches to generative and interactive musical composition often rely on pre-composed segments of music, and can be plagued by issues of musical mechanisation, repetitiveness and a lack of creative integrity. This paper presents ongoing research into new methods and understanding to support listeners in self-generating and performing their own interactive, personal musical soundtracks. It details how an amalgam of different approaches may present an opportunity for realising meaningful and personal musical soundtracks with creative integrity, expression, and location- and listener-sensitive generative content for the aural augmentation of daily activities. These approaches include the sonification of personal biophysical and locative data [4, 5], seamful system design, and the application of contemporary avant-garde compositional approaches that embrace the inclusion of ambient artefacts, chance and indeterminacy within musical compositions. It also identifies currently available solutions in the area of generative soundtracking, such as Weavrun, along with their limitations, and considers the possible applications, societal benefits and future research directions of such an approach.
II. Letting the outside in
This paper outlines a seamful approach to location-based musical generation and an open approach to the inclusion of external ambience within the generative composition, as proposed by Cage: letting the outside in. This is suggested as a means of complementing a data-driven approach in an attempt to generate a more spontaneous, personal and location-sensitive composition. This approach could go beyond the appropriation and exploitation of identified seams within a system, by utilising a system that creates seams by design, or that can dynamically control the size of seams and their impact on the generative composition at given points.
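As a minimal sketch of what dynamically controlled seams might look like in practice, the fragment below blends a generated audio buffer with captured ambience according to a variable seam size, and derives that size from GPS accuracy, treating a positioning glitch as a seam to be opened rather than hidden. All function and parameter names here are illustrative assumptions, not part of any existing system described above.

```python
import numpy as np

def mix_with_ambience(generated, ambient, seam_size):
    """Blend a generated audio buffer with captured ambient sound.

    seam_size lies in [0, 1]: 0 keeps the seam closed (purely
    generated material); 1 opens it fully (purely ambient sound).
    """
    seam_size = float(np.clip(seam_size, 0.0, 1.0))
    return (1.0 - seam_size) * generated + seam_size * ambient

def seam_from_gps_accuracy(accuracy_m, threshold_m=20.0):
    """Map reported GPS accuracy (metres) to a seam size.

    Poor accuracy, a detected 'seam' in the positioning
    infrastructure, opens the mix to ambient sound instead of
    masking the gap, in the seamful spirit outlined above.
    """
    return float(np.clip(accuracy_m / threshold_m, 0.0, 1.0))
```

A composition engine could also override the derived seam size at chosen structural points, which would realise the idea of a system that creates seams by design rather than only reacting to found ones.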
This approach could also further address issues outlined by Berndt et al., who recognise that there is ‘an existential danger’ in the generation of music for interactive purposes, stemming from the unknown length of specific scenarios, which leads to soundtracks with repetitive and mechanical characteristics and a loss of musical integrity. Berndt et al. identify various compositional approaches used to tackle these issues, including structural diffusion, sequential variance, polyphonic variance, orchestrational variance and reharmonisation. An examination and evaluation of the open and seamful generative compositional approach outlined here could lead to a potential addition to the interactive music composer’s toolkit.
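To make one of these surveyed techniques concrete, the following is a deliberately toy instance of sequential variance: each repeat of a motif is slightly varied rather than looped verbatim, so the material never becomes mechanically identical. The variation used (nudging one pitch by a semitone) is an illustrative assumption; the systems Berndt et al. survey employ far richer transformations.

```python
import random

def sequential_variance(motif, repetitions, seed=0):
    """Return `repetitions` versions of `motif` (a list of MIDI
    pitches), where each successive version alters one note of the
    previous one by a semitone up or down."""
    rng = random.Random(seed)
    versions = []
    current = list(motif)
    for _ in range(repetitions):
        versions.append(list(current))
        i = rng.randrange(len(current))      # pick one note
        current[i] += rng.choice([-1, 1])    # nudge its pitch
    return versions
```

Even this minimal transformation guarantees that consecutive statements of the motif differ, which is the property the variation techniques above are designed to secure against repetitive, mechanical output.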
III. Future research
Future research includes an examination of the possible curatorial role of the user, and the role that musicians may play within this curation, as a means of specifying genre and instrumentation. Weavrun, a real-time tempo-adaptive mobile music application, presents an interesting precedent here: musicians can create adaptive, tempo-variant tracks using the WeavMixer software specifically for use in the Weavrun application.
Laurence Cliffe is supported by the Horizon Centre for Doctoral Training at the University of Nottingham (RCUK Grant No. EP/L015463/1) and by the grant Fusing Semantic and Audio Technologies for Intelligent Music Production and Consumption (EP/L019981/1).
Berndt, A., Dachselt, R., & Groh, R. 2012. A survey of variation techniques for repetitive games music. Proceedings of the 7th Audio Mostly Conference, pp. 61-67.
Broll, G., & Benford, S. 2005. Seamful design for location-based mobile games. Entertainment Computing – ICEC 2005, 3711, pp. 155-166.
Cage, J. 1961. Silence: Lectures and Writings. Middletown, CT: Wesleyan University Press, pp. 7-8.
Chaparro, I., & Duenas, R. 2015. Psychogeographical Sound-drift. Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition, pp. 187-188.
Chen, S., Bowers, J., & Durrant, A. 2015. ‘Ambient walk’: A mobile application for mindful walking with sonification of biophysical data. Proceedings of the 2015 British HCI Conference, p. 315.