Geospatial Audio


With “spatial audio”, Apple proposes to change how we listen to music. Spatial audio is being called an “immersive” listening experience unlike any other (Dudamel). Inspired by the promise of 360 sound, and driven by our own immersive storytelling mandate, the Centre for Engaged Documentation and Research (CEDaR) is deeply invested in what this emergent medium (Apple only launched its spatial audio initiative on June 17, 2021) might mean for community-led storytelling, namely with the Indigenous and Asian-Canadian communities CEDaR works with. From the research we have conducted, there is next to no data on how spatial audio can be used to tell stories, and few storytellers are mobilizing the technology to craft immersive narratives. Working with world-class storytellers, dramaturges, and audio designers, CEDaR aims to create path-clearing research in this field while generating dynamic cultural heritage built to meet the needs and protocols of our communities.


For this project, we are interested in creating a web-based application for geo-located spatial audio (also called 360 audio or binaural audio). This application, which we expect would be retrofitted from existing tools, would provide a relatively simple interface that users can manipulate via CEDaR computers to create a 360 listening experience for the purpose of deploying place-based audio: stories that audiences can listen to onsite via a smartphone and QR codes. This storytelling framework would build on the work of podplay creators such as Adrienne Wong and Quelemia Sparrow, who have already illustrated the ways in which stories can be told in relation to land. Wong and Sparrow, however, did not have these tools at hand. What makes this project innovative is that it would build 360 sound into the narrative, creating an immersive experience that reflects the ways in which storytelling and space/place are intimately connected.
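The core of the onsite trigger described above is a proximity check: playback begins when the listener's phone reports coordinates inside a story's zone. The sketch below shows one plausible shape for that logic, under stated assumptions: the function names (`distanceMetres`, `withinSite`) and the site object are hypothetical, and in a real browser app the listener position would come from the Geolocation API (`navigator.geolocation.watchPosition`) while the 360 rendering itself would be handled by the Web Audio API's HRTF panning.

```javascript
// Hypothetical sketch of the geofence check that would gate playback
// of a place-based story. In the browser, listener coordinates would
// arrive from navigator.geolocation.watchPosition, and the spatialized
// playback from a Web Audio API PannerNode with panningModel "HRTF".

// Great-circle distance in metres between two lat/lon points (haversine).
function distanceMetres(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius, metres
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// A listener is "on site" when they are within the story's trigger radius.
function withinSite(listener, site) {
  return (
    distanceMetres(listener.lat, listener.lon, site.lat, site.lon) <=
    site.radiusMetres
  );
}
```

A story site anchored at a particular street corner with, say, a 50-metre `radiusMetres` would start its soundscape as a walking listener crosses into range and could fade out again when `withinSite` turns false.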


Our work will be supported by sound designer Bill Hardman, who has been experimenting with binaural sound for the past two years. Within our GCRC grant “Relational Technologies”, three cluster members have outlined place-based audio projects from which we can select and develop proof-of-concept prototypes: (1) a geolocated installation on the UBC campus which speaks place names in hən̓q̓əmin̓əm̓ within a rich soundscape reflecting the landscape features embedded in the grammar of the name (see, for example, the street signs and recordings such as stəywət, Lower Mall, which describes the experience of walking close to the shoreline and feeling a westerly wind off the Salish Sea); (2) a new podplay created with Anishinaabe dramaturge Lindsay Lachance; and (3) an app designed to use smartphone motion sensors to determine the appropriate Maliseet prayer in response to seasonal and directional context.


  • Web-based geo-located spatial audio
  • Motion sensors to determine directional context
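As a minimal illustration of the directional-context idea, a compass heading reported by the phone can be bucketed into a cardinal direction, which the app could then pair with the current season to select content. This is a sketch under assumptions: `headingToDirection` is a hypothetical name, and in a browser the heading would come from a `DeviceOrientationEvent`.

```javascript
// Bucket a compass heading (degrees clockwise from north) into one of
// the four cardinal directions. headingToDirection is a hypothetical
// helper; in a browser, the heading value would be read from a
// DeviceOrientationEvent listener.
function headingToDirection(headingDegrees) {
  const directions = ["north", "east", "south", "west"];
  // Normalize into [0, 360), then round to the nearest quarter turn.
  const normalized = ((headingDegrees % 360) + 360) % 360;
  return directions[Math.round(normalized / 90) % 4];
}
```

For example, a phone held facing roughly 265° would resolve to "west", and the app could respond with the prayer associated with that direction and the current season.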

The Team

Principal Investigator(s)

  • PI: Dr. Daisy Rosenblum, Assistant Professor, UBC First Nations and Endangered Languages Program
  • PI: Dr. David Gaertner, Assistant Professor, UBC First Nations and Indigenous Studies Program

Team Members

  • Olivia Chen, UI/UX Designer (September 2021 – April 2022)
  • Vita Chan, UI/UX Designer (September 2021 – April 2022)
  • Joshua Lim, Honorary Member (January 2022 – April 2022)

Past Team Members

  • Dante Cerron, Project Lead (September 2021 – March 2022)
  • Michelle Huynh, Developer (September 2021 – February 2022)
  • Nemo Wax, Developer (September 2021 – February 2022)