Ascendance

Ascendance explores the use of abstract CGI imagery and audio spatialization to represent colour and sound in 3D space. It is inspired by abstract animation pioneers such as Oskar Fischinger, Norman McLaren, Mary Ellen Bute and John Whitney. By applying their historical approaches and philosophies using present-day VR and CGI technologies, we can gain further insight into the relationships between colour, light, sound and music.

  • Louise Harvey
    Director
  • Peter Moyes
    Producer
  • Mark Douglas Williams
    Music and 3D audio spatialization
  • Louise Harvey
    Animation
  • Project Type:
    Virtual Reality, 360 Video
  • Genres:
    music video, experimental animation, abstract animation
  • Runtime:
    6 minutes 53 seconds
  • Completion Date:
    November 30, 2020
  • Country of Origin:
    Australia
  • Student Project:
    No
  • WIFT V-FEST
    Brisbane
    Australia
    March 28, 2021
    Australian Premiere
    Official Selection
Director Biography - Louise Harvey

Dr Louise Harvey is a 3D artist and filmmaker who has been combining her interests in animation production, research and teaching since 2001. Her doctoral thesis - an examination of 3D animation production techniques and principles - formed part of a major Australian Research Council-funded study on the topic. Her current research focus is on motion capture, virtual production, and abstract 3D animation for Virtual Reality platforms. Outcomes from Louise's research have been disseminated through conference presentations and papers, numerous digital artworks and animated films. Louise's professional background is in 3D layout/previsualisation for animation production and visual effects coordination on feature films. She directs Griffith Film School's Honours Program and its CGI, Post Production and VFX courses, and manages its motion capture facilities.

Director Statement

The decision to animate this film grew out of a number of events and people in my orbit across 2019 and 2020. I had recently completed work on another animated VR film (Thirst, 2018), so I had some experience in the medium. The producer of Thirst, my Griffith Film School colleague Dr Peter Moyes, discussed with me the possibility of using VR to explore and re-imagine the work of abstract animation pioneers such as Norman McLaren and Oskar Fischinger. He talked about Jordan Belson and sound artist Henry Jacobs’ Vortex Concerts at the Morrison Planetarium in San Francisco in the fifties and sixties, which attempted to bring visuals and audio into a 360 space. He wanted to explore how this marriage of abstract animation and 360 audio-visual space might impact us today, using modern-day VR hardware and animation software.
I was considering his idea when, coincidentally, my brother sent me a birthday copy of The Toaster Oven Mocks Me, a memoir by Steve Margolis. The book documents his experience of growing up with synaesthesia, a neurological condition in which the stimulation of one sense can trigger a response in another. For example, when listening to music, a person with synaesthesia may spontaneously see colours and shapes. I wondered what these colours and shapes might look like, and how they might move and change in response to the music.
Shortly after this, Peter and I were introduced to music composer Mark Douglas Williams, who had recently completed a master’s degree in screen music that involved implementing ambisonic audio spatialisation in the Unity game engine. He was interested in how audio spatialisation might be created and applied to a musical composition in 360 video.
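For readers curious about what ambisonic spatialisation involves under the hood, the sketch below is a minimal, illustrative Python example - an assumption offered for explanation only, not the Unity pipeline Mark actually used - of encoding a mono sound into first-order ambisonics in the AmbiX convention (ACN channel order, SN3D normalisation), the four-channel format that 360 video platforms such as YouTube accept.

```python
import numpy as np

def encode_foa_ambix(mono: np.ndarray, azimuth_deg: float, elevation_deg: float) -> np.ndarray:
    """Encode a mono signal into first-order ambisonics (AmbiX: ACN order, SN3D norm).

    Channel order is [W, Y, Z, X]. Azimuth 0 is straight ahead, positive to the
    listener's left; elevation 0 is the horizon, positive above the listener.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono                            # omnidirectional component (SN3D weight 1)
    y = mono * np.sin(az) * np.cos(el)  # left-right axis
    z = mono * np.sin(el)               # up-down axis
    x = mono * np.cos(az) * np.cos(el)  # front-back axis
    return np.stack([w, y, z, x], axis=-1)  # shape: (samples, 4)

# Hypothetical example: a 440 Hz tone placed 90 degrees to the listener's left, on the horizon.
sr = 48000
t = np.arange(sr * 2) / sr
tone = 0.3 * np.sin(2 * np.pi * 440 * t)
foa = encode_foa_ambix(tone, azimuth_deg=90.0, elevation_deg=0.0)
```

The four channels jointly encode the source's direction; a decoder in the player then renders them binaurally (or to speakers) and rotates the sound field as the viewer turns their head.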
So the three of us decided to collaborate on a short animated film in which we could combine our interests and develop our understanding of how music, shape, movement and space relate to each other, and how they can be represented successfully in VR.

There was a steep learning curve on the technical side, where working in the VR space very much determined the visual and musical approaches. For example, we discovered that the human ear can easily discern the direction of a sound source panning horizontally (e.g. from left to right), but not one panning vertically (a sound source directly above the viewer is not easily distinguished from one directly below). Potentially this limitation could be countered with visual cues, and that's something we'd like to explore in more depth in future VR projects. We also learned from trial and error not to make the audio and visual spatialisation too ‘busy’. Too much complexity can overwhelm the viewer, so we had to be very selective about which musical motifs we animated, and the speed and direction of their movements.

There were many more challenges along the way, and we are still wrestling with the limitations that currently exist in delivering spatialised/ambisonic audio to the viewer. At the moment YouTube is one of the only online screening platforms where this is possible, and for VR headsets we have been able to generate formats that retain ambisonic properties for playback on GearVR and Quest. Hopefully the developers of VR video players will incorporate the capacity to play ambisonic audio (especially third-order ambisonics) in forthcoming versions of their software.
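As a purely illustrative companion to the localisation issue described above - again a hypothetical sketch, not part of our production toolchain - the snippet below generates two short first-order AmbiX test files: one sweeping a tone in a horizontal circle around the listener, the other sweeping it from directly below to directly above. Auditioning files like these through any binaural ambisonic decoder lets you hear for yourself the horizontal/vertical difference described above.

```python
import numpy as np
from scipy.io import wavfile

sr, dur = 48000, 4.0
t = np.arange(int(sr * dur)) / sr
tone = 0.25 * np.sin(2 * np.pi * 330 * t)

def foa_pan(signal, azimuth_rad, elevation_rad):
    """First-order AmbiX (ACN/SN3D) encoding with time-varying source angles."""
    w = signal
    y = signal * np.sin(azimuth_rad) * np.cos(elevation_rad)
    z = signal * np.sin(elevation_rad)
    x = signal * np.cos(azimuth_rad) * np.cos(elevation_rad)
    return np.stack([w, y, z, x], axis=-1).astype(np.float32)

# Horizontal sweep: one full circle around the listener at ear height.
horizontal = foa_pan(tone, 2 * np.pi * t / dur, np.zeros_like(t))

# Vertical sweep: the source rises from directly below the listener to directly above.
vertical = foa_pan(tone, np.zeros_like(t), np.linspace(-np.pi / 2, np.pi / 2, t.size))

wavfile.write("sweep_horizontal_ambix.wav", sr, horizontal)
wavfile.write("sweep_vertical_ambix.wav", sr, vertical)
```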