Now, at the end of the technical exploration process, I was ready to implement more articulate gestures in the project.
I wanted to retain a sense of improvisation, aleatoricism, spatiality and dramatic sonic transformation across small gestures on the surface. I also wanted all structural, rhythmic and temporal qualities of the sound to be generated through gestural input. The surface was to be central, dictating all sonic possibilities through gestures - playing the surface along to rhythms generated from it, or externally, was not what I was aiming for.
The rationale for building the surface from parts, rather than using a freely available touch device like a tablet or phone, was to strip the surface of all association, so that engaging with it is less encumbered by preconceived notions of other purpose-built technical applications.
I personally find experimentation the most gratifying part of engaging with audio, especially when manipulating sound to the point where it creates its own reference. Sculpting sound using traditional composition methods is tedious, dull and antithetical to immediate engagement with sound. Environments built for composing live with deliberate gestures that translate to compositional axioms are inherently limited to those axioms, and therefore not exploratory and bound by context.
This project is an expression of my search to recontextualize and extend the ways I interact with audio by disassociating the process from the limitations explained above. Most contemporary digital mediums for manipulating audio are visual, referential, tempo-based and linear. Popular node-based modular environments are not as linear, but still suffer from their inherently visual nature. I find the most engaging sonic moments are those where the eyes stop receiving input - no longer listening, purely hearing, engaged and focused.
To facilitate that experience, the interface must be stripped of all meaning, and the process must be immediate and non-static. Interaction must have the sensation of oscillating between control and yielding to immersion.
With this in mind, I kept the spatiality controls logical and intuitive, while allowing the controls for timbre, pitch and tempo to be much less straightforward.
Spatiality Controls
Put simply: a left gesture with the first finger moves the sound from both sources to the left speaker;
a right gesture with the first finger moves the sound to the right speaker;
an upward gesture with the same finger moves the sound away from the rear speaker;
a downward gesture moves the sound toward the rear speaker.
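As a minimal sketch of this mapping, assume the first finger's position is normalized to [0, 1] on each axis (with y increasing upward) and drives three send levels, one per speaker; the function name and the linear curves are illustrative, not the project's actual mapping layer:

```python
def spatial_sends(x: float, y: float) -> dict:
    """Map the first finger's normalized position (0..1 per axis)
    to send levels for the left, right and rear speakers."""
    x = min(max(x, 0.0), 1.0)
    y = min(max(y, 0.0), 1.0)
    return {
        "left":  1.0 - x,  # a left gesture (x -> 0) pushes sound left
        "right": x,        # a right gesture (x -> 1) pushes sound right
        "rear":  1.0 - y,  # a downward gesture (y -> 0) moves sound to the rear
    }

print(spatial_sends(0.2, 0.8))  # mostly left, pulled away from the rear speaker
```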
Next, I made the second finger's controls dramatic global (whole-project) assignments, linking global tempo control and global clip pitch/transpose control to the x and y positions of the second finger's input.
The controls for both of these assignments were intended to span their entire range over the surface, giving full gestural control over the most traditionally static, governing values of composition.
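A sketch of that full-range mapping, again assuming normalized coordinates; the tempo bounds here are illustrative, and the +/-48 semitone transpose span is an assumption standing in for "entire range":

```python
def global_controls(x: float, y: float) -> tuple:
    """Map the second finger's normalized position (0..1 per axis) to
    global tempo (BPM) and global clip transpose (semitones)."""
    tempo = 20.0 + x * (200.0 - 20.0)   # x sweeps an assumed 20-200 BPM range
    transpose = (y * 2.0 - 1.0) * 48.0  # y sweeps an assumed +/-48 semitones
    return tempo, transpose

print(global_controls(0.5, 0.5))  # centre of the surface: 110 BPM, no transpose
```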
The first two steps for most composers are to commit to a key and a tempo for their composition.
I have questioned these two constraints in my compositional process since being made aware of them. Once I began adhering to these conventions - despite their fundamental advantages of assisting with crafting logical, complementary harmonic and structural elements, and of providing a mathematical framework for detailed, layered compositions that make sense to the human ear - I became increasingly dissatisfied with the process. The more you adhere to logical frameworks, the less you are expressing anything other than the scope of those frameworks, essentially joining the necessary dots until completion. I found my compositions to be nothing more than an alternative perspective on someone else's expression.
By making these two values wildly malleable, the surface interface becomes a tool for subverting these principles, theoretically allowing far more sonic possibilities than any systematic approach.
With these global values under gestural control, each clip's warp mode setting became key to the articulation of the sound, as each setting responds to global pitch and tempo changes differently.
Texture mode in particular is very responsive to dramatic changes in pitch and tempo, especially when used with percussive samples, which informed the sample choice for this source (a loop I had created generatively with a stochastic, rhythm-based drum sampler).
Re-Pitch mode is very responsive to changes in tempo: it ignores any clip transposition, raising and lowering the pitch based on the global tempo value. This allowed each source to be affected by the same changes in global values in independent ways, making their articulations divergent while still being bound by the same gesture.
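To sketch why the two modes diverge under the same gesture: Re-Pitch behaves like varispeed playback, so its pitch shift follows the tempo ratio directly (assuming a clip warped at its own original tempo), while a Texture clip holds its transposed pitch and changes grain character instead:

```python
import math

def repitch_shift_semitones(global_bpm: float, clip_bpm: float) -> float:
    """Approximate pitch shift of a Re-Pitch clip when the global tempo
    deviates from the clip's own tempo (varispeed behaviour)."""
    return 12.0 * math.log2(global_bpm / clip_bpm)

print(repitch_shift_semitones(240.0, 120.0))  # doubling the tempo: +12.0 semitones
```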
I settled on two clips, one on each of these warp settings, to function as source inputs. Initially their spatiality controls were mirrored across the channels, so that as one moved toward the opposing channel the other would do the opposite - but I found this kept both sources isolated from each other, which removed immersion. I changed the spatiality controls to those explained above to rectify this, making the sound feel more cohesive.
Using clip settings in this way to sculpt sound is essentially granular synthesis, as these warp modes are just grain envelopes. With granular synthesis as the basis of articulation, for the next gestural element I used an instance of Ableton's Grain Delay on each channel as a means to transform the sound further - each Grain Delay has pitch and time controls to contort the sound, and the delay aspect can disrupt any rhythmic linearity that may emerge from the basic global controls, or from the absence of gestural movement.
Feedback essentially became the volume control for the unrestrained emergent rhythms and repetitions; making this value gestural allows those rhythms to become the focus of the sound, eclipsing all other sounds as it reaches its full value. This gives the sensation of the sound repelling you sonically as you push it to its limit, the balance of this tension perpetuating your engagement with it.
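A sketch of one way to shape that gesture (which axis drives feedback is not specified above, and the curve and ceiling here are assumptions): a squared curve keeps most of the range restrained, with the runaway repeats arriving only near the top:

```python
def feedback_from_gesture(pos: float, max_fb: float = 0.95) -> float:
    """Map a normalized gesture position (0..1) to Grain Delay feedback.
    The squared curve and the 0.95 ceiling are illustrative choices."""
    pos = min(max(pos, 0.0), 1.0)
    return max_fb * pos ** 2
```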
I imposed some limitations on the delay times, shorter times making the articulations clearer and more definitive - silence being just as integral to the articulation of gestural movement as the sounds themselves. Shorter delay times allow the sound stage to clear and settle at a much more responsive rate. I assigned the delay time of the first source channel's Grain Delay to the x position of the first finger, and the delay time of the second source channel's Grain Delay to its y position, so that both were linked to the same gesture in opposing planes.
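As a sketch of that constrained mapping (the millisecond bounds are assumptions standing in for the "shorter times" limitation):

```python
def delay_times(x: float, y: float,
                min_ms: float = 1.0, max_ms: float = 120.0) -> tuple:
    """Map the first finger's position to both Grain Delays' delay times:
    channel 1 follows x, channel 2 follows y, within a deliberately short range."""
    span = max_ms - min_ms
    return min_ms + x * span, min_ms + y * span
```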
Both Dry/Wet values were assigned to the same y-position gesture, though in opposition - so that as one rose the other would fall. This linked the two Grain Delay effects to the same gesture, crossfading between each channel being fully processed by its grain delay and bypassing it. Linking these to the same gesture places the effect over both sources on a continuum and helps to make the gesture clearer and more meaningful despite the chaos being generated, making the user feel more engaged with the interface.
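The crossfade itself reduces to a complementary pair (a sketch; which end of the axis favours which channel is an assumption):

```python
def dry_wet(y: float) -> tuple:
    """Opposing Dry/Wet crossfade on one axis: as channel 1's Grain Delay
    becomes fully wet, channel 2's becomes fully dry, and vice versa."""
    y = min(max(y, 0.0), 1.0)
    return y, 1.0 - y  # (channel 1 wet amount, channel 2 wet amount)
```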
The pitch controls of both Grain Delays were similarly assigned to the same gesture, though instead of opposing the values across one plane, I opposed the planes - so that pitch control here would not feel like a linear continuum from left to right or top to bottom, instead encouraging diagonal gestures, as this mapping gives diagonal movements a clearer link from gesture to sound.
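A sketch of that opposed-plane assignment (which Grain Delay takes which axis, and the semitone depth, are assumptions): a purely horizontal or vertical movement detunes only one delay, while a diagonal sweeps both together:

```python
def grain_pitches(x: float, y: float, depth: float = 12.0) -> tuple:
    """Opposed-plane pitch mapping: Grain Delay 1's pitch follows x,
    Grain Delay 2's follows y, each centred on zero semitones."""
    return (x * 2.0 - 1.0) * depth, (y * 2.0 - 1.0) * depth

print(grain_pitches(1.0, 1.0))  # a full diagonal gesture: both up an octave
```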
I also accentuated the spatial controls by additionally assigning the panning of the source channels to the same gesture that routes audio to each stereo speaker.
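Folded into the earlier sketch, this is simply a pan value derived from the same x gesture that drives the left/right sends (the bipolar convention here is an assumption):

```python
def source_pan(x: float) -> float:
    """Pan the source channels with the same x gesture as the spatial sends:
    -1.0 is hard left, 0.0 centre, 1.0 hard right."""
    return min(max(x, 0.0), 1.0) * 2.0 - 1.0
```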
At this point I was satisfied that the gestures were meaningful, engaging and powerful enough to trigger dramatic articulations while creating an immersive, reactive sound stage that could be pushed spatially through intuitive yet obfuscated controls.
Though while trialling these assignments I had visually checked that the gestures were tied to controls in the software - and had abandoned projecting onto the surface for now - the mapping was designed to have no visual component, so as to be as conducive as possible to the ideals explained above.
The sounds and rhythms generated were sufficiently abstracted to complement this aim as well.