Tuesday, 31 March 2015

VART3510 - AkE Internship Journal Week 1

Began hardware modifications and development of custom EEG software for preliminary use.

The MindFlex consumer EEG device serves as an ideal prototyping solution for beginning to work with EEG data. The device contains the NeuroSky TGAM1 board, which can be reprogrammed to send raw waveform data at a baud rate of 57.6k, equating to approximately 512 packets of data per second (roughly one packet every 2 milliseconds). This is achieved either by desoldering a resistor on the board, or by sending a command byte via serial.


The default configuration of the MindFlex device is 9600 baud without raw output, which is unsuitable for measurement purposes. I soldered an HC-05 Bluetooth module to the TX and RX (transmit and receive) pins of the TGAM1 board to hijack the signal so I can send and receive packets in MaxMSP.
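To sanity-check the link before patching it into MaxMSP, the stream can be read in Python. A minimal sketch, assuming pyserial, the documented ThinkGear packet format (0xAA 0xAA sync, payload length, payload, inverted-sum checksum) and a placeholder port name; the 0x02 mode byte is what the TGAM documentation lists for 57.6k raw output, so treat it as an assumption to verify against the datasheet.

```python
# Minimal sketch: read raw TGAM1/ThinkGear packets over the HC-05 bridge.
# Assumptions: pyserial is installed, the port name is a placeholder, and
# the board is already in 57.6k raw mode (resistor hack or command byte).
import serial

port = serial.Serial("/dev/tty.HC-05", 57600)     # placeholder device name

# Per the TGAM docs (an assumption to verify), writing b"\x02" at the
# board's current baud rate requests 57.6k raw-output mode:
# port.write(b"\x02")

def read_packet(s):
    """Block until a checksum-valid ThinkGear packet arrives; return payload."""
    while True:
        if s.read(1) != b"\xaa" or s.read(1) != b"\xaa":
            continue                              # hunt for the two sync bytes
        plength = s.read(1)[0]
        if plength > 169:
            continue                              # invalid length, resync
        payload = s.read(plength)
        if (~sum(payload)) & 0xFF == s.read(1)[0]:
            return payload                        # checksum OK

while True:
    payload, i = read_packet(port), 0
    while i < len(payload):
        code = payload[i]
        if code >= 0x80:                          # extended row: code, vlength, value
            vlen = payload[i + 1]
            if code == 0x80 and vlen == 2:        # raw wave sample, signed big-endian
                print(int.from_bytes(payload[i + 2:i + 4], "big", signed=True))
            i += 2 + vlen
        else:
            i += 2                                # single-byte row: code, value
```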



The MindFlex device uses a monopolar montage EEG collection system, referencing a single active dry electrode against two common reference electrodes attached to the earlobes. This is in contrast to bipolar montage collection, which references two active scalp sites, commonly using a wet electrode system.

The NeuroSky device has been tested against Biopac wet-electrode, medical-grade data acquisition hardware used in research published in peer-reviewed journals, and its results are comparable, making it a suitable prototyping solution.




I've started some basic preliminary research into brainwave entrainment using binaural beats. I'm also looking into isochronic tones, as per Darrin's suggestion.
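For reference while reading, generating a binaural beat is trivial: one sine tone per ear, offset by the target beat frequency. A minimal sketch using numpy and the standard-library wave module; the 200 Hz carrier and 10 Hz (alpha-range) offset are illustrative values, not settings from the study.

```python
# Minimal binaural beat sketch: left ear 200 Hz, right ear 210 Hz,
# so the perceived beat is their 10 Hz (alpha-range) difference.
import numpy as np
import wave

SR, DUR = 44100, 30.0        # sample rate, duration in seconds
CARRIER, BEAT = 200.0, 10.0  # illustrative values only

t = np.arange(int(SR * DUR)) / SR
left = np.sin(2 * np.pi * CARRIER * t)
right = np.sin(2 * np.pi * (CARRIER + BEAT) * t)
stereo = np.stack([left, right], axis=1)       # (samples, 2) -> interleaved L/R

with wave.open("binaural_10hz.wav", "w") as f:
    f.setnchannels(2)
    f.setsampwidth(2)                          # 16-bit PCM
    f.setframerate(SR)
    f.writeframes((stereo * 0.5 * 32767).astype(np.int16).tobytes())
```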




Friday, 27 March 2015

VART3459 - Production Strategies Journal 3

Week 3 - 
Academic paper on Meshuggah:
  • Re-casting Metal: Rhythm and Meter in the Music of Meshuggah
  • JONATHAN PIESLAK
  • Music Theory Spectrum
  • Vol. 29, No. 2 (Fall 2007), pp. 219–245
"In the first section of this essay, I examine rhythm and meter in Meshuggah’s music from 1987–2002, which is based on three specific techniques: large-scale odd time signatures, mixed meter, and metric superimposition. Scholars like Mark Butler, Walter Everett, and David Headlam employ models for the rhythmic analysis of pop-rock music based on ideas of meter, hypermeter, and metric dissonance developed by Harald Krebs and William Rothstein. These methods provide a useful framework for my discussion of Meshuggah’s music during this period.

....


This type of metric superimposition, or overlay, characterizes many Meshuggah songs and is articulated typically through the instrumental texture, where the guitars, bass, and pedal bass drum are based on a large-scale odd time signature and mixed meter while the cymbals (or some other instrument of the drum set, usually a hi-hat) maintain a steady quarter-note pulse that expresses a symmetrical hypermetric structure."
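The arithmetic behind that superimposition is easy to demonstrate: an odd-length riff cycle only realigns with the steady quarter-note pulse at the least common multiple of the two cycle lengths. A small Python sketch; the 23-sixteenth riff length is an illustrative value, not one of Pieslak's examples.

```python
# Metric superimposition sketch: an odd riff cycle (23 sixteenths,
# an illustrative value) against a steady 4/4 quarter-note pulse.
from math import gcd

RIFF, BAR = 23, 16                      # cycle lengths in sixteenth notes
realign = RIFF * BAR // gcd(RIFF, BAR)  # least common multiple
print(f"Realigns every {realign} sixteenths = "
      f"{realign // BAR} bars of 4/4 ({realign // RIFF} riff cycles)")

# Render a few bars: X = riff downbeat, o = quarter-note pulse, . = other
for bar in range(4):
    print("".join(
        "X" if s % RIFF == 0 else "o" if s % 4 == 0 else "."
        for s in range(bar * BAR, (bar + 1) * BAR)))
```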




Creative AI: Computer composers are changing how music is made: http://www.gizmag.com/creative-artificial-intelligence-computer-algorithmic-music/35764/
Euclidean rhythms (a small Python sketch follows these links):

Euclidean sequencer in browser: http://www.groovemechanics.com/euclid/
Euclidean sequencer for Raspberry Pi: http://xavierriley.co.uk/neutron-accelerators-and-drum-machines-with-sonic-pi/
Euclidean sequencer in MaxMSP + Python: http://pattr.ru/rhythm-happy-failure.html
Euclidean sequencer in Ruby: http://blog.noizeramp.com/2008/10/26/rhythm-generation-with-an-euclidian-algorithm/

Factoring, Euclidean algorithm and rhythms: http://bbolker.github.io/math1mp/notes/week5A.html

Algorhythms: Generating some Interesting Rhythms: http://www.maths.usyd.edu.au/u/joachimw/talk2.pdf
MATHEMATICAL INVESTIGATIONS INTO RHYTHM: http://vrs.amsi.org.au/wp-content/uploads/sites/6/2014/09/Adam_Rentsch_AMSI_report.pdf

Algorhythms:Sonic Journeys and Emergent Patterns in Rhythmic Music: http://www.eca.ed.ac.uk/sites/default/files/documents/news/Algorhythms_Dave-House.pdf

Bjorklund Algorithm Python Script: http://brianhouse.net/files/bjorklund.txt
Euclidean Rhythm Generator using only MaxMSP (No Externals/Abstractions): https://cycling74.com/forums/topic/using-euclideanbjorklund-algorithm-for-rhythm-generation-purely-in-max/
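
For my own notes, the core algorithm is tiny. A minimal Python sketch of the Euclidean/Bjorklund idea using the simple accumulator formulation; it produces a rotation of the canonical necklace, which for rhythm purposes is the same pattern.

```python
def euclid(pulses, steps):
    """Distribute `pulses` onsets as evenly as possible across `steps` slots."""
    pattern, bucket = [], 0
    for _ in range(steps):
        bucket += pulses
        if bucket >= steps:       # accumulator overflow: place an onset
            bucket -= steps
            pattern.append(1)
        else:
            pattern.append(0)
    return pattern

print(euclid(3, 8))   # [0, 0, 1, 0, 0, 1, 0, 1] -- a rotation of the tresillo
print(euclid(5, 16))  # gaps of 3,3,3,3,4 -- the E(5,16) bossa-style necklace
```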


Acreil post via Gearslutz:


It's helpful to link multiple Euclidean patterns in series, and also to have a different counter/divider driving each part so you can get polyrhythms.

I've tried a lot of other ways to automatically generate drum patterns; mostly it ends up sounding like a train wreck. I even replaced the pattern ROM chip in a Korg KR-55B with a salvaged EPROM from something or other. The results weren't really compelling.

Another reasonably useful thing you can do is take a 16 (or whatever) step pattern and partition it into 4 parts. You can program a number of complete patterns and switch between them independently for each 4 step section. This is nice if you want some "variation within pre-determined limits". You may want to treat this differently for each sound.
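
The partition-and-switch idea in that last paragraph is straightforward to prototype. A minimal sketch with made-up placeholder patterns: several complete 16-step patterns are stored, and each 4-step section of every new bar is drawn independently from a randomly chosen source pattern, giving variation within pre-determined limits.

```python
import random

PATTERNS = [                 # illustrative hand-programmed 16-step patterns
    [1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0],
    [1,0,0,1, 0,0,1,0, 1,0,0,1, 0,0,1,0],
    [1,0,1,0, 0,1,0,0, 1,0,1,0, 0,0,1,1],
]
SECTION = 4                  # partition each bar into 4-step sections

def next_bar():
    """Assemble a 16-step bar, sourcing each section from a random pattern."""
    bar = []
    for start in range(0, 16, SECTION):
        bar.extend(random.choice(PATTERNS)[start:start + SECTION])
    return bar

for _ in range(4):
    print("".join("x" if step else "." for step in next_bar()))
```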




Possible non-algorithmic (hand-sequenced) polymetered + polyrhythmic examples

Using the 3 reference tracks as a basis to create a new track.


Start with a simple rhythmic marker (ae @ flex), create polymetering (Meshuggah), incorporate vocal structures (md + p73), analyse speech patterns and transpose to rhythm? <audio to midi>
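
One hedged route for the <audio to midi> step: detect onsets in a speech recording and quantise them to a sixteenth-note grid. A sketch assuming the librosa package; the filename and 120 BPM tempo are placeholders.

```python
# Sketch: speech rhythm -> step positions, assuming librosa is installed.
import librosa

y, sr = librosa.load("speech.wav")                          # placeholder file
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

BPM = 120                                                   # placeholder tempo
grid = 60.0 / BPM / 4                                       # one 16th in seconds
steps = sorted({round(t / grid) for t in onsets})
print(steps)   # 16th-note grid positions, ready to map onto MIDI notes
```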

Tuesday, 24 March 2015

VART3510 - AkE Internship :: Intention Statement


1. What you hope to gain from your experience at your host organisation
2. Your schedule at the organisation
3. The tasks or special project you have been assigned to
4. Information about the host organisation and why you selected this organisation in relation to your own career ambitions


I aim to learn and work within the framework of scientific research whilst exploring the potential for audio as a basis for academic study. I also aim to work toward a publishable paper on my findings. I will be developing skills that can be applied to the development of gestural interfaces and installation artworks, exploring the articulation of raw brainwave data through hydraulic kinetic gestures.

I have booked AkE Lab every Tuesday from 9:00am - 1:30pm for the duration of my internship. In this time I will be developing the software necessary to undertake the research project, then observing and recording the results from a small selection of participants. I have arranged a sample group of RMIT sound students to participate in the study, which I will balance against a control group of non-sound-focused volunteers.

I will be conducting a volunteer-based research study into brainwave entrainment via specific audio cues. I will also be experimenting with the use of specific audio stimuli as a modulation source for realtime EEG-driven gestures of the Thruxim Motion System based at AkE Lab, RMIT University Melbourne. The study will be supervised by Darrin Verhagen, but will be entirely self-directed. I will be journaling the development process with preparatory research, technical notes and personal reflection.


AkE (pronounced "ache") is a multidisciplinary research, teaching and learning laboratory using motion simulators and 4D cinema seating to explore relationships between sound, movement and vision.  AkE Lab presented an opportunity to work on an academic research project on campus at RMIT. This opens exciting pathways at a postgraduate level.

ARCH1372 - Major Project Outline

Project Title

Axes (/ˈæksiːz/)

What is your project about? 

By applying the fundamental properties of sound in a multidimensional spatial framework I will create an immersive kinetic sound experience that explores the relationship between sonic gestures and perceived space.


Why do you want to create it?

I want to utilise the full potential of the 16-channel array at SIAL to explore the possibilities of a purely aural kinetic space. By examining and applying the fundamental properties of sound to a spatial framework I aim to create a sense of immersion and engagement that makes a unified, tangible experience for the participants within the space. Through this process I aim to highlight the capacity of sound alone to create an objectively perceived space, and the potential physicality of sound when applied spatially.











Friday, 20 March 2015

VART3459 - Production Strategies Journal 2

Journal 2 - Post presentation


I played my RMIT application folio, stating that it really felt like the most recent collection of musical pieces I've worked on.

Phil picked up on a temporal quality being explored. He said, "Sounds like these pieces have been playing before we heard them," or words to that effect. Interesting, considering the compositions I enjoy most create a similar sense: the notion that when you look through a window, the world outside didn't begin the instant you engaged with the window; activity has been occurring in perpetuity. I'd like to explore this more in music. I think Phil was just commenting on the fades, but there is so much scope to compose without being clamped to the x axis with only a time vector to slide across it to the right.


3 key influences:

I would have liked to pick Xenakis and Cage pieces for this, but fundamentally I find their structures and concepts far superior to their aesthetics. The following artists left more of an impact on me sonically, as I enjoy their aesthetic as much as their technical skill. Parmegiani, Penderecki and Wendy Carlos are all classic composers I really enjoy.




Autechre - Live at Flex Vienna (1996)
Long-form minimalist performance, an exercise in restraint. I love this.





Meshuggah - In Death is Death [Catch 33]
Sections of this manage to capture order and chaos simultaneously.
Polymetric timing and odd tonal choices create this effect.




Machinedrum - Half The Battle // Prefuse73 - Radioattack
Using the voice as a rhythmic sound object allows hip-hop to be stripped of
contextual baggage while retaining and developing its aesthetics.




Key concepts and motives that can be transposed to new presentation models beyond non-visual stereo speakers/PA.

Development of a performative aspect to recording and presenting work.

Focused, directed work. Objective-driven experimentation. Conceptual works obfuscated with process.

Working toward a sound design portfolio.



Tuesday, 17 March 2015

VART3510 - Internship Program (AkE Lab) Project Outline

Volunteer based research with a small study group examining the relationship between audio stimuli and neural oscillations in an immersive kinetic environment.

Observing and recording the results of using specific audio stimuli as a modulation source for realtime EEG-driven gestures of the 2DOF Thruxim Motion System based at AkE Lab, RMIT University Melbourne.

Noise stimuli (Pink, White)
Binaural Beats

Scope to sonify the EEG data as sonic biofeedback played to the participants.

Scope to test claims that binaural beat frequencies directly modulate brainwave activity, and to record the results.

Scope to study the possibilities of thought driven motion simulators and the potential relevance of sound as the key modulation source.



Required:
Participant Information Consent Form: CHEAN

Background research:
audio brainwave entrainment
binaural research
isochronic tones
neural oscillation

Meeting with Jenny Robinson

Preparatory hardware & software tests at AkE next week.


Monday, 16 March 2015

ARCH1372 - Convolution Experiments

Using FScape, I experimented with convolution on samples from my library.



Experiment 1:
For the input file I selected a short transient (a snare I'd made) from my own library.




For the Impulse Response I selected a long granular choir sample from a free pack I downloaded a long time ago.




The results were not impressive. I had assumed that a long pad sound would make a good impulse response for the short transient input file.




I reversed the sample selection for the input file and impulse file to test whether this would have some effect on the output. 




I was disappointed to discover that it didn't. 






Clearly I lacked an understanding of the convolution process. After taking a look at the FScape module documentation I took another approach to sample selection and experimented with the settings.

"the spectral characteristics of the IR (both amplitude spectrum and phase resonse) are superimposed onto the input file. When using normal sounds the output tends to emphasize the low frequencies unless the IR has a flat spectrum. It tends to emphasize the frequencies that exist both in the input and the IR and cancels those frequencies which are not present in either file."

I opted to use a dynamic sample from my library as the impulse response, with a sample of granular drone textures from a sonified image as the input file. I also experimented with the Convolution with Inversion mode, morphing IRs with the Correlate and Shift policy setting, along with Truncate Overlaps, which the documentation stated causes "interesting" results from buggy code.



Input file: granular drone textures from sonified image


Impulse file: dynamic digital transient + fx sample from personal library



Output file: Convolution with inversion mode, morph IRs with correlate and shift policy setting along with Truncate overlaps



Output file: Convolution with inversion mode, morph IRs with Polar Crossfade policy setting along with Truncate overlaps



Output file: Deconvolution mode, morph IRs with Polar Crossfade policy setting along with Truncate overlaps


Output file: Convolution mode, morph IRs with Polar Crossfade policy setting along with Truncate overlaps



An interesting method for generating textures when superimposing rhythmic content. I would like to use recordings of speech patterns as a source for future experiments.

Finally I decided to use an impulse response from a set I recorded myself last year at First Site Gallery at RMIT for a previous assignment. For these experiments I used a dry sample of a Koto playing Kojo no Tsuki as the input file, and a mouth-click impulse rather than one of my balloon-pop impulses.




Input file: Dry sample of Koto playing Kojo no Tsuki (The Moon Over the Ruined Castle)


Impulse file: Mouth Click + Impulse Response recorded at RMIT First Site Gallery


Output file: Convolution mode


Output file: Convolution with inversion mode + truncate overlaps


Output file: Deconvolution mode + truncate overlaps


Great results, especially with deconvolution mode. I have used convolution reverbs in this way in the past, but not with this utility. FScape has some interesting quirks I enjoyed exploring. I will definitely explore the rest of its modules and experiment further with convolution using other rhythmic and textural combinations.














Friday, 13 March 2015

VART3459 - Prod. Strategies Journal 1

Journal 1 - Reflection
How divergent practices have informed an ongoing design motive.

What am I inspired by?

Contextual transposition, steganography, Ordered Chaos
'Linguistic' Music - Phraseology

I want to study DSP algorithms and articulate them gesturally. The gestural component provides the "linguistic" articulation.


Non-algorithmic (hand-sequenced) polymetered + polyrhythmic examples


[Interventions / Actions]

Eno - Imaginary Landscapes
David Byrne - How Music Works

Transducer - Speak Percussion



Meshuggah - In Death Is Death (Cued from ~4:40)
Autechre - Live At Flex 1996 (cue from 7:30)
Machinedrum - Half the Battle
/ Prefuse73 - Radioattack




Tuesday, 10 March 2015

ARCH1372 - Image to text exercise


Find an image, describe the sound of this image in words.




Literal: Monochromatic, noisy, granulised, textural, ragged, smeared, viscous, abraded.

Symbolic: Static, Rorschach, Shroud, Newsprint, Stone Rubbing.

Alliteration: A Tattered tactile textured text textile.

Onomatopoeia:



Random Inkjet Print (Readymade) Julyyb via Deviantart

Thursday, 5 March 2015

ARCH1372 - Spatial Sound Modeling :: First Class Notes

SIAL ::

Major Assessable Work Development Notes

Working Title: 'AXES' ? Axis

Recorded composition could be adapted to an audience/gesture-driven realtime piece (Kinect)
Offset clusters through sonic cues / 'tilting'
Reinforce clusters through sonic cues / 'tilting'

Resistance / Attractant 

** Shepard Tone? Risset? **
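
A note-to-self on how the illusion works: octave-spaced partials glide upward while a raised-cosine envelope fades each one in at the bottom of the range and out at the top, hiding the wrap-around. A minimal sketch with numpy and the standard-library wave module; all parameter values are illustrative.

```python
# Minimal Shepard/Risset glissando sketch; parameters are illustrative.
import numpy as np
import wave

SR, DUR = 44100, 10.0
N = 8                              # octave-spaced partials
F_LOW = 20.0                       # bottom of the pitch space (Hz)

t = np.arange(int(SR * DUR)) / SR
signal = np.zeros_like(t)
for k in range(N):
    pos = (k + t / DUR) % N                         # octave position, wraps
    freq = F_LOW * 2.0 ** pos
    amp = 0.5 - 0.5 * np.cos(2 * np.pi * pos / N)   # fade in low, out high
    phase = 2 * np.pi * np.cumsum(freq) / SR        # integrate freq -> phase
    signal += amp * np.sin(phase)

signal /= np.max(np.abs(signal))
with wave.open("shepard.wav", "w") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```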


Key question: Is this ambisonics? 
Use of Abstracted sound choices/Non real space? Non representational sonic space = Is this Ambisonics? 

** Key distinction: Using space to understand sound, not using sound to understand space. **

Main assessment: Preparation

Readings
Idea Dev: 16 as a concept, mathematical significance http://en.wikipedia.org/wiki/16_%28number%29
_Pairings_ as a key concept for composition
Ambisonics Research, 

** Gestalt-Driven Design ** Unified experience, not subjective. Intentional, for greatest impact. The space _is_ the piece.
** Contextually reduced sonic space (No recordings) **
** Physicality driven / Objective experience intended **

** __Implied__ circular space.**

Donald Schön - "The designer designs not only with the mind but with the body and senses."
The Reflective Practitioner
Educating the Reflective Practitioner

Depth? (Frequency drop/rise) - Spatial Relationships through Frequency VERTICAL AXIS SPATIALITY
Spatial Relationships through Call & Response [Vocal/Voice Pairings]
Spatial Relationships through Mathematical Geometry/Symbology [RESEARCH]
Spatial Relationships through Panning (Combined with Freq for "Tilting")
Spatial Relationships through Time & Distance

Transient Experiment: 
Fire 100% dry transient through one side of a pair // fire 100% wet reverb/other fx through other pairing
Cross pairing (Firing both at each other simultaneously)

Spatial Relationships through Listener Placement [Static vs. Dynamic]

Spatial Orientation? Should ambisonic/spatial composition consider/maintain traditional notions of basic L+R orientation?

Compositional/Temporal Orientation: Should ambisonic/spatial composition consider/maintain traditional notions of basic L to R temporal orientation?

[Establishment of Spatial Language]


Example Performance: '40 Part Motet'

Binaural recording: http://www.amazon.com/Roland-CS-10EM-Binaural-Microphone-Earphones/dp/B003QGPCTE