Thursday, 1 October 2015
VART3492 :: Advanced Studio Technologies 2 - Rhythm Design notes
*To achieve 16 pulses over 8 plinths, each sequence plays twice
One of the most effective ways to add spice to rhythms by means of syncopation is to use it in such a way as to create what might be called a slight temporary confusion or cognitive insecurity, or metrical dissonance, or what Neil McLachlan calls a gestalt despatialization. In the case of 16-pulse timelines, which necessarily have four strong fundamental four-pulse beats felt at pulses 0, 4, 8, and 12, a gestalt despatialization can be introduced by first misguiding the listener into perceiving and cognitively predicting a sequence of three-pulse duration intervals. For this to happen there must be at least two initial IOIs of duration equal to three pulses. This means that the first three onsets must occur at pulses 0, 3, and 6. However this is not sufficient. To achieve the gestalt despatialization the last IOI of the rhythm must have duration equal to four units, and hence be determined by onsets at pulses 12 and 0. In this way the rhythm ends with two clear fundamental four-pulse beats.
The Rhythm that Conquered the World: What Makes a "Good" Rhythm Good? (to appear in Percussive Notes) by Godfried T. Toussaint
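A quick way to make this concrete: assuming the 16-pulse timeline Toussaint is describing is the son clave (onsets at pulses 0, 3, 6, 10 and 12), a few lines of plain C++ confirm the inter-onset interval structure he outlines - two opening IOIs of three pulses and a closing IOI of four:

// Sketch: verify the IOI structure described above, assuming the rhythm in
// question is the son clave with onsets at pulses 0, 3, 6, 10 and 12 of a
// 16-pulse cycle.
#include <iostream>
#include <vector>

int main() {
    const int pulses = 16;
    std::vector<int> onsets = {0, 3, 6, 10, 12};   // son clave (3-2)

    for (size_t i = 0; i < onsets.size(); ++i) {
        // IOI to the next onset, wrapping around the cycle
        int next = onsets[(i + 1) % onsets.size()];
        int ioi  = (next - onsets[i] + pulses) % pulses;
        std::cout << "onset " << onsets[i] << " -> IOI " << ioi << "\n";
    }
    // Prints IOIs 3 3 4 2 4: two initial three-pulse intervals (the misdirection)
    // and a final four-pulse interval from pulse 12 back to pulse 0.
    return 0;
}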
Thursday, 24 September 2015
VART3492 :: Advanced Studio Technologies 2 - Bjorklund Algorithm for Arduino
// Bjorklund (Euclidean) pattern generator for Arduino.
// Distributes `pulses` onsets as evenly as possible over `steps` slots.
int steps = 8;
int pulses = 5;

int remainder[16];
int count[16];
int level;
int stepstatus = 0;              // write position into the pattern track

unsigned long startMicros;
unsigned long endMicros;

char trackArray[8][16];          // 8 tracks of up to 16 steps; track 1 is used here

void setup() {
  Serial.begin(9600);
}

void loop() {
  delay(3000);

  startMicros = micros();
  compute_bitmap(steps, pulses);
  endMicros = micros();

  Serial.print("For ");
  Serial.print(steps);
  Serial.print(" steps and ");
  Serial.print(pulses);
  Serial.print(" pulses, the pattern is:");
  Serial.println();
  for (int i = steps - 1; i >= 0; i--) {   // print from the last slot to the first
    Serial.print(trackArray[1][i]);
  }
  Serial.println(); Serial.println();

  Serial.print("Bjorklund took ");
  Serial.print(endMicros - startMicros);
  Serial.print(" microseconds.");
  Serial.println(); Serial.println();

  delay(5000);
}

// Compute the Euclidean distribution of num_pulses over num_slots using
// Bjorklund's algorithm (structurally the same as Euclid's gcd algorithm).
void compute_bitmap(int num_slots, int num_pulses) {
  if (num_pulses > num_slots) { num_pulses = num_slots; }

  int divisor = num_slots - num_pulses;   // number of rests
  steps = num_slots;
  pulses = num_pulses;
  remainder[0] = num_pulses;
  level = 0;

  do {
    count[level] = divisor / remainder[level];
    remainder[level + 1] = divisor % remainder[level];
    divisor = remainder[level];
    level = level + 1;
  } while (remainder[level] > 1);

  count[level] = divisor;

  stepstatus = 0;   // reset the write position before each rebuild,
                    // otherwise repeated calls overrun trackArray
  build_string(level);
}

// Recursively unfold the count/remainder structure into a string of
// '1' (onset) and '0' (rest) characters in trackArray[1].
void build_string(int level) {
  if (level == -1) {
    trackArray[1][stepstatus] = '0';
    stepstatus = stepstatus + 1;
  }
  else if (level == -2) {
    trackArray[1][stepstatus] = '1';
    stepstatus = stepstatus + 1;
  }
  else {
    for (int i = 0; i < count[level]; i++)
      build_string(level - 1);
    if (remainder[level] != 0)
      build_string(level - 2);
  }
}
Thursday, 10 September 2015
VART3492 :: Advanced Studio Technologies 2 - Bjorklund Algorithm & World Music
A new family of musical rhythms has been described, called Euclidean rhythms, which are obtained by using Bjorklund’s sequence generation algorithm, which has the same structure as the Euclidean algorithm. It was shown that many rhythms used in world music are Euclidean rhythms. Some of these Euclidean rhythms are also Euclidean strings [11]. The three groups of Euclidean rhythms listed in the preceding section reveal a tantalizing pattern. Those Euclidean rhythms that are also Euclidean strings (the first four of group one) are favoured in classical, jazz, Bulgarian, Turkish and Persian music, but are not popular in African music. The Euclidean rhythms that are neither Euclidean strings nor reverse Euclidean strings (the first two of group three) are used only in sub-Saharan African music. Finally, the Euclidean rhythms that are reverse Euclidean strings (the second group) appear to have a much wider appeal. Finding musicological explanations for the preferences apparent in these mathematical properties raises an interesting ethnomusicological question. The Euclidean strings defined in [11] determine another family of rhythms, many of which are also used in world music but are not necessarily Euclidean rhythms, as for example (1221222), an Afro-Cuban bell pattern. Therefore it would be interesting to explore empirically the relation between Euclidean strings and world music rhythms, and to determine formally the exact mathematical relation between Euclidean rhythms and Euclidean strings.
The Euclidean Algorithm Generates Traditional Musical Rhythms - Godfried Toussaint (School of Computer Science, McGill University 2005)
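The structural parallel with Euclid's algorithm can be made explicit with a minimal sketch (plain C++, using E(5,8) purely as an example). The quotient/remainder sequence printed here closely mirrors what the count[] and remainder[] arrays hold in the Arduino Bjorklund sketch earlier in these notes:

// Euclidean algorithm on (pulses, rests), e.g. E(5,8): 5 onsets, 8 - 5 = 3 rests.
#include <iostream>

int main() {
    int a = 5;   // pulses (onsets)
    int b = 3;   // rests (slots - pulses)
    while (b != 0) {
        std::cout << a << " = " << (a / b) << " * " << b
                  << " + " << (a % b) << "\n";
        int r = a % b;
        a = b;
        b = r;
    }
    std::cout << "gcd = " << a << "\n";   // 1 for E(5,8)
    return 0;
}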
[Image: Max/MSP implementation of Bjorklund algorithm]
[Image: Max/MSP implementation of Bjorklund algorithm - Circular pulse distribution (Model for Elements Installation)]
Thursday, 3 September 2015
VART3492 :: Advanced Studio Technologies 2 - Further Polymeter research
Let us begin with a look at the music of Steve Reich, specifically one of his seminal works, Drumming (1970-71). The piece was composed over the course of roughly one year following Reich's return from Ghana, where he studied drumming with Ewe master drummer Gideon Alorworye. Reich recorded each lesson with Alorworye and afterward played the tapes back at a slow enough speed so as to be able to transcribe the individual patterns. By doing so he noticed that different instrumental patterns present in Gahu music, although rhythmically compatible either by a simple or polyrhythmic relationship, were not bound by any sort of attachment to the first downbeat of each measure.
The way in which Reich has established the measure lines for each pattern in his transcription is a clear indication of his perception of meter in this type of music. Rather than simply dropping all of the patterns into 4/4, Reich maps the relationship between each instrument in a polymetric, albeit very closely related, way. The top gong gong line is an eight quarter note long phrase, whereas the rattle part directly below is exactly half the length at four quarter notes. The third line, a one quarter note long kagan pattern, is basically the tactus of the whole piece but displaced by one eighth note. The bottom three lines are all four quarter notes long but displaced by one and a half quarter notes (kidi), one quarter note (sogo), and a sixteenth note (agboba), respectively. This rhythmic displacement is where we start to see the influence of Reich's time in Africa on his compositional approach to Drumming. The variability of the displacement is crucial to our understanding of Drumming and how the underlying process of rhythmic phasing works.
[Image: MIDI transcription of 'Drumming']
The sense of becoming fixed in time (to paraphrase Gretchen Horlacher) frequently goes hand-in-hand with the sense that two separate motives are actually fused into a single pattern. Put another way: because displacement dissonances do not challenge our sense of metrical periodicity (though they may challenge our sense of downbeat), two ostinati that form a grouping dissonance are far more likely to be differentiated as separate motivic strains than are two ostinati that form displacement dissonances. By intermingling these two types of metrical dissonance — often willy-nilly — Adams projects a freer relationship between two textural lines. This is one feature that sets his minimalist technique apart from that of Steve Reich or Michael Torke, who generally don’t oscillate between metrical dissonance types quite so readily.
The Sonic Illusion of Metrical Consistency in Recent Minimalist Composition - Michael Buchler, Florida State University, 2006
Titles to check out:
Hearing in Time: Psychological Aspects of Musical Meter by Justin London
Meter as Rhythm by Christopher F. Hasty. New York and Oxford: Oxford University Press, 1997.
Thursday, 27 August 2015
VART3492 :: Advanced Studio Technologies 2 - Polymeter Studies
Basic Polymeter studies & Rhythmic Displacement:
Asymmetric lengths force pulses to shift around each other.
(The equal measure appears to shift as the odd measure begins a new cycle; we perceive the odd measure as foreground (figure), since it contrasts prominently against the constant equal measure, which reads as background (ground).)
[Image: 7/8 (top) against 8/8 (middle: downbeat/kick, lower: downbeat/snare) - taking 8 measures to resolve]
[Image: 9/8 (top) against 8/8 (middle: downbeat/kick, lower: downbeat/snare) - taking 10 measures to resolve]
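A throwaway C++ check of these resolution lengths (the captions appear to count up to and including the 8/8 bar on whose downbeat the two parts realign):

// Bars until two meters realign, counted in 8th-note pulses.
#include <iostream>

int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }
int lcm(int a, int b) { return a / gcd(a, b) * b; }

void resolve(int odd, int even) {
    int pulses = lcm(odd, even);
    std::cout << odd << "/8 vs " << even << "/8: realign after "
              << pulses << " pulses = " << pulses / odd << " bars of "
              << odd << "/8 = " << pulses / even << " bars of "
              << even << "/8\n";
}

int main() {
    resolve(7, 8);   // 56 pulses: 8 bars of 7/8, 7 bars of 8/8
    resolve(9, 8);   // 72 pulses: 8 bars of 9/8, 9 bars of 8/8
    return 0;
}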
Thursday, 20 August 2015
VART3492 :: Advanced Studio Technologies - Class Notes (Phil Brophy)
Interesting idea:
Cubism - multiple perspectives - a reaction to being sick of looking at one perspective
(can this be appropriated to music?) // crux of personal installation work
How to approach this with rhythm?
Is Polymeter cubist? *(multiple time signatures playing over each other - multiple perspectives)
Baroque - "a strange mix of details" - broph
suggested to oliver to write a (drumcentric) breakcore track then remove all the drums
-art should be the answer to interesting questions/experiments - shouldn't it??
sibilance as a creative tool -
you can communicate with just consonants
remove sibilance/consonants/transients - pure vowels
mush, warm mushy noise
http://www.dariusjalexander.com/2015/07/14/mixers-tuning-phase/
http://www.britannica.com/art/isorhythm
isorhythm, in music, the organizing principle of much of 14th-century French polyphony, characterized by the extension of the rhythmic texture (talea) of an initial section to the entire composition, despite the variation of corresponding melodic features (color); the term was coined around 1900 by the German musicologist Friedrich Ludwig.
A logical outgrowth of the rhythmic modes (fixed patterns of triple rhythms) that governed most late medieval polyphony, isorhythm first appeared in 13th-century motets, primarily in cantus firmus or tenor parts but occasionally in other voices as well. Abandoning all modal limitations, the isorhythmic motet of the 14th century managed to derive decisive structural benefit from the systematic application of given rhythmic patterns without the inescapable dance associations of its 13th-century predecessor. The first great master of the isorhythmic motet was Guillaume de Machaut (c. 1300–77), but instances of isorhythm occurred as late as the early work of the 15th-century Burgundian composer Guillaume Dufay (c. 1400–74). As an analytical concept, isorhythm has proved valuable in connection with musical practices quite unrelated to those of the European Middle Ages—for example, peyote cult songs of certain North American Indian groups.
Thursday, 18 June 2015
VART3491 - Advanced Studio Technologies :: NeuralOSC
NeuralOSC is a prototype interface for the sonification of live brainwave data that uses neural feedback to entrain both focus and relaxation in a gestural musical environment. The user is coaxed to engage with musical stimuli generated from their own neural oscillations in real time, from a single dry-electrode interface that communicates wirelessly via Bluetooth into MaxMSP. The user's focus is entrained through the articulation of filter movements and pitch and timbral modulations, while a state of relaxation is used to lower the amplitude and tonal quality of other elements. NeuralOSC is an immediate and engaging way to explore neurofeedback and an interesting environment in which to play music!
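The actual mapping lives in the Max patch, but as an illustrative sketch (plain C++, with placeholder parameter ranges) the entrainment logic amounts to scaling the attention and meditation values (0-100) in opposite directions:

// Illustrative only - the real mapping is a Max/MSP patch. Attention (0-100)
// opens a filter and deepens pitch/timbre modulation; meditation (0-100)
// pulls the background layer's amplitude down. Ranges are arbitrary examples.
#include <algorithm>
#include <cstdio>

struct Params { float cutoffHz, modDepth, bedGain; };

Params mapNeuralValues(float attention, float meditation) {
    float a = std::clamp(attention,  0.0f, 100.0f) / 100.0f;
    float m = std::clamp(meditation, 0.0f, 100.0f) / 100.0f;
    return { 200.0f + a * 7800.0f,   // cutoff: more focus -> brighter
             a,                      // modulation depth: more focus -> more movement
             1.0f - 0.8f * m };      // bed gain: more relaxation -> quieter bed
}

int main() {
    Params p = mapNeuralValues(70.0f, 30.0f);
    std::printf("cutoff %.0f Hz, mod %.2f, bed %.2f\n", p.cutoffHz, p.modDepth, p.bedGain);
    return 0;
}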
Wednesday, 10 June 2015
VART3510 - AkE Internship :: Final Report Responding to Host Feedback
My internship experience gave me insight into the research application process and the speed at which the process of undertaking academic research moves. Darrin provided guidance in shaping the research and vital assistance with networking within the University and the protocols involved with preparing a research proposal. The process felt fluid and I was given freedom to set my own research goals, which we developed in tandem. I was given the opportunity to work toward a postgraduate level of academic inquiry and developed some of the knowledge and skills required to undertake further research. Having had no prior experience with academic research projects, I feel this is excellent preparation for the future, especially given the scope for this initial study to lead to further research.
Darrin and I have similar interests artistically so thankfully at all times I felt we were approaching the same goal in terms of field of inquiry - I think we are both genuinely excited to pursue this research topic. Darrin's own experience and knowledge generated from his postgraduate studies has been very beneficial. Along with my own literature research, his recommendations of texts have given me an ideal knowledge base to become familiar with when looking at the area where music, sound and neuroscience intersect.
The research project is scheduled to occur next semester, when I will continue to gain experience in undertaking the research with participants off campus at Monash and learn the process of publishing the findings, to which my name will be attached.
It is my intention to apply for the honours program and eventually move to PhD. This has been a great experience in moving toward this direction for the future.
HOST ASSESSMENT OF STUDENT ENGAGEMENT
Criteria
Ranking: circle (1 - not evident, 10 – high)
A) Tasks and Presentation: 8
B) Reliance and Independence: 8
C) Skills and Application: 8
D) Communication and collaboration with other staff: 8
Comments from Supervisor:
Jay’s original research project was to explore how binaural beats entrain particular brain states, with a view to collecting EEG data to control motion simulation systems on which the audience member would be seated. Through preliminary discussions it was decided to separate the research elements from the artistic outcomes – as the latter had the potential to confound the data through both their experiential complexity and the inherent recursion of the system.
In preparation for a lit review, and in the process of discussions with other departments, the existing data was surveyed and it was decided that such a seam of pure research stood to be too dry to explore within a fine art context. In order to find a meaningful relationship between sonic practice and psychophysiological exploration a new topic was considered – whether brain patterns would be substantially different for sound art versus traditional musics. The question at its core was whether genres such as noise and electroacoustic composition might be processed more as an environment, and not engage the language centres in the brain which have been linked to tonal music grammar.
Jay’s internship was designed to give him exposure to academic research practices and procedures. The key outcomes have been:
- Research into generating technical access for more professional equipment
- Exploration of different departments at uni
- Preparation for lit review
- Exploration of ethics procedures
- Experiment design
- Survey design
- Technical research (EEG)
The tests he will be undertaking have been designed, and an ethics application is in process with CHEAN. Work with the experiment will continue beyond the internship.
Thursday, 4 June 2015
VART3459 - Production Strategies :: Final Essay
I have predominantly spent my undergraduate degree exploring procedurally driven technical projects, developing project-specific design objectives and undertaking them systematically. I have really enjoyed the departure from working purely intuitively and the development of a clear line between the concept development and execution stages of a project. The process of reflection in each task has helped me realise this and build upon it.
My coursework has acted as a set of focal points for teaching myself electronic engineering and computer programming and for developing music theory skills. Rather than take the corresponding electives, I developed all of these skills within an arts-based practice. It is only now that I find I have to account for this. Unless I have an artistic objective for employing a technology, I do not find the process of learning that technology engaging in the slightest. Objective-oriented learning in conjunction with concept development has proven to be an effective strategy.
In terms of expression, I have found arts-based practice to allow more breadth of scope for following conceptual threads in the articulation of sonic design opportunities. Not being tethered to a summed stereo file is liberating to say the least. Of course the scope of stereo sound is vast, and the merits of being able to translate ideas while working to a reductive format are numerous - but stepping outside of those constraints in an artistic context has been akin to adding a third axis in Cartesian space.
At this juncture I'm forced to consider why I am drawn to an installation arts practice and to assess how divergent it is from my musical practice. Over the course of my degree I've neglected music almost entirely; I'm not convinced this is accidental. I'm compelled by the research-driven component of installation design and the ability to articulate ideas spatially. The spatiality of stereo sound compositions is almost entirely psychoacoustic; I have a need to investigate tactility and materiality in a sonic context. I feel that one practice informs the other. The perspective gained from experimenting in an installation context has deepened my understanding of sound outside of a musical framework.
This course in particular has required me to reassess and clarify design objectives within my music. Early on I created a laundry list of design elements I am still yet to fully investigate. Elements of my early music were driven by the exploration of chaos and order, steganography, esotericism and temporality. Later I allowed myself to be indoctrinated by conventional compositional expression and standardised production techniques leaving most of this behind. I became tired of music. Installation work allowed an avenue to continue exploring these concepts from a new perspective.
This is perhaps why I selected a project for this course that is musical in nature, but expressed in an installation context. One element I identified early in the course that I wanted to explore and had yet to in academia was algorithmic composition. My motivations behind this are not technical. Chaos and order remain the primary elements that fascinate me in music. Being able to construct and permutate complex rhythms is integral to reconstructing the moments in music I find the most satisfying. To clarify, I have no interest in creating musical algorithms in the sense that renders the only performative gesture as a single catalytic keystroke. To me that is as fascinating as a screensaver. Or a pre-rendered composition. The actuation of process is identical.
To me, the performative possibilities of algorithmic composition are exciting. There is a gestural limit that greatly reduces the improvisational potential of live electronic music. This chasm is primarily what removes electronic music performance from the immediacy and excitement of live instrumental music, where what is lost in sonic potential is gained in the immediacy of performance and the range of improvisational possibilities. I guess I like to consider rhythm as a sound object in the way a drone musician considers tone: the ability to manipulate temporal sensation through dynamics and density. My feeling is that drone and noise remain the dominant modes of improvisational electronic music performance purely because of the immediacy of the interfaces used and the speed with which humans can interact with them. I would like to see the microtonal complexity of those performances carried over into an improvisational, gestural rhythmic context.
So why not a folio of music compositions? I don't find it to be a suitable way to conclude my degree. I came here to find and explore new modes of expression for sonic concepts. I intend to graduate having created a central installation work that may lead me to residency opportunities and post graduate studies. I am very interested in pursuing deeper academic enquiry into sound based research. Despite having to sacrifice the performative nature I outlined above, I wanted to find a way to incorporate algorithmic composition into an installation piece.
My proposed piece "Elements" explores the nature of matter and meter. Eight metal bowls of ice are arranged in a one-metre radius, each suspended above a glass jar. Actuators are attached to both the bowls and the jars, and Euclidean divisions of the tempo are dispersed polymetrically around the circle. As the ice melts, the jars gradually fill with water, altering the resonance of each strike.
The title of the piece references Euclid's Elements, the famous thirteen book treatise by the Greek mathematician which contains his life's work.
Articulating this work electro-acoustically allows me to explore elements of materiality and visual aesthetics, elements that are not primary considerations when working digitally. The removal of the almost limitless freedom to sculpt sound in the digital domain has forced me to consider sound much more maturely. I am enjoying this. Being restricted to a minimalist sound palette with no post-processing forces me to consider how sound will react within the space. There is something liberating about this: being able to step back and just let sound exist, using only a small set of acoustic parameters to alter the sonic characteristics - characteristics that, on a nano-level, are somewhat aleatoric.
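For the "Elements" piece described above, a minimal Arduino-style scheduling sketch of the idea: one digital output per actuator, with each channel cycling through its own pattern length so the eight plinths drift polymetrically. Pins, tempo and patterns here are placeholders, not the installation's actual values; in practice the patterns would come from the Bjorklund generator.

// Sketch only: step 8 actuator channels through patterns of different lengths
// (polymeter), firing a pin briefly on each '1'.
const int NUM_TRACKS = 8;
const int PULSE_MS   = 15;                  // strike duration
const unsigned long STEP_MS = 125;          // one pulse step (example tempo)

int pins[NUM_TRACKS]    = {2, 3, 4, 5, 6, 7, 8, 9};       // placeholder pins
int lengths[NUM_TRACKS] = {16, 16, 12, 12, 9, 9, 7, 7};   // per-track cycle length
char patterns[NUM_TRACKS][17] = {           // '1' = strike, '0' = rest (placeholders)
  "1000100010001000", "1001001010010010", "100100100100", "101010101010",
  "100100100", "101101101", "1001010", "1010101"
};
int pos[NUM_TRACKS] = {0};
unsigned long lastStep = 0;

void setup() {
  for (int t = 0; t < NUM_TRACKS; t++) pinMode(pins[t], OUTPUT);
}

void loop() {
  if (millis() - lastStep >= STEP_MS) {
    lastStep += STEP_MS;
    for (int t = 0; t < NUM_TRACKS; t++) {
      if (patterns[t][pos[t]] == '1') {
        digitalWrite(pins[t], HIGH);         // fire actuator t
      }
      pos[t] = (pos[t] + 1) % lengths[t];    // each track wraps at its own length
    }
    delay(PULSE_MS);                         // crude strike length
    for (int t = 0; t < NUM_TRACKS; t++) digitalWrite(pins[t], LOW);
  }
}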
Saturday, 30 May 2015
VART3459 - Production Strategies Journal 10
Spatialised Euclidean rhythmic divisions prototyping:
Electro acoustic instead of synthesis/speakers??
Solenoids & Relay via Arduino over serial
Example (video not mine):
16 channel relay! 16 solenoids..
RELAY + ARDUINO LINKS:
http://www.instructables.com/answers/How-do-you-use-this-relay-module/
https://www.youtube.com/watch?v=sR3xmof8rZY
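From the links above, these relay boards are driven with one digital pin per channel, and many of the cheap modules are active-LOW - worth checking against the specific board. A minimal test pulse, with the pin number as a placeholder:

// Minimal relay/solenoid test: pulse one relay channel briefly.
// Many common relay modules are active-LOW (LOW = energised) - adjust if not.
const int RELAY_PIN = 7;          // placeholder pin

void setup() {
  pinMode(RELAY_PIN, OUTPUT);
  digitalWrite(RELAY_PIN, HIGH);  // start de-energised (active-LOW assumption)
}

void loop() {
  digitalWrite(RELAY_PIN, LOW);   // energise: solenoid strikes
  delay(20);
  digitalWrite(RELAY_PIN, HIGH);  // release
  delay(980);                     // roughly once per second
}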
Inspiration links from Phil:
Listening to the Reflection of Points
Toshiya Tsunoda
In order for a sound wave to be heard as sound, various physical and qualitative attributes must interpose.
Using the reception of footsteps as metaphor, Tsunoda’s installation features a table with small speakers and corresponding numbers of everyday objects. A sound wave moves through the speakers at fixed intervals, exploring the changes a sound wave makes to a fixed space when it progresses towards objects, collides with them and is reflected. As such it becomes an index for the fixed space, and generates changes as it moves. These changes are perceived by the observer and appear as occurrences. Reception of these occurrences means that the state of the space has been grasped, thereby completing the change generated.
Toshiya Tsunoda’s work is internationally recognized for its unique and conceptually rigorous take on sound installation and field recording. Born in Kanagawa, Japan in 1964, Tsunoda received his MFA from Tokyo National University of Fine Art and Music.
Floating Glass
Ros Bandt
Empty Vessels
ALVIN LUCIER
Empty Vessels (1997) belongs to a series of works in which Alvin Lucier studies the resonance characteristics of the smallest interior spaces. He has placed eight glass water containers and vases on pedestals along the space’s south wall. Microphones are placed in the openings of the empty vessels, and each is routed through limiters to speakers. The eight loudspeakers are positioned exactly opposite, along the space’s north wall. The amplifier in each microphone-loudspeaker system is selected in such a way that each strand of feedback is marked by the resonant characteristics of the respective vessel. This creates a controlled feedback field of resonant tones and their interferences in the ANX gallery space. When entering the field, the visitor disrupts the delicate balance of the system, creating new feedbacks with unexpected frequencies. Even a small head movement is enough to hear a multifaceted, fluctuating spectrum of pitches. Realization: Nicolas Collins; curator: Carsten Seiffarth. Read more about Alvin Lucier and his program in Oslo during the Ultima Festival.
Thursday, 28 May 2015
ARCH1372 - Development Notes
TILTING:
AUTOMATE CIRCULAR/SPIRAL MOVEMENT IN NODE? - <ARCHIMEDEAN PATCH>
- AS SUB PITCH RAISES AMPLITUDE SHOULD LOWER NOT RAISE [SLIGHTLY] - WEIGHT IS IN THE OTHER QUADRANTS ON A TILTED PLANE - PITCH RAISE REINFORCED BY SHEPARDS TONE?
- PITCH REINFORCED BY HYDRAULIC NOISES?
- GRANULAR PITCH CLOUDS / TONES
- EXPONENTIAL CURVES
- TRITONE PARADOX FOR OPPOSING QUADRANTS (ONE PITCH RAISES THE OTHERS APPEAR TO FALL)
- ^ STEPPED TONES [HALF OCTAVE APART] TIMBRE: LIKE "CREAKING PLATFORM" CROSS MAP RAISED PITCH WITH FALLING PITCH ACROSS OPPOSING QUADRANTS
- ** USE REAPER TO FINESSE ARTICULATION **
- MAX TO PROTOTYPE
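On the Shepard-tone idea: one common construction (a sketch only, not the installation patch) is octave-spaced partials whose gains follow a raised-cosine bell over log-frequency, so the perceived pitch can climb indefinitely while the overall spectrum stays fixed:

// Sketch of Shepard-tone partial weights: octave-spaced partials with a
// raised-cosine gain over log2 frequency. 'phase' in [0,1) is the position
// within one octave of the endless glissando. Values here are illustrative.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.141592653589793;
    const double fMin = 27.5;       // lowest partial (Hz), arbitrary
    const int    octaves = 8;       // spectral span
    double phase = 0.25;            // how far the tone has risen within an octave

    for (int k = 0; k < octaves; ++k) {
        double f = fMin * std::pow(2.0, k + phase);          // partial frequency
        double x = (k + phase) / octaves;                     // 0..1 across the span
        double gain = 0.5 * (1.0 - std::cos(2.0 * PI * x));   // raised-cosine bell
        std::printf("partial %d: %8.1f Hz  gain %.3f\n", k, f, gain);
    }
    return 0;
}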
AUTOMATE CIRCULAR/SPIRAL MOVEMENT IN NODE? - <ARCHEMEDIAN PATCH>
Friday, 15 May 2015
VART3459 - Production Strategies Journal 9
Arduino circuit with waterproof temp sensor, photoresistor and audio output
arduino + temp & light sensors in situ with piezo contact mic triggering generative tones and modulating live audio feed in maxMSP
Reflection: Best I could do given the circumstances. Notes for the future: generative, site-responsive works should feel more emergent from the space. We also should have directed people to enter one at a time. With a longer installation the body heat of the people would have altered the tone more. I felt the sounds generated were a decent approximation of the space. It felt more like a score than a sound design piece, but I think ultimately that was the intention. It was performed, but by a computer and electronic sensors.
Tuesday, 12 May 2015
VART3510 - AkE Internship Journal Week 7
With the research now being projected to occur after my Internship at AkE has concluded I've been looking at some of the other objectives I wanted to complete during the semester at AkE. Though the LIEF grant equipment we've requested access to at Monash is far superior to the NeuroSky hardware I modified, I've spent this session at AkE continuing to develop the software I started in week 1.
The patch receives 12 values from the TGAM1 board via Bluetooth (a rough normalisation sketch follows the list):
- "/eegdelta" : 1-3Hz
- "/eegtheta" : 4-7Hz
- "/eeglowalpha" : 8-9Hz
- "/eeghighalpha" : 10-12Hz
- "/eeglowbeta" : 13-17Hz
- "/eeghighbeta" : 18-30Hz
- "/eeglowgamma" : 31-40Hz
- "/eeghighgamma" : 41-50Hz
- "/attention" 0-100
- "/meditation" 0-100
- "/signal" 0-200
- "/raw" -2048. - 2048.
Friday, 8 May 2015
VART3459 - Production Strategies Journal 8
Group task preparation
Group is having difficulty selecting a space. Oli found a space in Trades hall.
We all seem to want to use a confined space. Oli suggested using arduino.
Ash wants to make it comedic/performative. I'm not sure I can contribute to that.
Backup space is the ladder area near the toilets in building 14, can we access it?
use arduino sensors to approximate/express a space we can't access musically?
-temp
-light
-contact mic?
Tuesday, 5 May 2015
VART3510 - AkE Internship Journal Week 6 :: Further refinement & CHEAN
Darrin has advised me of the College Human Ethics Advisory Network [CHEAN] and the process required to authorise research involving participants. He has advised me to outline any potential risk factors of the test design.
I've made note of the following considerations for our application to CHEAN:
*Volume - Decibel level /SPL using Earbuds
We can control for this using a suitable volume
*Dry electrode sensor considerations
Participants need to be informed that they cannot have any cuts or abrasions in the sensor area, jewellery, metal plates in the skull, electrical oversensitivity, or pacemakers.
*Psychological issues
Controllable; we will require that participants do not undertake the research if psychological issues are present.
Also, after filling out the CHEAN application form (Ethics Checklist for Negligible Risk Projects), questions 1, 2 and 8 had to be answered YES, which requires the Ethics Checklist for Low Risk Projects to be completed.
In the Ethics Checklist for Low Risk Projects I only identified one question that may prevent us from applying under the Low Risk category. Under administration of other substances or devices I ticked YES on the basis that we will be using the EEG device on our participants. Darrin is going to contact CHEAN to ascertain whether this does disqualify us from the low risk category.
We also decided to restrict the audio stimuli to two sources. We are both fans of the contemporary electronic noise musician Merzbow, so we opted to compare and contrast the neural oscillatory response to a single piece by Merzbow and a single piece by Mozart. Simplifying the list of auditory stimuli played to the participants allows us to avoid having to tightly control the timbral, historical and dynamic elements and energy levels of the compositional stimuli, while still allowing a piece by a contemporary electronic noise musician to be incorporated into the research.
This also follows Darrin's suggestion that the most opportune pathway in research is to begin with broad strokes and follow up with detail in subsequent studies - which feels much more suitable.
Refined research proposal:
Observation and recording of comparative brainwave activity changes in response to musical and noise-based sonic compositions, in order to examine whether a conventional musical framework is a significant component in the neural response to composed sonic stimuli.
Participants will be played a single piece by Mozart followed by a single piece by Merzbow. Responses to the stimuli will be measured via EEG over the duration of each composition, after a baseline response has been recorded. Respondents will also be surveyed via questionnaire following the experience.
I have modelled our Participant Survey after a previous study that Darrin had undertaken.
Saturday, 2 May 2015
VART3459 - Production Strategies Journal 7 [Post 2nd review]
Further development of Euclidean Rhythmic Distribution using Bjorklund algorithm in Max MSP
3 Polyrhythmic pattern generators triggering sample playback.
Midi export as a compositional tool:
Further implementation:
Polymetric timing
spatial presentation of rhythm [multichan] ?
kill sample perc / use synthesised perc
automate permutations
shape dynamics
create/modulate space through rhythmic density
vol ramps/velocity for temporal flux
controlled relational timbral changes
begin dualistic relationships [freq/pitch] move to triadic relationships
swing?
flam?
dan graham > surveillance installations
Tuesday, 28 April 2015
VART3510 - AkE Internship Journal Week 5
Working with the current research proposal below, I've been looking at developing the test design.
Test design: Observation and recording of brainwave activity changes in response to a range of musical and soundscape compositions, in order to examine whether a conventional musical framework is a significant component in the neural response to composed sonic stimuli. Participants will be played pieces of atonal music, noise music, soundscape & stochastic music in contrast to more conventional musical compositions such as pop, rock, jazz & classical. Responses to the stimuli will be measured via EEG.
- Participants are seated and given in ear monitors that will not obstruct the electrodes of the EEG device.
- A 30 second baseline EEG response is taken in between audio stimuli.
- EEG response is observed and recorded while participant is played audio stimuli.
- Participant is given a survey at the conclusion of the test where experiential biases and qualitative emotional responses are recorded.
Musical choice considerations:
With such a broad range of stylistic material the challenge has been to make the stimuli feel related in order to control the response.
Temporal, Timbral and Spectral content are real considerations. The "energy" and tempo of the music should be as similar as possible in order to control for brainwave response.
I've outlined some potential candidates for music selection to fulfil the criteria outlined in the research proposal:
Classical: Mozart - Symphony No. 9 in C Major
Energetic and Dynamic.
Atonal: Dane Rudhyar - Granites (1929)
Energetic and unusually dynamic for an atonal piece. Similar length.
Jazz: Duke Ellington - The Clothed Woman (1947)
Actually considered an atonal piece, this piece could control for the energy level of the previous piece.
Dynamic; its movement resembles the classical piece. Moments of energy.
Noise music: Edgard Varese - Ionisation (1929 - 1931)
Composition modelled on noise, closer to timbral quality of previous pieces
Stochastic Music: Xenakis - Pithoprakta (1955/1956)
Matches similar timbre and dynamics of previous pieces. Similar length.
Musique Concrete: Bernard Parmegiani - Violostries (1969)
Hard to find a musique concrete piece that matches the timbre and energy, but this Parmegiani piece feels musical and dynamic.
Rock: Amon Düül II - Between The Eyes (1972)
Very difficult to select a "rock" track that sits within the timbral, spectral and dynamic energy levels of the rest of the selected pieces, but I think this piece by Amon Düül II best fits the criteria.
Pop: Queen - Bohemian Rhapsody (1975)
Very difficult to control for bias, tone, length and energy levels. Pop is a very broad and culture bound genre.
Selected this based on its changing dynamics, timbral qualities and similar time period to the rest of the material.
Tuesday, 21 April 2015
VART3510 - AkE Internship Journal Week 4
I've taken the time to look into some of the other literature in AkE lab that Darrin suggested:
Daniel Levitin - This Is Your Brain On Music
David Huron - Sweet Anticipation (Music and the Psychology of Expectation)
Lerdahl & Jackendoff - A Generative Theory of Tonal Music
The content explored in these books is brilliant! Many topics I wanted to explore in the coursework of my program are contained within these books. Many of the theories explored in the discourse will be very beneficial when we eventually get to undertake the research. There is a wealth of information on the neuroscience of processing music, but far less on the topic of "sound art" - understanding the current literature that explores the neuroscience of music is vital when juxtaposing this with sound art in a research context.
I've ordered my own copies of these books, and I've borrowed the Levitin title from the library to try and get through it while we're working on this.
I've also listed the primary research questions we're looking to pursue:
Research Questions:
1: Is there a remarkable difference in the neural oscillatory response to contemporary sonic compositional works when compared to traditional musically structured composition?
2: Does the brain interpret these stimuli independently?
3: Does experiential bias affect these measurable responses? If so, how?
4: Does the neural oscillatory (and psychophysical) response to non-musical sonic compositional works match the pattern of the neural oscillatory response to environmental audio stimuli?