DUETS

Another area I want to keep experimenting with in the coming years is modulating another instrumentalist in a duo setting. I have tried my setup on recordings of various instruments (strings and percussion also caught my attention for the future…) and I enjoy how differently the template responds to each of them. I also had the opportunity to play live with two musicians.

Conversations with a Prepared Piano with Alex Pielsticker

To confront my setup with other instruments and observe how it responds to their specific sonic characteristics, I invited Alex for a few improvisational sessions. We explored the interaction between his prepared piano and my modulation setup and quickly realised it was much more of a duet than we had anticipated. Although I limited myself to processing his sounds, I was nevertheless able to have a real musical dialogue with him through sound treatments, looping, sampling… and we both realised the collaboration was inspiring us to play very differently, in the way a ‘traditional’ duet does. Here are a few examples of what we recorded. Manipulating the prepared piano sound was particularly interesting for me, since I want to explore ways to extend the playing techniques and specificities of instruments rather than turn them into digital instruments. It converges with my research on multiphonics and other extended techniques on the saxophone and how I could expand these electronically; this time I was trying to expand the ‘preparation’ of the piano. It was also interesting to blur the line between the prepared piano, which at times sounds very ‘electronic’, and the actual modulations I was applying, moving towards the creation of a true hybrid instrument.

LE MOT N’EST PAS LA CHOSE WITH ANDY DHONDT (composition for saxophone and Live electronics)

This is a composition for saxophone and Ableton Live, to be performed by a soloist and a ‘computer artist’. ‘The word is not the thing’ is a reflection on our perception of the world and on how much we can trust our knowledge (based on words) and our senses. It explores the interaction between instrument and computer and how far we can go towards the hybridisation of the instruments. It is played 100% live and uses no prerecorded samples. This is an important parameter for me because, as I wrote earlier, and with all respect to the great music recorded and performed with tape, I have come to the conclusion that I really don’t endorse this way of making electronic music. And in my case, since all the electronics are derived from the acoustic instrument, I don’t want to use tape. I considered preprogramming FX that would react to the instrument’s playing and its variations (through envelope and attack followers detecting what the instrument does), but even that felt very limiting and not really relevant to my artistic approach. Since the piece questions what comes from the saxophone and what comes from the computer, I chose to present a video of our performance instead of an audio recording.
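To give an idea of the kind of reactive FX control I decided against, here is a minimal sketch, assuming a simple one-pole envelope follower and a threshold-based attack detector. It is purely illustrative (in Python rather than in my actual Ableton Live setup), and all names and parameter values are hypothetical.

```python
import numpy as np

def envelope_follower(signal, sr, attack_ms=5.0, release_ms=80.0):
    """Track the amplitude envelope of a mono signal (simple one-pole follower)."""
    attack = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = attack if x > level else release   # fast when rising, slow when falling
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env

def detect_attacks(env, threshold=0.1, min_gap=2205):
    """Return sample indices where the envelope crosses the threshold upwards."""
    attacks, last = [], -min_gap
    for i in range(1, len(env)):
        if env[i - 1] < threshold <= env[i] and i - last >= min_gap:
            attacks.append(i)
            last = i
    return attacks

# Hypothetical use: the envelope could scale an effect's wet amount,
# and each detected attack could trigger a preprogrammed FX change.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
test_tone = np.sin(2 * np.pi * 220 * t) * np.clip(np.sin(2 * np.pi * 3 * t), 0, 1)
env = envelope_follower(test_tone, sr)
print(detect_attacks(env))
```

The sketch also makes the limitation visible: such a follower only reacts to loudness and onsets, which is exactly why this approach felt too narrow for the kind of interaction I am after.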

It starts with a purely acoustic multiphonic played by the saxophone that actually sounds a bit like electronic music. I then switch on a light bulb on stage to signal the beginning of the electronic modulation, and later switch it off and on again to explicitly link the two phenomena and contradict the audience’s aural impression that the sound was being modulated the whole time. This visual effect will need more staging than what I quickly set up in the studio, but I believe integrating a visual dimension into the performance will be an interesting challenge for the future, because computer processing on stage is in itself terribly uninteresting to watch and this is worth reflecting upon, especially if I want to question hybridity with my music. Benjamin Van Esser’s thesis on the visibility of electronic modulation on stage, (in)visible, is very interesting and reinforced my sense that this will need some reflection on my part in the future.

At various points in the composition, I reverse the usual paradigm in which the computer answers the saxophone lines: I manipulate the sound instantly and play back the pure, unaltered sound a few seconds later, so that it seems to answer the modulated sound. The most extreme moment is when the sax is replaced by my vocoded voice and answered by the ‘real’ saxophone sound at a point when the saxophonist is no longer playing. Right after that, I flatten all pitch variations with a real-time pitch shifter, and the actual melody only arrives later, delayed. Further on, the electronics perform a polyrhythmic pattern, later imitated by the saxophone (and loops) playing key sounds. Again, the electronic ‘extrapolation’ precedes the acoustic reality (though this time the electronics are derived from something else the saxophone played).

I chose to perform it without a click track, to focus more on the interpretation than on any form of correct, fixed performance; when needed, the tempo is given by a delay in the music we actually play rather than by external click sounds in the headphones (see the sketch below). The score can be found under the video link; in it, I indicate in red what I should trigger in relation to the saxophone part. Actually writing down the modulations, even though the score is meant for me in this case, was an interesting extra step in the preconceptualisation of what the FX should do at a given time, as it pushed me to fix more definitively what I would do, while still leaving room for instant dosing of the FX and interaction with the saxophone, since some parts are left to improvisation. The parts where I sample and loop what the sax plays are definitely best left partly improvised, because a real interaction can then happen, which is in my eyes preferable to total planning in this context.
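To make the tempo-by-delay idea concrete, here is a small hypothetical sketch (not taken from the actual Live set) showing how a delay time can be derived from the desired tempo, so that the repeats of what we play mark the pulse instead of a click in the headphones. Function names, the feedback delay implementation and the 90 bpm example are all assumptions for illustration.

```python
import numpy as np

def tempo_to_delay_seconds(bpm, subdivision=1.0):
    """Convert a tempo into a delay time: one repeat per (sub)beat."""
    return 60.0 / bpm * subdivision

def feedback_delay(signal, sr, delay_seconds, feedback=0.4, mix=0.5):
    """Very basic feedback delay line: the repeats outline the pulse."""
    delay_samples = int(round(delay_seconds * sr))
    buf = np.zeros(len(signal) + delay_samples)
    buf[:len(signal)] = signal
    for i in range(delay_samples, len(buf)):
        buf[i] += feedback * buf[i - delay_samples]
    return (1 - mix) * signal + mix * buf[:len(signal)]

# Hypothetical example: at 90 bpm, an eighth-note delay repeats every 1/3 s,
# so the performers can lock to the repeats instead of a click track.
sr = 44100
delay_time = tempo_to_delay_seconds(90, subdivision=0.5)   # 0.333... s
impulse = np.zeros(sr * 2)
impulse[0] = 1.0
echoes = feedback_delay(impulse, sr, delay_time)
print(np.nonzero(echoes > 1e-3)[0] / sr)   # echo times in seconds mark the pulse
```

The point of the sketch is simply that the pulse is embedded in the sounding music itself, which is why no external click is needed.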