
Carmel Freeman
TEAM / Carmel Freeman
SONG / h e r b e r t s d r e a m
About the TEAM
I am a listener-composer with an indiscriminate attitude towards sound. My music is rooted in the corporeal: the performer in motion struggling to play an instrument, the feeling of music vibrating in your gut when you stand in front of a sound system at a dance club, the ways in which you listen to a space as a sounding object filled by the vibrations of a piano. Through sound I conjure the organic and living. By exploiting the instinctual qualities of sound I understand and challenge the extra-musical. I connect to myself and through my body learn new ways of experiencing the world. When we listen we remember, we feel, we question, we understand.
About the SONG
Our dreams are made of us. Machine intelligence is made of us too: its data is our thoughts. This song is a fever dream conjuring a collision of worlds, imaginary and real. This song is a confrontation of post-human corporeality. This song is the dissonance in moments waking and time asleep. This song is the grid and the spaces in-between it. This song is your breath and the hot air you use to dry your hands. This song is a song.
About the HUMAN-AI PROCESS
Waveform synthesis using timbre-transfer models allows you to imprint an incoming signal with the timbre of another instrument in real time. This means that you can generate sounds live which preserve features of your signal (in my case a synthesiser) like pitch and rhythm, but transform it to sound like the training data (speech sounds, saxophone sounds, choir sounds). Something interesting about this technique is that it can also hallucinate, making decisions that don't reflect the input at all. This unruliness can be very musically exciting (the pitch-and-loudness idea behind it is sketched at the end of this section). My experiments quickly allowed me to shape my sound in ways that more common software instruments (sample-based etc.) can't, but the compositional material still needed to be established.

I had a bass line in my head that I wanted to bring to life, and I imagined it with a J Dilla-inspired drum pattern that combines both a straight and a swung feel. I made a MIDI file of this kind of time-feel (sketched below) and used it to train a Markov chain in Max/MSP. This proved unsuccessful: the unpredictable nature of this kind of drum groove just wouldn't be generated by the models I built, and I had to do it myself.

I used ChatGPT to generate lyrics around the themes I wanted to explore - viscerality, post-humanism, corporeality - and got some interesting results after asking it to be more abstract several times. For the singing voice I used a Sonarworks voice AI plug-in that could transform recordings of my voice into someone else's. I improvised melodies using the text generated by ChatGPT and was happy with the results.

With this, I started to have some structure for the song. I would improvise, use those improvisations to train a Markov chain, and generate new MIDI data which I could listen to and get inspired by (see the sketch below). Although the results were mostly unusable, this technique of generating music trained on my own improvisations did serve as part of the creative process. As the composition was built, more timbre transfers were performed, and these would lead to new harmonic inspirations (timbre is, after all, essentially harmony on a sinusoidal scale).
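
The sketches below are rough illustrations of the ideas described in this section, not the tools actually used. First, the conditioning step behind timbre transfer: extracting the pitch contour and loudness envelope that survive the transfer. This offline Python sketch uses librosa; the input file name is a placeholder, and the decoder at the end is purely hypothetical (a stand-in for a DDSP-style model trained on saxophone, voice or choir recordings), not the real-time system used for the song.

```python
# Hedged sketch: extract the features a timbre-transfer model preserves
# (pitch, loudness) from an input recording. "synth_phrase.wav" is a
# placeholder, and the decoder below is hypothetical, not a real library API.
import librosa
import numpy as np

y, sr = librosa.load("synth_phrase.wav", sr=16000)

# Pitch contour of the incoming synth signal (preserved by the transfer).
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
f0 = np.nan_to_num(f0)  # unvoiced frames become 0 Hz

# Loudness envelope (also preserved across the transfer).
loudness = librosa.feature.rms(y=y)[0]

# Hypothetical decoder trained on another instrument's recordings; it would
# resynthesise audio with that timbre, conditioned only on f0 and loudness.
# decoder = TimbreDecoder.load("sax_checkpoint")                 # hypothetical
# audio_out = decoder.synthesize(f0=f0, loudness=loudness, sample_rate=sr)

print(f0.shape, loudness.shape)
```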
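
Second, the straight-plus-swung time-feel written out as MIDI data. This Python sketch (using pretty_midi rather than Max/MSP) keeps kick and snare on the straight grid while pushing the offbeat hi-hats late; the tempo, General MIDI drum notes and swing amount are illustrative values, not those of the song.

```python
# Write a two-bar drum loop where hi-hats are swung but kick/snare stay straight.
import pretty_midi

BPM = 90
beat = 60.0 / BPM            # one quarter note in seconds
eighth = beat / 2
swing = 0.18 * eighth        # how far the offbeat hats are pushed late (arbitrary)

drums = pretty_midi.Instrument(program=0, is_drum=True)

def hit(pitch, time, dur=0.05, vel=100):
    """Append one drum hit as a short MIDI note."""
    drums.notes.append(
        pretty_midi.Note(velocity=vel, pitch=pitch, start=time, end=time + dur)
    )

for bar in range(2):
    t0 = bar * 4 * beat
    # Straight kick (36) and snare (38).
    hit(36, t0)                # beat 1
    hit(38, t0 + beat)         # beat 2
    hit(36, t0 + 2.5 * beat)   # "and" of 3, still on the straight grid
    hit(38, t0 + 3 * beat)     # beat 4
    # Swung closed hi-hats (42): offbeat eighths land late.
    for i in range(8):
        t = t0 + i * eighth + (swing if i % 2 else 0.0)
        hit(42, t, vel=70)

midi = pretty_midi.PrettyMIDI(initial_tempo=BPM)
midi.instruments.append(drums)
midi.write("straight_swung_groove.mid")
```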
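
Finally, a minimal first-order Markov chain over note events, analogous in spirit to the chains described above: it learns transitions between (pitch, quantised gap-to-next-note) pairs from an improvisation and random-walks them to generate new material. The MIDI file name is a placeholder, and this is a Python sketch, not the Max/MSP patch.

```python
# First-order Markov chain over (pitch, quantised inter-onset gap) states.
import random
from collections import defaultdict

import pretty_midi

def extract_events(path, grid=0.125):
    """Return (pitch, gap-to-next-note) states; grid is the quantisation step in seconds."""
    midi = pretty_midi.PrettyMIDI(path)
    notes = sorted(midi.instruments[0].notes, key=lambda n: n.start)
    events = []
    for a, b in zip(notes, notes[1:]):
        gap = round(round((b.start - a.start) / grid) * grid, 3)
        events.append((a.pitch, gap))
    return events

def build_chain(events):
    """Collect observed transitions between consecutive states."""
    chain = defaultdict(list)
    for cur, nxt in zip(events, events[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=32):
    """Random-walk the chain; jump to a random known state at dead ends."""
    out = [start]
    for _ in range(length - 1):
        options = chain.get(out[-1])
        out.append(random.choice(options) if options else random.choice(list(chain)))
    return out

events = extract_events("improvisation.mid")   # placeholder input file
chain = build_chain(events)
print(generate(chain, events[0], length=16))
```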