Aiphex Twins

TEAM / Aiphex Twins
SONG / Noise to Water
TEAM MEMBERS /
Philipp Stolberg, Edgar Eggert

About the TEAM

Aiphex Twins is the AI electronic music project of Philipp Stolberg and Edgar Eggert. Philipp is an electronic music producer from Zurich with releases on Lukins, Futur8000 and Definition Music. Based in Lisbon, he is currently writing his Master’s thesis on the impact of machine learning on music production. Edgar uses deep learning in computational cosmology to study distant galaxies. Passionate about all aspects of electronic and ambient music, he has long been fascinated by the possibilities of computational composition. He is based in London and Berlin. The two originally met four years ago at Dekmantel festival in Amsterdam and have stayed in contact ever since.

About the SONG

Our song may not be the catchiest. It still features a lot of background noise, and its lyrics do not make a whole lot of sense.

However, this song represents a snapshot of our work to co-produce a song using artificial intelligence without the computational hardware needed to train elaborate models. It represents an attempt to make sure that our musical inspirations, such as Aphex Twin or Steve Reich, could also “inspire” our co-creating algorithms, something we felt was impossible with the pre-trained models provided by Google and Amazon. It represents an experiment in generating raw audio waveforms directly, focusing on GANs rather than the more proven and tested techniques for generating MIDI (which we still used occasionally). As such, while the resulting song may not sound as polished as we had envisioned when we set out on this project, it represents our current best effort to co-produce a song with artificial intelligence while retaining our creative independence and stylistic freedom.
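
To make that concrete: where MIDI-based models output note events for an instrument to play, a WaveGAN-style generator maps a random latent vector directly to roughly one second of raw audio samples. A minimal PyTorch sketch of sampling from such a trained generator (the checkpoint path and names here are illustrative, not our actual code):

import torch

# Illustrative only: a trained WaveGAN-style generator mapping a
# 100-dimensional latent vector to ~1 second of raw audio at 16 kHz.
G = torch.load("wavegan_generator.pt")  # hypothetical checkpoint
G.eval()

z = torch.randn(1, 100)  # a random point in the latent space
with torch.no_grad():
    audio = G(z)  # ~16384 samples with values in [-1, 1]

# Nearby latent vectors tend to yield audibly related snippets,
# which is what makes browsing the latent space musically interesting.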

About the HUMAN-AI PROCESS

Initially, we met up in Lisbon with the aim of producing a first version of the track in two days. With so little time, we used as many out-of-the-box tools as we could find, drawing on pre-trained language models such as GPT-2 and big-tech AI solutions such as Google Magenta. While technically advanced, this first iteration of the song showed us two things: first, that co-creating music with AI is a whole lot of fun, and second, that pre-trained models produced songs that sounded nothing like the music we liked. We therefore decided to gradually implement parts of the AI ourselves by adapting open-source implementations for our purposes. Of course, lacking hardware (and computational skill), our solutions were arguably worse, but at least we could train them on data we selected.

For the synths, drums, and the glitchy sound effects, we generated one-second-long snippets using generative adversarial networks (an adapted version of WaveGAN from Chris Donahue, big thanks to his work!). The models were trained on thousands of kick drums from techno sample packs, seven hours of Aphex Twin, nine hours of Steve Reich, eight hours of Boards of Canada, or even combinations of all of these. Occasionally, the engineer of the team would send sound snippets to the musician, who would arrange them into a track, before several iteration-and-feedback cycles polished the result.

The lyrics were generated after the song was finished, to suit its atmosphere. Here, we used a simple LSTM language model trained on 20,000 lyrics from the Billboard charts. We only used lyrics of songs that were classified as energetic and intense but without popular appeal, while completely avoiding songs flagged as “explicit” in order to keep sexist and racist lyrics out.
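
As a rough sketch of that song-selection step, assuming Spotify-style metadata (energy and popularity scores plus an explicit flag) attached to each charting song, with purely illustrative thresholds:

import pandas as pd

# Hypothetical schema: one row per charting song, with Spotify-style
# audio features, a popularity score, an explicit flag, and the lyrics.
songs = pd.read_csv("billboard_songs.csv")

corpus = songs[
    (songs["energy"] > 0.7)        # "energetic"
    & (songs["valence"] < 0.4)     # rough proxy for "intense"
    & (songs["popularity"] < 50)   # "without popular appeal"
    & (~songs["explicit"])         # drop anything flagged explicit
]["lyrics"]

# Concatenate the surviving lyrics into one training text file.
with open("training_lyrics.txt", "w") as f:
    f.write("\n\n".join(corpus))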
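
The lyric model itself was a simple LSTM; as one plausible shape, here is a minimal character-level sketch in PyTorch (layer sizes, the seed text, and the sampling temperature are illustrative, and our actual training code differed in detail):

import torch
import torch.nn as nn

# A minimal character-level LSTM language model, roughly the shape
# of the lyric model described above; all sizes are illustrative.
class LyricLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

def sample(model, stoi, itos, seed="water ", length=200, temp=0.8):
    """Generate lyrics one character at a time from a trained model.

    stoi/itos are the char-to-index and index-to-char mappings built
    from the training corpus; temp < 1 makes the output more conservative.
    """
    model.eval()
    text = seed
    idx = torch.tensor([[stoi[c] for c in seed]])
    with torch.no_grad():
        logits, state = model(idx)
        for _ in range(length):
            probs = torch.softmax(logits[0, -1] / temp, dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            text += itos[nxt]
            logits, state = model(torch.tensor([[nxt]]), state)
    return text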
