
AI Song Contest 2025 / Participants
TEAM / Nikki
SONG / Thanks for being lifeless (Music for gamers)
About the TEAM
Nikki is a Buenos Aires-born visual artist and music producer with roots in Okinawa, Japan. Her interdisciplinary practice spans sculpture, performance, music production, and multimedia installation. She explores the poetics of engendered bodies, avatars, prosthetics, and affective technologies—moving between the objectification of the persona, the personification of the object, and the aesthetics of death. Her work embraces storytelling and speculative fiction, navigating themes of embodiment, digital spirituality, and the vulnerability of the voice in posthuman contexts. Inspired by 90s/00s anime and bass culture, she recently began a music career under the alias Nikki, blending happy melodies with sad lyrics. Her debut single Anger was released in 2025 with Berlin-based Chiqui Records. Her work has been exhibited internationally at institutions such as ZKM (Germany), Hangar Barcelona (Spain), EAC Espacio de Arte Contemporáneo (Uruguay), and La Becque (Switzerland), among others.
About the SONG
Thanks for being lifeless is part of an opera-installation in which an inflatable sex doll learns to speak and sing through decaying generations of AI voice models. The piece investigates the entropic materiality of synthetic voices—not as perfected replicas, but as unstable constructions that collapse, drift, and glitch under their own training biases. Instead of pursuing fidelity or realism, the work embraces failure, noise, and hybridization as expressive strategies. By manipulating both state-of-the-art and obsolete architectures, I treat voice as a performative territory where identity, corporeality, and machine learning intersect. Drawing from my intercultural background, I blend Spanish, Japanese, and English voices, creating in-between states: human/object, adult/child, feminine/masculine. The performance unfolds as a study in decay-as-expression, evoking uncanny vocal textures that are at once familiar and artificial, questioning what remains when a machine sings with the ghosts of obsolete datasets.
About the HUMAN-AI PROCESS
The piece was created through an experimental process of voice cloning and AI-based synthesis, where my own recordings became the raw material for training several models. Instead of aiming for a perfect or “realistic” replica, I intentionally worked with different types of datasets—sung, spoken, whispered, even distorted or low-fidelity fragments—so that the models would not produce a single stable voice but rather a shifting one.
At this stage of the process, with no GPU available, I used open-source models like RVC (Retrieval-based Voice Conversion) as well as real-time voice changers, both run from public notebooks on Google Colab. I created eight experimental models that each interpret sound differently, producing glitches, hybridizations, or strange textures that would not appear in a clean, polished training. The key was not only the outputs, but the design of the dataset, the layering process, and the post-production. The workflow unfolded as a dialogue between myself and the machine: I recorded, trained, listened back, and adjusted parameters, treating errors, distortions, and collapses not as failures but as expressive material for composition. By moving across languages (Spanish, Japanese, English) and gendered registers (childlike, adult, feminine, masculine), I sought to reveal how the synthetic voice is never neutral—it always carries traces of cultural, technical, and affective bias.
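The layering and post-production step described above can be sketched in code. This is a hypothetical illustration, not the actual project pipeline: RVC itself is typically driven through notebooks or a GUI rather than a stable Python API, so the sketch assumes the eight model outputs have already been rendered to audio arrays, and shows only the generic mixing stage — padding takes of unequal length, weighting each layer, and softly saturating the sum so stacked glitchy textures do not clip harshly. The function name `layer_takes` and the sine-wave stand-ins are invented for this example.

```python
import numpy as np

def layer_takes(takes, gains):
    """Mix several mono takes (same sample rate) into one layered track.

    Shorter takes are zero-padded to the length of the longest; each take
    is scaled by its gain, then the sum is passed through tanh so stacked
    layers saturate gently instead of hard-clipping.
    """
    length = max(len(t) for t in takes)
    mix = np.zeros(length, dtype=np.float64)
    for take, gain in zip(takes, gains):
        take = np.asarray(take, dtype=np.float64)
        mix += gain * np.pad(take, (0, length - len(take)))
    return np.tanh(mix)  # keeps peaks within [-1, 1]

# Stand-ins for model outputs: three detuned sine "voices" of unequal length.
sr = 22_050
def tone(freq, n):
    return np.sin(2 * np.pi * freq * np.arange(n) / sr)

takes = [tone(220, sr), tone(221, sr // 2), tone(440, sr)]
mix = layer_takes(takes, gains=[0.8, 0.5, 0.3])
print(len(mix), float(np.abs(mix).max()))
```

In practice each "take" would be one model's rendering of the same recorded phrase, so the mix exposes where the eight models agree and where they drift apart.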
In the end, the co-creation was less about the machine imitating me, and more about discovering what emerges when my voice is filtered through the ghosts of obsolete datasets and unstable models.