Rubato LAB

TEAM / Rubato LAB
SONG / 경, 敬, ɡjʌŋ
TEAM MEMBERS /
Junseop So, Jihyeon Park, Jongho Lee, Yonghee Kim, Hyunseo Seo, Changjun Yu

About the TEAM

We are a diverse research group of students, composers, engineers, DJs, and media artists, each from a different field of study, who have come together with a shared passion for making music.

We have consistently explored ways to integrate artificial intelligence into music and media. Established in 2018, Rubato LAB has been at the forefront of research in music technology. In 2022, we hosted "The Prompt" exhibition at Content Impact 2022, organized by the Korea Creative Content Agency (KOCCA).

In 2023, we will continue to challenge ourselves by participating in the AI Song Contest. Please enjoy!

About the SONG

Modern individuals, weary from everyday reality, carry a mind in which rest and work coexist. We reconstructed this state by blending Eastern philosophy with Western thought, expressed through visuals and music. By combining traditional Korean music with bass music, we crafted a novel future-bass track, and alongside it we produced a video that incorporates various images of Korean culture as we perceive it.

This piece is a collaboration between humans and artificial intelligence, showcasing how AI can be used in modern music and video production. Rather than inventing an entirely new approach, we explored ways to use AI to enhance the sound and the production process of music and video, drawing on the methods we have developed over the years. We're thrilled to share the outcome with all of you. Enjoy!

About the HUMAN-AI PROCESS

In the music segment, the models we used most were MusicGen and a music source separation model. We generated wave files with MusicGen, but rather than using them directly as tracks, we treated them as samples, applying AI to a sampling-based production workflow and exploring ways to obtain samples beyond traditional sources. The generated samples were then run through the source separation model, which let us extract the specific instruments we wanted and reincorporate them into our tracks.
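The write-up names MusicGen and music source separation but not a specific toolchain, so the following is only a minimal sketch of such a pipeline: it assumes Meta's audiocraft package for MusicGen and Demucs (htdemucs) as the separation model, and the prompt, checkpoints, and file names are illustrative rather than the team's actual settings.

```python
# Sketch of the sampling workflow described above: generate raw material with
# MusicGen, then pull a single stem out of it with a source separation model.
# Checkpoints, prompt, and file names are assumptions for illustration only.
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
from demucs.pretrained import get_model
from demucs.apply import apply_model

# 1) Generate a short clip from a text prompt, used as sample material
#    rather than as a finished track.
musicgen = MusicGen.get_pretrained("facebook/musicgen-small")
musicgen.set_generation_params(duration=8)  # seconds
wav = musicgen.generate(
    ["traditional Korean percussion layered over a heavy future bass groove"]
)  # shape: (batch, channels, time), generated at 32 kHz
audio_write("generated_sample", wav[0].cpu(), musicgen.sample_rate, strategy="loudness")

# 2) Separate the clip into stems (drums / bass / other / vocals) and keep
#    only the instrument we want to reincorporate into the arrangement.
separator = get_model("htdemucs")
mix = wav[0].cpu()
mix = torchaudio.functional.resample(mix, musicgen.sample_rate, separator.samplerate)
if mix.shape[0] == 1:  # Demucs expects stereo input
    mix = mix.repeat(2, 1)
stems = apply_model(separator, mix[None], device="cpu")[0]  # (sources, channels, time)
drums = stems[separator.sources.index("drums")]
audio_write("drums_stem", drums, separator.samplerate, strategy="loudness")
```

The point of the sketch is the order of operations: the generated clip is treated as sample material first, and only the separated stem is brought back into the track.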

For the visual segment, we crafted the images we wanted using the widely adopted Stable Diffusion model. For video, however, we judged the quality of most currently available generation models to be subpar, so we turned to RunwayML's Gen-1 and Gen-2 for production. We did not rely on these tools alone; we also used Premiere Pro and After Effects to achieve the specific effects we wanted. With Stable Diffusion, prompts alone made it difficult to obtain the precise images we had in mind, so we addressed this with an image-to-image translation approach, which let us produce the desired visuals.
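The team does not say which tools implemented the image-to-image step; a minimal sketch follows, assuming the Hugging Face diffusers StableDiffusionImg2ImgPipeline, with an illustrative checkpoint, prompt, reference image, and strength value.

```python
# Sketch: image-to-image translation with Stable Diffusion via diffusers.
# The checkpoint, prompt, input file, and strength/guidance values are
# illustrative; the team's actual settings are not stated in the write-up.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Start from a rough reference image instead of relying on the prompt alone.
init_image = Image.open("reference_frame.png").convert("RGB").resize((768, 512))

result = pipe(
    prompt="a serene hanok courtyard at dusk, traditional Korean aesthetics, cinematic",
    image=init_image,
    strength=0.6,        # how far the result may depart from the reference image
    guidance_scale=7.5,  # how strongly to follow the prompt
).images[0]

result.save("stylized_frame.png")
```

Here `strength` controls how far the output drifts from the reference image, which is what makes a rough reference more controllable than prompting from scratch.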

In conclusion, to create the music and visuals we envisioned, we leveraged AI models to generate and utilize data that traditionally would have required manual collection, processing, or purchase. Through this approach, we demonstrated the feasibility of producing the content we desired using AI-generated assets.
