Ars Electronica AIxMusic Hackathon
09 September 2020 – 13 September 2020
Worldwide | Online
Screenshot of face tracking translating emotions into music during the AIxMusic Hackathon
Ars Electronica Festival 2020 addresses the current feeling of uncertainty about how the COVID-19 crisis will shape us as individuals, as societies, and as humanity. The festival focuses on two tensions: AUTONOMY and DEMOCRACY as well as TECHNOLOGY and ECOLOGY.
Recent advances in AI have put within reach a world where art can be created and performed entirely by algorithms. In a series of panels, workshops, and live performances, AIxMUSIC explores the fine line and interactions between the artist and the machine.
For the first online Ars Electronica Festival, the organization hosted its first International AIxMusic Hackathon as part of the AIxMusic Festival 2020. The hackathon took place online during the festival from 9 to 13 September 2020; presentations were streamed live on Ars Electronica New Worlds on Sunday, 13 September 2020.
Artist Amy Karle participated in Group 2: AIxMUSICxHUMAN. The team explored designing user interactions when humans and machine-learning models are together in the musical loop. Team members Amy Karle, Sergio Lecuona, Jing Dong, Pierre Tardif, and Suyash Joshi considered how to interact through digital communication in ways that foster understanding, experimenting with inputting emotions through facial tracking to output sound and visuals.
“Life has been upended by the pandemic, we are forced to separate physically and our primary means of communication is through screens where we only have a limited range of expression. Our group explores ways to interact, add depth to communications, and expand perception in ways that only this medium can uniquely offer.
We ask: What if we could express our feelings through music and art to transcend the digital divide?
Collaborating remotely across the world, we demonstrate this experiment and expression through layers of interaction where humans, AI, and generative art are together in the musical loop.” – Amy Karle
This video is a sample of the experiment, formatted as the team members worked remotely and saw each other over Zoom.
Get the code and experiment yourself: Click Here
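As a minimal sketch of the idea (not the team's actual code, which is linked above), one can imagine mapping emotion readings from a face-tracking model onto musical parameters. The valence/arousal inputs, ranges, and mapping rules here are all illustrative assumptions:

```python
# Toy sketch (not the team's actual code): mapping emotion scores, such as
# those a face-tracking model might produce, onto musical parameters.

def emotions_to_music(valence, arousal):
    """Map a valence/arousal reading (each in [-1, 1]) to musical parameters.

    valence: negative = sad, positive = happy
    arousal: low = calm, high = excited
    """
    # Clamp inputs to the expected range.
    valence = max(-1.0, min(1.0, valence))
    arousal = max(-1.0, min(1.0, arousal))

    tempo_bpm = 90 + 50 * arousal            # calm -> 40 BPM, excited -> 140 BPM
    mode = "major" if valence >= 0 else "minor"
    base_pitch = 60 + round(12 * valence)    # MIDI note: C4 shifted by up to an octave
    return {"tempo_bpm": tempo_bpm, "mode": mode, "base_pitch": base_pitch}

print(emotions_to_music(valence=0.5, arousal=0.8))
# -> {'tempo_bpm': 130.0, 'mode': 'major', 'base_pitch': 66}
```

In the actual experiment, such parameters would drive a synthesizer and generative visuals in real time rather than being printed.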
More about the Hackathon from Ars Electronica:
Teams of data scientists, computer programmers, graphic and interface designers, musicians, and artists are brought together in this hackathon to creatively tackle music-data problems and prototype new data solutions. The Hackathon revolves around a series of hands-on workshops in which high-profile researchers and artists share new tools and research, offering insight into the current development of AIxMusic. The Hackathon ends with each group presenting its outcomes, streamed live on the Ars Electronica TV channel.
The AIxMUSIC Hackathon has the following objectives:
- Engaging hackers with artistic and scientific institutions across the world
- Connecting international experts to share knowledge
- Developing prototypes that musicians will be able to integrate into their practice
- Promoting partnerships through networking
- Producing innovative products and tools that stimulate the use of open data and public resources to engage new audiences
Presentations live streamed on Ars Electronica New Worlds | Sunday 13th September 2020
00:03:22 AIxMUSIC Artificial Stupidity
00:55:29 Matthew Gardiner
01:02:07 Demystifying AI with Music
01:52:28 AIxMusic Stockholm KTH Music as Experiment
03:08:29 Adriatic Garden
03:34:14 AI x Music Paris IRCAM Frontiers of Music
04:31:38 AI x Music Hackathon Presentations
05:36:49 Lisboa Fem Panel Discussion
07:04:46 AIxMusic Brussels BOZAR discussion + performance
08:36:23 Amsterdam (nxt) Within a Latent Space
09:03:36 AI x Music Silicon Valley/ Open Austria Panel
10:15:05 AI LAB Journeys: Artificial Intelligence and its false lies
10:33:10 AI x Uncertainty
Six Challenges = Six Teams = Six Research Groups
Topic/Group #1 “Developing lightweight deep AI” asks how we could reverse the current trend of AI models relying on enormous numbers of parameters and heavy computation. It is connected with IRCAM researcher Philippe Esling (FR).
Topic/Group #2 “Designing user interactions when humans and machine-learning models are together in the musical loop”: It took us over 10 years to mature the human + smartphone interaction; what will the future of human + AI interactions look and feel like in the musical domain? How can we make this easier? In connection with Google Magenta resident Lamtharn Hanoi Hantrakul (TH).
Topic/Group #3 “Generate. Interpolate. Orchestrate.”: From generating drum beats to resynthesizing cats into flutes, machine-learning models enable creative and musical expression not possible before. If the electric guitar gave birth to rock and roll and the modern laptop gave birth to EDM, what kinds of new music will AI technologies give birth to? What is this new AIesthetic? In connection with Google Magenta resident Lamtharn Hanoi Hantrakul (TH).
Topic/Group #4 “Solving problems in target-based orchestration” aims to resolve issues in manipulating complex musical objects and structures. It is connected with Carmine Cella (IT), CNMAT-UC Berkeley.
Topic/Group #5 “Complete famous unfinished pieces”: Consider the Lacrimosa movement of Mozart’s Requiem, which was written only up to the eighth bar at the time of his death. How can we use AI techniques to learn to complete unfinished pieces by famous composers of the past? It is connected with Edward Tiong and Yishuang Chen (US), UC Berkeley / Microsoft AI.
Topic/Group #6 “Harmonize any music piece”: Imagine composing a piano melody with your right hand and having an AI complete the left-hand chords. This project aims to use machine learning to generate accompanying chords that harmonize with any given melody. It is connected with Edward Tiong and Yishuang Chen (US), UC Berkeley / Microsoft AI.
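To make the harmonization challenge concrete, here is a deliberately naive rule-based baseline (not the machine-learning approach the team pursued): for each melody note in C major, pick a diatonic triad that contains it. The chord table and the I/IV/V preference order are illustrative assumptions:

```python
# Naive rule-based baseline (not the team's ML approach): harmonize a melody
# in C major by choosing, for each note, a diatonic triad that contains it.

C_MAJOR_TRIADS = {
    "C":  {"C", "E", "G"},
    "Dm": {"D", "F", "A"},
    "Em": {"E", "G", "B"},
    "F":  {"F", "A", "C"},
    "G":  {"G", "B", "D"},
    "Am": {"A", "C", "E"},
}

def harmonize(melody):
    """Return one chord name per melody note; prefer I, IV, V when possible."""
    preference = ["C", "F", "G", "Am", "Dm", "Em"]
    chords = []
    for note in melody:
        for name in preference:
            if note in C_MAJOR_TRIADS[name]:
                chords.append(name)
                break
        else:
            chords.append("C")  # fall back to the tonic for out-of-scale notes
    return chords

print(harmonize(["E", "D", "C", "G"]))  # -> ['C', 'G', 'C', 'C']
```

A learned model replaces the fixed preference table with probabilities conditioned on context, which is what lets it harmonize "any" melody rather than only diatonic ones.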
WED 9.09 – 18:00 (CET) – 9AM (PST)
Google: Lamtharn Hanoi Hantrakul (TH), Making Music with Magenta
Hanoi from the Google Magenta team will be giving an overview of the group’s research and open source tools. He will be covering new developments in the Differentiable Digital Signal Processing (DDSP) library as well as other Magenta projects. These include overviews of the magenta.js libraries and how to build on existing demos such as DrumBot and other #MadeWithMagenta projects. As a music technology hackathon veteran himself, Hanoi will be framing these technologies in the context of a hackathon environment, giving integration tips and tricks along the way.
THU 10.09 – 18:00 (CET) – 9AM (PST)
CNMAT/UC Berkeley: Carmine Cella (IT), ORCHIDEA
Representing CNMAT at UC Berkeley, lead researcher Professor Carmine Cella (IT) will present ORCHIDEA, a framework for static and dynamic assisted orchestration. An evolution of the Orch* family, it comprises several tools, including a standalone application, a Max package, and a set of command-line tools.
FRI 11.09 – 18:00 (CET) – 9AM (PST)
UC Berkeley: Edward Tiong (US), Maia
Edward Tiong and Yishuang Chen (US) from UC Berkeley will be introducing Maia, a deep neural network created to complete unfinished compositions. It can generate original piano solo compositions by learning patterns of harmony, rhythm, and style from a corpus of music. If you are interested in the AI techniques that brought Maia to life and other work in the generative-music space, join them for this workshop!
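Maia itself is a deep neural network, but the core idea — learn the statistics of an existing fragment, then continue it — can be sketched with a much simpler stand-in. The fragment, note names, and first-order Markov model below are illustrative assumptions, not Maia's architecture:

```python
# Toy stand-in for the idea behind Maia (Maia itself is a deep neural
# network): continue an unfinished note sequence using a first-order
# Markov chain learned from the existing material.
import random
from collections import defaultdict

def learn_transitions(notes):
    """Record which notes follow which in the given fragment."""
    transitions = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        transitions[a].append(b)
    return transitions

def continue_sequence(fragment, length, seed=0):
    """Append `length` notes sampled from the fragment's transition statistics."""
    rng = random.Random(seed)
    transitions = learn_transitions(fragment)
    result = list(fragment)
    for _ in range(length):
        options = transitions.get(result[-1])
        if not options:                 # dead end: restart from the fragment
            options = fragment
        result.append(rng.choice(options))
    return result

fragment = ["C", "E", "G", "E", "C", "E"]
print(continue_sequence(fragment, 4))
```

A neural model generalizes this by conditioning on long-range context (harmony, rhythm, style) instead of only the previous note.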
Musical orchestration consists largely of choosing combinations of sounds, instruments, and timbres that support the narrative of a piece of music. The ORCHIDEA project assists composers during the orchestration process by automatically searching for the best combinations of orchestral sounds to match a target sound, after embedding it in a high-dimensional feature space. Although a solution to this problem has been a long-standing request from many composers, it remains relatively unexplored because of its high complexity, requiring knowledge and understanding of both mathematical formalization and musical writing.
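A drastically simplified illustration of that target-matching search (not ORCHIDEA's actual algorithm) is a greedy loop that keeps adding the instrument sound whose combined feature vector moves closest to the target's features. The instrument names and 2-D feature vectors below are hypothetical:

```python
# Drastically simplified illustration of target-based orchestration (not
# ORCHIDEA's actual algorithm): greedily pick instrument sounds whose
# summed feature vector best approaches a target sound's features.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def greedy_orchestrate(target, sounds, max_sounds=3):
    """sounds: dict of name -> feature vector (e.g. crude spectral descriptors)."""
    chosen, combined = [], [0.0] * len(target)
    for _ in range(max_sounds):
        best = None
        for name, feats in sounds.items():
            candidate = [c + f for c, f in zip(combined, feats)]
            d = distance(candidate, target)
            if best is None or d < best[0]:
                best = (d, name, candidate)
        # Stop if adding any sound would move us further from the target.
        if best[0] >= distance(combined, target):
            break
        chosen.append(best[1])
        combined = best[2]
    return chosen

# Hypothetical 2-D features (say, brightness and attack sharpness).
sounds = {"flute": [0.8, 0.2], "cello": [0.3, 0.1], "trumpet": [0.9, 0.7]}
print(greedy_orchestrate([1.2, 0.4], sounds))  # -> ['trumpet', 'cello']
```

The real problem is far harder: ORCHIDEA works in a high-dimensional timbre space, handles dynamic (time-varying) targets, and must respect musical writing constraints, which is why the combinatorial search is so complex.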
SAT. 12.09 – 18:00 (CET) – 9AM (PST)
IRCAM: Philippe Esling (FR), AI in 64 Kb
IRCAM’s lead researcher on Artificial Intelligence and Music, Philippe Esling (FR), will introduce IRCAM’s libraries and techniques for lightweight AI, along with a demonstration of embedded technologies. Inspired by the demoscene and its 64 Kb competitions, the theme is a world-first hackathon challenge: “Can we do the same with less – AI in 64 Kb,” or how we could reverse the current trend of AI models relying on enormous numbers of parameters and computation.
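To give a sense of how tight the 64 Kb constraint is, here is a back-of-envelope calculation (my own illustration, not part of the workshop materials) of how many model parameters fit in that budget at different numeric precisions:

```python
# Back-of-envelope illustration of the "AI in 64 Kb" constraint: how many
# model parameters fit in a 64 KiB budget at different storage precisions.

BUDGET_BYTES = 64 * 1024  # 64 KiB

def params_in_budget(bits_per_param, budget_bytes=BUDGET_BYTES):
    """How many parameters fit when each is stored at the given precision."""
    return budget_bytes * 8 // bits_per_param

for label, bits in [("float32", 32), ("float16", 16), ("int8", 8), ("4-bit", 4)]:
    print(f"{label:>7}: {params_in_budget(bits):,} parameters")
# float32:  16,384 parameters
# float16:  32,768 parameters
#    int8:  65,536 parameters
#   4-bit: 131,072 parameters
```

Even with aggressive quantization, that is orders of magnitude below the millions or billions of parameters in typical modern models, which is exactly the trend the challenge asks participants to push against.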
SUN 13.09 – 14:30 – 15:25 (CET) – 6:30AM – 7:25AM (PST)
Final presentations of the projects (watch at https://youtu.be/cYx3JX5KiTA?t=16297)
The final presentations of the hackathon will take place on Sunday, 13 September 2020, 14:30–15:25, with five-minute presentations by each team, live online on our Festival TV channel. The panel will be moderated by Annelies Termeer (NL) from the Dutch TV channel VPRO.
Project Credits / Acknowledgements
Ars Electronica International thanks the following institutions and partners for helping make this Hackathon happen: IRCAM, UC Berkeley, CNMAT-UC Berkeley, Google Magenta, Exposure – Open Austria, VPRO Media Lab, Philippe Esling (FR), Edward Tiong (US), Carmine Cella (IT), Lamtharn Hanoi Hantrakul (TH), Annelies Termeer (NL)
The AIxMUSIC FESTIVAL can also be experienced in “Kepler’s Garden” on the JKU campus. It is entirely dedicated to networking the innumerable centers that work on the applications and effects of AI research in the cultural sector, and thus play an important role in communication and dialogue between research and society. Without a doubt, one of the festival’s highlights will be the BIG CONCERT NIGHT.
(text provided by Ars Electronica)