Keynotes

Keynote #1 – Conceptualizing the Chorus

David Zicarelli (Cycling ’74)

A few years ago I began playing with techniques that involve making dozens or hundreds of copies of the same sound and treating each one slightly differently, creating a sort of infinite chorus of sources or effects. This work caused me to realize how many practices and assumptions in our field are still concerned with minimizing computational resources, habits that originated at a time when computers could barely compute a single stream of audio that you could control interactively.
 
Now that hundreds of audio channels are easily possible with today’s computers, we have the opportunity to explore conceptual models that treat these multiple channels as creative spaces for spontaneous interaction. In this talk I’ll share some initial experiments and findings in the space of the infinite chorus.
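The abstract gives no implementation details, but the core idea of layering many copies of one sound, each altered slightly, can be illustrated with a minimal sketch. The function name, parameter choices, and the use of a static per-voice delay and detune below are my own assumptions for illustration, not Zicarelli's method:

```python
import numpy as np

def many_voice_chorus(x, sr, n_voices=100, max_delay_ms=30.0,
                      max_detune_cents=15.0, seed=0):
    """Mix many copies of x, each with its own small random delay, detune, and gain.

    x: 1-D float array of audio samples; sr: sample rate in Hz.
    This is a simplified sketch: each voice uses a fixed detune and delay
    rather than the time-varying modulation of a classic chorus effect.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    t = np.arange(n, dtype=np.float64)
    out = np.zeros(n, dtype=np.float64)
    for _ in range(n_voices):
        delay = rng.uniform(0.0, max_delay_ms) * 1e-3 * sr        # per-voice delay in samples
        detune = 2.0 ** (rng.uniform(-max_detune_cents,
                                     max_detune_cents) / 1200.0)  # pitch ratio from cents
        gain = rng.uniform(0.5, 1.0) / n_voices                   # keep the sum near unit level
        # Read the input at a slightly different rate and offset for this voice.
        read_pos = t * detune - delay
        out += gain * np.interp(read_pos, t, x, left=0.0, right=0.0)
    return out
```

With a few hundred voices the individual copies blur into a single thickened texture, which is the kind of mass-of-copies behavior the talk takes as its starting point.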

Zicarelli’s primary work has been in the development of the Max visual programming environment used by musicians, artists, and inventors. In the late 1990s he founded Cycling ’74 to support the development and distribution of Max. The company now employs around 30 people in seven different countries, all of whom work remotely. For Zicarelli and his co-workers, Cycling ’74 is both a software company and a vehicle for exploring the interrelated challenges of distributed work, individual development, and cultural impact. Zicarelli has developed software at IRCAM, Gibson Guitar, and AT&T and has been a visiting faculty member at Bennington College and Northwestern University. BA, Bennington College; PhD, Stanford University. Zicarelli was a visiting faculty member at Bennington in Spring 1984 and returned as a recurring visiting faculty member from Fall 2019 to Fall 2021.

Keynote #2 – Understanding Machine Learning as a Tool for Supporting Human Creators in Music and Beyond

Rebecca Fiebrink (University of the Arts London)

When technical researchers, creators, and the general public discuss the future of AI in music and art, the focus is usually on a few types of questions, including: How can we make content generation and processing algorithms better and faster? Will contemporary AI systems put human creators out of a job? Are algorithms really capable of being “creative”?
 
In this talk, I propose that we should be asking a different set of questions, beginning with the question of how we can use machine learning to better support fundamentally human creative activities in music and art. I’ll show examples from my research of how prioritising human creators—professionals, amateurs, and students—can lead to a new understanding of what machine learning is good for, and who can benefit from it. For instance, machine learning can aid human creators engaged in rapid prototyping of new interactions with sound and media. Machine learning can support greater embodied engagement in design, and it can enable more people to participate in the creation and customisation of new technologies. Furthermore, machine learning is leading to new types of human creative practices with computationally-infused mediums, in which a broad range of people can act not only as designers and implementors, but also as explorers, curators, and co-creators.

Rebecca Fiebrink is a Professor of Creative Computing at the UAL Creative Computing Institute. Together with her students and research assistants, she works on a variety of projects developing new technologies to enable new forms of human expression, creativity, and embodied interaction. Much of her current research combines techniques from human-computer interaction, machine learning, and signal processing to allow people to apply machine learning more effectively to new problems, such as the design of new digital musical instruments and gestural interfaces for gaming and accessibility. She is also involved in projects developing rich interactive technologies for digital humanities scholarship, exploring ways that machine learning can be used and appropriated to reveal and challenge patterns of bias and inequality, and advancing machine learning education.

Keynote #3 – Machine Learning and Digital Audio Effects 

Vesa Välimäki (Aalto University)

Many papers presented at the DAFX conference currently use machine learning techniques, but this was not the case just a few years ago. This talk will discuss the developments that led to the paradigm shift in our research field, which followed a few years behind some closely related fields, such as speech recognition and synthesis. It has been a pleasant surprise that machine learning can solve many audio processing problems better than our previous signal-processing methods. However, there are also counterexamples for which we have not found a satisfactory machine-learning-based solution. Audio time-scale modification is a problem for which ideal training data is unavailable, and the current best method is based on traditional signal processing. Generative machine learning, such as diffusion models, can provide excellent solutions to problems that seemed almost impossible earlier, such as the reconstruction of long gaps in audio inpainting.

Vesa Välimäki’s research focuses on applying signal processing and machine learning to audio and music technology. He is a Full Professor of audio signal processing and the Vice Dean for Research in the Aalto University School of Electrical Engineering. His research group belongs to the Aalto Acoustics Lab, a multidisciplinary research center with excellent facilities for sound-related research. He has recently published extensively with his students and colleagues on guitar effects modeling and music restoration using deep learning. Prof. Välimäki is a Fellow of the IEEE, the Audio Engineering Society, and the Asia-Pacific Artificial Intelligence Association. He is the Editor-in-Chief of the Journal of the Audio Engineering Society.