Keynotes
Keynote #1 – Conceptualizing the Chorus
David Zicarelli (Cycling ’74)
Zicarelli’s primary work has been the development of Max, a visual programming environment used by musicians, artists, and inventors. In the late 1990s he founded Cycling ’74 to support the development and distribution of Max. The company now employs around 30 people across seven countries, all of whom work remotely. For Zicarelli and his co-workers, Cycling ’74 is both a software company and a vehicle for exploring the interrelated challenges of distributed work, individual development, and cultural impact. Zicarelli has developed software at IRCAM, Gibson Guitar, and AT&T, and has been a visiting faculty member at Northwestern University and at Bennington College, where he taught in Spring 1984 and returned as a recurring visiting faculty member from Fall 2019 to Fall 2021. BA, Bennington College; PhD, Stanford University.
Keynote #2 – Understanding Machine Learning as a Tool for Supporting Human Creators in Music and Beyond
Rebecca Fiebrink (University of the Arts London)
Rebecca Fiebrink is a Professor of Creative Computing at the UAL Creative Computing Institute. Together with her students and research assistants, she works on a variety of projects developing technologies that enable new forms of human expression, creativity, and embodied interaction. Much of her current research combines techniques from human-computer interaction, machine learning, and signal processing to help people apply machine learning more effectively to new problems, such as the design of new digital musical instruments and gestural interfaces for gaming and accessibility. She is also involved in projects developing rich interactive technologies for digital humanities scholarship, exploring ways that machine learning can be used and appropriated to reveal and challenge patterns of bias and inequality, and advancing machine learning education.
Keynote #3 – Machine Learning and Digital Audio Effects
Vesa Välimäki (Aalto University)
Many papers presented at the DAFx conference now use machine learning techniques, but this was not the case just a few years ago. This talk will discuss the developments that led to this paradigm shift in our research field, which followed a few years behind closely related fields such as speech recognition and synthesis. It has been a pleasant surprise that machine learning can solve many audio processing problems better than our previous signal-processing methods. However, there are also counterexamples for which we have not yet found a satisfactory machine-learning-based solution: audio time-scale modification is a problem for which ideal training data is unavailable, and the current best method is still based on traditional signal processing. Generative machine learning, such as diffusion models, can provide excellent solutions to problems that earlier seemed almost impossible, such as the reconstruction of long gaps in audio, known as audio inpainting.
Vesa Välimäki’s research focuses on applying signal processing and machine learning to audio and music technology. He is a Full Professor of audio signal processing and the Vice Dean for Research in the Aalto University School of Electrical Engineering. His research group belongs to the Aalto Acoustics Lab, a multidisciplinary research center with excellent facilities for sound-related research. He has recently published extensively with his students and colleagues on guitar effect modeling and music restoration using deep learning. Prof. Välimäki is a Fellow of the IEEE, the Audio Engineering Society, and the Asia-Pacific Artificial Intelligence Association. He is the Editor-in-Chief of the Journal of the Audio Engineering Society.