Tutorials and Workshops

On 3 September 2024, the first day of the conference, we will host three tutorials and one workshop. Each tutorial will last an hour and a half, and the workshop two hours.

Tutorial #1 – Design strategies and techniques to better support collaborative, egalitarian and sustainable musical interfaces

Dr. Anna Xambó Sedó (Queen Mary University of London)

A common challenge for the community designing audio effects and algorithms to synthesise musical instruments is identifying the design considerations that accommodate interaction experiences relevant to musicians, particularly across a diverse community of practitioners. This hands-on tutorial will cover theoretical and practical foundations for designing interfaces for digital sound instruments and effects, looking at how best to support collaborative, egalitarian and sustainable spaces.

Anna Xambó is a researcher and an experimental electronic music producer. Her research and practice concern human-computer interaction (HCI), sound and music computing (SMC), new interfaces for musical expression (NIME), live coding, and web audio, looking at designing and evaluating networked algorithmic spaces that support collaboration, participation, non-hierarchical structures and do-it-yourself (DIY) practices for SMC. She is currently a Senior Lecturer in Sound and Music Computing at the Centre for Digital Music, School of Electronic Engineering and Computer Science (EECS), Queen Mary University of London (QMUL), as well as the Principal Investigator (PI) of the UK Research and Innovation (UKRI) Arts and Humanities Research Council funded project “Sensing the Forest: Let the Forest Speak using the Internet of Things, Acoustic Ecology and Creative AI” (2023-2025). She has also worked as the PI of the UKRI Engineering and Physical Sciences Research Council’s Human Data Interaction Network Plus funded project “MIRLCAuto: A Virtual Agent for Music Information Retrieval in Live Coding” (2020-2021). Since 2016, she has co-founded and taken leading roles in several organisations promoting and improving the representation of women in music technology: Women in Music Tech (2016-17, Georgia Tech); WoNoMute (2018-2019, NTNU/UiO); WiNIME (2019-2022, NIME); and WHEN (2024-, EECS-QMUL). https://annaxambo.me

Tutorial #2 – Room acoustics rendering for immersive audio applications

Dr. Orchisama Das (SONOS)

This tutorial will focus on the fundamentals of room acoustics rendering for immersive audio applications. Rendering the acoustics of a space is known to increase the sense of immersion and envelopment, and to enhance user experience in mixed reality applications. We will briefly discuss some fundamental models before delving into delay-line-based parametric reverberators that are ideal for real-time applications. The tutorial will also cover aspects of spatial reverb for multichannel reproduction and touch on binauralisation for headphone reproduction. Code examples and an introduction to open-source toolboxes in Python will be provided.
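To give a flavour of the delay-line-based parametric reverberators the tutorial covers, below is a minimal NumPy sketch of a feedback delay network (FDN), one common such structure. The delay lengths, target T60 and Householder feedback matrix are illustrative choices, not the tutorial's own code.

import numpy as np

def fdn_reverb(x, fs=48000, delays=(1031, 1327, 1523, 1871), t60=1.5):
    """Minimal feedback delay network (FDN) reverberator sketch.

    x      : mono input signal (1-D array)
    delays : delay-line lengths in samples (mutually prime is typical)
    t60    : target reverberation time in seconds (illustrative value)
    """
    N = len(delays)
    # Householder feedback matrix: orthogonal, hence lossless mixing.
    A = np.eye(N) - (2.0 / N) * np.ones((N, N))
    # Per-line gain so each line decays by 60 dB over t60 seconds.
    g = np.array([10.0 ** (-3.0 * m / (t60 * fs)) for m in delays])
    buffers = [np.zeros(m) for m in delays]
    ptr = [0] * N
    y = np.zeros(len(x))
    for n in range(len(x)):
        # Read the current output of each delay line.
        outs = np.array([buffers[i][ptr[i]] for i in range(N)])
        y[n] = outs.sum() / N                 # simple output tap
        feedback = A @ (g * outs)             # attenuate, then mix
        for i in range(N):
            buffers[i][ptr[i]] = x[n] + feedback[i]
            ptr[i] = (ptr[i] + 1) % len(buffers[i])
    return y

Because the Householder matrix is lossless, the per-line gains g alone set the decay, which makes the T60 directly controllable; mutually prime delay lengths help avoid coinciding echoes.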

Dr. Orchisama Das is a Senior Audio Research Scientist at Sonos. She received her PhD from the Center for Computer Research in Music and Acoustics at Stanford University in 2021, during which she interned at Tesla and Meta Reality Labs. She completed a postdoc at the Institute of Sound Recording at the University of Surrey. Her research interests are immersive audio, artificial reverberation and room acoustics modelling, with a focus on real-time room acoustics rendering with delay networks.

Tutorial #3 – Accessibility-conscious design for audio hardware and software

Jason Dasent (Kingston University)

This tutorial will start with an introduction to the world of accessible music technology, in which Jason will share his perspective and work as a visually impaired music producer and audio engineer. We will discuss the work being done with music equipment manufacturers to make their products accessible, as well as with companies and educational institutions interested in accessibility. Following this introduction, there will be a live, interactive demonstration covering all aspects of creating a music production, from recording to mastering, using accessible hardware and software.

Jason Dasent is a music producer, audio engineer and accessibility consultant from Trinidad, now based in Sheffield, UK. In addition to working for over 25 years as a producer in several areas of the music industry, including advertising, artist production and music for film, as well as in the live music arena, he now collaborates with music equipment manufacturers from around the world to make their products and services accessible to differently abled music industry practitioners. Jason also works with several music education institutions in the UK, where he is always eager to share his knowledge with young, up-and-coming music producers and audio engineers. He is currently pursuing his PhD in the School of Arts at Kingston University.

Workshop – AI for multitrack music mixing: a hands-on workshop

Soumya Sai Vanka (Queen Mary University of London)
Dr. Marco A. Martínez-Ramírez (Sony AI)

Music mixing is essential in post-production, demanding both technical expertise and creativity to achieve professional results. This workshop explores recent advancements in automatic mixing, particularly deep learning-based approaches utilising large datasets, black-box processing and innovative techniques like differentiable mixing consoles. Through a hands-on session with code examples, participants will learn about building, training and evaluating these systems. Topics include intelligent music production, the importance of context in mixing, and system design challenges. Aimed at researchers and professionals in digital audio processing, this workshop serves as an entry point for those new to deep learning methods for music post-production. The participation of the DAFx community is crucial as we collectively shape the future of AI-driven music mixing.
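As a taste of what "differentiable" means in this context, here is a minimal PyTorch sketch of a toy differentiable mixing console with learnable per-track gain and constant-power pan, fitted to a reference mix by gradient descent. The class, parameter names and the random placeholder audio are illustrative assumptions, not the workshop's materials.

import torch

class DifferentiableMixer(torch.nn.Module):
    """Toy differentiable console: learnable gain (dB) and pan per track."""
    def __init__(self, n_tracks):
        super().__init__()
        self.gain_db = torch.nn.Parameter(torch.zeros(n_tracks))
        self.pan = torch.nn.Parameter(torch.zeros(n_tracks))  # centre = 0

    def forward(self, tracks):                   # tracks: (n_tracks, n_samples)
        gain = 10.0 ** (self.gain_db / 20.0)     # dB to linear gain
        # Constant-power pan law: angle in (0, pi/2), cos^2 + sin^2 = 1.
        theta = (torch.tanh(self.pan) + 1.0) * (torch.pi / 4.0)
        left = ((gain * torch.cos(theta))[:, None] * tracks).sum(dim=0)
        right = ((gain * torch.sin(theta))[:, None] * tracks).sum(dim=0)
        return torch.stack([left, right])        # (2, n_samples) stereo mix

# Fit the console parameters to a reference mix by gradient descent.
tracks = torch.randn(4, 48000)                   # four 1-second stems (placeholder)
target = torch.randn(2, 48000)                   # reference stereo mix (placeholder)
mixer = DifferentiableMixer(n_tracks=4)
opt = torch.optim.Adam(mixer.parameters(), lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(mixer(tracks), target)
    loss.backward()
    opt.step()

Because every operation in the forward pass is differentiable, the gain and pan parameters can be optimised directly from an audio-domain loss; this is the core idea behind differentiable mixing consoles, which in practice chain many such processors (EQ, compression, reverb) per track.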

Soumya Sai Vanka is a PhD researcher at the Centre for Digital Music, Queen Mary University of London, under the AI and Music programme. She works in active collaboration with the R&D division of Steinberg Media Technologies GmbH, and her research predominantly focuses on intention-driven assistive smart mixing, placing HCI at its forefront. Through her research, she aims to develop smart and assistive mixing tools that enable co-creativity. Soumya holds a BSc (Hons, Gold Medal) and an MSc in Physics and has trained as an audio engineer. She served as an alto saxophonist in a forty-woman brass band for three years. She also enjoys painting from time to time.

Marco A. Martínez-Ramírez is a music technology researcher at Sony AI in Tokyo, where he is part of the Music Foundation Model Team. His research interests lie at the intersection of machine learning, digital signal processing, and intelligent music production, with a primary focus on deep learning architectures for music processing tasks. Previously, he was an audio research intern at Adobe and received his PhD from the Centre for Digital Music at Queen Mary University of London. He holds an MSc in digital signal processing from the University of Manchester, UK, and a BSc in electronic engineering from Universidad de Los Andes, Colombia. Marco also has a background in music production and mixing engineering.