Proposal


Abstract

This workshop aims to consolidate ongoing critical discussions on machine learning (ML) and artificial intelligence (AI) practices within the NIME community. In particular, rather than focusing on technical or theoretical discussions, we foreground first- and second-person methods to articulate insights and experiences that are often overlooked in the prevailing discourses surrounding such practices.

Workshop description

The NIME community typically repurposes emerging technologies for creative uses; ML is one that has received significant attention, especially for building gesture-based interactive systems (Françoise et al., 2014; Fiebrink et al., 2009; Bevilacqua et al., 2005). Recent developments in deep learning have widened the creative possibilities of ML (Caillon & Esling, 2021; Tahiroğlu et al., 2021; Haki et al., 2023) and diversified its uses within the NIME community (Jourdan & Caramiaux, 2023).

These tools have become an important research interest for NIME, with numerous technical contributions in the form of datasets (Malloy & Tzanetakis, 2023; Wyse & Ravikumar, 2022), models (Nuttall et al., 2021; Tahiroğlu et al., 2021), live performance software (Kerlleñevich et al., 2011; Proctor & Martin, 2020) and deployment pipelines (Pelinski et al., 2023). These advancements have enabled the exploration of new forms of embodied interaction (Erdem et al., 2022).

However, contributions using participatory or ethnographic methods to design ML systems, or examining artists’ practices with such technologies, are less frequent. Beyond technicalities, how practitioners build, appropriate and relate to such technologies is sparsely discussed (Jourdan & Caramiaux, 2023). Notably, Fiebrink and Sonami (2020) reflect, through interviews, on their long-term collaboration on instrument building and performance with ML.

Caramiaux and Donnarumma (2020) follow a practice research approach, reflecting on a long-term collaboration between the researcher and the artist. They discuss the epistemological implications of their approach and its influence on the involvement of ML, first as a tool, then as an actor in the performance. Scurto et al. (2021) draw on Barad’s (2007) notion of diffraction (as opposed to reflection) to consider the social discourses and material configurations in which ML is embedded, framing a “diffractive practice of machine learning”.

This workshop is a partial continuation of the “Critical Perspectives on AI/ML in Musical Interfaces” workshop held at NIME in 2021 (Martin et al., 2020). Rather than focusing on technical implementations, that workshop emphasised critical discussions such as “Diversity and Ethics”, “Design and Research Methods”, “Social and Cultural Impact” and “Musicological Perspectives”. Likewise, we aim to bring into focus ongoing critical ML-related discussions in NIME. In this workshop, however, we focus on autoethnographic methods as a means to share and describe personal experiences and practices as artists and/or researchers. A previous workshop on “Querying Experience for NIME” (Reed et al., 2023) presented some of these methods in the context of NIME: somaesthetics (Shusterman, 2008; Avila et al., 2020), micro-phenomenology (Petitmengin, 2006; Reed et al., 2022), dialogic design (Wright & McCarthy, 2018; Zayas-Garin & McPherson, 2022) and retrospective trioethnography (Howell et al., 2021).

In HCI and design, first- and second-person methods enable researchers to articulate experiences of a design practice or concept from within, using themselves and their practices as the subject of study (Ellis et al., 2011; Devendorf et al., 2020). Moreover, such methods allow researchers to articulate insights that are typically overlooked in design narratives (Howell et al., 2021), and particularly in the discourses surrounding ML and AI. We believe that giving artists/musicians/performers/researchers the opportunity to share their own perspectives on their practice can also bring out new critical discussions, interaction strategies and design implications.

To attend the workshop, participants will be asked to submit a short position paper. The workshop will consist of a series of opening talks, followed by collective exercises and the participants’ short presentations, which will be organised into thematic discussion panels moderated by the authors. We will send an open call for submissions, open in terms of format (abstracts / short papers / position papers / progress reports / demos / posters / pictorials), with themes that may include (but are not restricted to), in the context of making/performing with ML/AI: autoethnographies, reflective or diffractive accounts, practice-based research approaches, micro-phenomenology, the critical incident technique, how values are inscribed in technical systems, idiosyncratic interactive systems, and co-design or participatory design practices. The accepted contributions will be shared on the workshop’s website, with the aim of encouraging discussion surrounding the practice of ML in NIME.

References

  1. Avila, J. M., Tsaknaki, V., Karpashevich, P., Windlin, C., Valenti, N., Höök, K., McPherson, A., & Benford, S. (2020). Soma Design for NIME. International Conference on New Interfaces for Musical Expression. https://doi.org/10.5281/zenodo.4297715
  2. Bevilacqua, F., Müller, R., & Schnell, N. (2005). MnM: A Max/MSP Mapping Toolbox. Proceedings of the International Conference on New Interfaces for Musical Expression. https://hal.science/hal-01161330
  3. Caillon, A., & Esling, P. (2021). RAVE: A Variational Autoencoder for Fast and High-Quality Neural Audio Synthesis. arXiv preprint arXiv:2111.05011.
  4. Devendorf, L., Andersen, K., & Kelliher, A. (2020). Making Design Memoirs: Understanding and Honoring Difficult Experiences. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–12. https://doi.org/10.1145/3313831.3376345
  5. Ellis, C., Adams, T. E., & Bochner, A. P. (2011). Autoethnography: An Overview. Historical Social Research / Historische Sozialforschung, 36(4), 273–290. https://www.jstor.org/stable/23032294
  6. Erdem, Ç., Simionato, R., Karbasi, S. M., & Jensenius, A. R. (2022). Embodied Perspectives on Musical AI (EmAI). RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion.
  7. Fiebrink, R., Trueman, D., & Cook, P. R. (2009). A Meta-Instrument for Interactive, On-the-Fly Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, 280–285. https://zenodo.org/record/1177513
  8. Françoise, J., Schnell, N., Borghesi, R., & Bevilacqua, F. (2014). Probabilistic Models for Designing Motion and Sound Relationships. Proceedings of the 2014 International Conference on New Interfaces for Musical Expression, 287. https://hal.science/hal-01061335
  9. Haki, B., Pelinski, T., Nieto Giménez, M., & Jordà, S. (2023). Completing Audio Drum Loops with Symbolic Drum Suggestions. Proceedings of the International Conference on New Interfaces for Musical Expression.
  10. Howell, N., Desjardins, A., & Fox, S. (2021). Cracks in the Success Narrative: Rethinking Failure in Design Research through a Retrospective Trioethnography. ACM Transactions on Computer-Human Interaction (TOCHI). https://doi.org/10.1145/3462447
  11. Jourdan, T., & Caramiaux, B. (2023). Machine Learning for Musical Expression: A Systematic Literature Review. Proceedings of the International Conference on New Interfaces for Musical Expression.
  12. Jourdan, T., & Caramiaux, B. (2023). Culture and Politics of Machine Learning in NIME: A Preliminary Qualitative Inquiry. Proceedings of the International Conference on New Interfaces for Musical Expression.
  13. Kerlleñevich, H., Eguía, M. C., & Riera, P. E. (2011). An Open Source Interface Based on Biological Neural Networks for Interactive Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression. https://doi.org/10.5281/zenodo.1178063
  14. Malloy, C., & Tzanetakis, G. (2023). Steelpan-Specific Pitch Detection: A Dataset and Deep Learning Model. Proceedings of the International Conference on New Interfaces for Musical Expression. http://www.nime.org/proceedings/2023/nime2023_59.pdf
  15. Martin, C., Morreale, F., Wallace, B., & Scurto, H. (2020). Workshop on Critical Perspectives on AI/ML in Musical Interfaces. International Conference on New Interfaces for Musical Expression.
  16. Nuttall, T., Haki, B., & Jordà, S. (2021). Transformer Neural Networks for Automated Rhythm Generation. Proceedings of the International Conference on New Interfaces for Musical Expression. https://doi.org/10.21428/92fbeb44.fe9a0d82
  17. Pelinski, T., Díaz, R., Benito Temprano, A. L., & McPherson, A. (2023). Pipeline for Recording Datasets and Running Neural Networks on the Bela Embedded Hardware Platform. Proceedings of the International Conference on New Interfaces for Musical Expression.
  18. Petitmengin, C. (2006). Describing One’s Subjective Experience in the Second Person: An Interview Method for the Science of Consciousness. Phenomenology and the Cognitive Sciences, 5(3), 229–269. https://doi.org/10.1007/s11097-006-9022-2
  19. Proctor, R., & Martin, C. P. (2020). A Laptop Ensemble Performance System Using Recurrent Neural Networks. Proceedings of the International Conference on New Interfaces for Musical Expression. https://doi.org/10.5281/zenodo.4813481
  20. Reed, C. N., Nordmoen, C., Martelloni, A., Lepri, G., Robson, N., Zayas-Garin, E., Cotton, K., Mice, L., & McPherson, A. (2022). Exploring Experiences with New Musical Instruments through Micro-phenomenology. Proceedings of the International Conference on New Interfaces for Musical Expression. https://doi.org/10.21428/92fbeb44.b304e4b1
  21. Reed, C. N., Zayas-Garin, E., & McPherson, A. (2023). Querying Experience for NIME. International Conference on New Interfaces for Musical Expression.
  22. Shusterman, R. (2008). Body Consciousness: A Philosophy of Mindfulness and Somaesthetics. Cambridge University Press. https://doi.org/10.1017/CBO9780511802829
  23. Tahiroğlu, K., Kastemaa, M., & Koli, O. (2021). AI-terity 2.0: An Autonomous NIME Featuring GANSpaceSynth Deep Learning Model. Proceedings of the International Conference on New Interfaces for Musical Expression. https://nime.pubpub.org/pub/9zu49nu5/release/1
  24. Wright, P., & McCarthy, J. (2018). Bakhtin’s Dialogics and the “Human” in Human-Centered Design. In J. Bardzell, S. Bardzell, & M. Blythe (Eds.), Critical Theory and Interaction Design. The MIT Press.
  25. Wyse, L., & Ravikumar, P. T. (2022). Syntex: Parametric Audio Texture Datasets for Conditional Training of Instrumental Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression. https://doi.org/10.21428/92fbeb44.0fe70450
  26. Zayas-Garin, E., & McPherson, A. (2022). Dialogic Design of Accessible Digital Musical Instruments: Investigating Performer Experience. Proceedings of the International Conference on New Interfaces for Musical Expression. https://doi.org/10.21428/92fbeb44.2b8ce9a4