
Combining Audio And Gestures For A Real-Time Improviser

Roberto Morales-Mazanares, Eduardo Morales, David Wessel. Combining Audio And Gestures For A Real-Time Improviser. 2005. International Computer Music Conference. Pages 813-816.


Download: files/icmc05fin.pdf

Abstract: Skilled improvisers are able to shape a musical discourse in real time by continuously modulating pitch, rhythm, tempo and loudness to communicate high-level information such as musical structure and emotion. Interaction between musicians corresponds to their cultural background, their subjective reaction to the generated material, and their capacity to resolve, on their own terms, the aesthetics of the resulting pieces. In this paper we introduce GRI, an environment that incorporates music and movement gestures from an improviser to acquire precise data and react in a similar way to an improviser. GRI takes music samples from a particular improviser and learns classifiers to identify different improvisation styles. It then learns, for each style, a probabilistic transition automaton that considers gestures to predict the musician's most probable next state. The current musical note, the predicted next state, and gesture information are used to produce adequate responses in real time. The system is demonstrated with a flutist equipped with accelerometers and gyroscopes to detect gestures, with very promising results.
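The abstract's per-style probabilistic transition automaton can be illustrated with a minimal sketch. This is not the paper's actual code: the state labels, the `learn_transitions`/`predict_next` helpers, and the toy sequence are all hypothetical, and the sketch uses plain bigram counts over a state sequence rather than the gesture-conditioned model GRI learns.

```python
# Hypothetical sketch of a probabilistic transition automaton:
# estimate P(next state | current state) from counts of consecutive
# state pairs, then predict the most probable next state.
from collections import defaultdict


def learn_transitions(state_sequence):
    """Estimate transition probabilities from consecutive state pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(state_sequence, state_sequence[1:]):
        counts[cur][nxt] += 1
    model = {}
    for cur, nexts in counts.items():
        total = sum(nexts.values())
        model[cur] = {state: n / total for state, n in nexts.items()}
    return model


def predict_next(model, current_state):
    """Return the most probable next state, or None if the state is unseen."""
    if current_state not in model:
        return None
    return max(model[current_state], key=model[current_state].get)


# Toy sequence of states, as a style classifier might label them
states = ["calm", "calm", "agitated", "calm", "agitated", "agitated"]
model = learn_transitions(states)
```

In GRI the prediction would additionally be conditioned on gesture data and computed per improvisation style; the table lookup above only shows the transition-automaton idea.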

Context: This was a featured publication on the legacy (pre-2011) website, ported to the new site by Matt Wright in early 2021.

Submitted by Legacy at 03/26/2021 17:03:29

This page of OpenSoundControl website updated Mon May 24 11:20:32 PDT 2021 by matt (license: CC BY).