A Framework for Performer Identification in Audio Recordings
We present a general framework for identifying performers from their playing styles. We investigate how musicians express and communicate their interpretation of a piece's musical content, and how this information can be used to identify performers automatically. We study note-level deviations in parameters such as timing and amplitude. Our approach to performer identification consists of inducing an expressive performance model for each interpreter, essentially a performer-dependent mapping from inter-note features to timing and amplitude transformations. We conclude by outlining two successful performer identification case studies.
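As a minimal sketch of the idea (not the paper's actual models), the approach can be illustrated as follows: for each performer, induce a model mapping inter-note score features to expressive deviations, then attribute an unseen performance to the performer whose induced model predicts its deviations with the lowest error. All data, feature choices, and the linear model below are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of performer identification via per-performer
# expressive models: fit one feature-to-deviation mapping per performer,
# then classify an unseen performance by lowest prediction error.
rng = np.random.default_rng(0)

def fit_model(features, deviations):
    """Least-squares linear map from inter-note features to deviations."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias term
    coeffs, *_ = np.linalg.lstsq(X, deviations, rcond=None)
    return coeffs

def predict(coeffs, features):
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ coeffs

def identify(models, features, deviations):
    """Return the performer whose model best explains the performance."""
    errors = {name: float(np.mean((predict(c, features) - deviations) ** 2))
              for name, c in models.items()}
    return min(errors, key=errors.get)

# Synthetic performers with distinct expressive mappings (assumption:
# 3 inter-note features, e.g. pitch interval, duration ratio, metrical position).
true_maps = {"performer_a": np.array([[0.5], [-0.2], [0.1], [0.0]]),
             "performer_b": np.array([[-0.3], [0.4], [0.2], [0.1]])}

def render(name, features):
    """Simulate a performance: apply a performer's mapping plus small noise."""
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ true_maps[name] + rng.normal(scale=0.05, size=(len(features), 1))

train_feats = rng.normal(size=(200, 3))
models = {name: fit_model(train_feats, render(name, train_feats))
          for name in true_maps}

test_feats = rng.normal(size=(50, 3))
print(identify(models, test_feats, render("performer_b", test_feats)))
```

The key design choice mirrored here is that identification is cast as model selection: each performer's style is captured as a mapping, and the best-fitting mapping names the performer.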