
Introduction


We are investigating automatic transcription of MIDI (Musical Instrument Digital Interface) signals. Since the MIDI format already includes pitch information, the remaining problem is to recognize the note values, i.e., the intended nominal lengths of the notes, as shown in Fig. 1; we refer to this task as ``rhythm recognition.''

Conventionally, this has been done by ``quantization'' of the IOIs (Inter-Onset Intervals) of the played notes. We used an HMM (Hidden Markov Model) to solve this problem (Saito et al., 1999) by modeling both the fluctuating note lengths and the probabilistic constraints on note sequences. In that work, we also included multiple tempos in the HMM to find the best-matching tempo. In other work, tempo was included as a hidden variable of a probabilistic model (Cemgil et al., 2000; Raphael, 2001) or was determined by clustering IOIs (Dixon, 2001), and the rhythm was then estimated based on the tempo.
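
As a minimal sketch of this kind of HMM-based rhythm quantization (illustrative only, not the model of the works cited above; the candidate note values, the fixed tempo, the Gaussian fluctuation model, and the uniform transition probabilities below are all assumptions made here for brevity), Viterbi decoding picks the most likely note-value sequence for a series of observed IOIs:

    import numpy as np

    NOTE_VALUES = np.array([0.25, 0.5, 1.0, 2.0])  # candidate note values in beats (16th .. half)
    TEMPO = 0.5    # assumed fixed tempo: seconds per beat (120 BPM)
    SIGMA = 0.03   # assumed std. dev. of timing fluctuation, in seconds

    # Transition log-probabilities between note values; uniform here, whereas a
    # real model would learn such note-sequence constraints from data.
    log_trans = np.log(np.full((4, 4), 0.25))

    def log_emission(ioi):
        """Log-likelihood (up to a constant) of one IOI under each note value."""
        return -0.5 * ((ioi - NOTE_VALUES * TEMPO) / SIGMA) ** 2

    def viterbi(iois):
        """Most likely note-value sequence (in beats) for IOIs given in seconds."""
        n, k = len(iois), len(NOTE_VALUES)
        delta = np.zeros((n, k))           # best log-score ending in each state
        psi = np.zeros((n, k), dtype=int)  # back-pointers
        delta[0] = log_emission(iois[0])
        for t in range(1, n):
            scores = delta[t - 1][:, None] + log_trans  # scores[i, j]: from i to j
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + log_emission(iois[t])
        path = [int(delta[-1].argmax())]
        for t in range(n - 1, 0, -1):
            path.append(int(psi[t][path[-1]]))
        return NOTE_VALUES[path[::-1]]

    # Played quarter, quarter, eighth, eighth, half, with timing fluctuations:
    print(viterbi(np.array([0.52, 0.49, 0.26, 0.24, 1.02])))  # -> [1. 1. 0.5 0.5 2.]

Treating tempo as a second hidden state variable, as in the approaches above, amounts to enlarging the state space to (note value, tempo) pairs.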

In this paper, we treat rhythm recognition as a problem of probabilistically decomposing the observed IOIs into rhythm and tempo components.
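
To state the decomposition concretely (a sketch with notation assumed here, not taken verbatim from this paper), the n-th observed IOI $o_n$ can be written as the product of a local tempo and a nominal note value,

    o_n = t_n \cdot r_n \cdot \epsilon_n,

where $t_n$ is the local tempo in seconds per beat, $r_n \in \{1/4, 1/2, 1, 2, \ldots\}$ is the note value in beats, and $\epsilon_n \approx 1$ is a timing fluctuation; rhythm recognition then amounts to finding the most probable $(r_n, t_n)$ given the observed IOIs.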


