Section: New Results
Music Content Processing and Music Information Retrieval
Acoustic modeling, non-negative matrix factorisation, music language modeling, music structure
Music language modeling
Participants : Frédéric Bimbot, Dimitris Moreau, Stanisław Raczyński, Emmanuel Vincent.
Main collaboration: S. Fukayama (University of Tokyo, JP)
Music involves several levels of information, from the acoustic signal, through mid-level quantities such as a musical score or a sequence of chords, up to cognitive quantities such as composer style or key. The dependencies between mid-level and lower- or higher-level information can be represented through acoustic models and language models, respectively.
We pursued our pioneering work on music language modeling, with a particular focus on the joint modeling of "horizontal" (sequential) and "vertical" (simultaneous) dependencies between notes by log-linear interpolation of the corresponding conditional distributions. We identified the normalization of the resulting distribution as a crucial problem for the performance of the model and proposed an exact solution to this problem  . We also applied the log-linear interpolation paradigm to the joint modeling of melody, key and chords, which evolve according to different timelines  . In order to synchronize these feature sequences, we explored the use of beat-long templates consisting of several notes as opposed to short time frames containing a fragment of a single note.
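The log-linear interpolation idea can be illustrated with a minimal sketch, assuming toy "horizontal" and "vertical" conditional distributions over the same candidate notes and illustrative interpolation weights (none of these values come from the cited work); the explicit division by the partition function is the exact normalization step mentioned above.

```python
import numpy as np

def log_linear_interpolate(p_horizontal, p_vertical, weights):
    """Combine two conditional distributions over the same event set:
    p(x) proportional to p_h(x)**w_h * p_v(x)**w_v.
    The division by Z below is the exact normalization step."""
    w_h, w_v = weights
    unnormalized = p_horizontal ** w_h * p_vertical ** w_v
    Z = unnormalized.sum()  # exact partition function over all candidates
    return unnormalized / Z

# Toy distributions over 4 candidate notes (illustrative values only):
p_h = np.array([0.5, 0.2, 0.2, 0.1])  # from a sequential model
p_v = np.array([0.1, 0.6, 0.2, 0.1])  # from a simultaneity model
p = log_linear_interpolate(p_h, p_v, weights=(0.7, 0.3))
```

Computing Z by summing over all candidates is only tractable when the candidate set is small; the difficulty of this normalization for realistic note spaces is precisely why it is identified as a crucial problem above.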
The limited availability of multi-feature symbolic music data currently prevents training the developed models on enough data for the unsupervised probabilistic approach to significantly outperform more conventional approaches based on musicological expertise. We outlined a procedure for the semi-automated collection of large-scale multi-feature music corpora that exploits the wealth of music data available on the web (audio, MIDI, lead sheets, lyrics, etc.) together with algorithms for the automatic detection and alignment of matching data. Following this work, we started collecting pointers to data and developing such algorithms.
Music structure
Participants : Frédéric Bimbot, Gabriel Sargent, Emmanuel Vincent.
External collaboration: Emmanuel Deruty (as an independent consultant)
The structure of a music piece is a concept which is often referred to in various areas of music sciences and technologies, but for which there is no commonly agreed definition. This raises a methodological issue in MIR, when designing and evaluating automatic structure inference algorithms. It also strongly limits the possibility to produce consistent large-scale annotation datasets in a cooperative manner.
This year, our methodology for the semiotic annotation of music pieces was further developed and concretized into a set of principles, concepts and conventions for locating the boundaries of music segments and determining their metaphoric labels  . The method relies on a new concept for characterizing the inner organization of music segments, called the System & Contrast (S&C) model  . At the time of writing, the annotation of over 400 music pieces is being finalized and will be released to the MIR scientific community.
In parallel with this work on specifying the task of music structure description, we have designed, implemented and tested new algorithms for segmenting music into structural units and labeling them. The segmentation process is formulated as a cost optimization procedure accounting for two terms: the first characterizes structural segments by fusing several audio criteria, whereas the second imposes a regularity constraint on the resulting segmentation. Structural labels are then estimated through a probabilistic automaton selection process. A recent development of this work has incorporated the S&C model into the algorithm.
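The two-term cost optimization can be sketched with dynamic programming, under simplifying assumptions: a precomputed per-segment cost `seg_cost[i][j]` standing in for the fusion of audio criteria over a candidate segment [i, j), and a quadratic penalty pulling segment lengths toward a target duration as a stand-in for the regularity constraint. Both cost terms are hypothetical placeholders, not the actual criteria of the system described above.

```python
def segment(seg_cost, n, target_len, lam):
    """Return boundary indices minimizing
    sum of seg_cost over segments + lam * sum of (length - target_len)**2.
    seg_cost[i][j] is the cost of a candidate segment [i, j)."""
    INF = float("inf")
    best = [INF] * (n + 1)   # best[j]: minimal cost of segmenting [0, j)
    back = [0] * (n + 1)     # back[j]: start of the last segment ending at j
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + seg_cost[i][j] + lam * (j - i - target_len) ** 2
            if c < best[j]:
                best[j], back[j] = c, i
    # Backtrack the optimal boundaries.
    bounds, j = [n], n
    while j > 0:
        j = back[j]
        bounds.append(j)
    return bounds[::-1]

# Toy usage: zero audio costs over 8 frames with a preferred length of 4,
# so the regularity term alone drives the segmentation.
zero_cost = [[0.0] * 9 for _ in range(9)]
boundaries = segment(zero_cost, n=8, target_len=4, lam=1.0)
```

With all audio costs equal, the regularity penalty alone selects two segments of the target length, yielding boundaries at 0, 4 and 8; in the real system the fused audio criteria would trade off against this regularity term.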
Different systems based on these principles have been tested in the context of the Quaero Project and of the MIREX international evaluation campaigns in 2010, 2011 and 2012 (see for instance  for 2012).