The research conducted in MuTant is devoted both to leveraging the capabilities of musical interaction between humans and computers, and to developing tools that foster the authoring of interaction and time in computer music. Our research program belongs to the field of interactive music systems for computer music composition and performance, introduced in the mid-1980s at Ircam. Within this paradigm, the computer is brought into the workflow of musical creation as an intelligent performer, equipped with a listening machine capable of analyzing, coordinating and anticipating its own actions and those of other musicians within a musically coherent and synchronous context. Figure illustrates this paradigm.
The use of interactive music systems has become universal ever since, and their practice has not ceased to nourish multidisciplinary research. From a research perspective, an interactive music system deals with two problems: Real-time Machine Listening, and Synchronous and Timed Real-time Programming in Computer Music. The strong coupling and union (as opposed to the intersection) of the two fields has become a necessity in music practice: temporal scenarios describing real-time interactions between computer environments and human musicians (in the form of programs or augmented music scores) must be employed in real time on stage with a high degree of musical autonomy and competence, while ensuring fault-tolerance and time-correctness.
Whereas each field has generated a substantial literature, few attempts have been made to address the global problem by putting the two domains in direct interaction.
MuTant's research program has developed new real-time machine listening mechanisms (see Section ), new reactive and strongly timed real-time software architectures (see Section ), as well as contributions to the field of verification and testing of dynamic setups and workflows such as those observed in music (see Section ). The major incarnation of our research is the award-winning Antescofo language and real-time system, deployed since our inception in major international festivals, with more than 100 known repertoire pieces regularly played throughout the world.
When human listeners are confronted with musical sounds, they rapidly and automatically find their way in the music. Even musically untrained listeners have an exceptional ability to make rapid judgments about music from short examples, such as determining style, performer or beat, and identifying specific events such as instruments or pitches. Endowing computer systems with similar capabilities requires advances both in music cognition and in analysis and retrieval systems employing signal processing and machine learning.
Machine listening in our context refers to the capacity of computers to understand “non-speech sound” by analyzing the content of music and audio signals, combining advanced signal processing and machine learning. The major focus of MuTant has been on real-time machine listening algorithms, spanning real-time recognition systems (such as event detection) as well as information retrieval (such as structure discovery and qualitative parameter estimation). Our major achievement lies in our unique real-time score following (a.k.a. audio-to-score alignment) system featured in Antescofo (cf. Section ). We have also contributed to the field of online music structure discovery in audio processing, and lately to the problem of off-line rhythmic quantization of symbolic data.
This work continues the prior work of the team founder, which proved the utility of strongly-timed probabilistic models in the form of hidden semi-Markov states. Our most important theoretical contribution is reported in , which introduced time-coherency criteria for probabilistic models; it led to the general robustness of the Antescofo listening machine and allowed its deployment for all music instruments and setups around the world. We further studied the integration of other recognition algorithms into the system in the form of information fusion, and for singing voice based on lyric data in . Collaboration with our Japanese counterparts led to extensions of our model to the symbolic domain, reported in . Collaboration with the SIERRA team created a joint research momentum for fostering such applications to weakly-supervised discriminative models, reported in . Our real-time audio-to-score alignment is a major component of the Antescofo software described in Section .
To extend our listening approach to general sound, we envisioned dropping the prior information provided by music scores and replacing it by the inherent structure of general audio signals. Early attempts by the team leader employed methods of information geometry, joining information theory, differential geometry and signal processing. We were among the first teams in the world advocating the use of such approaches for audio signal processing, and we participated in the growth of the community. A major breakthrough of this approach is reported in and in the PhD thesis , which outline a general real-time change detection mechanism. Automatic structure discovery was further pursued in an MS thesis project in 2013 . By that time we had realized that information manifolds do not necessarily provide the invariance needed for automatic structure discovery in audio signals, especially for natural sounds. Following this observation, we pursued an alternative approach in 2014, in collaboration with the Inria SIERRA team . The result of this joint work was published at IEEE ICASSP 2015 and won the best student paper award . We are currently studying large-scale applications of this approach to natural sounds and to robotics in the framework of Maxime Sirbu's PhD project.
Rhythmic data are commonly represented by tree structures (rhythm trees) due to the theoretical proximity of such structures to the proportional representation of time values in traditional musical notation. We are studying the application to rhythm notation of techniques and tools for the symbolic processing of tree structures, in particular tree automata and term rewriting.
Our main contribution in this context is the development of a new framework for rhythm transcription , , , addressing the problem of converting a sequence of timestamped notes, e.g. a file in MIDI format, into a score in traditional music notation. This problem is crucial in the context of assisted music composition environments and music score editors. It admits no single unambiguous solution: in order to fit the musical context, the system has to balance constraints of precision and readability of the generated scores. Our approach is based on algorithms for the exploration and lazy enumeration of large sets of weighted trees (tree series), representing possible solutions to a transcription problem. A side problem concerns equivalent notations of the same rhythm, for which we have developed a term rewriting approach based on a new equational theory of rhythm notation , , .
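The precision/readability trade-off at the heart of transcription can be illustrated by a toy cost model (a hypothetical sketch, not the algorithm of the cited framework): each candidate subdivision of a beat is scored by its quantization error plus a crude complexity penalty, and the cheapest candidate wins.

```python
# Toy illustration of the precision/readability trade-off in rhythm
# transcription (hypothetical cost model, not the actual OMRQ algorithm).

def transcribe_beat(onsets, max_div=8, alpha=1.0, beta=0.05):
    """Pick a subdivision of one beat for onsets in [0,1):
    cost = alpha * quantization error + beta * notation complexity."""
    best = None
    for n in range(1, max_div + 1):
        grid = [round(t * n) / n for t in onsets]           # snap to an n-grid
        error = sum(abs(g - t) for g, t in zip(grid, onsets))
        cost = alpha * error + beta * n                     # crude complexity: n
        if best is None or cost < best[0]:
            best = (cost, n, grid)
    return best[1], best[2]

# Three roughly even onsets: a triplet reads better than a finer grid.
n, grid = transcribe_beat([0.0, 0.34, 0.66])
print(n, grid)
```

Raising `beta` favors simpler notations at the price of precision; lowering it favors exactness, which is the calibration the text describes.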
The research presented here aims at developing a programming model dedicated to the authoring of time and interaction for the next generation of interactive music systems. The study, formalization and implementation of such a programming paradigm, strongly coupled to the recognition systems discussed in the previous section, constitute the second objective of the MuTant project.
The tangible result of this research is the development of the Antescofo system (cf. Section ) for the design and implementation of musical scenarios in which human and computer actions are in constant real-time interaction. Through this development, Antescofo has established itself in the community; it has served as the backbone of the temporal organization of more than 100 performances since 2012 and is used both for preexisting pieces and new creations by ensembles such as the Berliner Philharmoniker, the Los Angeles Philharmonic, the Ensemble Intercontemporain or the Orchestre de Paris, to name a few.
Compared to programmable sequencers or interactive music systems (like Max or PureData), the Antescofo DSL offers a rich notion of time reference: it provides an explicit time frame for the environment, a comprehensive set of musical synchronization strategies, and predictable mechanisms for controlling time at various timescales (temporal determinism) and across concurrent code modules (time-mediated concurrency).
Audio and music often involve the presence and cooperation of multiple notions of time: an ideal time authored by the composer in a score, and a performance time produced jointly by the performers and the real-time electronics, where instants and durations are expressed either in physical time (milliseconds), in relative time (relative to an unknown dynamic tempo), or through logical events and relations (“at the peak of intensity”, “at the end of the musical phrase”, “twice as fast”).
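The arithmetic relating relative and physical time can be sketched in a few lines (a hypothetical illustration with invented names, not Antescofo's implementation): a position in beats is converted to seconds under a piecewise-constant tempo curve.

```python
# Convert a position in beats to physical time (seconds) under a
# piecewise-constant tempo curve: a list of (start_beat, bpm) pairs.
# Hypothetical illustration, not Antescofo's actual implementation.

def beats_to_seconds(beat, tempo_curve):
    """Physical time elapsed at `beat`, given (start_beat, bpm) segments."""
    seconds = 0.0
    for i, (start, bpm) in enumerate(tempo_curve):
        end = tempo_curve[i + 1][0] if i + 1 < len(tempo_curve) else float("inf")
        if beat <= start:
            break
        span = min(beat, end) - start          # beats covered in this segment
        seconds += span * 60.0 / bpm           # one beat lasts 60/bpm seconds
    return seconds

# A score section at 60 BPM, accelerating to 120 BPM at beat 4:
curve = [(0.0, 60.0), (4.0, 120.0)]
print(beats_to_seconds(4.0, curve))   # 4 beats at 60 BPM -> 4.0
print(beats_to_seconds(6.0, curve))   # plus 2 beats at 120 BPM -> 5.0
```

In performance the tempo curve is not known in advance but estimated on the fly, which is precisely why the language keeps the two time frames distinct.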
Antescofo is the first language to address this variety of temporal notions, relying on the synchronous approach for the handling of atomic and logical events and on an anticipative notion of tempo for the handling of relative durations , . A first partial model of the time at work in Antescofo (single time, static activities) has been formalized using parametric timed automata and constitutes the reference semantics for tests (cf. Section ). A denotational semantics of the complete language (multiple times and dynamic constructions, including anticipative synchronization strategies) has been published in .
Antescofo introduces the notion of temporal scope to formalize the relationships between temporal information specified in the score and its realization during a performance . A temporal scope is attached to a sequence of actions; it can be inherited or dynamically changed as the result of a computation. A synchronization strategy is part of a temporal scope definition: it uses the performer's position and tempo estimates from the listening module to drive the passing of time in a sequence of atomic and durative actions.
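The core of a simple synchronization strategy can be sketched as follows (hypothetical code, not taken from Antescofo): between two detected events, the score position driving the electronic actions is extrapolated from the last detected position at the estimated tempo.

```python
# Tempo-driven position extrapolation, the core of a naive
# synchronization strategy (hypothetical sketch, not Antescofo's code):
# between two detected events, electronic actions advance at the
# estimated tempo from the last known score position.

def score_position(last_event_beat, last_event_time, tempo_bpm, now):
    """Estimated score position (in beats) at wall-clock time `now`."""
    return last_event_beat + (now - last_event_time) * tempo_bpm / 60.0

# Last detected note was at beat 8, at t = 10.0 s, estimated tempo 90 BPM:
print(score_position(8.0, 10.0, 90.0, 12.0))   # -> 11.0
```

Anticipative strategies refine this naive extrapolation by smoothing position jumps when the next event arrives earlier or later than predicted.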
Synchronization strategies have been systematically studied to evaluate their musical relevance, in collaboration with the Orchestre de Paris and composer Marco Stroppa. Anticipative strategies handle the uncertainties inherent in musical event occurrences, exhibiting a smooth musical rendering while preserving articulation points and target events .
Several constructions dedicated to the expression of the temporal organization of musical entities and their control have enriched the language since the start of the project. These constructions have been motivated by composers' research residencies in our team: representation of open scores (J. Freeman); anticipative synchronization strategies (C. Trapani); adaptive sampling of continuous curves in relative time for the dynamic control of sound synthesis (J.-M. Fernandez); musical gesture (J. Blondeau); first-class processes, actors and continuation combinators for the development of libraries of reusable parametric temporal behaviors (M. Stroppa, Y. Maresz); etc.
The reaction to a logical event is a unique feature in the computer music system community . It extends the well-known when operator of synchronous languages with process creation. Elaborating on this low-level mechanism, temporal patterns enable the expression of complex temporal constraints mixing instants and durations. The problem of online matching, where events are presented in real time and the matching is computed incrementally, has recently received attention from the model-checking community, but with weaker causality constraints.
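The idea of online matching of a temporal pattern over a timed event stream can be illustrated by a minimal sketch (hypothetical representation, far simpler than Antescofo's pattern language): here the pattern is "a B occurring at most `window` beats after an A", decided incrementally as events arrive.

```python
# Minimal sketch of online matching of a temporal pattern over a
# timed event stream: "a B at most `window` beats after an A".
# Hypothetical illustration of the idea, not Antescofo's pattern language.

def match_within(stream, first, second, window):
    """Yield (t_a, t_b) pairs as soon as they can be decided online."""
    pending = []                     # timestamps of unmatched `first` events
    for label, t in stream:          # events arrive in time order
        pending = [ta for ta in pending if t - ta <= window]  # expire old As
        if label == second and pending:
            yield (pending.pop(0), t)        # earliest pending A matches
        if label == first:
            pending.append(t)

events = [("A", 0.0), ("C", 0.5), ("B", 1.0), ("A", 2.0), ("B", 4.0)]
print(list(match_within(events, "A", "B", window=1.5)))
# -> [(0.0, 1.0)]: the second B arrives 2 beats after its A, too late.
```

Note that each match is emitted causally, at the instant of the closing event, without waiting for the end of the stream.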
The authoring of complex temporal organizations can be greatly improved through adapted visual interfaces, which has led to the development of AscoGraph, a user interface dedicated to Antescofo. AscoGraph is used both as an editing interface and as a monitoring interface during performances . This project ran from late 2012 to late 2014 thanks to Inria ADT and ANR support.
An information visualisation perspective has been taken for the design of the timeline-based representation of action items, aiming at information coherence and clarity, ease of search and navigation, hierarchical distinction and explicit linking, while minimizing information overload in the presentation of the nested structure of complex concurrent activities .
We address the questions of functional reliability and temporal predictability in score-based interactive music systems such as Antescofo. On the one hand, checking these properties is difficult for such systems, which involve a degree of human interaction as well as timing constraints (for audio computations) beyond those of many other real-time applications such as embedded control. On the other hand, although they are expected to behave properly during public concerts, these systems are not safety-critical, and a complete formal certification is therefore not strictly necessary in our case.
Our objective in this context is to provide techniques and tools to assist both the programmers of scores (i.e. composers) and the developers of the system itself , . It should be noted that the former are generally not experts in real-time programming; we aim at giving them a clear view of what the outcome of the score they are writing will be, and of the limits of what is playable by the system. To help the development of Antescofo, we have built a framework for automated timed conformance testing , , , , .
In both cases, it is important to be able to predict statically the behavior of the system in response to every possible musician input. This cannot be done manually: it requires, first, a formal definition of the semantics of scores and, second, advanced symbolic state exploration techniques (model checking) .
Startup Creation
Arshia Cont, together with José Echeveste and Philippe Cuvillier (former PhD students), is creating a startup around Antescofo to bring the product to a broader public, starting March 2016 http://
The project was awarded the “Emergence Award” in 2015, which helps emerging technology companies study their project, and an i-LAB prize in 2016, supported by the French Ministry of Culture and Bpifrance; it was also a finalist of Midemlab 2016.
Anticipatory Score Following and Real-time Language
Functional Description. Antescofo is a modular polyphonic score following system as well as a synchronous programming language for musical composition. The first module allows automatic recognition of the music score position and the tempo from a real-time audio stream coming from performer(s), making it possible to synchronize an instrumental performance with computer-realized elements. The synchronous language (DSL) within Antescofo allows flexible writing of time and interaction in computer music.
Participants: Arshia Cont, Jean-Louis Giavitto, Florent Jacquemard and José Echeveste
Contact: Arshia Cont
The design of the Antescofo DSL clearly benefits from a strong and continuous involvement in the production of world-class composers' pieces and their continuous recreation throughout the world. These interactions motivate new developments, challenge the state of the art and, in return, open new creative dimensions for composers and musicians. The maturity of the system is attested by the generalization of its use in a large proportion of Ircam's new productions, and by its use outside Ircam all around the world (Brazil, Chile, Cuba, Italy, China, US, etc.). Antescofo enjoys an active community of 150 users:
http://
Library for rhythm transcription integrated into the assisted composition environment OpenMusic.
Functional Description. Rhythm transcription is the conversion of a sequence of timed events into the structured representations of conventional Western music notation. Available as a graphical component of OpenMusic, the library OMRQ privileges user interaction in order to search for an appropriate balance between different criteria, in particular the precision of the transcription and the readability of the musical scores produced.
This system follows a uniform approach, using hierarchical representations of timing notations in the form of rhythm trees, and efficient parsing algorithms for the lazy enumeration of transcription solutions.
Its implementation is carried out via a dedicated interface allowing interactive exploration of the solution space, its visualization and local editing, with particular attention to the processing of grace notes and rests.
Participants: Florent Jacquemard and Adrien Ycart
Contact: Florent Jacquemard
URL: http://
Timed testing platform for Antescofo.
Functional Description. The frequent use of Antescofo in live public performances with human musicians implies strong requirements of temporal reliability and robustness to unforeseen errors in the input. To address these requirements and to help both the development of the system and the authoring of pieces by users, we are developing a platform for automating the testing of Antescofo's behavior on a given score, with a focus on timed behavior. It is based on state-of-the-art techniques and tools for model-based testing of embedded systems , and makes it possible to automate the following main tasks:
offline and on-the-fly generation of relevant input data for testing (i.e. fake performances of musicians, including timing values), aiming at exhaustiveness,
computation of the corresponding expected output, according to a formal specification of the expected behavior of the system on a given mixed score,
black-box execution of the input test data on the System Under Test,
comparison of expected and real output and production of a test verdict.
The input and output data are timed traces (sequences of discrete events together with inter-event durations). Our method is based on formal models (specifications) in an ad hoc medium-level intermediate representation (IR). We have developed a compiler for producing automatically such IR models from Antescofo high level mixed scores.
Then, in the offline approach, the IR is converted to timed automata and passed to the model checker Uppaal, to which task (1) above is delegated, following coverage criteria, along with task (2), by simulation. In the online approach, tasks (1) and (2) are realized during the execution of the IR by a purpose-built virtual machine. Moreover, we have implemented several tools for tasks (3) and (4), corresponding to different boundaries for the implementation under test (black box): e.g. the interpreter of Antescofo's synchronous language alone, with tempo detection, or the whole system.
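The final verdict step, task (4), can be sketched as a comparison of two timed traces within a timing tolerance (a hypothetical simplification of the actual test harness):

```python
# Sketch of task (4): comparing an expected timed trace with the trace
# observed from the system under test, within a timing tolerance.
# Hypothetical illustration; the real harness is more involved.

def verdict(expected, observed, tol=0.01):
    """Traces are lists of (event, inter-event duration). PASS iff the
    event sequences match and each duration differs by at most `tol`."""
    if [e for e, _ in expected] != [e for e, _ in observed]:
        return "FAIL: event sequences differ"
    for (e, d_exp), (_, d_obs) in zip(expected, observed):
        if abs(d_exp - d_obs) > tol:
            return f"FAIL: event {e!r} off by {abs(d_exp - d_obs):.3f}s"
    return "PASS"

spec = [("note1", 0.0), ("action1", 0.5), ("action2", 0.25)]
run  = [("note1", 0.0), ("action1", 0.505), ("action2", 0.251)]
print(verdict(spec, run))   # -> PASS
```

Using inter-event durations rather than absolute timestamps keeps the comparison meaningful when the whole performance is stretched by a tempo change.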
Participants: Clément Poncelet, Florent Jacquemard, Pierre Donat-Bouillud
Contact: Clément Poncelet
These implementations have been conducted as part of Clément Poncelet's PhD thesis.
The Antescofo graphical score editor.
Functional Description. AscoGraph, released in 2013, provides an autonomous Integrated Development Environment (IDE) for the authoring of Antescofo scores. The Antescofo listening machine, as it advances through the score during recognition, uses the message-passing paradigm to perform tasks such as automatic accompaniment, spatialization, etc. The Antescofo score is a text file containing the events to follow (chords, notes, trills, ...), synchronization strategies specifying how to trigger actions, and electronic actions (the reactive language).
This editor shares its score-parsing routines with the Antescofo core, so the validity of the score is checked on saving while editing in AscoGraph, with proper handling of parsing errors.
Graphically, the application is divided into two parts (Figure ). On the left side, a graphical representation of the score uses a timeline with a track view. On the right side, a text editor displays the score with syntax coloring. Both views can be edited and are synchronized on saving. Special objects such as “curves” are graphically editable: they provide high-level variable automation facilities in the manner of breakpoint functions (BPF), with more than 30 possible interpolation types between points.
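A breakpoint function reduces, in its simplest form, to piecewise interpolation between time/value pairs. The sketch below (hypothetical code, showing only linear interpolation out of the many types AscoGraph offers) illustrates the idea:

```python
# Minimal breakpoint-function (BPF) sketch with linear interpolation
# between points; AscoGraph's curves offer many more interpolation types.
# Hypothetical code, for illustration only.

def bpf(points, t):
    """Evaluate a piecewise-linear curve given sorted (time, value) points."""
    if t <= points[0][0]:
        return points[0][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)        # position within the segment
            return v0 + u * (v1 - v0)       # linear interpolation
    return points[-1][1]                    # clamp after the last point

fade = [(0.0, 0.0), (1.0, 1.0), (3.0, 0.2)]  # fast attack, slow decay
print(bpf(fade, 0.5))            # -> 0.5
print(round(bpf(fade, 2.0), 3))  # -> 0.6
```

In Antescofo such curves are sampled in relative time, so the same curve stretches or compresses with the performer's tempo.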
Audio processing has been integrated into the Antescofo language. This experimental extension aims at providing sample-accurate control and dynamic audio graphs directly in Antescofo. Currently, FAUST processors (through a native embedding of the in-core compiler) and a few specific signal processors (notably FFT) can be defined. The tight integration enables the specification of multiply-timed signal processing in conjunction with control programs. One example of this integration is the use of symbolic curve specifications to describe variations of control parameters at the sample rate, a task whose real-time correctness is beyond the scope of competing systems. Our approach has proven to provide such mechanisms at a lower computational cost: for example, a factor of two in the remaking of Boulez' piece Antheme 2 compared to the original version with the audio effects managed in Max. We will further pursue such optimizations while extending sample accuracy, by developing a type system to preserve block computations in case of preemptive audio processing .
The reduced footprint enables the embedding of an Antescofo engine with internal audio processing on Raspberry Pi and UDOO nano-computers (early results are reported in ).
Rhythmic data are commonly represented by tree structures (rhythm trees) in assisted music composition environments such as OpenMusic, due to the theoretical proximity of such structures to traditional musical notation. We are studying the application in this context of techniques and tools for processing tree structures that were originally used in natural language processing. We are particularly interested in two well-established formalisms with solid theoretical foundations: weighted automata for trees and DAGs, and term rewriting.
Our main contribution in this context is the development of a new framework for rhythm transcription, i.e. the generation of a score in traditional music notation from a sequence of timestamped notes, e.g. a file in MIDI format (see Section ). This problem admits no single unambiguous solution: the system must be calibrated to fit the musical context, balancing constraints of precision against the simplicity and readability of the generated scores. In collaboration with Jean Bresson (Ircam) and Slawek Staworko (LINKS), we are developing an approach based on algorithms for the enumeration of large sets of weighted trees (tree series), representing possible solutions to a transcription problem. The implementation work is performed by Adrien Ycart, under a research engineer contract with Ircam. This work has been presented in , .
Moreover, in collaboration with Prof. Masahiko Sakai (Nagoya University), we are working on symbolic processing of music notation, based on the above models. We proposed a structural theory (an equational system on rhythm trees) defining equivalence of rhythm notations , , and used this approach, for instance, to generate, by transformation, different possible notations of the same rhythm, with the ability to select either alternative notation in accordance with certain constraints, e.g. in the context of transcription.
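The notion of equivalent notations can be made concrete with a toy encoding (a hypothetical representation, much simpler than the equational theory of the cited work): two rhythm trees are equivalent when they flatten to the same sequence of sounding durations, ties included.

```python
# Two notations are equivalent when their trees flatten to the same
# sequence of sounding durations. Toy check with a hypothetical
# representation; the actual equational theory in the cited work is richer.

def flatten(tree, dur=1.0):
    """Rhythm tree: a leaf is True (note onset) or False (tie to previous);
    an inner node is a list of equal-duration children."""
    if isinstance(tree, list):
        out = []
        for child in tree:
            out.extend(flatten(child, dur / len(tree)))
        return out
    return [("note", dur)] if tree else [("tie", dur)]

def durations(tree):
    """Merge ties into the preceding note to get sounding durations."""
    out = []
    for kind, d in flatten(tree):
        if kind == "tie" and out:
            out[-1] += d
        else:
            out.append(d)
    return out

# A dotted eighth + sixteenth within one beat, written two ways:
a = [[True, False], [False, True]]   # two duplets with a tie across
b = [True, False, False, True]       # four sixteenths with ties
print(durations(a) == durations(b))  # -> True
```

A rewriting system over such trees can then transform one notation into the other while preserving the `durations` value, which is exactly the invariant the equational theory captures.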
Related results on the confluence property of term rewriting systems were presented in (invited talk), and other work on data tree processing, in collaboration with Luc Segoufin and Jeremie Dimino, has been published in .
In 2016 we have pursued our applications of model-based timed testing techniques to the interactive music system Antescofo, in the context of Clément Poncelet's PhD and in relation with the developments presented in Section .
Several formal methods have been developed for the automatic conformance testing of critical embedded software, where a real implementation under test (IUT, or black box) is executed in a testing framework: carefully selected inputs are sent to the IUT, and the outputs are observed and analyzed. In conformance model-based testing (MBT), the inputs and corresponding expected outputs are generated according to formal models of the IUT and the environment. The case of interactive music systems (IMS) presents important originalities compared to other applications of MBT to real-time systems. On the one hand, the time model of IMS comprises several time units, including wall-clock time, measured in seconds, and the time of music scores, measured in numbers of beats relative to a tempo. This situation raises several new problems for the generation of test suites and their execution. On the other hand, we can reasonably assume that a given Antescofo mixed score completely specifies the expected timed behavior of the IMS, and compile the score automatically into a formal model of the IUT's expected behavior, using an intermediate representation. This yields a fully automatic test method, in contrast with other approaches, which generally require experts to write the specification manually.
We have developed online and offline approaches to MBT for Antescofo. The offline approach relies on tools of the Uppaal suite , , using a translation of our models into timed automata. The online approach is based on a virtual machine executing the models of score in Intermediate Representation (IR).
In this respect, the transformation of Antescofo's mixed scores (in the DSL) into the IR, described in Section , can be seen as the premise of a compiled approach for Antescofo.
These results have been published this year in the Journal of New Music Research , in the journal Science of Computer Programming , and in the PhD thesis of Clément Poncelet, defended in November 2016.
MuTant was the PI of the ANR INEDIT project, which ended in October 2015. The INEDIT project aimed to provide a scientific view of the interoperability between common tools for music and audio production, in order to open new creative dimensions coupling the authoring of time and the authoring of interaction.
MuTant also participates actively in the ANR Efficace project. This project explores the relations between computation, time and interaction in computer-aided music composition, using OpenMusic and other technologies developed at IRCAM and at CNMAT (UC Berkeley).
The MuTant team is also an active member of the ANR CHRONOS network (coordinated by Gérard Berry, Collège de France).
Program: PHC Amadeus (France-Austria)
Project acronym: LETITBE
Project title: Logical Execution Time for Interactive And Composition Assistance Music Systems
Duration: 01/2015 - 01/2017
Coordinator: Florent Jacquemard, Christoph Kirsch
Other partners: Department of Computer Sciences University of Salzburg, Austria
Abstract: The objective of the LETITBE project is to contribute to the development of computer music systems supporting advanced temporal structure in music and advanced dynamics in interactivity. For this purpose we are proposing to re-design and re-engineer computer music systems (from IRCAM at Paris) using advanced notions of time and their software counterparts developed for safety-critical embedded systems (from University of Salzburg). In particular, we are applying the so-called logical execution time paradigm as well as its accompanying time safety analysis, real-time code generation, and portable code execution to computer music systems. Timing in music is obviously very important. Advanced treatment of time in safety-critical embedded systems has helped address extremely challenging problems such as predictability and portability of real-time code. We believe similar progress can be made in computer music systems potentially enabling new application areas. The objective of the project is ideally suited for a collaboration of partners with complementary expertise in computer music and real-time systems.
This year, Pierre Donat-Bouillud spent five months at the University of Salzburg and one month at the University of California, Berkeley, in the context of the LETITBE project, before starting his PhD in MuTant. Several other student exchanges and scientist visits between Salzburg and Paris have been funded this year by the LETITBE project.
We are collaborating with Slawek Staworko (LINKS and Algomus, Lille, on leave at U. Edinburgh in 2016), and with the Algomus group at Lille, in the context of our projects on rhythm transcription described in Sections and . This collaboration led this year to the following publications: , .
We are pursuing a long term collaboration with Masahiko Sakai (U. Nagoya) on term rewriting techniques and applications (in particular applications related to rhythm notation) , .
The MuTant team collaborates with the Polytechnic University of Bucharest, in the framework of Grig Burloiu's PhD thesis on AscoGraph UI/UX design, which has resulted in the new design of AscoGraph (see ) and in publications , , .
The MuTant team collaborated with researchers at the National Institute of Informatics, Tokyo, on the real-time symbolic alignment of music data .
Jean-Louis Giavitto has participated in the program committees of the 42nd International Computer Music Conference (ICMC), the 10th IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO 2016), the Digital Entertainment Technologies and Arts (DETA) track of GECCO 2016, the 15th International Conference on the Synthesis and Simulation of Living Systems (ALIFE XV) and the 2nd International Conference on Technologies for Music Notation and Representation (TENOR 2016).
Florent Jacquemard has been involved in the program committees of the 2nd International Conference on Technologies for Music Notation and Representation (TENOR 2016), the 8th International Symposium on Symbolic Computation in Software (SCSS 2017), the national conference Journées d'Informatique Musicale (JIM 2016), and the special issue of the journal Information and Computation for the 10th International Conference on Language and Automata Theory and Applications (LATA 2016).
Jean-Louis Giavitto is associate editor (and former editor-in-chief) of TSI (Technique et Science Informatiques), published by Lavoisier. He has co-organized, with Antoine Spicher (Univ. Paris Est), Stefan Dulman (Univ. Twente) and Mirko Viroli (Univ. of Milano), a special issue of The Knowledge Engineering Review on spatial computing, published in November 2016.
The members of the team contributed as reviewers for the journals Information and Computation, IEEE Transactions on Multimedia, IEEE Transactions on Audio and Speech Signal Processing, ACM Transactions on Intelligent Systems and Theoretical Computer Science, and for the conferences IEEE ICASSP, ICMC, SMC, Formal Methods and LATA, among others.
Jean-Louis Giavitto was invited to the seminar @SystemX at Saclay.
Florent Jacquemard gave an invited talk at the 5th International Workshop on Confluence (IWC 2016), hosted by Innsbruck University Center at Obergurgl, Austria .
Jean-Louis Giavitto was a member of the jury of the Prix de thèse du GDR GPL, as well as a member of the jury of the Faust Award 2016 (the Faust Open-Source Software Competition is intended to promote innovative high-quality free audio software developed with the Faust programming language, as well as development tools built around the Faust compiler itself).
Jean-Louis Giavitto is on the scientific board of the GDR GPL (Génie de la programmation et du logiciel). He is also a reviewer of FET projects for the EC.
PhD defended: José Echeveste, Accorder le temps de la machine et celui du musicien, started in October 2011, supervisors: Arshia Cont and Jean-Louis Giavitto.
PhD defended (November 2016): Clément Poncelet, Formal methods for analyzing human-machine interaction in complex timed scenario. Started in October 2013, supervisor: Florent Jacquemard.
PhD defended (December 2016): Philippe Cuvillier, Probabilistic Decoding of strongly-timed events in realtime, supervisor: Arshia Cont.
PhD in progress: Julia Blondeau, Espaces compositionnels et temps multiples : de la relation forme/matériau (thèse en art), supervisor: Jean-Louis Giavitto, co-director: Dominique Pradelle (Philosophy, Sorbonne), started October 2015.
PhD in progress: Maxime Sirbu, Online Interaction via Machine Listening, supervisors: Arshia Cont (MuTant) and Mathieu Lagrange (IRCCyN), started October 2015.
PhD in progress: Pierre Donat-Bouillud, Modeling, analysis and execution of cyber-temporal systems. Supervisor: Florent Jacquemard, co-director: Jean-Louis Giavitto, started October 2016.
Jean-Louis Giavitto was chairman of the jury of Clément Poncelet's PhD defense. He was a reviewer of the PhD thesis of Jaime Arias (University of Bordeaux, Sémantique Formelle et Vérification Automatique de Scénarios Hiérarchiques Multimédia avec des Choix Interactifs), and an examiner of the PhD of Mattia Bergomi (Università di Milano and UPMC, Dynamical and Topological Tools for (Modern) Music Analysis).
Florent Jacquemard was a reviewer of the PhD thesis of Etienne Dubourg (University of Bordeaux, Contributions to the theory of tile languages) and of the PhD thesis of Nicolas Guiomard-Kagan (Université de Picardie Jules Verne, Traitement de la polyphonie pour l'analyse informatique de partitions musicales). He was an examiner of the PhD of Emil-Mircea Andriescu (UPMC, MiMove, Dynamic Data Adaptation for the Synthesis and Deployment of Protocol Mediators) and of the PhD of Carles Creus López (UPC Barcelona, Tree Automata with Constraints and Tree Homomorphisms).