

Section: New Results

Learning Transducers

We have pursued our work on learning finite-state tree-to-word transducers. Tree-to-word transformations are ubiquitous in computer science: they are at the core of many computation paradigms, from the evaluation of abstract syntax trees to document-processing languages such as XSLT. We have extended last year's results on a class of sequential top-down tree-to-word transducers, called STWs. Transducers in STWs are capable of: concatenation in the output, producing arbitrary context-free languages, deleting inner nodes, and verifying that the input tree belongs to the domain even when deleting parts of it. These features are often missing in tree-to-tree transducers and, for instance, make STWs incomparable with the class of top-down tree-to-tree transducers. The class of STWs has several interesting properties; in particular, we proposed a normal form for STWs in 2011.
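To make these capabilities concrete, the following is a minimal sketch (a hypothetical illustration, not the paper's formalism) of a sequential top-down tree-to-word transducer: each rule maps a state and node label to a sequence of output words and child visits, with children visited left-to-right. The rules below show concatenation producing the context-free language a^n b^n, and a rule that deletes a subtree.

```python
# Hypothetical sketch of a sequential top-down tree-to-word transducer (STW).
# A rule maps (state, node label) to a list whose items are either output
# words (str) or pairs (child index, state); children are visited
# left-to-right, each at most once, which is what makes it sequential.

Tree = tuple  # (label, child, child, ...)

RULES = {
    # Concatenation in the output: wrapping the child's word in "a"..."b"
    # produces the context-free language { a^n b^n }.
    ("q0", "s"): ["a", (0, "q0"), "b"],
    ("q0", "z"): [],                # leaf maps to the empty word
    # Deletion of inner nodes: only the second child is visited,
    # so the first subtree is erased from the output.
    ("q0", "pair"): [(1, "q0")],
}

def run(state: str, tree: Tree) -> str:
    label, children = tree[0], tree[1:]
    out = []
    for item in RULES[(state, label)]:
        if isinstance(item, str):
            out.append(item)                 # emit an output word
        else:
            i, q = item
            out.append(run(q, children[i]))  # descend into child i in state q
    return "".join(out)

print(run("q0", ("s", ("s", ("s", ("z",))))))                 # prints "aaabbb"
print(run("q0", ("pair", ("s", ("z",)), ("s", ("s", ("z",)))))) # prints "aabb"
```

The second call illustrates deletion: the first subtree of the `pair` node contributes nothing to the output word.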

In [4], we present a Myhill-Nerode characterization of the corresponding class of sequential tree-to-word transformations. Next, we investigate what learning STWs means, identify fundamental obstacles, and propose a learning model in which the learner is allowed to abstain. Finally, we present a polynomial-time learning algorithm.