Section: New Results
The interpretation of natural language utterances relies on two complementary elements of natural language modeling. On the one hand, the description of the combinatorics of natural language expresses how elementary units, or lexical units (typically words), combine to build more complex elements, such as sentences or discourses. On the other hand, the description of these elementary units specifies how they contribute their lexical meaning to the meaning of the whole. This specification should also take into account how the different parts of the lexical meanings combine during the composition process and how they relate to their underlying meaning concepts. For instance, the verbs buy and sell should refer to a common conceptual representation. However, their syntactic arguments (e.g., the subject) play different semantic roles with respect to the transaction concept that they share.
The modeling of these concepts, and of how they relate to each other, gave rise to Frame Semantics as a representation format of conceptual and lexical knowledge. Frames consist of directed graphs whose nodes correspond to entities (individuals, events, ...) and whose edges correspond to (functional or non-functional) relations between these entities. Providing a fine-grained representation of the internal structure of concepts allows both for a decomposition of the lexical meaning and for a precise description of the sub-structural interactions in the semantic composition process.
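The frame structure described above can be sketched as a small directed graph with typed nodes and functional attribute edges. The encoding below is a minimal, hypothetical illustration (the Frame class and the attribute names BUYER, SELLER, GOODS are ours, not a standard inventory); it shows how buy and sell can share a single transaction node while selecting different roles for their subjects.

```python
# A minimal sketch of a frame as a directed graph (hypothetical encoding):
# nodes carry type labels, edges carry attribute (relation) names.
# Functional edges (e.g. BUYER) map a node to exactly one target.

class Frame:
    def __init__(self):
        self.types = {}   # node -> set of type labels
        self.edges = {}   # (node, attribute) -> target node (functional)

    def add_node(self, node, *types):
        self.types.setdefault(node, set()).update(types)

    def add_edge(self, source, attribute, target):
        self.edges[(source, attribute)] = target

    def value(self, node, *path):
        # follow a path of functional attributes from a node
        for attribute in path:
            node = self.edges[(node, attribute)]
        return node

# 'buy' and 'sell' refer to one shared 'transaction' node; their subjects
# fill different roles (BUYER vs. SELLER) of that node.
f = Frame()
f.add_node("e0", "transaction")
f.add_node("x", "person")
f.add_node("y", "person")
f.add_node("b", "book")
f.add_edge("e0", "BUYER", "x")
f.add_edge("e0", "SELLER", "y")
f.add_edge("e0", "GOODS", "b")

print(f.value("e0", "BUYER"))   # x — the subject of 'buy'
print(f.value("e0", "SELLER"))  # y — the subject of 'sell'
```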
Frames can be formalized as extended typed feature structures and specified as models of a suitable logical language. Such a language allows for the composition of lexical frames at the sentential level by means of an explicit syntax-semantics interface. Yet this logical framework does not provide a direct link between frames and truth-conditional semantics, in which natural language utterances are considered with respect to the conditions under which they are true or false. In particular, it does not provide means for lexical items to introduce explicit quantification over entities or events.
To overcome these limitations, we proposed to use Hybrid Logic (HL). HL is an extension of modal logic and, as such, is well suited to the description of graph structures. Moreover, HL introduces nominals, which allow logical formulas to refer to specific nodes of the graph; it is then possible, for example, to specify when two edges should meet. HL also introduces variables for nodes, together with the associated quantifiers, which can appear in the logical formulas. We used this framework to model quantification in Frame Semantics.
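As an illustration, the satisfaction relation for a small fragment of HL (nominals, the satisfaction operator @, attribute modalities, and existential quantification over nodes) can be evaluated directly over a frame graph. The sketch below is a simplified, hypothetical encoding: the formula syntax, the edge table, and the assignment handling are ours and cover only this fragment.

```python
# Hypothetical mini-evaluator for a fragment of Hybrid Logic over a frame
# graph. Formulas as tuples: ("nom", i) a nominal or variable, ("at", i, p)
# the @_i operator, ("dia", a, p) the modality <a>, ("and", p, q), and
# ("some", v, p) existential quantification over nodes.

EDGES = {("e0", "buyer"): "x", ("e0", "seller"): "y", ("e0", "goods"): "b"}
NODES = {"e0", "x", "y", "b"}

def sat(node, phi, g=None):
    g = g or {}                          # assignment for bound variables
    op = phi[0]
    if op == "nom":                      # true exactly at the named node
        return node == g.get(phi[1], phi[1])
    if op == "at":                       # @_i p: jump to the node named i
        return sat(g.get(phi[1], phi[1]), phi[2], g)
    if op == "dia":                      # <a> p: follow attribute a
        target = EDGES.get((node, phi[1]))
        return target is not None and sat(target, phi[2], g)
    if op == "and":
        return sat(node, phi[1], g) and sat(node, phi[2], g)
    if op == "some":                     # ∃v. p: quantify over all nodes
        return any(sat(node, phi[2], {**g, phi[1]: n}) for n in NODES)

# ∃v. @_e0 <buyer> v : some node is the buyer of e0 — true (it is x)
print(sat("e0", ("some", "v", ("at", "e0", ("dia", "buyer", ("nom", "v"))))))
```

The nominal case is what lets two edges be required to meet: a formula such as ∃v. (@_e0 ⟨buyer⟩ v ∧ @_e0 ⟨goods'⟩ v) holds only when both attribute paths reach the same node.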
Compositionality and Modularity
One says that a semantics is compositional when it allows the meaning of a complex expression to be computed from the meanings of its constituents. One also says that a system is modular if it is made of relatively independent components. In the case of a semantic system (e.g., a Montague grammar), we say that it is modular if the ontology on which it is based (including notions such as truth, entities, events, possible worlds, time intervals, states of knowledge, states of belief, ...) is obtained by combining relatively independent simple ontologies.
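Compositionality in the Montague style can be illustrated with a toy extensional fragment: each constituent denotes a typed function, and the meaning of a sentence is obtained by function application. The fragment below is purely illustrative (the lexicon and the domain are invented for the example).

```python
# A toy compositional semantics (Montague style, illustrative only):
# entities have type e, predicates type e -> t, and quantified noun
# phrases type (e -> t) -> t. Sentence meanings arise by application.

DOMAIN = {"john", "mary"}
SLEEPERS = {"john", "mary"}

john = "john"                        # [[John]] : e

def sleeps(x):                       # [[sleeps]] : e -> t
    return x in SLEEPERS

def student(x):                      # [[student]] : e -> t
    return x in {"john"}

def every(noun):                     # [[every]] : (e->t) -> ((e->t) -> t)
    return lambda pred: all(pred(x) for x in DOMAIN if noun(x))

# "John sleeps" = [[sleeps]]([[John]])
print(sleeps(john))                  # True

# "every student sleeps" = [[every]]([[student]])([[sleeps]])
print(every(student)(sleeps))        # True
```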
The intensionalization procedure provides a first step towards modularity. It allows the extensional interpretation of a language to be transformed into an intensionalized interpretation that offers room for accommodating truly intensional phenomena. Moreover, this procedure is conservative in the sense that it preserves the truth conditions of sentences. Another instance of such a procedure is the dynamization procedure, which allows a static interpretation to be turned into a dynamic one capable of accommodating phenomena related to discourse dynamics.
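The shape of such a procedure can be sketched on the simplest case: an extensional predicate of type e -> t is lifted to an intensional one of type s -> (e -> t) by abstracting over a world parameter. The encoding below is a deliberately minimal caricature of the idea, not the actual procedure; its point is only to make the conservativity claim concrete: evaluating the lifted meaning at the actual world recovers the original truth conditions.

```python
# A sketch of the intensionalization idea (hypothetical encoding): lift an
# extensional predicate (e -> t) to an intensional one (s -> (e -> t)) by
# inserting a world parameter. The trivial lift ignores the world, which is
# exactly why it is conservative: at the actual world, nothing changes.

def intensionalize(extensional_pred):
    return lambda world: extensional_pred

def tall(x):                          # extensional meaning: e -> t
    return x in {"ann"}

tall_int = intensionalize(tall)       # intensional meaning: s -> (e -> t)

ACTUAL = "w0"
# conservativity on this example: same truth value at the actual world
print(tall_int(ACTUAL)("ann") == tall("ann"))   # True
print(tall_int(ACTUAL)("bob") == tall("bob"))   # True
```

A genuinely intensional lexical item (e.g. a belief operator) would then be interpreted as a function that does inspect its world argument, which the lifted extensional entries leave room for.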
We showed that both the intensionalization and dynamization procedures are instances of an abstract general scheme for which conservativity results may be established using the notion of logical relation.
Abstract Categorial Parsing
Kanazawa has shown how parsing and generation may be reduced to Datalog queries for a class of grammars that encompasses mildly context-sensitive formalisms. These grammars, which he calls context-free λ-term grammars, correspond to second-order abstract categorial grammars.
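The flavor of such a grammar can be conveyed by a small sketch: abstract constants build derivation terms, and a lexicon maps each constant, homomorphically, to an operation on object-level strings. The example below is a simplification of ours (it realizes the context-free language a^n b^n, and represents strings directly rather than as λ-terms); it is meant only to illustrate the two-level architecture, where parsing amounts to asking whether some abstract term realizes a given string.

```python
# A sketch of the two-level idea behind context-free λ-term grammars
# (hypothetical encoding): abstract constants build derivation terms,
# and a lexicon interprets each constant as a function on strings.
# Here the abstract term c(c(e)) is realized as the string "aabb".

LEXICON = {
    "e": lambda: "",                   # e : S       -> the empty string
    "c": lambda s: "a" + s + "b",      # c : S -> S  -> wrap with a ... b
}

def realize(term):
    # a derivation term is ("c", subterm) or ("e",); interpret it
    # homomorphically through the lexicon
    head, *children = term
    return LEXICON[head](*(realize(child) for child in children))

print(realize(("c", ("c", ("e",)))))   # aabb
```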
We showed how Kanazawa's reduction may be carried out in the case of abstract categorial grammars of order higher than two. To this end, we reduced the parsing problem for general Abstract Categorial Grammars to a provability problem in Multiplicative Exponential Linear Logic.