## Section: New Results

#### Complex Event Processing

Complex event processing can be seen as the problem of answering queries on data graphs, for graphs that arrive on streams. These queries may contain aggregates, so this work is part of the ANR project Aggreg.
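To illustrate the setting (this is a toy sketch, not the project's algorithms), the following evaluates a query with an aggregate incrementally over a stream of graph edges: it counts, per source node, the `follows` edges seen so far, emitting an updated count as each relevant edge arrives. All names and data here are illustrative.

```python
# Toy sketch: a count aggregate maintained incrementally over a stream
# of (source, label, target) edge triples from a data graph.
from collections import defaultdict

def count_follows(edge_stream):
    """Consume edge triples; yield a running per-source count of 'follows' edges."""
    counts = defaultdict(int)
    for src, label, tgt in edge_stream:
        if label == "follows":
            counts[src] += 1
            yield (src, counts[src])

stream = [("a", "follows", "b"), ("a", "likes", "c"), ("a", "follows", "d")]
print(list(count_follows(stream)))  # [('a', 1), ('a', 2)]
```

The point of the streaming formulation is that answers are produced as the stream arrives, without materializing the whole graph first.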

In his PhD thesis, T. Sebastian [12] developed, with his supervisor J. Niehren, streaming algorithms covering all XPath 3.0 queries on XML streams. For this, they proposed a higher-order query language $\lambda$XP, showed how to give a formal semantics to all of XPath 3.0 by compilation to $\lambda$XP, and then how to evaluate $\lambda$XP queries on XML streams. These algorithms were implemented in the QuiXPath tool.
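As a small, simplified illustration of streaming XPath-style evaluation (not QuiXPath or $\lambda$XP themselves), the following answers a `//book/title` selection while parsing the document incrementally with the standard library's `iterparse`, so the whole tree is never held in memory:

```python
# Illustrative only: streaming evaluation of a //book/title query using
# incremental parsing; processed subtrees are discarded as we go.
import io
import xml.etree.ElementTree as ET

doc = "<lib><book><title>A</title></book><book><title>B</title></book></lib>"
titles = []
path = []  # stack of open element tags
for event, elem in ET.iterparse(io.StringIO(doc), events=("start", "end")):
    if event == "start":
        path.append(elem.tag)
    else:
        if path[-2:] == ["book", "title"]:
            titles.append(elem.text)
        path.pop()
        elem.clear()  # drop the processed subtree to keep memory bounded

print(titles)  # ['A', 'B']
```

Real streaming XPath engines handle far more (axes, predicates, aggregates), but the core constraint is the same: decide about answers while the stream passes.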

At SOFSEM, they proposed a new technique to speed up the evaluation of navigational XPath queries on XML streams, based on document projection. The idea is to skip the parts of the stream that are irrelevant to the query. This speeds up the evaluation of navigational XPath queries by a factor of 4 on common XPath benchmarks.
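The projection idea can be sketched under strong simplifying assumptions (a flat stream of open/close events, and a query that only needs a known set of tags; real projection for XPath must also preserve structural context such as ancestors). The filter runs before the more expensive query matcher and simply drops irrelevant events:

```python
# Simplified sketch of document projection: drop stream events whose tag
# is irrelevant to the query before the query matcher sees them.
def project(events, relevant_tags):
    """Yield only the (event, tag) pairs the query can depend on."""
    for event, tag in events:
        if tag in relevant_tags:
            yield event, tag

stream = [("open", "library"), ("open", "book"), ("open", "isbn"),
          ("close", "isbn"), ("open", "title"), ("close", "title"),
          ("close", "book"), ("close", "library")]

# For a query like //book/title, only "book" and "title" events matter.
projected = list(project(stream, {"book", "title"}))
# Only 4 of the 8 events reach the matcher.
```

Since the projector does only a cheap set-membership test per event, skipping irrelevant parts of the stream is where the reported speedup comes from.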

M. Sakho started his PhD project on hyperstreaming query answering algorithms for graphs under the supervision of J. Niehren and I. Boneva. Part of this work will be continued with our visitor D. Vrgoc from Santiago de Chile.

#### Data Centric Workflows

Data-centric workflows are complex programs that can query and update a database. The use of data-centric workflows for crowdsourcing is the topic of the ANR project HeadWork.
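A toy illustration of the model (not HeadWork itself; all names and data are invented): each workflow step queries the database to decide what to do next, then updates it, so control flow is driven by the data. Here a crowdsourcing-style loop keeps requesting labels until every item has two.

```python
# Toy data-centric workflow: tasks query and update an in-memory "database",
# and the workflow terminates when the data satisfies a condition.
db = {"items": {"img1": [], "img2": []}}

def next_task(db):
    """Query step: pick an item that still needs labels, or None if done."""
    for item, labels in db["items"].items():
        if len(labels) < 2:
            return item
    return None

def record_label(db, item, label):
    """Update step: append a worker's answer for the chosen item."""
    db["items"][item].append(label)

answers = iter(["cat", "dog", "cat", "cat"])  # simulated worker answers
while (item := next_task(db)) is not None:
    record_label(db, item, next(answers))
# db now holds two labels per item.
```

Verifying such programs is hard precisely because the number of loop iterations, and even termination, depends on the database contents.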

In collaboration with ENS Cachan and San Diego, P. Bourhis presented at ICDT [18] techniques for collaborative access control in a distributed query and data exchange language (Webdamlog). The goal of this work was to provide a semantics for the data exchange rules defined in Webdamlog. It also made it possible to formally verify whether data leakages can occur.
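The leakage question can be sketched in a deliberately simplified form (Webdamlog itself is much richer; the rule format and data below are invented for illustration): peers exchange facts according to rules, and verification asks whether a confidential fact can ever reach an unauthorized peer in the closure of the exchange.

```python
# Simplified sketch of data-leakage checking: compute which (peer, fact)
# pairs are reachable under exchange rules, then test for a leak.
def reachable_facts(facts, rules):
    """Close the set of (peer, fact) pairs under one-step exchange rules.
    A rule (src, dst, pred) ships every fact with predicate pred from src to dst."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for src, dst, pred in rules:
            for peer, (p, arg) in list(facts):
                if peer == src and p == pred and (dst, (p, arg)) not in facts:
                    facts.add((dst, (p, arg)))
                    changed = True
    return facts

facts = {("alice", ("salary", 50))}
rules = [("alice", "bob", "salary")]  # alice shares salary facts with bob
closure = reachable_facts(facts, rules)
leak = ("eve", ("salary", 50)) in closure  # no rule reaches eve
```

The actual results concern a far more expressive rule language, but the shape of the question is the same: is the closure of the exchange free of unauthorized (peer, fact) pairs?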

In collaboration with Tel Aviv, P. Bourhis defined at ICDE [25] a notion of provenance for data-centric workflows, and proved that it can be used to explain the provenance of a fact in the final instance of an execution. This provenance is used to answer three main questions: why a specific tuple appears in the answer of a query, what happens if the initial database is changed (the revision problem), and how to change the query to obtain a missing tuple.
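The first of those questions can be sketched for an ordinary conjunctive query (a hedged illustration in the spirit of why-provenance, not the paper's workflow provenance; relation names and data are invented). For each answer tuple of a join, we record the set of input facts that produced it:

```python
# Sketch of why-provenance for a join query R(a,b) ⋈ S(b,c): each answer
# tuple is annotated with the input facts that support it.
def join_with_provenance(r, s):
    """Join R and S on the shared attribute; attach supporting input facts."""
    out = {}
    for (a, b) in r:
        for (b2, c) in s:
            if b == b2:
                out.setdefault((a, c), []).append({("R", a, b), ("S", b2, c)})
    return out

r = [("alice", 1), ("bob", 2)]
s = [(1, "x"), (2, "y")]
prov = join_with_provenance(r, s)
# prov[("alice", "x")] lists the input facts explaining that answer.
```

With such annotations, "why is this tuple here?" is answered by reading off its supporting facts, and the revision question can be approached by checking which annotations survive a change to the input.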