- A1.3. Distributed Systems
- A1.3.3. Blockchain
- A1.3.4. Peer to peer
- A1.3.5. Cloud
- A1.3.6. Fog, Edge
- A2.5.1. Software Architecture & Design
- A2.6.2. Middleware
- A3.1.3. Distributed data
- A3.1.5. Control access, privacy
- A5.1.1. Engineering of interactive systems
- A5.1.2. Evaluation of interactive systems
- B6.1.1. Software engineering
- B6.3.1. Web
- B6.5. Information systems
- B8.4. Security and personal assistance
- B8.4.1. Crisis management
- B9.1.1. E-learning, MOOC
- B9.6.1. Psychology
- B9.8. Reproducibility
- B9.10. Privacy
1 Team members, visitors, external collaborators
- Mohammed Riyadh Abdmeziem [Univ de Lorraine, Researcher, until Aug 2020]
- Anis Ahmed Nacer [Univ de Lorraine, Researcher, from Sep 2020]
- Claudia-Lavinia Ignat [Inria, Researcher]
- Matthieu Nicolas [Univ de Lorraine, Researcher, from Sep 2020]
- Linda Ouchaou [Univ de Lorraine, Researcher, from Sep 2020]
- François Charoy [Team leader, Univ de Lorraine, Professor, HDR]
- Khalid Benali [Univ de Lorraine, Associate Professor, HDR]
- Gérôme Canals [Univ de Lorraine, Associate Professor]
- Claude Godart [Univ de Lorraine, Professor]
- Gérald Oster [Univ de Lorraine, Associate Professor]
- Olivier Perrin [Univ de Lorraine, Professor, HDR]
- Samir Youcef [Univ de Lorraine, Associate Professor]
PhD Students
- Anis Ahmed Nacer [Inria, until Aug 2020]
- Clelie Amiot [CNRS]
- Alexandre Bourbeillon [Inria, from Oct 2020]
- Victorien Elvinger [Univ de Lorraine]
- Abir Ismaili-Alaoui [Univ de Lorraine]
- Quentin Laporte Chabasse [Univ de Lorraine]
- Beatrice Linot [Univ de Lorraine, until Sep 2020]
- Hoai Le Nguyen [Univ de Lorraine]
- Matthieu Nicolas [Univ de Lorraine, until Aug 2020]
- Pierre Antoine Rault [Inria, from Oct 2020]
Interns and Apprentices
- Souhila Ait Hacene [Université Abderrahmane Mira de Béjaïa - Algérie, until Nov 2020]
- Alexandre Bourbeillon [Univ de Lorraine, until Sep 2020]
- Malaury Keslick [Inria, from Jun 2020 until Jul 2020]
- Oscar Leclerc [Inria, from Aug 2020]
- Tom Mendez-Porcel [Univ de Lorraine, from Apr 2020 until Jul 2020]
- Linda Ouchaou [Université des sciences et de la technologie Houari Boumédiène - Alger, until Aug 2020]
- Charly Pierrat [Univ de Lorraine, from Jun 2020 until Jul 2020]
- Mukhayyo Tashpulatova [Univ de Lorraine, from Mar 2020 until Jul 2020]
- Frederic Vaz [Inria, from Jun 2020 until Jul 2020]
- Abdul Wahab [Univ de Lorraine, from Mar 2020 until Aug 2020]
Administrative Assistants
- Sophie Drouot [Inria]
- Sylvie Musilli [Univ de Lorraine, until Oct 2020]
2 Overall objectives
The advent of the Cloud, smart mobile devices and service-based architectures has opened a field of possibilities as wide as the invention of the Web 25 years ago. Software companies now deliver applications and services using the Web as a platform. From text to video editing, from data analytics to process management, they distribute business applications to users within their web browser or on mobile appliances. These services are deployed on sophisticated infrastructures that can cope with very demanding loads. The Software as a Service (SaaS) approach highlights their cooperative nature by enabling the storage of data in cloud infrastructures where it can be easily shared among users. Clients consume applications through service APIs (web services), available on delivery platforms called stores or markets. This mode of software distribution outstrips traditional distribution channels in both scale and opportunity. Scale has several dimensions: the number of users (communities rather than groups), the size of the data produced and managed (billions of documents), and the number of services and organizations (tens of thousands). Opportunity refers to the countless possible combinations of these services and the many ways to consume and use them. This fast-paced evolution challenges research because the creation of applications from the composition of services must incorporate new content- and context-based constraints. From a socio-technical perspective, the behaviour of users evolves constantly as they become acculturated to new services and new ways to cooperate. Merely enhancing existing solutions to cope with these challenges is insufficient. We conduct a dedicated research effort to tackle the problems arising from the evolution of contemporary technologies and of those we can anticipate.
For this purpose, we explore three directions: large-scale collaborative data management, data-centred service composition and, above all, a foundation for the construction of trustworthy collaborative systems. Large-scale collaborative data management mostly concerns the problem of allowing people to collaborate on shared data, synchronously or not, on a central server or on a peer-to-peer network. This research has a long history, going back to a paper by Ellis [12]. Users' acculturation to online collaboration triggers new challenges. These relate to the number of participants in a collaboration (a crowd), to sharing among different organizations, and to the nature of the documents that are shared and produced. The problem is to design new algorithms and to evaluate them under different usage conditions and constraints and for different kinds of data. Data-centred service composition deals with the challenge of creating applications by composing services from different providers. Service composition has been studied for some time now, but technical evolution and the growing availability of public APIs oblige us to reconsider the problem [10]. Our goal here is to take this evolution into account, such as the advent of the Cloud and the large-scale availability of public APIs based on the REST architectural style, and to design models, methods and tools that help developers compose these services in a safe and effective way. Building on the work we do in the first two topics, our main research direction aims at providing support to build trustworthy collaborative applications. We base it on the knowledge that we can gather from the underlying algorithms, the composition of services, and the quality of services that we can deduce and monitor. The complexity of the context in which applications are executed does not allow us to provide proven guarantees.
Our goal is to rely on a contractual and monitored approach to give users confidence in the services they use. Surprisingly, people today rely on services with very little knowledge of how much confidence they can place in them. These services are themselves built from compositions of other, unknown services, so it becomes very difficult to understand the consequences of the failure of one component of the composition. We follow a path of disruption in continuity: the underlying questions endure, but they must be addressed at a new scale. We regard collaborative systems as a combination of supporting services encompassing safe data management and data sharing. Trustworthy data-centred services are an essential support for collaboration at the scale of communities and organizations. We will combine our results and expertise to achieve a new leap forward in the design of methods and techniques for building usable large-scale collaborative systems.
3 Research program
Our scientific foundations are grounded in distributed collaborative systems supported by sophisticated data sharing mechanisms, and in service-oriented computing with an emphasis on orchestration and on non-functional properties. Distributed collaborative systems enable distributed group work supported by computer technologies. Designing such systems requires expertise in the Distributed Systems and Computer-Supported Cooperative Work research areas. Besides the theoretical and technical aspects of distributed systems, the design of distributed collaborative systems must take the human factor into account to offer solutions suitable for users and groups. The Coast team's vision is to move away from centralized, authority-based collaboration toward decentralized collaboration. Users will have full control over their data: they can store it locally and decide with whom to share it. The Coast team investigates the issues related to the management of distributed shared data and to the coordination between users and groups. Service-Oriented Computing [17] is an established domain to which the ECOO, Score and now Coast teams have been contributing for a long time. It refers to the general discipline that studies the development of computer applications on the web. A service is an independent software program with a specific functional context and capabilities published as a service contract (or, more traditionally, an API). A service composition aggregates a set of services and coordinates their interactions. The scale, the autonomy of services, the heterogeneity and some design principles underlying Service-Oriented Computing open new research questions that are at the basis of our research. They span the disciplines of distributed computing, software engineering and computer-supported cooperative work (CSCW).
Our approach to contribute to the general vision of Service Oriented Computing is to focus on the issue of the efficient and flexible construction of reliable and secure high-level services. We aim to achieve it through the coordination/orchestration/composition of other services provided by distributed organizations or people.
3.2 Consistency Models for Distributed Collaborative Systems
Collaborative systems are distributed systems that allow users to share data. One important issue is to maintain consistency of shared data in the face of concurrent access. Traditional consistency criteria such as serializability or linearizability are not adequate for collaborative systems. Causality, Convergence and Intention preservation (CCI) [21] are more suitable for developing middleware for collaborative applications. We develop algorithms for ensuring the CCI properties in collaborative distributed systems. The constraints on the algorithms differ according to the kind of distributed system and to the data structure. The distributed system can be centralized, decentralized or peer-to-peer. The type of data can include strings, growable arrays, ordered trees, semantic graphs and multimedia data.
3.3 Optimistic Replication
Replicating data among different nodes of a network promotes reliability, fault tolerance and availability. When data are mutable, consistency among the different replicas must be ensured. Pessimistic replication is based on the principle of single-copy consistency, while optimistic replication allows the replicas to diverge for a short period of time. The consistency model for optimistic replication [19] is called eventual consistency, meaning that replicas are guaranteed to converge to the same value when the system is idle. Our research focuses on the two most promising families of optimistic replication algorithms for ensuring CCI:
- operational transformation (OT) algorithms [12]
- algorithms based on commutative replicated data types (CRDT) [18].
Operational transformation algorithms apply a transformation function when a remote modification is integrated into the local document. The integration algorithms are generic: they are parametrised by operational transformation functions that depend on the replicated document type. This genericity is their main advantage: they can be applied to any data type and can merge heterogeneous data in a uniform manner. Commutative replicated data types are a class of algorithms initiated by WooT [16], the first algorithm designed WithOut Operational Transformations. They ensure consistency of highly dynamic content on peer-to-peer networks. Unlike traditional optimistic replication algorithms, they ensure consistency without concurrency control. CRDT algorithms rely on natively commutative operations defined on abstract data types such as lists or ordered trees. Thus, they require neither a merge algorithm nor an integration procedure.
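To make the commutativity idea concrete, here is a minimal state-based CRDT sketch in Python: a grow-only counter, a classic introductory CRDT (not WooT itself; the class and method names are ours). The point is that merges commute and are idempotent, so replicas converge regardless of the order in which states are exchanged.

```python
class GCounter:
    """Minimal state-based CRDT: a grow-only counter. Each replica only
    increments its own entry; merge takes the entry-wise maximum, so
    merges commute and all replicas converge to the same value."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}                  # replica id -> local count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())
```

Because merge is a per-entry maximum, applying the same merge twice, or merging in either order, yields the same state, which is exactly the property that removes the need for concurrency control.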
3.4 Process Orchestration and Management
Process Orchestration and Management is considered a core discipline behind Service Management and Computing. It includes the analysis, modelling, execution, monitoring and continuous improvement of enterprise processes, and it is for us a central domain of study. Many efforts have been devoted to establishing standard business process models founded on well-grounded theories (e.g. Petri nets) that meet the needs of business analysts, software engineers and software integrators. This has led to heated debates in the Business Process Management (BPM) community, as the two points of view are very difficult to reconcile. On one side, business people generally require models that are easy to use and understand and that can be quickly adapted to exceptional situations. On the other side, IT people need models with operational semantics in order to be able to transform them into executable artifacts. Part of our work has been an attempt to reconcile these points of view. It resulted in the development of the Bonita BPM system. More recently, it also led to our work on crisis management, where the same people design, execute and monitor the process as it executes. More generally, and at a larger scale, we have been considering the problem of processes that span the boundaries of organizations. This leads to the more general problem of service composition as a way to coordinate the inter-organizational construction of applications. These applications provide value based on the composition of lower-level services [9].
3.5 Service Composition
Recently, we started a study on service composition for software architects, where services come from different providers with different plans (capacity, degree of resilience, etc.). The objective is to support architects in selecting the most appropriate services and plans for building their software, with respect to their requirements, both functional and non-functional. We also compute the properties that can be enforced for the composition of these services.
4 Application domains
4.1 Crisis Management
Crisis management research investigates all the dimensions of managing unexpected catastrophic events such as floods, earthquakes, terrorist attacks or pandemics. All the phases of a crisis, from preparedness to recovery, require collaboration between people from many organizations. This provides opportunities to study inter-organizational collaboration at a large scale and to propose and evaluate mechanisms that ensure secure and safe collaboration. The work of Béatrice Linot provides us with a deep understanding of the factors that encourage collaboration and help to maintain trustworthy collaboration between stakeholders. This work is continued by Clélie Amiot, who studies the effects of human–chatbot collaboration in this kind of setting.
4.2 Collaborative Editing
Collaborative editing is a common application of optimistic replication in distributed settings. The goal of collaborative editors, irrespective of the kind of document, is to allow a group of users to update a document concurrently while ensuring that they all eventually obtain the same copy. Our algorithms make it possible to implement collaborative editors in a peer-to-peer fashion, avoiding the need for a central server and thus ensuring a higher level of privacy among collaborators. In this context, we must consider the problems of authentication and authorization of participants [8] and of trust between them [13].
5 New software and platforms
5.1 New software
- Name: Multi-User Text Editor
- Keyword: Collaborative systems
- Scientific Description: MUTE is a peer-to-peer collaborative editing platform used to evaluate the performance of replication algorithms in editing situations and to understand how they affect user experience.
- Functional Description: Existing collaborative systems generally rely on a service provider that stores and has control over user data, which is a threat to privacy. MUTE (Multi-User Text Editor) is a web-based real-time collaborative editor that overcomes this limitation by using a peer-to-peer architecture relying on WebRTC. Several users may edit a shared document in real time, and their modifications are immediately sent to the other users without transiting through a central server. Our editor supports working offline while still being able to reconnect at a later time, which is a distinguishing feature. Data synchronisation is achieved using the LogootSplit algorithm developed by the Coast team.
- News of the Year: In 2019 we implemented a new algorithm, dotted LogootSplit. We integrated a group key management algorithm to evaluate a secure version of the algorithm in dynamic situations. We also incorporated probes to evaluate collaboration situations.
- URL: https://github.com/coast-team/mute
- Publications: hal-00903813, hal-01655438
- Contact: Gérald Oster
- Participants: Claudia Ignat, François Charoy, Gérald Oster, Luc André, Matthieu Nicolas, Victorien Elvinger
6 New results
6.1 Time-position characterization of conflicts in collaborative editing
Participants: François Charoy, Claudia-Lavinia Ignat, Hoai Le Nguyen.
We studied collaborative editing behaviour in terms of the collaboration patterns users adopt and of a characterisation of conflicts, i.e. edits from different users that occur close in time and position in the document [3]. The process of collaborative editing can be split into several editing sessions performed by a single author (single-authored sessions) or by several authors (co-authored sessions). This fragmentation process requires a pre-defined maximum time gap between sessions, which was not well defined in previous studies. We analysed the collaboration logs of 108 documents collaboratively edited by groups of students using Overleaf. We show how to establish a suitable maximum time gap for splitting collaboration activities into sessions by evaluating the distribution of the time distance between two adjacent sessions. We studied editing activities inside co-authored sessions in order to define potential conflicts in terms of the time and position dimensions before they occur in the document. We also analysed how many of these potential conflicts become real conflicts. Findings show that potential conflict cases are few; however, when they do occur, they are likely to become real conflicts.
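The session-fragmentation step described above can be sketched as follows. This is an illustrative reconstruction, not the scripts used in the study; the function name and the use of raw second-granularity timestamps are our assumptions.

```python
def split_into_sessions(timestamps, max_gap):
    """Split a chronologically sorted list of edit timestamps (in seconds)
    into editing sessions: a new session starts whenever the gap between
    two consecutive edits exceeds max_gap."""
    if not timestamps:
        return []
    sessions = [[timestamps[0]]]
    for t in timestamps[1:]:
        if t - sessions[-1][-1] <= max_gap:
            sessions[-1].append(t)   # gap small enough: same session
        else:
            sessions.append([t])     # gap too large: open a new session
    return sessions
```

Running this for a range of `max_gap` values and examining the distribution of inter-session distances is one way a suitable threshold can then be chosen.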
6.2 Conflict-Free Replicated Relations for multi-synchronous access to relational databases
Participants: Claudia-Lavinia Ignat.
In a cloud-edge environment, edge devices may not always be connected to the network. However, multi-synchronous access to the data on edge devices needs to be ensured. This multi-synchronous access includes an asynchronous mode, where the user can always access the data on the device even when the device is offline, and a synchronous mode, where, as long as the device is online, the data on the device are kept synchronised with the data stored in the cloud.
We proposed augmenting an existing relational database schema with a Conflict-free Replicated Relations (CRR) layer in order to support multi-synchronous access to relational databases in a cloud-edge environment [5]. The underlying CRR layer uses CRDTs to allow immediate data access at the edge and to guarantee data convergence when the edge devices are online. It also resolves violations of integrity constraints at merge time by undoing the offending updates. Since a relation instance is a set of tuples, the basic building block of CRR is a set CRDT, more specifically a delta-state set CRDT.
The key issue that a general-purpose set CRDT must address is how to identify the causality between the different insertion and deletion updates. CLSet, our novel set CRDT suitable for relational databases, identifies causality relations using the abstraction of causal length, which is based on two observations. First, the insertions and deletions of a given element occur in turns, one causally dependent on the other. A deletion is an inverse of the last insertion it sees. Similarly, an insertion is an inverse of the last deletion it sees (or of none, if the element has never been inserted). Second, two concurrent executions of the same mutation of a set CRDT fulfil the same purpose and are therefore regarded as the same update. Seeing one means seeing both. Two concurrent inverses of the same update are also regarded as the same one.
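A minimal state-based sketch of the causal-length idea follows. This is our own simplified reconstruction (the actual CLSet is a delta-state CRDT): each element carries a causal length that insertions and deletions increment in turns, an element is visible exactly when its causal length is odd, and merge takes the element-wise maximum.

```python
class CLSet:
    """Simplified causal-length set: element e is in the set iff its
    causal length cl[e] is odd. Insertions and deletions alternate, and
    concurrent executions of the same mutation produce the same causal
    length, so they merge as a single update."""

    def __init__(self):
        self.cl = {}              # element -> causal length (0 = never inserted)

    def insert(self, e):
        k = self.cl.get(e, 0)
        if k % 2 == 0:            # only an absent element can be (re)inserted
            self.cl[e] = k + 1

    def delete(self, e):
        k = self.cl.get(e, 0)
        if k % 2 == 1:            # a deletion inverts the last insertion it sees
            self.cl[e] = k + 1

    def merge(self, other):
        for e, k in other.cl.items():   # element-wise maximum of causal lengths
            self.cl[e] = max(self.cl.get(e, 0), k)

    def elements(self):
        return {e for e, k in self.cl.items() if k % 2 == 1}
```

Two replicas concurrently inserting the same element both reach causal length 1; merging keeps 1, so the two insertions are treated as one update, exactly as described above.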
We implemented a CRR prototype on top of a data mapping library that is independent of any specific database management system, and reported on its performance.
6.3 Mitigating the Cost of Identifiers in Sequence CRDT
Participants: Matthieu Nicolas, Gérald Oster, Olivier Perrin.
To achieve high availability, large-scale distributed systems have to replicate data and to minimise coordination between nodes. The literature and industry increasingly adopt Conflict-free Replicated Data Types (CRDTs) to design such systems. CRDTs are data types that behave like traditional ones, e.g. the Set or the Sequence. However, compared to traditional data types, they are designed to natively support concurrent modifications. To this end, they embed a conflict-resolution mechanism in their specification.
To resolve conflicts in a deterministic manner, CRDTs usually attach identifiers to the elements stored in the data structure. Identifiers have to comply with several constraints, such as uniqueness or being densely ordered, depending on the kind of CRDT. These constraints may prevent the size of identifiers from being bounded. As the number of updates increases, the size of identifiers grows. This leads to performance issues, since the efficiency of the replicated data structure decreases over time.
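The growth problem can be illustrated with a toy Logoot-style allocator (our own simplified sketch, ignoring the boundary cases a real implementation must handle): identifiers are integer lists ordered lexicographically, and when no integer is free between two adjacent identifiers at the current depth, the new identifier gains an extra component.

```python
BASE = 2 ** 16  # upper bound for each identifier component (arbitrary here)

def id_between(lo, hi):
    """Allocate an identifier strictly between lo and hi (lists of ints,
    compared lexicographically). When there is no room at a given depth,
    we copy lo's component and descend, so identifiers grow unboundedly."""
    new = []
    for depth in range(max(len(lo), len(hi)) + 1):
        l = lo[depth] if depth < len(lo) else 0
        h = hi[depth] if depth < len(hi) else BASE
        if h - l > 1:
            new.append(l + 1)   # room at this depth: done
            return new
        new.append(l)           # no room: keep lo's component, go deeper
    return new
```

Repeatedly allocating between a fixed identifier and the one just allocated forces one extra component per allocation, mirroring the unbounded identifier growth that the renaming mechanism addresses.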
To address this issue, we propose a new CRDT for sequences that embeds a renaming mechanism. It enables nodes to reassign shorter identifiers to elements in an uncoordinated manner. The experimental results we obtained demonstrate that this mechanism decreases the overhead of the replicated data structure and eventually limits it.
To validate the proposed renaming mechanism, we performed an experimental evaluation measuring its performance on several aspects: (i) the size of the data structure; (ii) the integration time of the rename operation; (iii) the integration time of insert and remove operations. In cases (i) and (iii), we use LogootSplit as the baseline data structure for comparison. The results we obtained are very encouraging: the integration time is far shorter with the renaming mechanism, even when accounting for the time spent applying the rename operation.
6.4 Social Networks as Collaboration Support
Participants: Quentin Laporte Chabasse, Gérald Oster, François Charoy.
Safe peer-to-peer collaborative services require a trusted peer-to-peer network to be effective. We started to investigate how to leverage the social networks underlying inter-organizational collaboration to support such collaboration. To reach this goal, we need to analyse collaboration graphs. They are a relevant source of information for understanding the behavioural tendencies of groups of individuals. Exponential Random Graph Models (ERGMs) are commonly used to analyse such social processes, including dependencies between members of a group. Our approach considers a modified version of ERGMs, modelling the problem as an edge-labelling one. The main difficulty is inference, since the normalizing constant involved in classical Markov Chain Monte Carlo approaches is not available in analytic closed form.
Our main contribution is the use of the recent ABC Shadow algorithm [20]. This algorithm is built to sample from posterior distributions while avoiding the previously mentioned drawback. The proposed method is illustrated on real data sets provided by the HAL platform and provides new insights into self-organized collaboration among researchers [7]. In 2020, we applied this method in a longitudinal way to identify patterns in the evolution of collaboration over several years for the same teams.
6.5 Identification and Selection of Services from Cloud Providers
Participants: Anis Ahmed Nacer, François Charoy, Olivier Perrin.
We continued our work on a framework for comparing plans for services from cloud providers, in order to help architects select the best composition given the required criteria (both functional and non-functional requirements) for a microservice architecture. This year, we made progress in two directions: the first is the identification of the key elements to be considered when architects compare the different plans, and the second is a methodology to compute the best composition of services given the partial information provided in service descriptions.
In order to gather the key elements of comparison that meet architects' requirements, and the relationships between these elements, we reviewed the service providers' plans and previous work on benchmarks. Finally, to ensure that the list of key elements of comparison and their relationships was complete for the service selection process, we conducted an empirical study with architects [6].
Regarding the second direction, we use the WOWA (Weighted Ordered Weighted Averaging) operator to solve this decision problem. This operator provides an aggregation function that combines the advantage of the OWA method, which allows compensation between high and low values, with that of the weighted average method, which takes into account the importance of the suppliers providing the information. WOWA uses two sets of weights: one corresponds to the significance of the sources, and the other to the significance of the values.
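A sketch of WOWA aggregation under common conventions (piecewise-linear interpolation of the cumulative OWA weights; the function names are ours): the importance weights `p` play the role of source significance and the OWA weights `w` that of value significance.

```python
def wowa(values, p, w):
    """Weighted OWA: aggregate `values` using importance weights `p`
    (one per source, summing to 1) and OWA weights `w` (one per rank,
    summing to 1). Uses piecewise-linear interpolation of the cumulative
    OWA weights, a common convention."""
    n = len(values)
    xs = [k / n for k in range(n + 1)]   # breakpoints k/n
    ys = [0.0]
    for wk in w:                         # cumulative OWA weights
        ys.append(ys[-1] + wk)

    def W(x):                            # interpolated cumulative weight
        for i in range(n):
            if x <= xs[i + 1]:
                t = (x - xs[i]) / (xs[i + 1] - xs[i])
                return ys[i] + t * (ys[i + 1] - ys[i])
        return ys[-1]

    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    total, prev = 0.0, 0.0
    for i in order:                      # walk values in decreasing order
        acc = prev + p[i]
        total += (W(acc) - W(prev)) * values[i]
        prev = acc
    return total
```

With uniform `p` and `w` this reduces to the arithmetic mean; concentrating `w` on the first rank makes it behave like a maximum, illustrating the compensation behaviour described above.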
6.6 Combining data analysis and event processing for a proactive business process management
Participants: Khalid Benali, Abir Ismaili-Alaoui.
Facing competitive and continuously changing environments, organizations now more than ever have recourse to agile methods and continuous improvement practices, such as the Lean Six Sigma method, Activity-Based Costing (ABC) and Business Process Management (BPM), in order to have a transversal view of their business, to identify opportunities for enhancement, to reduce costs and waste, to improve customer satisfaction and overall profitability, and to overcome the problem of isolation and lack of communication between the different hierarchical levels within the organization. Among these approaches, Business Process Management is still considered the most appropriate solution to help organizations adapt to strategic, tactical and operational changes, and to gain more visibility into and control over their business processes. This approach allows companies to gain in agility, efficiency and performance; it also enables effective communication and collaboration between their different stakeholders. Recently, in this digitized era and with the rise of several new technologies such as Big Data, the Internet of Things and Cloud Computing, organizations face many factors and challenges that are producing real changes in traditional BPM. Among these challenges is the huge amount of data and event data continuously gathered within the organization, especially with the ubiquity of IoT devices in all domains. These data represent a real engine of growth for organizations and must be adequately exploited to extract the high added value that can assist the organization in its decision-making process. However, traditional BPM systems have limits: they do not facilitate the use by business processes of the knowledge extracted from these data, because they lack statistical functionality and data analysis and manipulation techniques.
Over the past decade, much effort has been devoted, in both academic and industrial research, to improving all aspects of business processes and Business Process Management, because business processes were completely isolated within the organization and thus did not benefit from the added value that could be created from all these data.
Learning from the data and event data gathered from past operations and process executions is an effective approach to improving the performance of business processes, especially those that perform repetitive tasks and activities. Indeed, the insights obtained from these data represent valuable support for decision making. The main objective of our research consists in proposing new approaches for enhancing business processes, in order to achieve a core objective that both academia and industry are striving for: proactivity. As a first step towards this proactivity, we tackled the business process instance scheduling problem, based on the priority of the events that trigger these instances and taking into consideration historical data gathered from previous business process instances. We proposed a clustering approach over a set of event sources, classifying these sources into different clusters using a score calculated for each event source. This score is based on the frequency and the criticality of previous events. The main objective of this approach is to create priority clusters, which are used to estimate, in a proactive way, the criticality level of incoming events and hence the priority level of incoming process instances. However, there is always a degree of uncertainty regarding the criticality/priority level of events generated by sources that belong to the same cluster. As a second step, we addressed this issue using fuzzy logic. The integration of a Fuzzy Inference System (FIS) in our IoT-BPM architecture helps us handle uncertainties regarding the criticality level of events, especially when these events are generated by sources that share the same characteristics.
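As a purely illustrative sketch of the scoring step (all names, the score formula and the thresholds are our assumptions, not the published method): each event source receives a score combining the frequency and average criticality of its past events, and scores are then bucketed into priority clusters.

```python
def source_score(history):
    """Score an event source from its past events: number of events
    (frequency) weighted by their mean criticality in [0, 1].
    Illustrative formula only."""
    if not history:
        return 0.0
    mean_criticality = sum(e["criticality"] for e in history) / len(history)
    return len(history) * mean_criticality

def priority_cluster(score, thresholds=(5.0, 20.0)):
    """Map a score to a priority cluster; the thresholds are placeholders
    (the actual approach derives clusters from the data)."""
    if score < thresholds[0]:
        return "low"
    if score < thresholds[1]:
        return "medium"
    return "high"
```

An incoming event then inherits the priority cluster of its source, which is how instance scheduling can be made proactive; the fuzzy inference step refines the criticality estimate within a cluster.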
7 Bilateral contracts and grants with industry
7.1 Bilateral contracts with industry
Open Group
- Company: Open Group
- Dates: 2017-2020
Participants: Claudia-Lavinia Ignat, François Charoy, Gérald Oster, Olivier Perrin, Anis Ahmed Nacer.
The objective of the project is to propose and validate a model of service composition for middleware services in a software-as-a-service architecture. The composition must take into account middleware service quality attributes and service plans in order to optimise the operational cost while ensuring a given level of quality of service.
Fair & Smart
- Company: Fair & Smart
- Dates: 2020-2024
Participants: Claudia-Lavinia Ignat, Gérald Oster, Olivier Perrin, François Charoy.
The goal of this project is the development of a platform for the management of personal data in accordance with the General Data Protection Regulation (GDPR). The other partners of this project are CryptoExperts and the READ team from LORIA. The computational personal trust model that we proposed for the repeated trust game [11] and its validation methodology [13] will be adapted to the Fair & Smart personal data management platform for computing trust between the different users of the platform. Our decentralised mechanism for identity certification relying on a blockchain [14, 15] will be transferred to Fair & Smart for user identification on their personal data management platform.
8 Partnerships and cooperations
8.1 International initiatives
Informal international partners
In 2020, we continued our collaboration with IBM Research Almaden on federated learning. A joint internship and a visit were scheduled but were cancelled due to the pandemic.
9.1 Promoting scientific activities
9.1.1 Scientific events: organisation
General chair, scientific chair
Claudia-Lavinia Ignat is Workshops & Masterclasses Co-Chair at ECSCW 2021 conference.
9.1.2 Scientific events: selection
Member of the conference program committees
Khalid Benali was a member of the program committees of the following conferences: I3E 2020 (IFIP Conference on e-Business, e-Services and e-Society), 6-8 April 2020, Skukuza, Kruger National Park, South Africa; ICCCI 2020 (12th International Conference on Computational Collective Intelligence), 27-29 July 2020, Da Nang, Vietnam; MEDES 2020 (12th International ACM Conference on Management of Digital EcoSystems), organized in cooperation with ACM, ACM SIGAPP and IFIP WG 2.6, 2-4 November 2020, Abu Dhabi, UAE; and SoEA4EE 2020 (12th Workshop on Service-oriented Enterprise Architecture for Enterprise Engineering), in conjunction with EDOC 2020, 5 October 2020, Eindhoven, The Netherlands.
François Charoy was a PC member of ICEBE 2020 (International Conference on Business Engineering), ICSOC 2020 (International Conference on Service Oriented Computing), the IEEE International Conference on Business Information Systems 2020, Inforsid 2020, and several workshops.
Claudia-Lavinia Ignat was a PC member of ACM CHI (Conference on Human Factors in Computing Systems) 2021, ECSCW (European Conference on Computer-Supported Cooperative Work: The International Venue on Practice-centred Computing and the Design of Cooperation Technologies) 2020, CDVE (International Conference on Cooperative Design, Visualization and Engineering) 2020 and 2021, and CollabTech (International Conference on Collaboration Technologies and Social Computing) 2020.
Olivier Perrin was a PC member of various conferences and workshops, and a reviewer for journal papers.
Member of the editorial boards
François Charoy is a member of the editorial board of the Service Oriented Computing and Applications journal (Springer).
Claudia-Lavinia Ignat has been an associate editor of the Journal of Computer Supported Cooperative Work (JCSCW) since 2011. She was a member of the editorial board of ECSCW 2020.
Reviewer - reviewing activities
Claudia-Lavinia Ignat reviewed articles for the Theoretical Computer Science journal in 2020.
9.1.4 Invited talks
François Charoy gave an invited talk at IRISA, Rennes, in October 2020 (online).
9.1.5 Leadership within the scientific community
François Charoy is a steering committee member of the European Society for Socially Embedded Technologies (EUSSET).
9.1.6 Scientific expertise
François Charoy was a member of the HCERES committee for the evaluation of the CITI Lab at INSA Lyon.
9.1.7 Research administration
François Charoy is an elected member of CNU section 27. He serves on its board as an assessor.
Claudia-Lavinia Ignat is a member of the Inria Evaluation Commission. She is a member of the Inria Nancy-Grand Est COMIPERS committee. She is a member of the organisation committee of the Security Seminar at LORIA. In 2020, she was a member of the national CRCN Inria recruitment jury and of the CRCN recruitment juries at Inria Nancy-Grand Est and Inria Paris.
Gérald Oster is an elected member of the AM2I scientific council of the University of Lorraine.
9.2 Teaching - Supervision - Juries
Permanent members of the Coast project-team are leading teachers in their respective institutions. They are responsible for lectures in disciplines such as software engineering, database systems, object-oriented programming and design, distributed systems, and service computing, as well as more advanced topics, at all levels and in different departments of the University. Most PhD students also have teaching duties in the same institutions. Claudia-Lavinia Ignat teaches the lecture and the exercises on data replication and consistency at master level (M2 SIRAV) at the University of Lorraine, as well as exercises on object-oriented programming at TELECOM Nancy. As a whole, the Coast team accounts for more than 2,500 hours of teaching. Members of the Coast team are also deeply involved in the pedagogical and administrative life of their departments.
- Claude Godart is responsible for the Computer Science Department of the Polytech Nancy engineering school.
- Khalid Benali is responsible for the professional master's degree speciality “Distributed Information Systems” of MIAGE and for its international branch in Morocco.
- François Charoy is responsible for the Software Engineering specialisation at the TELECOM Nancy Engineering School of University of Lorraine.
- Gérald Oster is responsible for the 3rd (last) year of study at the TELECOM Nancy Engineering School of University of Lorraine.
- PhD defended: Hoai Le Nguyen, Study of group performance and behaviour in collaborative editing, defended in January 2021, Claudia-Lavinia Ignat and François Charoy
- PhD in progress: Victorien Elvinger, Secured Replication for Peer-to-Peer Collaborative Infrastructures, started in October 2015, François Charoy and Gérald Oster
- PhD in progress: Abir Ismaïli-Alaoui, started in September 2016, Khalid Benali and Karim Baïna (Université Mohammed V, Rabat, Morocco)
- PhD defended: Quentin Laporte-Chabasse, Federation of Organisations over Peer to Peer Collaborative Network, defended in January 2021, François Charoy and Gérald Oster
- PhD defended: Béatrice Linot, Trust in cooperative systems, defended in February 2021, Jérôme Dinet and François Charoy
- PhD in progress: Anis Ahmed Nacer, Safe Service Composition, started in March 2017, Olivier Perrin and François Charoy
- PhD in progress: Matthieu Nicolas, Optimisation of Replication Algorithms, started in October 2017, Olivier Perrin and Gérald Oster
- PhD in progress: Jean Philippe Eisenbarth, Securing the future blockchain-based security services, started in May 2019, Olivier Perrin and Thibault Cholez.
- PhD in progress: Clélie Amiot, Trust and Human/Chatbot collaboration, started in October 2019, Jérôme Dinet and François Charoy
- PhD in progress: Alexandre Bourbeillon, Trust among users in collaborative systems, started in November 2020, Claudia-Lavinia Ignat and François Charoy
- PhD in progress: Pierre-Antoine Rault, Security mechanisms for decentralised collaborative systems, started in October 2020, Claudia-Lavinia Ignat and Olivier Perrin
- Claudia-Lavinia Ignat was a member of the PhD committee of Grigorios Piperagkas, PhD student in the Inria project-team MiMove (Inria Paris), since October 2018
- Claudia-Lavinia Ignat is a member of the PhD committee of William Aboucaya, PhD student in the Inria project-team MiMove (Inria Paris), since October 2019
Coast members served on the following PhD defence committees:
- Mahmoud Barhamgi, HDR, Université de Lyon 1, July 2020 (François Charoy, rapporteur)
- Mamadou Lakhassane Cissé, PhD, Université de Dakar and Université de Toulouse II, July 2020 (François Charoy, rapporteur)
Claudia-Lavinia Ignat presented her research work to first-year students of the École des Mines de Nancy during their visit to LORIA.
10 Scientific production
10.1 Publications of the year
International peer-reviewed conferences
Scientific book chapters
Reports & preprints
10.2 Cited publications
- 8 (article) Securing IoT-based Groups: Efficient, Scalable and Fault-tolerant Key Management Protocol. Ad Hoc & Sensor Wireless Networks, October 2019.
- 9 (article) An Object-Oriented Metamodel For Inter-Enterprises Cooperative Processes Based on Web Services. Journal of Integrated Design and Process Science, 8, 2004, 37-55.
- 10 (incollection) Promises and Failures of Research in Dynamic Service Composition. In: Seminal Contributions to Information Systems Engineering, Springer Berlin Heidelberg, 2013, 235-239. URL: http://dx.doi.org/10.1007/978-3-642-36926-1_18
- 11 (inproceedings) Computational Trust Model for Repeated Trust Games. Proceedings of the 15th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom 2016), Tianjin, China, August 2016. URL: https://hal.inria.fr/hal-01351250
- 12 (inproceedings) Concurrency Control in Groupware Systems. Proceedings of the ACM SIGMOD Conference on the Management of Data - SIGMOD 89, Portland, Oregon, USA, May 1989, 399-407. URL: http://doi.acm.org/10.1145/67544.66963
- 13 (article) The Influence of Trust Score on Cooperative Behavior. ACM Transactions on Internet Technology, 19(4), September 2019, 1-22.
- 14 (inproceedings) Blockchain-Based Auditing of Transparent Log Servers. Proceedings of Data and Applications Security and Privacy XXXII - 32nd Annual IFIP WG 11.3 Conference (DBSec 2018), Bergamo, Italy, July 2018, 21-37. URL: https://hal.archives-ouvertes.fr/hal-01917636
- 15 (inproceedings) Trusternity: Auditing Transparent Log Server with Blockchain. Companion of The Web Conference 2018, Lyon, France, April 2018, 79-80. URL: https://hal.inria.fr/hal-01883589
- 16 (inproceedings) Data Consistency for P2P Collaborative Editing. ACM Conference on Computer-Supported Cooperative Work - CSCW 2006, Banff, Alberta, Canada, ACM Press, November 2006, 259-268. URL: http://hal.inria.fr/inria-00108523/en/
- 17 (article) Service-Oriented Computing: State of the Art and Research Challenges. Computer, 40, 2007, 38-45.
- 18 (inproceedings) A commutative replicated data type for cooperative editing. 29th IEEE International Conference on Distributed Computing Systems (ICDCS 2009), Montreal, Québec, Canada, IEEE Computer Society, 2009, 395-403.
- 19 (article) Optimistic Replication. ACM Computing Surveys, 37(1), March 2005, 42-81. URL: http://doi.acm.org/10.1145/1057977.1057980
- 20 (article) ABC Shadow algorithm: a tool for statistical analysis of spatial patterns. Statistics and Computing, 27(5), 2017, 1225-1238.
- 21 (article) Achieving Convergence, Causality Preservation, and Intention Preservation in Real-Time Cooperative Editing Systems. ACM Transactions on Computer-Human Interaction, 5(1), March 1998, 63-108. URL: http://doi.acm.org/10.1145/274444.274447