Section: New Results
Emergent Middleware Supporting Interoperability in Extreme Distributed Systems
Participants: Emil Andriescu, Nelly Bencomo, Amel Bennaceur, Luca Cavallaro, Nikolaos Georgantas, Sneha-Sham Godbole, Valérie Issarny, Rachid Saadi, Daniel Sykes.
Interoperability is a fundamental challenge for today's extreme distributed systems. Indeed, the high degree of heterogeneity in both the application layer and the underlying infrastructure, together with the conflicting assumptions that each system makes about its execution environment, hinders the successful interoperation of independently developed systems. A wide range of approaches have been proposed to address the interoperability challenge [31]. Solutions that require modifying the systems are usually not feasible, since the systems to be integrated may be legacy systems, COTS (Commercial Off-The-Shelf) components or built by third parties; neither are approaches that prune the behavior leading to mismatches, since they also restrict the systems' functionality. Therefore, many solutions that aggregate the disparate systems in a non-intrusive way have been proposed. These solutions use intermediary software entities, called mediators, to interconnect systems despite disparities in their data and/or interaction models, performing the necessary coordination and translations while keeping the systems loosely coupled. However, creating mediators requires a substantial development effort and a thorough knowledge of the application domain, which is best understood by domain experts. Moreover, the increasing complexity of today's distributed systems, sometimes referred to as Systems of Systems, makes it almost impossible to develop 'correct' mediators manually. Therefore, formal approaches are used to synthesize mediators automatically.
In light of the above, we have introduced the notion of emergent middleware for realizing mediators. Our research on enabling such emergent middleware is carried out in collaboration with our partners in the Connect project (§ 8.1.1). Our work during the year has more specifically focused on:
- Supporting architecture. We have been working with our partners in the Connect project on the refinement of an overall architecture supporting emergent middleware, from the discovery of networked systems, to the learning of their respective behaviors, and to the synthesis of emergent middleware enabling them to interoperate [30].
- Affordance inference. We have proposed an ontology-based formal model of networked systems based on their affordances, interfaces, behavior, and non-functional properties, each of which describes a different facet of the system [2]. However, legacy systems do not necessarily specify all of these facets. Therefore, we are currently exploring techniques to infer affordances from the textual descriptions of networked systems' interfaces. More specifically, we rely on machine learning to automate this inference by classifying the natural-language text of the interface description according to a predefined ontology of affordances [17].
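To make the inference step concrete, the following is a minimal, illustrative sketch of affordance inference as text classification, assuming the scikit-learn library; the affordance labels and training snippets are invented for illustration and are not the ontology or corpus used in [17].

```python
# Minimal sketch of affordance inference as text classification (assumes
# scikit-learn is available). The affordance labels and training snippets
# below are invented for illustration; they are not the ontology of [17].
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Textual interface descriptions (e.g., sentences mined from interface
# documentation) paired with the affordance concept they fall under.
training_docs = [
    ("returns the current temperature and forecast for a given city", "WeatherService"),
    ("books a room and confirms the hotel reservation",               "HotelBooking"),
    ("streams live video content to the requesting client",           "MediaStreaming"),
    ("provides a five day weather forecast for a postal code",        "WeatherService"),
]
texts, labels = zip(*training_docs)

# Bag-of-words features plus a naive Bayes classifier, trained on the labeled corpus.
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(texts, labels)

# Classify the textual description of a newly discovered networked system.
print(classifier.predict(["gives tomorrow's weather for the selected town"]))
# e.g. ['WeatherService']
```

In practice, the classes would be concepts drawn from the predefined affordance ontology, and the classifier would be trained on real interface descriptions rather than the toy snippets above.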
- Mediator synthesis for emergent connectors. We focus on systems that have compatible functionality, i.e., semantically matching affordances, but are unable to interact successfully due to mismatching interfaces or behaviors. We propose two approaches to enable communication between such systems:
  - A mapping-based approach, whose goal is to automatically synthesize a mediator model that ensures the safe interaction of the networked systems, i.e., deadlock-freedom and the absence of unspecified receptions. The approach combines semantic reasoning and constraint programming to identify the semantic correspondence between the systems' interfaces, i.e., an interface mapping. Unlike existing approaches, which only tackle one-to-one correspondences between actions, it handles the more general cases of one-to-many and many-to-many mappings.
  - A goal-based approach, which enables two networked systems to communicate in a way that satisfies a given user goal. It aligns their actions using ontology matching, and encodes the aligned processes together with the user goal as a satisfiability problem. Model checking is then used to determine whether a feasible communication trace satisfying the user goal exists. The model-checking step is reiterated so as to discover all the feasible satisfying traces, which are finally concatenated to build the mediator.
The feasibility of both approaches has been demonstrated through prototype tools and real-world scenarios involving heterogeneous systems; simplified sketches of the two approaches are given below.
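To illustrate the mapping-based approach, the following is a much-simplified sketch of interface mapping, assuming actions annotated with ontology concepts for their inputs and outputs; the operations, concepts and brute-force search are invented stand-ins for the semantic reasoning and constraint programming used in practice.

```python
# Illustrative sketch of interface mapping: actions are annotated with
# ontology concepts, and a mapping may relate one action of a system to
# several actions of the other (one-to-many). Concepts and operations are
# invented; a real implementation would use subsumption reasoning and a
# constraint solver rather than this brute-force search.
from itertools import combinations

# Each action: (name, required input concepts, provided output concepts).
client_action = ("GetWeather", {"Location"}, {"Temperature", "Forecast"})
service_actions = [
    ("ReadTemperature", {"Location"}, {"Temperature"}),
    ("ReadForecast",    {"Location"}, {"Forecast"}),
    ("BookRoom",        {"Hotel"},    {"Reservation"}),
]

def find_mapping(action, candidates, max_size=2):
    """Find a minimal set of candidate actions that realizes `action`:
    their combined outputs cover the expected outputs, and each of their
    inputs is available from the action's inputs."""
    name, inputs, outputs = action
    for size in range(1, max_size + 1):
        for combo in combinations(candidates, size):
            provided = set().union(*(out for _, _, out in combo))
            required = set().union(*(inp for _, inp, _ in combo))
            if outputs <= provided and required <= inputs:
                return [c[0] for c in combo]
    return None  # no safe mapping found: the mediator cannot be synthesized

print(find_mapping(client_action, service_actions))
# -> ['ReadTemperature', 'ReadForecast']  (a one-to-two mapping)
```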
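Similarly, the goal-based approach can be illustrated by the toy sketch below, which enumerates the traces of the synchronized composition of two ontology-aligned protocols and keeps those satisfying the user goal; the protocols and goal are invented, and the exhaustive search stands in for the satisfiability encoding and model checking used in the actual approach.

```python
# Toy illustration of the goal-based idea: explore the composition of two
# ontology-aligned protocols and collect every complete trace that contains
# the goal action. Protocols and goal are invented examples.

# Protocols as LTSs: state -> [(action, next_state)], with actions already
# aligned through ontology matching (same label = same concept).
client = {"c0": [("login", "c1")], "c1": [("getWeather", "c2")], "c2": []}
service = {"s0": [("login", "s1")], "s1": [("getWeather", "s2"), ("bookRoom", "s2")], "s2": []}
goal_action = "getWeather"          # user goal: the weather request must occur

def satisfying_traces(p, q, start=("c0", "s0")):
    """Depth-first search over the synchronized product, yielding every
    complete trace that contains the goal action."""
    def dfs(cs, ss, trace):
        moves = [(a, c2, s2) for a, c2 in p[cs] for b, s2 in q[ss] if a == b]
        if not moves:                       # no further synchronization possible
            if goal_action in trace:
                yield trace
            return
        for a, c2, s2 in moves:
            yield from dfs(c2, s2, trace + [a])
    yield from dfs(*start, [])

for trace in satisfying_traces(client, service):
    print(trace)        # -> ['login', 'getWeather']
```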
- Mediator synthesis for streaming connectors. In the context of dynamic mediator synthesis, we have targeted the domain of mobile multimedia streaming, with a first step that statically solves the hard problem of streaming interoperability across heterogeneous smartphone multimedia platforms. With the recent evolution of mobile phones, multimedia streaming is now commonly used on smartphones for purposes such as video broadcast, video conferencing and place shifting, which in turn highlights the importance of multimedia-enabled applications. However, peer-to-peer solutions are difficult to implement because of the high heterogeneity of the nodes and their low processing power. Furthermore, while existing mobile platforms such as Android, iOS, BlackBerry and Windows Phone 7 support multimedia streaming (as resource consumers) through platform-specific APIs or system services, they use heterogeneous protocols and data formats, thus compromising interoperability.
Given the challenges above, we designed AmbiStream [11], a lightweight middleware for heterogeneous mobile devices, capable of “on the fly” adaptation. AmbiStream relies on the highly optimized multimedia software stacks provided by smartphone platforms and adds the layers necessary to achieve interoperability. More specifically, the middleware targets: a) streaming of prerecorded or live audio/video using an intermediary real-time protocol; b) translation of streaming protocols and adaptation of multimedia container formats to those supported natively by each device; and c) extensibility, so that new multimedia streaming protocols and container formats can be supported through its plug-in-based architecture. We have used a model-driven approach to generate multi-platform plug-ins from higher-level descriptions expressed in a Domain-Specific Language (DSL). The DSL takes into account multimedia-specific operations such as timing, fragmenting, multiplexing, congestion control and buffering.
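As an illustration of the plug-in-based architecture, the sketch below shows a hypothetical protocol-translation plug-in contract; the class and method names are invented, and real AmbiStream plug-ins handle full streaming protocols and container formats and are generated from DSL descriptions rather than written by hand.

```python
# Hypothetical sketch of a plug-in-based translation layer in the spirit of
# AmbiStream: each plug-in converts between a platform-native streaming
# protocol/container and a common intermediary real-time format. The class
# and method names are illustrative only, not the actual AmbiStream API.
from abc import ABC, abstractmethod


class StreamProtocolPlugin(ABC):
    """Contract every protocol plug-in must fulfil."""

    @abstractmethod
    def to_intermediary(self, packet: bytes) -> bytes:
        """Translate a native protocol packet into the intermediary format."""

    @abstractmethod
    def from_intermediary(self, frame: bytes) -> bytes:
        """Re-packetize an intermediary frame for the native protocol/container."""


class RtspPlugin(StreamProtocolPlugin):
    def to_intermediary(self, packet: bytes) -> bytes:
        return b"AMB" + packet            # toy translation: tag the payload

    def from_intermediary(self, frame: bytes) -> bytes:
        return frame[len(b"AMB"):]        # toy translation: strip the tag


class HlsPlugin(StreamProtocolPlugin):
    def to_intermediary(self, packet: bytes) -> bytes:
        return b"AMB" + packet

    def from_intermediary(self, frame: bytes) -> bytes:
        return frame[len(b"AMB"):]


# The middleware picks source and sink plug-ins at runtime, based on what each
# device supports natively, and relays frames through the intermediary format.
source, sink = RtspPlugin(), HlsPlugin()
print(sink.from_intermediary(source.to_intermediary(b"video-sample")))
```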
- Models@run.time. We have recently integrated the notion of Models@run.time (Models@run.time Dagstuhl Seminar, http://www.dagstuhl.de/en/program/calendar/semhp/?semnr=11481) into our research towards emergent middleware. We use Models@run.time to extend the applicability of models and abstractions to the runtime environment. As is the case for software development models, a runtime model is often created to support reasoning. However, in contrast to development models, runtime models are used to reason about the operating environment and runtime behavior, and thus must capture abstractions of runtime phenomena. Different dimensions need to be balanced, including resource efficiency (time, memory, energy), context dependency (time, location, platform), and personalization (quality-of-service specifications, profiles). The hypothesis is that, because Models@run.time provide meta-information about these dimensions during execution, runtime decisions can be facilitated and better automated. We therefore anticipate that Models@run.time will play an integral role in the management of extreme distributed systems. Our work on Models@run.time has two aspects:
  - We have used Models@run.time to tackle the crucial problem of uncertainty in extreme distributed systems that are aware of their own requirements. Requirements awareness helps optimize requirements satisfaction when factors that were uncertain at design time are resolved at runtime. Using our approach, goal-based models are maintained in memory while the system is running, so that the executing system can introspect and consult its goals at runtime. Crucially, we use the notion of claims to represent assumptions that cannot be verified with confidence at design time. Such claims are attached to the goal-based runtime models, and monitoring them at runtime allows their veracity to be tested. If a claim is falsified, the effect of its negation can be propagated to the system's goal model and an alternative means of goal realization can be selected automatically, allowing the dynamic adaptation of the system to the prevailing environmental context [14], [15], [16]. A simplified illustration of this mechanism is sketched after this list.
  - In a way complementary to the mediator synthesis approaches discussed above, we further promote the use of Models@run.time to support the runtime synthesis of software that becomes part of the executing system. Specifically, we focus on the use of runtime models to support the realization of emergent middleware, i.e., the synthesis of mediators that define a sequence of actions translating the semantic actions of a system developed on one middleware protocol into the semantic actions of another system developed on an alternative middleware, built with no prior knowledge of the former. Discovery and learning enablers capture the required knowledge of the context and environment at runtime. Supported by that knowledge, a runtime model of the mediator-to-be is reified; reification means that the knowledge is explicitly formulated and made available for computational manipulation. The runtime models take the form of labeled transition systems (LTSs), which offer the behavioral semantics needed to model the interaction protocols. Ontologies complement the LTSs by enabling semantic reasoning about the mapping between protocols: the LTS of each protocol is annotated with ontology concepts to support the subsequent mapping between protocols. From these LTS-based runtime models, mediators are synthesized, as sketched below.
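To illustrate the claims mechanism of the first item above, the following is a toy sketch in which alternative realizations of a goal are guarded by monitored claims, and the selection is re-evaluated when a claim is falsified; the goal, claims and selection policy are invented for illustration.

```python
# Toy sketch of claims attached to a goal model: alternative ways of realizing
# a goal are guarded by claims (design-time assumptions), and when monitoring
# falsifies a claim the system switches realization. Names are invented.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Realization:
    name: str
    claims: list[Callable[[], bool]]    # monitored assumptions guarding this choice

    def is_viable(self) -> bool:
        # A realization stays eligible only while all of its claims hold.
        return all(claim() for claim in self.claims)


@dataclass
class Goal:
    name: str
    alternatives: list[Realization]

    def select(self) -> Realization:
        # Re-evaluated at runtime: pick the first alternative whose claims
        # have not been falsified by monitoring.
        for alt in self.alternatives:
            if alt.is_viable():
                return alt
        raise RuntimeError(f"no viable realization left for goal {self.name}")


# Example: "send data" is realized over Wi-Fi while the network is assumed
# reliable, otherwise over a store-and-forward fallback.
network_reliable = True
goal = Goal("SendData", [
    Realization("StreamOverWifi", [lambda: network_reliable]),
    Realization("StoreAndForward", []),
])
print(goal.select().name)      # -> StreamOverWifi
network_reliable = False       # monitoring falsifies the claim at runtime
print(goal.select().name)      # -> StoreAndForward
```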
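To illustrate the second item, the sketch below shows LTS-based runtime models whose actions are annotated with ontology concepts, together with a drastically simplified synthesis step that pairs concept-matching actions to obtain the mediator's translation rules; the protocols and concepts are invented, and real synthesis must also handle branching behavior, mismatched action granularity and data translation.

```python
# Minimal sketch of LTS-based runtime models with ontology-annotated labels,
# and of deriving a mediator from the concept-level alignment between two
# protocols. Protocols, concepts and the lock-step walk are toy examples.

# Each protocol: state -> [(action, ontology_concept, next_state)].
soap_client = {"a0": [("GetWeatherRequest", "WeatherQuery", "a1")],
               "a1": [("GetWeatherResponse", "WeatherReport", "a2")],
               "a2": []}
rest_service = {"b0": [("GET /weather", "WeatherQuery", "b1")],
                "b1": [("200 OK", "WeatherReport", "b2")],
                "b2": []}

def synthesize(p, q, start_p, start_q):
    """Walk both LTSs in lock-step and pair actions annotated with the same
    ontology concept, yielding the mediator's translation rules."""
    rules, sp, sq = [], start_p, start_q
    while p[sp] and q[sq]:
        (ap, cp, np_), (aq, cq, nq) = p[sp][0], q[sq][0]
        if cp != cq:
            raise ValueError(f"protocol mismatch: {cp} vs {cq}")
        rules.append((ap, aq))           # translate ap (client side) into aq (service side)
        sp, sq = np_, nq
    return rules

for client_action, service_action in synthesize(soap_client, rest_service, "a0", "b0"):
    print(f"{client_action}  ->  {service_action}")
# GetWeatherRequest  ->  GET /weather
# GetWeatherResponse  ->  200 OK
```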