Section: New Results
Parallel Data-Flow Programming
A. Pop defended his PhD thesis in September at MINES ParisTech, on a data-flow, streaming extension of OpenMP. Semantic, compilation, and runtime system aspects were covered in depth. Early results, obtained in collaboration with A. Cohen, were published at HiPEAC 2011 and presented by A. Pop. Follow-up work includes the maturation of the proposed semantics and language extensions, together with a thorough implementation and experimental evaluation.
In parallel, F. Li, in collaboration with A. Pop and A. Cohen, explores the automatic compilation of SSA programs into dynamic data-flow parallelism, and the integration of streaming dependences to extend the method to non-scalar data flow. A paper was published at WIR 2011 (a workshop associated with CGO 2011), and a comprehensive, modular compilation method supporting arbitrary control flow will be presented at MULTIPROG 2012 (associated with HiPEAC 2012).
Classical compilation techniques, found in Lustre, Scade, Lucid Synchrone, and all the dataflow synchronous languages, generate very efficient sequential code. Our main goal is therefore to allow parallel code generation without changing the generation of the sequential code. To this end, we introduced into the dataflow synchronous setting the well-known asynchronous calls bundled with futures, which date back to MultiLisp, designed by R. Halstead in the early 1980s. They separate the request for a computation from the actual use of its result. This approach has two main advantages. First, the compilation of these asynchronous calls is implemented by a simple wrapper encapsulating the called sequential code, which ensures full compatibility with existing generated code. The futures are treated like ordinary values, so except for the asynchronous calls themselves, the known sequential code generation applies unchanged. Second, these asynchronous calls and futures are only annotations and may be fully erased without changing the semantics of the program.
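To illustrate the idea, here is a minimal sketch in Java (the language of the first Heptagon backend). The function step is a hypothetical stand-in for a generated sequential step function; java.util.concurrent futures play the role of the annotations, and erasing them recovers the original sequential program with the same result.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncCallDemo {
    // Hypothetical stand-in for sequentially generated step code.
    static int step(int x) { return x * x + 1; }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Asynchronous call: a thin wrapper submits the unchanged
        // sequential code, separating the request from the use of the result.
        Future<Integer> fa = pool.submit(() -> step(3));
        Future<Integer> fb = pool.submit(() -> step(4));

        // Other computations may proceed here while both calls run.

        // Futures are used like ordinary values at the point where the
        // results are actually needed.
        int parallel = fa.get() + fb.get();
        pool.shutdown();

        // Erasing the annotations amounts to direct sequential calls,
        // with the same semantics.
        int sequential = step(3) + step(4);
        System.out.println(parallel == sequential);
    }
}
```

The wrapper never inspects or rewrites the generated code of step, which is what makes this scheme compatible with the existing sequential code generators.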
L. Gérard has implemented this proposal in our Heptagon compiler. The first backend was written in Java, as a proof of concept, using the threads and futures of the Java standard library. More efficient backends are being explored, using the OpenMP stream-computing extensions and the TStar data-flow primitives.