Section: New Results
Polyhedral Compilation
L.-N. Pouchet presented fundamental advances in the construction and linear optimization of multidimensional affine transformation spaces at POPL 2011, in collaboration with A. Cohen and colleagues from Paris-Sud University, Louisiana State University, and Ohio State University.
Our team is actively integrating the polyhedral optimization framework into two production compilers: the Graphite framework in GCC and the Polly framework in LLVM. We are also working towards using the polyhedral framework to target GPGPU and manycore architectures, and to generate aggressively optimized code starting from high-level languages. New isl-based versions of Graphite and Polly have been contributed, enabling state-of-the-art affine transformations in GCC and LLVM, respectively. Dramatic performance improvements are expected in 2012, impacting the upcoming GCC 4.8 and LLVM 3.1 releases.
K. Trifunovic, F. Li and A. Cohen, in collaboration with R. Ladelsky from IBM Research Haifa, presented a paper on some of this progress at the GROW 2011 workshop [7].
Among the challenges that arise when adapting the polyhedral
framework to production compilers, Riyadh Baghdadi has been working
on memory-based dependences. Part of it is a practical compiler
construction issue, where upstream passes such as the transformation
to three-address code and PRE/CSE introduce new scalar variables,
leading to additional memory-based dependences. The other difficulty
is to identify a profitable tradeoff between memory expansion
(privatization, renaming) and parallelism. Memory-based dependences
not only increase the complexity of the optimization but, most
importantly, reduce the degrees of freedom available to express
effective loop nest transformations, limiting the overall
effectiveness of the polyhedral framework. We designed and
implemented a technique that solves this problem: it allows the
compiler to relax the constraints that memory-based dependences
impose on loop nest transformations, without incurring the memory
footprint overhead of scalar and array expansion. The technique is
based on the concept of polyhedral live range interval interference.
While previous polyhedral optimization techniques could not achieve
any speedup on benchmarks with scalar variables, this technique
enabled a speedup of up to