
Section: New Results

JavaScript Implementation

We have pursued the development of Hop.js and our study on efficient JavaScript implementation. We have followed three main axes.

Implementing Hop.js

Hop.js supports full ECMAScript 5 but it still lacks many of the new features introduced since ECMAScript 2015 and now well established in ECMAScript 2017. During the year, we have implemented many of these features (iterators, destructuring assignments, modules, etc.). A few constructs remain missing (maps, sets, and proxies) and will hopefully be added to the system by the end of the year. Completing full ECMAScript 2017 support is important because more and more publicly available packages use these new features, and we consider that maintaining the ability to use all these resources is a prerequisite to wide Hop.js adoption. We also consider this an important asset for Hop.js users, in particular for the Denimbo company, an Inria startup that uses Hop.js extensively.
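As an illustration, the following snippet exercises two of the features mentioned above, iterators and destructuring assignments. It is a generic example (not taken from the Hop.js sources) and runs on any engine supporting these constructs:

```javascript
// A custom iterable: an object implementing the iteration protocol
// via Symbol.iterator, one of the post-ES5 features mentioned above.
function range(from, to) {
   return {
      [Symbol.iterator]() {
         let i = from;
         return {
            next: () => (i < to ? { value: i++, done: false }
                                : { value: undefined, done: true })
         };
      }
   };
}

// Destructuring assignments, for arrays and objects, combined with
// the spread operator consuming the iterable above.
const [first, ...rest] = [...range(1, 5)];   // first = 1, rest = [2, 3, 4]
const { value: v } = { value: "hop" };       // v = "hop"
```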

Ahead-of-time JavaScript compilation

Hop.js differs from most JavaScript implementations in many respects: contrary to all fast and popular JavaScript engines, which use just-in-time (JIT) compilation, Hop.js relies on static compilation, a.k.a. ahead-of-time (AOT) compilation. This alternative approach can combine good speed with a lightweight memory footprint, and it can accommodate the read-only memory constraints imposed by some devices and operating systems. Unfortunately, the highly dynamic nature of JavaScript makes it hard to compile statically, and all existing AOT compilers have given up either good performance or full language support.

Indeed, JavaScript is hard to compile, much harder than languages like C and Java, and even harder than Scheme and ML, two other closely related functional languages. This is because a JavaScript source code accepts many more possible interpretations than programs in other languages do. It forces JavaScript compilers to adopt a defensive position by generating target code that can cope with all possible, even unlikely, interpretations, because general compilers can assume very little about JavaScript programs. The situation is worsened further by the raise-as-few-errors-as-possible principle that drives the design of the language: JavaScript functions are not required to be called with the declared number of arguments, fetching an unbound property is permitted, assigning undeclared variables is possible, etc.
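The permissive behaviors listed above can be sketched in a few lines; each one forces a defensive code path in a statically generated program:

```javascript
function add(x, y) { return x + y; }

// Functions may be called with the "wrong" number of arguments:
const a = add(1);        // y is undefined, so the result is NaN
const b = add(1, 2, 3);  // the extra argument is silently ignored

// Fetching an absent ("unbound") property is permitted:
const o = {};
const p = o.missing;     // undefined, no error raised

// Assigning an undeclared variable is also allowed in sloppy mode
// (it silently creates a global); strict mode throws instead, so
// generated code must be prepared for both behaviors.
```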

All these difficulties are considered serious enough to prevent classic static compilers from delivering efficient code for a language as dynamic and as flexible as JavaScript. We do not share this point of view. We think that by carefully combining classical analyses, by developing new ones when needed, and by crafting a compiler where the results of the high-level analyses are propagated all the way to code generation, it is possible for AOT compilation to be in the same range of performance as fast JIT compilers. This is what we attempt to demonstrate with this study. Of course, our ambition is not to produce a compiler strictly as fast as the fastest industrial JavaScript implementations; this would require much more engineering strength than we can afford. Instead, we only aim at showing that static compilation can achieve performance reasonably close to that of the fastest JavaScript implementations. "Reasonably close" is of course a subjective notion that everyone is free to set for themselves. For us, it means a compiler delivering half the performance of the fastest implementations.

The version of the Hop.js AOT compiler we have developed during the year contains new typing analyses and heuristics that compensate for the lack of information JavaScript source code contains. A first analysis, named occurrence typing, which elaborates on older techniques developed for the compilation of the Scheme programming language, extracts as much syntactic information as possible directly out of the source code. This analysis alone would give only rough approximations of the types used by the program, but its main purpose is to feed the compiler with enough information to deploy more effective supplemental analyses. Probably the most original one is the analysis we have named hint typing, or which typing, which consists in assigning types to variables and to function arguments according to the efficiency of the generated code. In other words, which typing assigns the types for which the compiler will be able to deliver its best code, instead of assigning types that denote all the possible values variables and arguments may hold during all possible executions. We have shown that these hinted types very frequently correspond to the implicit intentional types programmers had in mind when they wrote their programs. These analyses and the optimizations they enable are implemented in Hop.js version 3.2.0, available on the Inria pages and from GitHub. They are described in [17].
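The kind of syntactic information occurrence typing exploits can be illustrated with a simple (hypothetical, not drawn from Hop.js) example: inside each branch guarded by a typeof test, the compiler may assume a precise type for the tested variable and generate specialized code.

```javascript
function describe(x) {
   if (typeof x === "number") {
      // In this branch x is known to be a number: the addition below
      // can compile to a machine addition rather than a generic
      // dispatch over all possible operand types.
      return x + 1;
   } else if (typeof x === "string") {
      // Here x is known to be a string: + is necessarily string
      // concatenation.
      return x + "!";
   }
   // Otherwise nothing is known and generic code is required.
   return x;
}
```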

Property caches

Property caches are a well-known technique, invented over 30 years ago, for improving dynamic object accesses. They have been adapted to JavaScript, which they have greatly contributed to accelerating. However, the technique applies only when some constraints are satisfied by the objects, the properties, and the property access sites. We have started a study aiming at improving it for two common usage patterns: prototype accesses and megamorphic accesses. We have built a prototypical implementation in Hop.js that has let us measure the impact of the techniques we propose. We have observed that they effectively complement traditional caches and that they reduce cache misses and consequently accelerate execution. Moreover, they do not slow down the handling of the other usage patterns. We are now completing this study by polishing the implementation and by publishing a paper exposing and evaluating the new techniques.
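The basic mechanism can be sketched as follows. This is a deliberately simplified model, not Hop.js's actual implementation: objects are represented as a shape (mapping property names to slot indexes, mimicking hidden classes) plus a slot array, and each access site keeps a one-entry cache remembering the last shape it saw. When the same shape recurs at a site (the monomorphic case), the fast path avoids the dictionary lookup; when many shapes hit the same site (the megamorphic case), the cache keeps missing, which is one of the patterns the study above targets.

```javascript
// A shape maps property names to slot indexes.
function makeShape(props) {
   return new Map(props.map((p, i) => [p, i]));
}

let hits = 0, misses = 0;

// One cache per access site: it remembers the last shape seen and the
// slot index of the property within that shape.
function makeAccessSite(prop) {
   let cachedShape = null, cachedSlot = -1;
   return function get(obj) {
      if (obj.shape === cachedShape) {   // fast path: cache hit
         hits++;
         return obj.slots[cachedSlot];
      }
      misses++;                          // slow path: lookup, fill cache
      cachedShape = obj.shape;
      cachedSlot = obj.shape.get(prop);
      return obj.slots[cachedSlot];
   };
}

// Monomorphic usage: all objects share one shape, so after the first
// access the site always hits its cache.
const pointShape = makeShape(["x", "y"]);
const getX = makeAccessSite("x");
for (let i = 0; i < 5; i++) {
   getX({ shape: pointShape, slots: [i, 2 * i] });
}
// After the loop: 1 miss (the first access) and 4 hits.
```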