Section: Research Program

Deep learning

Deep Learning is at the root of many recent breakthroughs in machine learning and sequential decision making, albeit at the cost of enormous computational resources [72]. Some reasons for these performance jumps are clear (more data, more computational power, richer search spaces). Still, the nature of the dynamical system formed by training a deep neural network remains an open question, at the crossroads of information geometry and non-convex optimization. A related open question concerns neural architecture design. Recent developments in Deep Learning regarding generative adversarial networks [68] and domain adaptation [67] are relevant to optimal design applications.
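The view of training as a dynamical system can be made concrete on a toy example: plain gradient descent defines a discrete-time system whose attractors are the local minima of the loss, and which attractor is reached depends on the initialization. A minimal sketch, with an illustrative one-dimensional non-convex loss (not taken from the cited work):

```python
def loss(x):
    # toy non-convex loss with two local minima
    return x**4 - 3 * x**2 + x

def grad(x):
    # analytic derivative of the loss above
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x0, lr=0.01, steps=2000):
    # iterate the discrete-time system x <- x - lr * grad(x)
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Two initializations flow to two different attractors (local minima),
# one of which has a strictly lower loss than the other.
left = gradient_descent(-2.0)
right = gradient_descent(2.0)
```

Even in one dimension, the basin of attraction, not the loss value, determines the outcome; for deep networks the same question in very high dimension remains open.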

The challenges addressed by TAU range from theoretical ML issues (characterizing which problems are learnable w.r.t. the ratio of data size to neural architecture size), to functional issues (how to encode information invariances and deal with higher-order logic beyond convolutional architectures), to societal issues (how to open the black box of a deep NN and ensure the fairness of the learning process).

The TAU team has unique international expertise in three aspects relevant to deep learning: Riemannian geometry [76], [77] (to efficiently navigate the search manifold), statistical physics [66] (to apprehend the learnability region as the architecture size goes to infinity together with the data size), and Genetic Programming [57] and neuro-evolution (which provide original avenues for learning DNN architectures). Related industrial contracts involve ADAMME (FUI 2016) and RTE (Energy Management).
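The Riemannian-geometry angle can be illustrated by the natural gradient: on a parametric family of distributions, the steepest-descent direction is the ordinary gradient preconditioned by the inverse Fisher information. A minimal, textbook-style sketch for fitting a Bernoulli parameter by maximum likelihood (an illustration of the principle, not the team's actual algorithms):

```python
def fisher_bernoulli(p):
    # Fisher information of a Bernoulli(p) model
    return 1.0 / (p * (1.0 - p))

def nll_grad(p, data):
    # gradient of the negative log-likelihood, summed over observations
    ones = sum(data)
    zeros = len(data) - ones
    return -ones / p + zeros / (1.0 - p)

def natural_gradient_fit(data, p0=0.5, lr=0.1, steps=100):
    # natural-gradient descent: precondition the gradient by F(p)^-1
    p = p0
    for _ in range(steps):
        p = p - lr * nll_grad(p, data) / fisher_bernoulli(p)
        p = min(max(p, 1e-6), 1 - 1e-6)  # keep p in the open interval (0, 1)
    return p

data = [1, 1, 1, 0, 1, 0, 1, 1]  # 6 ones out of 8 observations
p_hat = natural_gradient_fit(data)  # converges to the MLE, 6/8 = 0.75
```

Preconditioning by the Fisher information makes the update invariant to reparametrization of the model, which is the sense in which the optimizer "navigates the manifold" of distributions rather than the raw parameter space.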