Section: New Results
Deep Reinforcement Learning for end-to-end driving
Participants: Maximilian Jaritz, Raoul de Charette, Fawzi Nashashibi.
We conducted work in the emerging research field of end-to-end driving, in which an artificial intelligence learns to drive directly from RGB images, without any mediated perception (object recognition, scene understanding). Using a recent rally game with realistic physics and graphics, we trained a car to drive in simulation. Several approaches were attempted. The most successful one uses an Asynchronous Advantage Actor-Critic (A3C) trained in an end-to-end fashion and proposes new strategies that improve training and generalization. The network was trained simultaneously on tracks with various road structures (sharp turns, etc.), graphics (snow, mountain, and coast) and physics (road adherence). As for other problems, we showed that learning in a simulated environment (here a racing game) can be transferred to other tracks and even to real driving. Despite the complex and varying dynamics of the car and road, the trained agent learns to drive in challenging scenarios using only the RGB image and the vehicle speed. To demonstrate its generalization, the algorithm was also tested on unseen tracks, under legal speed limits, and with real images. Initial work was published in  and more recent work has been submitted. The work was conducted in cooperation with Etienne Perot and Marin Toromanoff from Valeo.
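The A3C agent mentioned above is trained on discounted n-step returns, with the actor update weighted by the advantage (return minus the critic's value estimate). The following is a minimal sketch of that return/advantage computation, not the implementation used in this work; the function name, the rollout data, and the discount factor are illustrative assumptions.

```python
import numpy as np

def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    """Discounted n-step returns used as critic targets in A3C.

    rewards: per-step rewards from one rollout segment.
    bootstrap_value: critic estimate V(s_T) for the state after the
    last step (0.0 if the episode terminated there).
    """
    R = bootstrap_value
    returns = np.empty(len(rewards))
    # Accumulate the discounted return backwards through the rollout.
    for t in reversed(range(len(rewards))):
        R = rewards[t] + gamma * R
        returns[t] = R
    return returns

# Hypothetical 3-step rollout: advantages A_t = R_t - V(s_t) drive the actor.
rewards = [1.0, 0.0, 1.0]           # illustrative per-step rewards
values = np.array([0.5, 0.4, 0.6])  # illustrative critic estimates V(s_t)
returns = n_step_returns(rewards, bootstrap_value=0.0, gamma=0.9)
advantages = returns - values
```

In the asynchronous setting, several workers compute such rollouts in parallel on different tracks and apply their gradients to shared network parameters, which is what allows training simultaneously on varied road structures, graphics, and physics.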