Section: New Results
Interactive Virtual Cinematography
Participants: Marc Christie [contact], Christophe Lino.
The domain of Virtual Cinematography explores the operationalization of rules and conventions pertaining to camera placement, light placement and staging in virtual environments. In 2011, two major challenges were tackled: (i) the proposition of intelligent interactive assistants that integrate users in the process of selecting viewpoints and editing a virtual movie, with the capability of adapting to the user's choices, and (ii) the design and implementation of evaluation functions for precisely ranking the quality of viewpoints of a virtual 3D environment.
Our intelligent assistant is designed around three components. (i) An intelligent cinematography engine computes, at the filmmaker's request, a set of suitable camera placements (called suggestions) for starting a shot. These suggestions represent semantically and cinematically distinct choices for visualizing the current narrative; they account for established cinema conventions of continuity and composition as well as the filmmaker's previously selected suggestions and manually crafted camera compositions, through a machine learning component that adapts shot editing preferences from user-created camera edits. (ii) A user interface presents the suggestions as small movie frames, arranged in a grid whose rows and columns correspond to visual composition properties of the suggested cameras. (iii) A motion-tracked camera system enables the user to modify the low-level parameters of the camera in a shot in the same way a real operator would. The result of this work [16] is a novel workflow based on the interactive collaboration of human creativity with automated intelligence, enabling efficient exploration of a wide range of cinematographic possibilities and rapid production of computer-generated animated movies. A full prototype has been built and demonstrated at the ACM Multimedia conference [15] as well as the ParisFX conference. A patent protecting this technology is currently under evaluation [25].
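To illustrate the grid-based presentation of suggestions, the following minimal sketch indexes candidate shots by two composition properties (here, shot size and profile angle, chosen for illustration; the actual properties and data model of the prototype are not specified in this report):

```python
# Hypothetical sketch: arrange camera suggestions in a grid whose rows and
# columns correspond to visual composition properties of each suggested shot.
from collections import defaultdict

SHOT_SIZES = ["close-up", "medium", "long"]   # grid rows (assumed property)
PROFILE_ANGLES = ["left", "front", "right"]   # grid columns (assumed property)

def build_suggestion_grid(suggestions):
    """Index suggestions by (shot size, profile angle) so that a UI can
    present one representative movie frame per grid cell."""
    grid = defaultdict(list)
    for s in suggestions:
        grid[(s["size"], s["angle"])].append(s)
    return grid

suggestions = [
    {"id": 1, "size": "close-up", "angle": "front"},
    {"id": 2, "size": "medium", "angle": "left"},
    {"id": 3, "size": "close-up", "angle": "front"},
]
grid = build_suggestion_grid(suggestions)
# grid[("close-up", "front")] now holds suggestions 1 and 3
```

In such a layout, semantically similar suggestions fall into the same cell, so the interface only needs to render one representative frame per cell.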
The second challenge is related to the design of efficient and precise metrics for measuring the quality of viewpoints. For efficiency, we have proposed parallel GPU-based evaluation techniques for the estimation of multiple viewpoints [8], coupled with a Particle Swarm Optimization algorithm to rapidly explore the space of possible viewpoints. For precision, we have designed a large range of quality functions relative to screen composition and transitions between shots, and employed these functions either to automatically generate movies from actions occurring in the virtual environment [13] or to interactively generate movies by letting the user select the best shots and the best transitions between shots [14].
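The coupling of a viewpoint quality function with Particle Swarm Optimization can be sketched as follows. This is a minimal illustrative version: the quality function below (preferred framing distance and a slight high-angle term) is a toy stand-in, not the composition metrics of [8], and the PSO parameters are conventional textbook values:

```python
import random

def viewpoint_quality(pos, target=(0.0, 0.0, 0.0)):
    """Toy quality function (hypothetical): prefer cameras at a comfortable
    distance from the subject, slightly above it. Higher is better."""
    dx, dy, dz = (p - t for p, t in zip(pos, target))
    dist = (dx * dx + dy * dy + dz * dz) ** 0.5
    distance_term = -abs(dist - 5.0)   # ideal framing distance of 5 units
    elevation_term = -abs(dy - 1.5)    # slight high-angle preference
    return distance_term + elevation_term

def pso(quality, dim=3, n_particles=30, iters=100,
        bounds=(-10.0, 10.0), w=0.7, c1=1.5, c2=1.5):
    """Minimal Particle Swarm Optimization over camera positions."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # per-particle best positions
    pbest_q = [quality(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_q[i])
    gbest, gbest_q = pbest[g][:], pbest_q[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            q = quality(pos[i])
            if q > pbest_q[i]:
                pbest[i], pbest_q[i] = pos[i][:], q
                if q > gbest_q:
                    gbest, gbest_q = pos[i][:], q
    return gbest, gbest_q

random.seed(0)
best_pos, best_q = pso(viewpoint_quality)
```

In the actual system, each call to `quality` would be one of the GPU-evaluated metrics, so a whole swarm of candidate viewpoints can be scored in parallel per iteration.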
Finally, we have been exploring the use of tactile devices for the interactive construction of narratives following Propp's computational model of stories [10].
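As a rough illustration of what "following Propp's model" can mean computationally, the sketch below checks that a candidate sequence of narrative events respects Propp's canonical ordering of story functions. The subset of functions and the validation rule are simplifications for illustration, not the model used in [10]:

```python
# Hedged sketch: Propp's morphology describes a story as a sequence of
# narrative functions occurring in a canonical order. A simple validator
# accepts event sequences whose functions respect that order.
PROPP_FUNCTIONS = [
    "absentation", "interdiction", "violation",
    "villainy", "departure", "struggle", "victory", "return",
]  # abbreviated, illustrative subset of Propp's 31 functions

def is_valid_narrative(events):
    """A narrative is valid here if its recognized functions appear in
    Propp order (not all functions need occur, but none out of order)."""
    order = {f: i for i, f in enumerate(PROPP_FUNCTIONS)}
    indices = [order[e] for e in events if e in order]
    return indices == sorted(indices)

is_valid_narrative(["interdiction", "violation", "villainy", "victory"])  # True
is_valid_narrative(["victory", "villainy"])  # False
```

On a tactile device, such a validator could constrain which story-function tiles a user may append next while assembling a narrative.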