
Scientific Foundations

In 2011, the EVASION project team adopted a new research strategy which is described in full detail in the IMAGINE research proposal, and can be summarized as follows.

User-centered models for geometry and animation

The first axis of our research consists in developing the fundamental tools required to achieve expressive digital design, namely revisiting models for geometry and animation. By models, we mean both the representation of the object of interest and the development of the associated algorithms for data generation and editing. Although unusual, thinking about shapes, motion and stories in a similar way enables cross-fertilization and helps validate our design principles by applying them to different cases. Thinking about models in a user-centered way led us to the following principles, developed below:

  1. Develop high-level models embedding a priori knowledge.

  2. Allow these models to generate detailed shapes or motion from minimal, intuitive input.

  3. Set up advanced editing and transfer tools.

Firstly, making models user-centered means that they should behave the way a human would predict. This is the only way to advance towards the suggestive yet predictable interaction we are seeking. Users’ expectations are typically based on the cognitive meaning they give to a model, combined with their experience of the way similar objects behave in the real world. We must therefore step away from standard low-level representations and develop high-level models expressing a priori knowledge. For example, this includes knowledge about developable geometry to model folded paper or cloth; about constant-volume deformation to edit virtual clay or to animate plausible organic shapes; about appropriate physical laws to control passive objects; or about film-editing rules to model semi-autonomous cameras with planning abilities, synthetic vision and cinematographic knowledge, which can receive higher-level instructions from the user.
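The constant-volume idea mentioned above can be illustrated with a minimal Python sketch (the function name and the uniform-compensation scheme are ours for illustration, not the project's actual deformation model): scaling a shape along two axes while compensating along the third leaves its volume unchanged.

```python
def volume_preserving_scale(points, s):
    """Scale x and y by s and compensate z by 1/s^2, so the
    bounding volume of an axis-aligned shape is preserved.
    A toy stand-in for constant-volume deformation."""
    return [(x * s, y * s, z / (s * s)) for (x, y, z) in points]

# Unit cube corners: bounding volume before = 1.0
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
stretched = volume_preserving_scale(cube, 2.0)
# Bounding volume after: 2 * 2 * 0.25, i.e. still 1.0
```

Real systems preserve volume under arbitrary, local deformations rather than a global axis-aligned scale, but the invariant being maintained is the same.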

Secondly, the basic role of these high-level models is to generate detailed content from minimal user input, thus saving users’ time on predictable or repetitive aspects. Achieving this goal requires the development of efficient procedural generation algorithms. For instance, designing with high-level models for cloth or paper should free the user from having to manually design plausible geometric folds; setting a human settlement over a terrain should mean giving the strategy for land occupation, while leaving a procedural village model to spread according to local resources and terrain geometry; animating complex, composite objects should only require the specification of their material composition map, letting physically-based animation – the most popular example of a procedural method – generate plausible, detailed deformation under interaction.
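As a toy illustration of such procedural generation (hypothetical names and a deliberately crude rule, not the project's actual village model), the following Python sketch lets the user specify only a land-occupation strategy – here, a slope threshold – and derives house placement from terrain geometry automatically:

```python
def settle(heightmap, max_slope):
    """Place a house on every cell whose local slope is below max_slope.
    The user supplies only the strategy (the threshold); placement on
    the terrain is fully procedural."""
    rows, cols = len(heightmap), len(heightmap[0])
    houses = []
    for i in range(rows):
        for j in range(cols):
            # Approximate slope as the largest height difference
            # to a 4-connected neighbour.
            slope = max(
                abs(heightmap[i][j] - heightmap[ni][nj])
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                if 0 <= ni < rows and 0 <= nj < cols
            )
            if slope <= max_slope:
                houses.append((i, j))
    return houses

terrain = [
    [0, 0, 1],
    [0, 1, 3],
    [0, 2, 5],
]
houses = settle(terrain, 1)  # houses cluster on the flat corner
```

A real settlement model would also account for resources, roads and inter-house constraints, but the division of labour is the same: strategy from the user, placement from the algorithm.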

Lastly, advanced transfer tools and editing techniques should be developed to enable quick, intuitive setting and fine-tuning of the models. Transfer tools aim at allowing the re-use of existing content, in terms of global setting, details, or style. This is a real challenge where dynamic, composite models are concerned: for instance, how can a garment automatically be transferred to a creature of different morphology while still looking the same? Similarly, to be intuitive, editing should maintain the main cognitive features of a model. For instance, deforming the bounding volume of crumpled paper should increase the amount of paper while maintaining its developable nature; and stretching a table with plates and glasses should ideally stretch the table geometry, but duplicate the objects on it.
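The table-stretching example above can be sketched in a few lines of Python (a hypothetical 1-D toy, not the project's editing tool): the table top is rescaled geometrically, while the objects on it are duplicated rather than stretched, preserving their cognitive identity.

```python
def stretch_table(length, items, new_length):
    """Structure-aware stretch: rescale the table top, but tile copies
    of the items on it at their original spacing instead of
    deforming them."""
    spacing = length / len(items)                # original item spacing
    count = max(1, round(new_length / spacing))  # keep spacing, add copies
    return new_length, [items[0]] * count

# Doubling a 2-unit table with two plates yields four plates, not
# two stretched ones.
new_len, new_items = stretch_table(2.0, ["plate", "plate"], 4.0)
```

The design choice here is the crux of intuitive editing: the edit operates on the semantics of the scene (a table carrying discrete objects), not on its raw geometry.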

Creating and experimenting with interactive virtual prototypes

Our second focus is the development of real-time environments where users can seamlessly create models, play with them and edit them, ideally with no, or only a very short, learning stage. Developing this axis will provide the necessary test-bed for the high-level models and algorithms we just presented. It will also give us the opportunity to develop and validate general design principles for intuitive creation. Lastly, it will enable us to apply our work to a variety of practical cases, listed in the application section below.

The principles that will drive this part of our work, detailed below, are the following:

  1. Allow users to design animated prototypes and experiment with them within the same system.

  2. Enable intuitive, gesture-based interaction.

  3. Ensure real-time response from the system in both “editing” and “play” modes.

In current modeling pipelines, animated 3D content typically goes through several different digital or non-digital media, costing user time and effort. In contrast, we believe that these stages should ideally be performed within the same system, from early draft through shape and motion refinement, processing, and post-processing. This will enable users to iteratively refine their design thanks to immediate visual feedback and to the ability to interact with their prototypes at any stage, before further editing and refining them.

Our second principle is to design our new generation of interaction tools from a user-centered perspective: the idea is to conduct preliminary studies of spontaneous design and editing gestures, and to transparently drive the model and tool parameters from this interaction, in order to best match users’ expectations. We have already started to experiment with this paradigm for intuitive shape editing in 2D and 3D. Much remains to be done to extend this approach to the intuitive design of motions and stories.

Lastly, one of the most important features of effective creation tools is real-time response at every stage. Creative design is a matter of trial and error, and we believe that creation takes place more easily when users can immediately see and play with a first version of what they have in mind, which then serves as a support for refining their thoughts. For perhaps the first time, the goal is to provide such interactive sculpting media not only for static shapes, but also for dynamic ones. In previous years, EVASION developed a methodology that combines layered models, adaptive degrees of freedom and GPU processing to achieve this goal.
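As a toy illustration of adaptive degrees of freedom (hypothetical names, vastly simpler than the actual layered models), the following Python sketch refines a sampled curve only where it varies sharply, concentrating computation where detail is needed and keeping the rest cheap:

```python
def adapt_resolution(samples, threshold):
    """Insert a midpoint only between samples whose values differ by
    more than threshold: degrees of freedom are spent where the
    signal changes, not uniformly."""
    refined = [samples[0]]
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        if abs(y1 - y0) > threshold:  # large variation: refine here
            refined.append(((x0 + x1) / 2, (y0 + y1) / 2))
        refined.append((x1, y1))
    return refined

# Flat region stays coarse; the steep segment gains a sample.
curve = [(0, 0.0), (1, 0.1), (2, 2.0)]
refined = adapt_resolution(curve, 0.5)
```

In an interactive simulation the same principle is applied in time as well as space, adding and removing degrees of freedom on the fly to stay within the real-time budget.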