

Section: Partnerships and Cooperations

European Initiatives

FP7 & H2020 Projects

D3: Drawing Interpretation for 3D Design

Participants: Yulia Gryaditskaya, Tibor Stanko, Bastien Wailly, David Jourdan, Adrien Bousseau, Felix Hähnlein.

Line drawing is a fundamental tool for designers to quickly visualize 3D concepts. The goal of this ERC project is to develop algorithms capable of understanding design drawings. The first 30 months of the project allowed us to make significant progress in our understanding of how designers draw, and to propose preliminary solutions to the challenge of reconstructing 3D shapes from design drawings.

To better understand design sketching, we have collected a dataset of more than 400 professional design sketches [17]. We manually labeled the drawing techniques used in each sketch, and we registered all sketches to reference 3D models. Analyzing this data revealed systematic strategies employed by designers to convey 3D shapes, which will inspire the development of novel algorithms for drawing interpretation. In addition, our annotated sketches and associated 3D models form a challenging benchmark to test existing methods.

We proposed several methods to recover 3D information from drawings. A first family of methods employs deep learning to predict what 3D shape is represented in a drawing. We applied this strategy in the context of architectural design, where we reconstruct 3D buildings by recognizing their constituent components (building mass, facade, windows). We also presented an interactive system that allows users to create 3D objects by drawing from multiple viewpoints [14]. The second family of methods leverages geometric properties of the drawn lines to optimize the 3D reconstruction. In particular, we exploited properties of developable surfaces to reconstruct sketches of fashion items.

A long-term goal of our research is to evaluate the physical validity of a concept directly from a drawing. We obtained promising results towards this goal for the particular case of mechanical objects. We proposed an interactive system where users design the shape and motion of an articulated object, and our method automatically synthesizes a mechanism that animates the object while avoiding collisions [18]. The geometry synthesized by our method is ready to be fabricated for rapid prototyping.

ERC FunGraph

Participants: George Drettakis, Thomas Leimkühler, Sébastien Morgenthaler, Rada Deeb, Stavros Diolatzis, Siddhant Prakash, Simon Rodriguez, Julien Philip.

The ERC Advanced Grant FunGraph proposes a new methodology by introducing the concepts of rendering and input uncertainty. We define output or rendering uncertainty as the expected error of a rendering solution over the parameters and algorithmic components used with respect to an ideal image, and input uncertainty as the expected error of the content over the different parameters involved in its generation, compared to an ideal scene being represented. Here the ideal scene is a perfectly accurate model of the real world, i.e., its geometry, materials and lights; the ideal image is an infinite resolution, high-dynamic range image of this scene.
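As a rough formalization of these two definitions (the notation below is ours, not taken from the project description), both uncertainties can be written as expected errors against the ideal scene and ideal image:

```latex
% Rendering (output) uncertainty of an algorithm A with parameters \theta,
% measured against the ideal image I^* of the ideal scene S^*:
U_{\mathrm{render}}(A) = \mathbb{E}_{\theta \sim \Theta}\!\left[\, d\!\left( R_A(S^*; \theta),\; I^* \right) \right]

% Input (content) uncertainty of a capture/generation process C with
% parameters \varphi, measured against the ideal scene S^*:
U_{\mathrm{input}} = \mathbb{E}_{\varphi \sim \Phi}\!\left[\, d_S\!\left( C(\varphi),\; S^* \right) \right]
```

Here R_A denotes the rendering algorithm, C the content-generation process, and d and d_S are error metrics in image space and scene space respectively; all are placeholder symbols introduced only to make the verbal definitions concrete.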

By introducing methods to estimate rendering uncertainty we will quantify the expected error of previously incompatible rendering components with a unique methodology for accurate, approximate and image-based renderers. This will allow FunGraph to define unified rendering algorithms that can exploit the advantages of these very different approaches in a single algorithmic framework, providing a fundamentally different approach to rendering. A key component of these solutions is the use of captured content: we will develop methods to estimate input uncertainty and to propagate it to the unified rendering algorithms, allowing this content to be exploited by all rendering approaches.

The goal of FunGraph is to fundamentally transform computer graphics rendering, by providing a solid theoretical framework based on uncertainty to develop a new generation of rendering algorithms. These algorithms will fully exploit the spectacular – but previously disparate and disjoint – advances in rendering, and benefit from the enormous wealth offered by constantly improving captured input content.

Emotive

Participants: Julien Philip, Sebastiàn Vizcay, George Drettakis.

https://emotiveproject.eu/

  • Type: COOPERATION (ICT)

  • Instrument: Research Innovation Action

  • Objective: Virtual Heritage

  • Duration: November 2016 - October 2019

  • Coordinator: EXUS SA (UK)

  • Partners: Diginext (FR), ATHENA (GR), Noho (IRL), U Glasgow (UK), U York (UK)

  • Inria contact: George Drettakis

  • Abstract: Storytelling applies to nearly everything we do. Everybody uses stories, from educators to marketers and from politicians to journalists, to inform, persuade, entertain, motivate or inspire. In the cultural heritage sector, however, narrative tends to be used narrowly, as a method to communicate to the public the findings and research conducted by the domain experts of a cultural site or collection. The principal objective of the EMOTIVE project is to research, design, develop and evaluate methods and tools that can support the cultural and creative industries in creating Virtual Museums which draw on the power of 'emotive storytelling'. This means storytelling that can engage visitors, trigger their emotions, connect them to other people around the world, and enhance their understanding, imagination and, ultimately, their experience of cultural sites and content. EMOTIVE did this by providing authors of cultural products with the means to create high-quality, interactive, personalized digital stories. The project was evaluated in December with very positive initial feedback.

    GRAPHDECO contributed by developing novel image-based rendering (IBR) techniques to help museum curators and archeologists provide more engaging experiences. We developed a mixed-reality plugin for Unity that enables the use of IBR, and, in collaboration with ATHENA, a VR experience based on a VIVE HMD for one of the EMOTIVE user experiences. This demo was presented at a public event in November in Glasgow, where it was used by over 25 museum professionals with very positive feedback.