Today, numerical geometry has a dramatic impact on the way designers and engineers work in every industrial activity. In the aerospace industry, for example, sketching and design, aerodynamic
simulation, mechanical and structural engineering, manufacturing, as well as project review, pilot training, ergonomic studies and maintenance operations all rely on geometric
models and numerical simulations. Moreover, despite an efficient commercial offering in this domain, the worldwide market for this technology has kept growing in recent years. Because
of increased international competition, industrial production requires more adaptability and flexibility, more design and engineering cycles in a given time, and more precision and robustness.
Geometric modeling and computing is a key point in all
*Computer Aided Design* applications required in the industrial process. The main objectives of the CAD Project are to propose
*efficient mathematical representations* as well as
*robust algorithms* that allow the creation of complex shapes and high-precision geometric operations.

Moreover, starting this year, the Project also focuses on some
*Computer Graphics* applications, mainly rendering and computer animation.

The young Computer Graphics members who joined us this year published high-level papers (see New Results) and obtained two major successes:

Organizing the international ACM SIGGRAPH VRCAI 2011 conference in Hong Kong;

A cooperation project with CAS-BEGCL Imaging Technology Corporation on 3D computer animation movies.

During CAD processes, one uses a myriad of tolerances, many of which are not directly related to the actual manufacturing process. Some interesting questions here include: what are the most relevant machining tolerances?

How should the many computational tolerances, e.g. those of systems of equations, be set to guarantee machining within the required accuracy? How are tolerances in different spaces, e.g. model space and parameter space, related? That is, how should the parameter-space tolerance be set in order to guarantee model-space accuracy? How are tolerances in different arrangements, e.g. parallel and perpendicular lines, related? That is, should we set different tolerances for parallel and perpendicular lines, and if so, how should they relate to one another?
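To make the parameter-space versus model-space question concrete, one standard device is a bound on the curve's derivative: a parameter perturbation of size ε moves a point by at most ε·max|C'(t)|. The sketch below is our illustration, not a prescription from the text; it applies the classical derivative bound for a cubic Bézier curve.

```python
# Relating model-space accuracy to a parameter-space tolerance for a
# cubic Bezier curve C(t): since |C'(t)| <= 3 * max|P[i+1] - P[i]|,
# a parameter step of eps_model / bound moves the point by at most eps_model.
import math

def derivative_bound(ctrl):
    """Upper bound on |C'(t)| for a cubic Bezier with control points ctrl."""
    return 3.0 * max(math.dist(ctrl[i], ctrl[i + 1]) for i in range(3))

def parameter_tolerance(ctrl, eps_model):
    """Parameter-space tolerance that guarantees eps_model model-space accuracy."""
    return eps_model / derivative_bound(ctrl)

ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
eps_param = parameter_tolerance(ctrl, 1e-6)
```

The same derivative-bound reasoning extends to surfaces by bounding the partial derivatives in each parameter direction.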

Numerical instabilities also account for the majority of computational errors in commercial CAD systems. The problems related to robustness haunt every programmer who has ever worked on commercial systems. Fixing numerical bugs can be very frustrating, and oftentimes results in patching up the code simply because no solution exists to remedy the problem. Current efforts in interval arithmetic and fuzzy logic may look appealing; however, they may open up new problems in the process of solving old ones. What would be of significant help is to know as much as possible about the entities being computed on. For example, if we know how far we are from a root, i.e. where the guess point lies in relation to the root, then the method can be adjusted to guarantee convergence.
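The remark about knowing where the guess lies relative to the root is exactly what bracketing methods exploit. As a minimal illustration (not a method proposed in the text), bisection converges unconditionally once a sign change brackets the root:

```python
# If an interval [a, b] is known to bracket the root (f(a) and f(b) have
# opposite signs), bisection converges unconditionally -- unlike Newton's
# method, whose behavior depends on where the guess lies relative to the root.
def bisect(f, a, b, tol=1e-12):
    fa, fb = f(a), f(b)
    assert fa * fb <= 0, "interval must bracket a root"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:       # root lies in the left half
            b, fb = m, fm
        else:                  # root lies in the right half
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)  # converges to sqrt(2)
```

Robust solvers commonly combine such a guaranteed bracketing step with a fast locally convergent method.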

Finally, algorithms are also the places in any CAD process where inherently "dirty" data, e.g. a point cloud, gets received and converted into another set of data, e.g. an STL file, for manufacturing. The issue of data processing will be looked at later; however, what must be understood at the outset is that proper algorithm design begins with the selection of the right solution.

Although geometric uncertainties are related to robustness and tolerance, a number of additional issues are well worth deeper investigation.

Geometric arrangements are full of special cases. The most notable ones are: cases of touch, overlapping, containment, etc.; cases of parallelism, perpendicularity, coincidence, etc.; axes of symmetrical data, data clustering, dense or sparse data, etc.; cases of degeneracy, discontinuity, inconsistencies, etc.; problems with cracks, excess material, lack of detail, etc. In just about any code that deals with geometry, the number of special cases is significantly larger than the number of general ones.

Data explosion is the result of careless selection of methods, e.g. parameter-space-based sampling, and improper implementation, e.g. recursive algorithms. Some of the relevant issues are: sampling (oversampling, sampling in incorrect places, etc.); procedural definitions (e.g. lofting a large set of curves may result in an explosion of control points); excess data on input that may get magnified further until it fills available memory; improper data structures (e.g. arrays of fixed size holding very little data); and non-compacted databases used for further processing.

Last but not least, although CAD processes are supposed to produce valid and "made to order" models, the reality is that most (if not all) models are rough and require post-processing, i.e.
beautification. Some of the most frequently needed tasks are: removing unwanted edges, corners, cracks, etc.; removing bumps, oscillations, curvature extremes, etc.; healing incorrect models,
e.g. removing holes in triangulations; smoothing, fairing, re-shaping, etc.
*Computer-Aided Design*, Elsevier, 2007.

The application domains of Computer Aided Design are the aircraft industry, car industry, oil and gas industry, architecture and civil engineering, NC simulation, etc. The applications of Computer Graphics are mainly video games and the film industry.

T-GEMS is a Geometric Kernel for modeling curves and surfaces.

We have built a 3D ink simulation plug-in for Autodesk Maya.

A new approach for cubic B-spline curve approximation is presented. The method produces an approximating cubic B-spline curve tangent to a given curve at a set of selected positions, called tangent points, in a piecewise manner starting from a seed segment. A heuristic method is provided to select the tangent points. The first segment of the approximating cubic B-spline curve can be obtained using an inner point interpolation method, a least-squares method or a geometric Hermite method as a seed segment. The approximating curve is further extended to the other tangent points one by one by curve unclamping. New tangent points can also be added, if necessary, by using the concept of the minimum shape deformation angle of an inner point for better approximation. Numerical examples show that the new method is effective in approximating a given curve and is efficient in computation.

This paper presents a geometric pruning method for computing the Hausdorff distance between two B-spline curves. It presents a heuristic method for obtaining the one-sided Hausdorff distance in some interval as a lower bound of the Hausdorff distance, which is also possibly the exact Hausdorff distance. Then, an estimation of the upper bound of the Hausdorff distance in a sub-interval is given, which is used to eliminate the sub-intervals whose upper bounds are smaller than the current lower bound. Conditions for whether the Hausdorff distance occurs at an end point of the two curves are also provided. These conditions are used to turn the Hausdorff distance computation problem between two curves into a minimum or maximum distance computation problem between a point and a curve, which can be solved reliably. A pruning technique based on several other elimination criteria is utilized to improve the efficiency of the new method. Numerical examples illustrate the efficiency and the robustness of the new method.

A novel and efficient quasi-Monte Carlo method for estimating the surface area of digitized 3D objects in the volumetric representation is presented. It operates directly on the original digitized objects without any surface reconstruction procedure. Based on the Cauchy-Crofton formula from integral geometry, the method estimates the surface area of a volumetric object by counting the number of intersection points between the object's boundary surface and a set of uniformly distributed lines generated with low-discrepancy sequences. Using a clustering technique, we also propose an effective algorithm for computing the intersection of a line with the boundary surface of volumetric objects. A number of digitized objects are used to evaluate the performance of the new method for surface area measurement.
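The low-discrepancy line sets mentioned above are typically built from radical-inverse sequences such as Halton's; the following minimal sketch shows only that ingredient (the line generation and the Cauchy-Crofton normalization of the paper are omitted here):

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse: reflect the base-b digits of i
    about the radix point, giving a low-discrepancy value in [0, 1)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton(i):
    """i-th point of the 2D Halton sequence (coprime bases 2 and 3)."""
    return radical_inverse(i, 2), radical_inverse(i, 3)

points = [halton(i) for i in range(1, 9)]  # evenly spread over the unit square
```

Such points parameterize directions and offsets of the sampling lines far more evenly than pseudo-random numbers, which is what makes the quasi-Monte Carlo estimate converge faster.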

Reverse engineering for NC machining simulation is becoming an important component of NC simulation and verification. Design engineers need a more accurate and complete CAD model of the simulated machined part for finite element analysis or parametric feature-based modeling for design modification or update. The as-cut or in-process geometry should be correctly accessible in the CAD/CAM environment at any stage of the machining process. Few commercial software packages address the reverse engineering issue and provide robust solutions. Until now, in-process CAD models for NC simulation have been created with many drawbacks, and the methods proposed are inaccurate. Reverse engineering for NC machining simulation based on polyhedral in-process geometry is addressed here. Two complementary approaches are presented: an enriched representation embedded in the "Spring Technologies Reverse Engineering" (SRE) file format enables converting the polyhedral model to a STEP file, and a discrete shape recognition and segmentation approach provides a promising avenue thanks to discrete differential geometry.

Curves on surfaces play an important role in computer aided geometric design. In this paper, we present a hyperbola approximation method based on the quadratic reparameterization of Bezier
surfaces, which generates reasonable low degree curves lying completely on the surfaces by using iso-parameter curves of the reparameterized surfaces. The Hausdorff distance between the
projected curve and the original curve is controlled under the user-specified distance tolerance. The projected curve is

An approach to a kind of parametric transform of trimmed parametric surfaces is presented. Firstly, the characteristics of a trimmed surface before and after the parametric transform are evaluated. Then, an algorithm is proposed to adjust the geometric and topological data of a trimmed surface in order to achieve consistency. Finally, a trimmed sphere surface is taken as an example to further illustrate the algorithm.

Directional distance is commonly used in geographical information systems as a measure of openness. In previous works, the sweep line method and the interval tree method have been employed to evaluate directional distances on vector maps. Both methods require rotating the original maps and study points in every direction of interest. In this article, we propose a cell-based algorithm that pre-processes a map only once; that is, it subdivides the map into a group of uniform-sized cells and records each borderline of the map into the cells traversed by its corresponding line segment. Based on the pre-processing result, the neighbouring borderlines of a study point can be directly obtained through the neighbouring cells of the point, and the borderlines in a definite direction can be simply acquired through the cells traversed by the half line as well. As a result, the processing step does not need to enumerate all the borderlines of the map when determining whether a point is on a borderline or finding the nearest intersection between a half line and the borderlines. Furthermore, we implement the algorithm for determining fetch length in coastal environments. Once the pre-processing is done, the algorithm can work in a complex archipelago environment, for example to calculate fetch lengths in multiple directions, to determine the inclusion property of a point, and to deal with the singularity of a study point on a borderline.

This paper presents a numerically stable solution to the point-in-polygon problem by combining the orientation method and the uniform subdivision technique. We first define a quasi-closest point that can be locally found through the uniform subdivision cells, and then we provide the criteria for determining whether a point lies inside a polygon according to the quasi-closest point. For a large number of points to be tested against the same polygon, the criteria are employed to determine the inclusion property of an empty cell as well as a test point. The experimental tests show that the new method resolves the singularity of a test point on an edge without loss of efficiency. A GIS case study also demonstrates the capability of the method to identify which region of a map contains a test point.
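For contrast with the stabilized method described above, a minimal baseline is the classic even-odd ray-crossing test, which is simple but exhibits exactly the edge-singularity issues such papers address (a sketch for illustration only, without the subdivision-cell acceleration):

```python
def point_in_polygon(pt, poly):
    """Classic even-odd ray-crossing test: shoot a horizontal ray to the
    right and count edge crossings; an odd count means the point is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y-level
            # x-coordinate where the edge crosses the horizontal ray
            xc = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xc:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

A query point lying exactly on an edge or vertex can flip the count either way, which is precisely the singularity that a quasi-closest-point orientation test resolves.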

Background: Many molecules are flexible and undergo significant shape deformation as part of their function, yet most existing molecular shape comparison (MSC) methods treat them as rigid bodies, which may lead to incorrect shape recognition. In this paper, we present a new shape descriptor, named Diffusion Distance Shape Descriptor (DDSD), for comparing 3D shapes of flexible molecules. The diffusion distance in our work is considered as an average length of paths connecting two landmark points on the molecular shape in the sense of inner distances. The diffusion distance is robust to flexible shape deformation, in particular to topological changes, and it reflects the molecular structure and deformation well without explicit decomposition. Our DDSD is stored as a histogram, which is a probability distribution of diffusion distances between all sample point pairs on the molecular surface. Finally, the problem of flexible MSC is reduced to the comparison of DDSD histograms.


A wide range of applications in computer intelligence and computer graphics require computing geodesics accurately and efficiently. The fast marching method (FMM) is widely used to solve
this problem, of which the complexity is


Point projection on an implicit surface is essential for geometric modeling and graphics applications. This paper presents a method for computing the principal curvatures and principal directions of an implicit surface. Using the principal curvatures and principal directions, we construct a torus patch to approximate the implicit surface locally. The torus patch is second-order osculating to the implicit surface. By taking advantage of the approximating torus patch, this paper develops a second-order geometric iterative algorithm for point projection on the implicit surface. Experiments illustrate the efficiency and the low dependency on initial values of our algorithm.
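The second-order torus-patch iteration itself is not reproduced here, but a first-order baseline conveys the setting: project a point onto an implicit surface f = 0 by stepping along the gradient (our simplified sketch, not the paper's algorithm):

```python
# First-order projection of a point onto an implicit surface f = 0 by
# repeatedly stepping along the gradient: x <- x - f(x) * grad / |grad|^2.
# (A simpler baseline than a second-order torus-patch iteration.)
def project(f, grad, x, iters=50):
    for _ in range(iters):
        g = grad(x)
        g2 = sum(gi * gi for gi in g)       # squared gradient norm
        fx = f(x)
        x = tuple(xi - fx * gi / g2 for xi, gi in zip(x, g))
    return x

# Unit sphere: f(x, y, z) = x^2 + y^2 + z^2 - 1
f = lambda p: p[0] ** 2 + p[1] ** 2 + p[2] ** 2 - 1.0
grad = lambda p: (2 * p[0], 2 * p[1], 2 * p[2])
p = project(f, grad, (3.0, 0.0, 0.0))  # converges to (1, 0, 0)
```

A second-order method replaces the gradient step by projection onto a locally osculating surface, which converges in fewer iterations and is less sensitive to the starting point.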


This paper proposes a new shape similarity assessment approach for CAD models in Boundary Representation (Brep) based on graph edit distance. A suboptimal computational procedure is performed to find the best alignment between local structures sets of attributed graphs derived from models. Assuming that only a minority of local structures characterize the functionality, we figure out the weight of every local structure in the query model through a training phase, and then evaluate the similarity between two models by calculating the weighted graph edit distance of corresponding attributed graphs. Experiment results show that our method provides solid retrieval performance on a real-world CAD model database.

Sectional views are widely used in engineering practice due to their clear and concise expression. However, they are difficult for computers to understand because of the large number of omitted entities and their diversified representations. This paper aims at reconstructing 3D models from 2D sectional views by improving the traditional volume-based method. First, we present a two-stage loop searching algorithm to extract the desired loops from sectional views. Then, sub-objects are identified by a hint-based feature identification algorithm with an intuitive loop-matching criterion. After that, a model-directed algorithm is proposed to guide the generation of sub-objects, which are assembled together to form the final objects. The algorithm can handle full sections, partial sections and offset sections, as well as orthographic views. Multiple sectional views are supported. Moreover, the domain of objects is extended to inclined quadric surfaces and intersecting quadric surfaces with higher-order curves. Experimental results show its practicality.

This paper presents a necessary and sufficient condition to judge whether two cubic Bézier curves are coincident. Two cubic Bézier curves whose control points are not collinear are coincident if and only if their corresponding control points are coincident or one curve is the reversal of the other. However, this is not true for degrees higher than 3; this paper provides a set of counterexamples of degree 4.
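The stated condition translates directly into a test on control polygons; a small sketch (a hypothetical helper of ours, with a numeric tolerance added for floating-point input, and applicable only when the control points are not collinear):

```python
# Coincidence test for two cubic Bezier curves with non-collinear control
# points, following the stated condition: the control polygons are equal,
# or one is the reversal of the other.
def coincident_cubics(P, Q, tol=1e-12):
    same = all(abs(p - q) <= tol
               for pt, qt in zip(P, Q) for p, q in zip(pt, qt))
    rev = all(abs(p - q) <= tol
              for pt, qt in zip(P, reversed(Q)) for p, q in zip(pt, qt))
    return same or rev

P = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.0), (3.0, 0.0)]
Q = list(reversed(P))  # the same curve traversed backwards
```

For degree 4 and above the paper shows this control-point criterion no longer characterizes coincidence, so the sketch must not be applied there.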

Registration of point clouds is a fundamental problem in shape acquisition and shape modeling. In this paper, a novel technique, the sample-sphere method, is proposed to register a pair of
point clouds in arbitrary initial positions. This method roughly aligns point clouds by matching pairs of triplets of points, which are approximately congruent under rigid transformation. For
a given triplet of points, this method can find all its approximately congruent triplets in

This paper presents a new algorithm for torus/torus intersection. The pre-image of the intersection in the parametric space of one torus is represented by an implicit equation. The pre-image is divided into single-valued function curve segments by characteristic points. The topological features of these characteristic points are analyzed to obtain the structure of the pre-image. Intersection curves satisfying the required precision are found by a self-adaptive refinement method. Experimental results are presented to illustrate the stability and efficiency of the method.

This paper presents a practical polyline approach for approximating the Hausdorff distance between planar free-form curves. After the input curves are approximated with polylines using a recursive splitting method, the precise Hausdorff distance between the polylines is computed as an approximation of the Hausdorff distance between the free-form curves, and the approximation error is controllable. The computation of the Hausdorff distance between polylines is based on an incremental algorithm that computes the directed Hausdorff distance from a line segment to a polyline. Furthermore, not every segment on the polylines contributes to the final Hausdorff distance: based on the bound properties of the Hausdorff distance and the continuity of polylines, two pruning strategies are applied in order to prune useless segments, and an R-Tree structure is employed to accelerate the pruning process. We tested our algorithm on sets of Bézier, B-spline and NURBS curves respectively; on average, 95% of the segments of the approximating polylines are pruned. Two comparisons are also presented: one with an algorithm computing the directed Hausdorff distance on polylines by building a Voronoi diagram of segments, the other with an equation-solving method for computing the Hausdorff distance between free-form curves.
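As a reference point for the pruning strategies above, here is the unaccelerated baseline: a brute-force directed Hausdorff distance from the vertices of one polyline to the segments of another (a simplified sketch of ours, not the paper's incremental algorithm):

```python
# Brute-force directed Hausdorff distance from the vertices of polyline A
# to polyline B, using exact point-to-segment distances and no pruning.
def seg_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        t = 0.0                      # degenerate segment
    else:
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy)
                               / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def directed_hausdorff(A, B):
    return max(min(seg_dist(p, B[j], B[j + 1]) for j in range(len(B) - 1))
               for p in A)

A = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
B = [(0.0, 0.0), (2.0, 0.0)]
d = directed_hausdorff(A, B)  # 1.0: every vertex of A lies 1 above B
```

The quadratic cost of this baseline is what segment pruning and the R-Tree index are designed to avoid.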

This paper proposes an algorithm for calculating the orthogonal projection of parametric curves onto B-spline surfaces. It consists of a second order tracing method with which we construct
a polyline to approximate the pre-image curve of the orthogonal projection curve in the parametric domain of the base surface. The final 3D approximate curve is obtained by mapping the
approximate polyline onto the base surface. The Hausdorff distance between the exact orthogonal projection curve and the approximate curve is controlled under the user-specified distance
tolerance. And the continuity of the approximate curve is

Compared with the CSG-based approach, the Brep-based approach has several advantages for constructing 3D models from 2D engineering drawings: the structure is simpler and the domain of objects that can be handled is wider. However, this approach cannot handle sectional views directly. In this paper, a new method of converting sectional views to three orthographic views is presented. Firstly, the views which have the same projection direction are merged into one view. If the number of views is two, then a new view is added according to the coordinate relations. Secondly, elements which have been omitted in sectional views are recovered according to the matching information of the existing edges. Finally, the existing Brep-based approach is used to reconstruct the 3D models. The algorithm can handle full sections, broken-out sections and offset sections, as well as two orthographic views. The algorithm has been validated by experiments.

The minimal distance computation between two tori is the basis of their collision detection and intersection. A method is proposed for discriminating three types of positional relationship (i.e., inclusion, disjunction and intersection) between two tori, and for computing their minimal distance. This paper proves that the Hausdorff distance between two circles in three-dimensional space can be obtained by computing their collinear normal points, which can be calculated by solving an equation of degree 8. With classification and comparison of the collinear normal points, the minimum distance and the Hausdorff distance between these two circles are obtained. In addition, this paper proves that the positional relationship between two tori relates not only to the minimum distance but also to the directed Hausdorff distance between their central circles. The minimum distance between two tori is then calculated. Numerical results are presented to illustrate the stability and efficiency of the method.

A new visibility-graph-based algorithm is presented for computing the inner distances of a 3D shape represented by a volumetric model. The inner distance is defined as the length of the shortest path between landmark points within the shape. The inner distance is robust to articulation and reflects the deformation of a shape structure well without an explicit decomposition. Our method is based on the visibility graph approach. To check the visibility between pairwise points, we propose a novel, fast, and robust visibility checking algorithm based on a clustering technique which operates directly on the volumetric model without any surface reconstruction procedure, where an octree is used to accelerate the computation. The inner distance can be used as a replacement for other distance measures to build a more accurate description of complex shapes, especially those with articulated parts.
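The notion of inner distance can be illustrated on a toy occupancy grid, where the shortest path is confined to the shape's interior; the BFS below is a simplified stand-in for the visibility-graph machinery (an illustration of ours, not the paper's algorithm):

```python
# Inner distance as the shortest path *through* a shape: BFS on a small
# occupancy grid (1 = inside the shape), a volumetric toy example.
from collections import deque

def inner_distance(grid, start, goal):
    h, w = len(grid), len(grid[0])
    dist = {start: 0}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:
            return dist[(r, c)]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] \
                    and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return None  # goal unreachable within the shape

# U-shaped region: the inner distance between the two prongs is much
# larger than their Euclidean distance, reflecting the articulated shape.
grid = [[1, 0, 1],
        [1, 0, 1],
        [1, 1, 1]]
d = inner_distance(grid, (0, 0), (0, 2))
```

On real volumetric models the visibility graph replaces this grid adjacency with straight-line visibility edges, so path lengths approximate geodesic rather than taxicab distances.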

Assembly data exchange and reuse play an important role in CAD and CAM in shortening the product development cycle. However, current CAD systems cannot transfer mating conditions via a neutral file format, and their exported IGES files are heterogeneous. In this paper, a schema for the full data exchange of assemblies is presented based on IGES. We first design algorithms for the pre- and post-processors of parts based on the solid model, in which the topologies are explicitly specified and will be referred to by mating conditions, and then extend the IGES schema by introducing the Associativity Definition Entity and Associativity Instance Entity defined in the IGES standard, so as to represent mating conditions. Finally, a production-rule-based method is proposed to analyze and design the data exchange algorithms for assemblies. Within this schema, the heterogeneous representations of assemblies exported from different CAD systems can be processed appropriately, and the mating conditions can be properly exchanged. Experiments on the prototype system verify the robustness, correctness, and flexibility of our schema.

3D shape normalization is a common task in various computer graphics and pattern recognition applications. It aims to normalize different objects into a canonical coordinate frame with respect to rigid transformations containing translation, rotation and scaling in order to guarantee a unique representation. However, the conventional normalization approaches do not perform well when dealing with 3D articulated objects. To address this issue, we introduce a new method for normalizing a 3D articulated object in the volumetric form. We use techniques from robust statistics to guide the classical normalization computation. The key idea is to estimate the initial normalization by using implicit shape representation, which produces a novel articulation-insensitive weight function to reduce the influence of articulated deformation. We also propose and prove the articulation insensitivity of implicit shape representation. The final solution is found by means of iteratively reweighted least squares. Our method is robust to articulated deformation without any explicit shape decomposition. Experimental results and some applications are presented to demonstrate the effectiveness of our method.

The summation of floating-point numbers is ubiquitous in computer systems, yet computation implemented in fixed-length floating-point arithmetic may lead to inaccurate results due to rounding error. This paper presents an efficient algorithm which produces a faithful result by combining mantissa splitting and error-free accumulation. Each summand is split into several parts with a limited number of significant bits, which ensures these parts can be accumulated without rounding error under certain conditions. In the implementation, we discuss how to obtain the exponent of a floating-point number quickly, which is key to deciding how to split each summand. Our method works on computers complying with the IEEE 754 standard. The running time of our algorithm is proportional to the size of the input data, according to both analysis and numerical tests.
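The mantissa-splitting scheme itself is not spelled out above, so as a stand-in the sketch below uses the classical error-free transformation TwoSum and the cascaded accumulation of Ogita, Rump and Oishi, which illustrates the same faithful-summation idea:

```python
# Error-free accumulation: TwoSum splits each addition into a rounded sum
# and its exact rounding error, which is accumulated separately (the
# Ogita-Rump-Oishi "Sum2" scheme, a stand-in for mantissa splitting).
def two_sum(a, b):
    """Return (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    ap = s - b          # recover the part of a that survived rounding
    bp = s - ap         # recover the part of b that survived rounding
    return s, (a - ap) + (b - bp)

def sum2(xs):
    s = e = 0.0
    for x in xs:
        s, err = two_sum(s, x)
        e += err        # accumulate the rounding errors separately
    return s + e

data = [1.0, 1e100, 1.0, -1e100]
# sum(data) returns 0.0 due to rounding; sum2(data) recovers the exact 2.0
```

TwoSum is exact in any IEEE 754 binary arithmetic with round-to-nearest, which is why such transformations are the usual building blocks of faithful summation.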

3DMolNavi is a web-based visualized navigation system developed for intuitively exploring flexible molecular shape retrieval. This system is based on the histogram of the Inner Distance Shape Signature (IDSS) for fast retrieval of molecules similar to a query molecule, and uses dimensionality reduction to navigate the retrieved results in 2D and 3D spaces.

Geometric algorithms are widely used in many scientific fields such as computer vision and computer graphics. To guarantee the correctness of these algorithms, it is important to apply formal methods to them. In this paper, we propose an approach to proving the correctness of geometric algorithms. The main contribution of the paper is a set of proof decomposition rules that helps improve the automation of proofs of geometric algorithms. We choose TLA+2, a structural specification and proof language, as our experiment environment. A case study on a classical convex hull algorithm shows the usability of the method.

This paper presents a new multi-resolution mesh fitting algorithm, extending the adaptive patch-based fitting scheme where each underlying quadrilateral is recursively subdivided into four
sub-patches. In this paper, the

In this paper, we present an example-driven symbol recognition algorithm based on key features in CAD engineering drawings. When the user provides an example of a specific symbol, the input symbol is analyzed and its features are extracted automatically. Based on the relation representation, a constrained tree with key-feature priority can be established for this type of symbol. By this means, the symbol library can be built and expanded automatically in order to handle a variety of engineering drawings. In the next stage of the recognition process, we first locate the key feature nodes in drawings, and then find the surrounding elements that satisfy the topological structure of the constrained tree. If all the elements and constraints in the tree are found, the symbol object is recognized. Thanks to the accurate positioning, unnecessary matching calculations are greatly reduced. Experimental results validate that our approach is effective.

Selecting the best views of 3D objects is useful for many applications. However, when the existing methods are applied to CAD models, the results neither exhibit the 3D structures of the models fairly nor conform to human browsing habits. In this paper, we present a robust method to generate the canonical views of CAD models, and the above problem is solved by considering the geometric and visually salient features simultaneously. We first demonstrate that for a CAD model, the three coordinate axes can be approximately determined by the scaled normals of its faces, such that the pose can be robustly normalized. A graph-based algorithm is also designed to accelerate the searching process. Then, a convex-hull-based method is applied to infer the upright orientation. Finally, four isometric views are selected as candidates, and the one whose depth image owns the most visual features is selected. Experiments on the Engineering Shape Benchmark (ESB) show that the views generated by our method are pleasant, informative and representative. We also apply our method to the calculation of model rectilinearity, and the results demonstrate its high performance.

This paper proposes a method of

Filling n-sided regions is an essential operation in shape and surface modeling, where positional and tangential continuity is highly desirable in design and manufacturing. We propose a
method for filling n-sided regions with untrimmed triangular Coons B-spline patches, preserving

The orbicular N-sided hole filling problem is usually introduced by filleting an end-point of a part with a large radius. The existing methods based on quadrilateral partition or
constrained optimization can rarely generate high-order continuous blending surfaces under these circumstances. This paper first reparameterizes the boundary of the specified orbicular
N-sided hole to ensure the compatibility of neighboring cross-boundary derivatives at the connecting points, preserving their

Extracting structural information from mesh models is crucial for Simulation Driven Design (SDD) in industrial applications. Focusing on thin-plate CAD mesh models (the most commonly used parts in electronic products such as PCs, mobile phones and so on), we present an algorithm based on primitive fitting for segmenting thin-plate CAD mesh models into parts of three different types, two of which are extruding surfaces and the other a lateral surface. This method can be used for solid model reconstruction in the SDD process. Our approach involves two steps. First, a completely automatic method for accurate primitive fitting on CAD meshes is proposed, based on the hierarchical primitive fitting framework. In the second step, a novel procedure is proposed for splitting thin-plate CAD mesh models by detecting parallel extruding surfaces and lateral surfaces. The method presented here has proved to work smoothly in real product design applications.

IGES is a widely used standard for mechanical data exchange. In this paper, we present a new method for the retrieval of IGES surface models. Based on this method, a novel distinctive face selection strategy is proposed and evaluated. In the training database, each model is treated as a set of disordered faces, and their features are extracted and stored respectively. The Discounted Cumulative Gain (DCG) value of each face is then calculated and stored for later use. To retrieve models in the testing database, we first forecast each face's DCG value by searching for its most similar face's DCG value in the training database, and then the top k faces with the highest forecasted DCGs are selected as the query input. A greedy algorithm is finally applied to obtain the total similarity. Experimental results show that our algorithm is superior or at least comparable to some of the most powerful methods in finding parts with similar functionality in most cases.

Generating smooth B-spline curves is a fundamental operation of computer aided geometric design. This paper presents a method to calculate unknown control points using specified control points and knots to generate a smooth B-spline curve. It is based on the basis-function-maximum-value parameterization introduced in this paper. The method first parameterizes all control points; then regards the given control points as data points to create a fitted curve by interpolation; and finally obtains the unknown control points by evaluating the corresponding parameters directly, which ensures the continuity and smoothness of the generated B-spline curve. The examples in the last section illustrate the feasibility of this method.
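Behind any such B-spline construction sits the Cox-de Boor recursion for the basis functions; a minimal sketch (the basis-function-maximum-value parameterization itself is not reproduced here):

```python
# Cox-de Boor recursion for B-spline basis functions N_{i,p}(t).
# Zero-width knot spans are guarded against division by zero.
def bspline_basis(i, p, t, knots):
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, t, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

# Cubic basis on a clamped knot vector: the 6 basis functions form a
# partition of unity over the valid parameter range.
knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]
total = sum(bspline_basis(i, 3, 1.5, knots) for i in range(6))  # = 1.0
```

The partition-of-unity property is what makes a B-spline curve a convex combination of its control points, and hence what ties control-point placement to curve smoothness.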

In this paper, we propose a new functionality-based benchmark for CAD model retrieval. Our benchmark contains 1968 frequently-used CAD models, divided into a training set and a test set. The models are carefully classified by their functionalities in industry. Eight different shape descriptors are then compared using four well-known evaluation measures. The results show that models having the same functionality do not necessarily share the same or similar shapes; hence functionality-based retrieval methods are encouraged, which we believe will greatly help improve design reusability. Some possible future work on 3D model retrieval in the mechanical domain is also proposed, based on the observations from our experiments.

This paper proposes a new shape similarity assessment approach for CAD models in Boundary Representation (Brep) based on graph edit distance. A suboptimal computational procedure is performed to find the best alignment between the local structure sets of attributed graphs derived from the models. Assuming that only a minority of local structures characterize the functionality, we determine the weight of every local structure in the query model through a training phase, and then evaluate the similarity between two models by calculating the weighted graph edit distance of the corresponding attributed graphs. Experimental results show that our method provides solid retrieval performance on a real-world CAD model database.

N-sided hole filling plays an important role in vertex blending. Piegl and Tiller presented an algorithm to interpolate the given boundary and cross-boundary derivatives in B-spline form. To deal with the incompatible cases that their algorithm cannot handle, we propose an extension method to manipulate the transition between sharp and rounded features. The algorithm first patches n crescent-shaped extended surfaces to the boundary.

This paper proposes a method to construct a B-spline surface that interpolates four specified groups of boundary derivative curves in B-spline form. The discontinuity can be bounded by an arbitrary geometric invariant as the tolerance. The method first handles the six types of compatibility problems by continuity-preserving reparameterization, knot insertion and local control-point tuning. The transformed boundary conditions are then parametrically compatible, so the Coons strategy can be applied to construct the final interpolant. Not only can the approach be used in reliable geometric modeling, it can also be applied to many other algorithms that require a compatibility guarantee.
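The Coons strategy mentioned above blends the four boundary curves bilinearly; a minimal point-wise sketch of the classical bilinearly blended Coons construction, with boundary curves given as callables (the paper's B-spline version, with derivative interpolation, is considerably more elaborate):

```python
def coons_point(c0, c1, d0, d1, u, v):
    """Bilinearly blended Coons patch evaluated at (u, v).
    c0(v), c1(v): the two side boundary curves (u = 0 and u = 1);
    d0(u), d1(u): the bottom and top boundary curves (v = 0 and v = 1).
    The curves must meet at the corners, e.g. c0(0) == d0(0)."""
    lerp = lambda a, b, t: tuple(x + t * (y - x) for x, y in zip(a, b))
    ruled_u = lerp(c0(v), c1(v), u)              # ruled surface in u
    ruled_v = lerp(d0(u), d1(u), v)              # ruled surface in v
    corners = lerp(lerp(d0(0), d0(1), u),        # bilinear corner patch
                   lerp(d1(0), d1(1), u), v)
    # Boolean sum: the two ruled surfaces minus the doubly counted corners
    return tuple(a + b - c for a, b, c in zip(ruled_u, ruled_v, corners))
```

On the boundaries the construction reproduces the input curves exactly, which is precisely the property that makes the compatibility preprocessing above worthwhile.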

This paper focuses on interpolating the vertices and normal vectors of a closed quad-dominant mesh G2-continuously using regular Coons B-spline surfaces, which are popular in industrial CAD/CAM systems. We first decompose all non-quadrangular facets into quadrilaterals. The tangential and second-order derivative vectors are then estimated at each vertex of the quads. A least-squares adjustment algorithm based on the homogeneous form of the G2 continuity condition is applied to achieve curvature continuity. Afterwards, the boundary curves and the first- and second-order cross-boundary derivative curves are constructed fulfilling G2 continuity and compatibility conditions. Coons B-spline patches are finally generated using these curves as boundary conditions. In this paper, the upper bound of the rank of the G2 continuity condition matrices is also strictly proved to be 2n−3, and the method of tangent-vector estimation is improved to avoid petal-shaped patches when interpolating solids of revolution.

Feature identification is one of the key steps for 3D solid reconstruction from 2D vector engineering drawings using the volume-based method. In this paper, we propose a novel method to identify and validate features from sectional views. First, features are classified as explicit features (EPFs) and implicit features (IPFs), which are then identified in an order of priority using heuristic hints. We show that the problem of constructing EPFs can be formulated as a 0-1 integer linear program (ILP), and the IPFs are generated based on the understanding of the semantic information of omitted projections in sectional views. Then, the Loop-Relation Graph (LRG) is introduced as a multi-connected-subgraph representation for describing the relations between loops and features. According to the LRG, a reasoning technique based on confidence is implemented to interactively validate features. This method can recover features without complete projections, and the level of understanding of sectional views is improved. Full sections, partial sections, offset sections as well as revolved sections can be handled by our method. Several examples are provided to demonstrate the practicability of our approach.

This paper presents a general scheme to compute ridges on a smooth 2-manifold surface from the standpoint of a vector field. A ridge field is introduced. Starting with an initial ridge point, which may or may not be umbilical, a ridge line is then traced by calculating an associated integral curve of this field, in conjunction with a new projection procedure to prevent it from diverging. This projection is the first that can optimize a ridge guess to lie on a ridge line uniquely and accurately. To follow this scheme, we not only develop practical ridge formulae but also address their corresponding computational procedures for an analytical surface patch, especially for an implicit surface. In contrast to other existing methods, our new approach is mathematically sound and characterized by considering the full geometric structures and topological patterns of ridges on a generic smooth surface. The resulting ridges are accurate in the numerical sense and meet the requirement of high accuracy with complete topology. Although the objective of this paper is to develop a mathematically sound framework for ridges on a smooth surface, we also give a comprehensive review of relevant work on both meshes and smooth surfaces.
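The tracing step can be pictured as numerically integrating the field with a projection after every step; a generic sketch assuming hypothetical `field` and `project` callables (the paper's actual ridge field and ridge projection are substantially more involved):

```python
import math

def trace(field, project, p0, step=0.05, n_steps=100):
    """Trace an integral curve of `field` from p0, pulling each step back
    onto the feature line with `project` to keep the curve from diverging."""
    add = lambda a, b: tuple(x + y for x, y in zip(a, b))
    mul = lambda a, s: tuple(x * s for x in a)
    p, pts = p0, [p0]
    for _ in range(n_steps):
        # classical 4th-order Runge-Kutta step along the field
        k1 = field(p)
        k2 = field(add(p, mul(k1, step / 2)))
        k3 = field(add(p, mul(k2, step / 2)))
        k4 = field(add(p, mul(k3, step)))
        incr = add(add(k1, k4), mul(add(k2, k3), 2.0))
        p = project(add(p, mul(incr, step / 6)))   # correct the drift
        pts.append(p)
    return pts

# e.g. a rotational field whose "feature line" is the unit circle:
circle = trace(lambda p: (-p[1], p[0]),
               lambda p: (p[0] / math.hypot(*p), p[1] / math.hypot(*p)),
               (1.0, 0.0))
```

Without the projection, numerical error would let the traced points wander off the curve; with it, every accepted point satisfies the curve constraint exactly.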

Manifold-ranking is a powerful method in semi-supervised learning, and its performance heavily depends on the quality of the constructed graph. In this paper, we propose a novel graph structure for this purpose.

The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm for computing the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation.
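For small inputs, a clipped Voronoi cell can be obtained directly by clipping the domain polygon against perpendicular-bisector half-planes; a brute-force 2D sketch for illustration only (the paper's algorithm is far more efficient than this O(n^2) construction):

```python
def clip_halfplane(poly, d, c):
    """Keep the part of convex poly where dot(x, d) <= c
    (one Sutherland-Hodgman clipping step)."""
    out, n = [], len(poly)
    for k in range(n):
        a, b = poly[k], poly[(k + 1) % n]
        fa = a[0] * d[0] + a[1] * d[1] - c
        fb = b[0] * d[0] + b[1] * d[1] - c
        if fa <= 0:
            out.append(a)
        if (fa < 0) != (fb < 0):               # edge crosses the bisector
            t = fa / (fa - fb)
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out

def clipped_voronoi(sites, domain):
    """One clipped Voronoi cell (vertex list) per site; `domain` is a
    convex polygon given in counter-clockwise order."""
    cells = []
    for i, p in enumerate(sites):
        cell = list(domain)
        for j, q in enumerate(sites):
            if i == j:
                continue
            d = (q[0] - p[0], q[1] - p[1])     # normal of the bisector
            c = (q[0] ** 2 + q[1] ** 2 - p[0] ** 2 - p[1] ** 2) / 2.0
            cell = clip_halfplane(cell, d, c)  # keep the side closer to p
        cells.append(cell)
    return cells
```

Each cell is the intersection of the domain with all half-planes closer to its site than to any other site, so the cells tile the domain exactly.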

Selecting the best views of 3D objects is useful in many applications. However, when existing methods are applied to CAD models, the results neither exhibit the 3D structures of the models faithfully nor conform to humans’ browsing habits. In this paper, we present a robust method to generate the canonical views of CAD models, solving the above problem by considering geometric and visually salient features simultaneously. We first demonstrate that for a CAD model, the three coordinate axes can be approximately determined from the scaled normals of its faces, such that the pose can be robustly normalized. A graph-based algorithm is also designed to accelerate the search process. Then, a convex-hull-based method is applied to infer the upright orientation. Finally, four isometric views are selected as candidates, and the one whose depth image contains the most visual features is chosen. Experiments on the Engineering Shape Benchmark (ESB) show that the views generated by our method are pleasant, informative and representative. We also apply our method to the calculation of model rectilinearity, and the results demonstrate its high performance.

This paper presents a face-based retrieval algorithm to search for mechanical parts with similar partial features. The method makes it easier to retrieve models with partial features, so as to support early-stage reusability. In the training phase, all the faces in the database are trained and assigned a value indicating their distinctiveness. Trivial and atypical faces are removed in this phase to improve online retrieval efficiency. In the query phase, we evaluate the distinctiveness of the input faces by aligning them with faces in the database. A greedy algorithm is finally applied to match the input faces and the faces in the database according to their similarity order. Experimental results show that our method provides favorable performance when retrieving models with common partial features, compared to some other mesh-based methods.

We propose a canonical form of the curved-knot B-spline surface called the regular curved-knot B-spline. On the one hand, it allows the knot vectors to vary so that the continuity configurations of two opposite boundaries can differ. On the other hand, the regular form achieves simplicity in storage, evaluation and construction, which makes it applicable in industrial geometric modeling systems. Its applications, such as bridging, multi-sided hole filling and irregular feature modeling, show that it is well suited for modeling complicated objects, such as a transition between sharp and rounded features. Compared with patching together a number of B-spline surfaces, it not only increases the inter-surface continuity of the shape, but also reduces the complexity of the algorithms.

A point cloud is a basic description of discrete shape information. Parameterization of unorganized points is important for shape analysis and shape reconstruction of natural objects. In this paper we present a new algorithm for the global parameterization of an unorganized point cloud and its application to meshing the cloud. Our method is guided by principal directions so as to preserve the intrinsic geometric properties. After an initial estimation of principal directions, we develop a kNN (k-nearest neighbor) graph-based method to obtain a smooth direction field. Then the point cloud is cut to be topologically equivalent to a disk. The global parameterization is computed so that its gradients align well with the guiding direction field. A mixed-integer solver is used to guarantee a seamless parameterization across the cut lines. The resulting parameterization can be used to triangulate and quadrangulate the point cloud simultaneously in a fully automatic manner, for data of any genus.
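The kNN graph underlying the direction-field smoothing can be built as follows; a brute-force sketch (a k-d tree or similar spatial index would be used in practice for large clouds):

```python
def knn_graph(points, k):
    """Return {i: [indices of the k nearest neighbors of points[i]]},
    by exhaustive pairwise distance comparison (O(n^2) - sketch only)."""
    def d2(a, b):                       # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))
    graph = {}
    for i, p in enumerate(points):
        others = [(d2(p, q), j) for j, q in enumerate(points) if j != i]
        graph[i] = [j for _, j in sorted(others)[:k]]
    return graph
```

The resulting directed adjacency lists give each point the neighborhood over which local quantities, such as estimated principal directions, can be averaged or smoothed.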

Image-space photon mapping has the advantage of simple implementation on the GPU, without pre-computation of complex acceleration structures. However, existing approaches use only a single image for tracing caustic photons, so they are limited to computing only part of the global illumination effects for very simple scenes. In this paper we fully extend the image-space approach by using multiple environment maps for the photon mapping computation to achieve interactive global illumination of dynamic complex scenes. The two key problems raised by the introduction of multiple images are 1) selecting the images to ensure adequate scene coverage; and 2) reliably computing ray-geometry intersections with multiple images. We present effective solutions to these problems and show that, with multiple environment maps, the image-space photon mapping approach can achieve interactive global illumination of dynamic complex scenes. The advantages of the method are demonstrated by comparison with other existing interactive global illumination methods.

Synthesis quality is one of the most important aspects of solid texture synthesis algorithms. In recent years several methods have been proposed to generate high-quality solid textures. However, these existing methods often suffer from synthesis artifacts such as blurring, missing texture structures and aberrant voxel colors. In this paper, we introduce a novel algorithm for synthesizing high-quality solid textures from 2D exemplars. We first analyze the relevant factors for further improving the synthesis quality, and then adopt an optimization framework with the k-coherence search and the discrete solver for solid texture synthesis. The texture optimization approach is integrated with two new kinds of histogram matching methods, position and index histogram matching, which effectively cause the global statistics of the synthesized solid textures to match those of the exemplars. Experimental results show that our algorithm outperforms, or is at least comparable to, previous solid texture synthesis algorithms in terms of synthesis quality.
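Histogram matching in general remaps one set of values so that its distribution follows an exemplar's; a 1-D rank-based sketch of the idea (the paper's position and index histograms operate on voxel statistics inside the texture optimization loop, which this does not attempt to reproduce):

```python
def match_histogram(values, exemplar):
    """Remap `values` so that sorted order is preserved but the value
    distribution follows `exemplar` (simple rank-to-quantile mapping)."""
    ranks = sorted(range(len(values)), key=lambda i: values[i])
    target = sorted(exemplar)
    out = [0.0] * len(values)
    for rank, i in enumerate(ranks):
        # map each value to the exemplar quantile at the same relative rank
        q = rank * (len(target) - 1) // (len(values) - 1)
        out[i] = target[q]
    return out
```

The remapped values keep their relative ordering while their global statistics now come from the exemplar, which is the effect the matching steps exploit to avoid washed-out synthesis results.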

We present a real-time algorithm for rendering translucent objects of arbitrary shapes. We approximate the scattering of light inside the objects using the diffusion equation, which we solve on-the-fly using the GPU. Our algorithm is general enough to handle arbitrary geometry, heterogeneous materials, deformable objects and modifications of lighting, all in real-time. In a pre-processing step, we discretize the object into a regular 4-connected structure (QuadGraph). Due to its regular connectivity, this structure is easily packed into a texture and stored on the GPU. At runtime, we use the QuadGraph stored on the GPU to solve the diffusion equation, in real-time, taking into account the varying input conditions: incoming light, object material and geometry. We handle deformable objects, provided the deformation does not change the topological structure of the objects.


This paper presents a real-time approach to render 3D scenes with watercolor effects on the GPU. Most processes of the approach are implemented with image-space techniques. Our algorithm renders a detail layer, an ambient layer and a stroke layer separately, and then combines them into the final result. During rendering, we use screen-space ambient occlusion and shadow mapping to compute shadows in a much shorter time, and we use image filters to simulate important watercolor effects. Because our approach is mainly implemented with image-space techniques, it is convenient to accelerate the rendering on the GPU, and our approach achieves real-time speed.

We present a novel hierarchical-grid-based method for fast collision detection (CD) of deformable models on GPU architectures. A two-level grid is employed to accommodate the non-uniform distribution of practical scene geometry. A bottom-to-top method is implemented to assign the triangles to the hierarchical grid without any iteration, while a deferred scheme is introduced to efficiently update the data structure. To address the issue of load balancing, which greatly influences performance in SIMD parallelism, a propagation scheme utilizing a parallel scan and a segmented scan is presented, distributing workloads evenly across all concurrent threads. The proposed method supports both discrete collision detection (DCD) and continuous collision detection (CCD) with self-collision. Several typical benchmarks are tested to verify the effectiveness of our method. The results highlight our speedups over prior algorithms on different commodity GPUs.
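The scan-based load balancing can be illustrated sequentially: an exclusive prefix sum over per-object work counts yields the offsets at which each object's work items start, and a propagation pass tells every lightweight item which object it belongs to, so each thread handles exactly one item. A sketch of the pattern (illustrative only, not the GPU kernel):

```python
def exclusive_scan(counts):
    """Exclusive prefix sum; returns (offsets, total)."""
    out, total = [], 0
    for c in counts:
        out.append(total)
        total += c
    return out, total

def expand_workload(counts):
    """Map each of sum(counts) work items back to its source object id."""
    offsets, total = exclusive_scan(counts)
    owner = [0] * total
    for obj, off in enumerate(offsets):    # scatter segment heads...
        if counts[obj] > 0:
            owner[off] = obj
    for k in range(1, total):              # ...then propagate (a max-scan)
        owner[k] = max(owner[k], owner[k - 1])
    return owner
```

On a GPU both the prefix sum and the propagation are themselves parallel scans, which is what makes this expansion cheap even for very irregular per-object counts.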

This paper presents an improvement to stochastic progressive photon mapping (SPPM), a method for robustly simulating complex global illumination with distributed ray tracing effects. Like photon mapping and other particle tracing algorithms, SPPM becomes inefficient when the photons are poorly distributed: an inordinate number of photons is required to reduce the error caused by noise and bias to acceptable levels. To optimize the distribution of photons, we propose an extension of SPPM with a Metropolis-Hastings algorithm, effectively exploiting local coherence among the light paths that contribute to the rendered image. A well-designed scalar contribution function is introduced as our Metropolis sampling strategy, targeting specific image areas with large error to improve the efficiency of the radiance estimator. Experimental results demonstrate that the new Metropolis-sampling-based approach maintains the robustness of the standard SPPM method, while significantly improving the rendering efficiency for a wide range of scenes with complex lighting.
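The Metropolis-Hastings principle behind this sampling strategy can be shown on a toy 1-D target; the real method mutates light paths and uses the scalar contribution function as its target, which this sketch does not attempt to reproduce:

```python
import math
import random

def metropolis(target, x0, n, step=1.0, seed=1):
    """Toy Metropolis-Hastings sampler with a symmetric uniform proposal:
    accept a move with probability min(1, target(y) / target(x))."""
    rng = random.Random(seed)
    xs, x, fx = [], x0, target(x0)
    for _ in range(n):
        y = x + rng.uniform(-step, step)       # symmetric proposal
        fy = target(y)
        if fy >= fx or rng.random() < fy / fx:
            x, fx = y, fy                      # accept the mutation
        xs.append(x)                           # (rejected moves repeat x)
    return xs

# e.g. draw (correlated) samples from a 1-D Gaussian density
samples = metropolis(lambda x: math.exp(-x * x / 2), 0.0, 20000)
```

Because acceptance depends only on the ratio of target values, the target need not be normalized; in rendering this lets the sampler concentrate mutations on image regions where the contribution function, and hence the error, is large.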

Depth-of-field is one of the most crucial rendering effects for synthesizing photorealistic images. Unfortunately, this effect is also extremely costly: it can take hundreds to thousands of samples to achieve noise-free results using Monte Carlo integration. This paper introduces an efficient adaptive depth-of-field rendering algorithm that achieves noise-free results using significantly fewer samples. Our algorithm consists of two main phases: adaptive sampling and image reconstruction. In the adaptive sampling phase, the adaptive sample density is determined by a ‘blur-size’ map and a ‘pixel-variance’ map computed during initialization. In the image reconstruction phase, based on the blur-size map, we use a novel multiscale reconstruction filter to dramatically reduce the noise in the defocused areas where the sampled radiance has high variance. Because of the efficiency of this new filter, only a few samples are required. With the combination of the adaptive sampler and the multiscale filter, our algorithm renders near-reference-quality depth-of-field images with significantly fewer samples than previous techniques.

Existing methods for real-time illumination with complex lights either require lengthy pre-computation or focus only on special types of illumination. Because computing different kinds of illumination requires different sampling strategies, this paper introduces a novel, efficient framework for rendering the illumination of complex light sources that requires no pre-computation. We divide the rendering equation into three parts: a high-frequency term, a low-frequency term and an occlusion term. Each term is computed with an appropriate sampling strategy. The high-frequency term is solved by importance sampling the BRDF, while the low-frequency term is computed by importance sampling the light sources. The occlusion term is computed with depth information in screen space, and the required number of samples is greatly reduced by interleaved sampling. Our framework is easy to implement on the GPU and can solve many real-time rendering problems. We take real-time environment-map lighting as an example to demonstrate applications of this framework. The results show that our technique can handle complete light effects with higher quality than previous works.

Current multi-operator image resizing methods succeed in generating impressive results by using image similarity measures to guide the resizing process: an optimal operation path is found in the resizing space. However, their slow resizing speed, caused by the inefficient computation strategy of bidirectional patch matching, is a drawback for practical use. In this paper, we present a novel method to address this problem. By combining seam carving with scaling and cropping, our method can perform content-aware image resizing very fast. We define cost functions combining image energy and a dominant color descriptor for all the operators, to evaluate the damage to both local image content and the global visual effect. Therefore, our algorithm can automatically find an optimal sequence of operations to resize the image by dynamic programming or a greedy algorithm. We also extend our algorithm to indirect image resizing, which can protect the aspect ratio of the dominant object in an image.

Rendering volume caustics in participating media is often expensive, even with various acceleration approaches. Basic volume photon tracing can render such effects, but is rather slow due to the massive quantity of photons to be traced. In this paper we present an image-based volume photon tracing method for rendering volume caustics at real-time frame rates. Motivated by multi-image-based photon tracing, our technique uses multiple depth maps to accelerate the intersection test procedure, achieving a plausible and fast rendering of volume caustics. Each photon dynamically selects the depth map layer for its intersection test, and the test converges to an approximate solution using image-space methods in a few recursions. This allows us to compute the photon distribution in participating media while avoiding massive computation on accurate intersection tests with scene geometry. We demonstrate that our technique, combined with photon splatting techniques, is able to render volume caustics caused by multiple refractions.

We present a fast collision detection method for deformable surfaces with parallel spatial hashing on GPU architectures. The efficient update and access of the uniform grid are exploited to accelerate performance in our method. To deal with the inflexible memory system, which makes building stream data a challenging task on the GPU, we propose to subdivide the whole workload into irregular segments and design an efficient evaluation algorithm, employing a parallel scan and stream compaction, to build the stream data in parallel. Load balancing is a key aspect that needs to be considered in SIMD parallelism. We break the heavy and irregular collision computation down into a lightweight part and a heavyweight part, ensuring that the latter runs in a load-balanced manner, with each concurrent thread processing just a single collision. In practice, our approach can perform collision detection in tens of milliseconds on a PC with an NVIDIA GTX 260 graphics card, on benchmarks composed of millions of triangles. The results highlight our speedups over prior CPU-based and GPU-based algorithms.

Color transfer is a practical image editing technology which is useful in various applications. An ideal color transfer algorithm should keep the scene of the source image while applying the color style of the reference image. All the dominant color styles of the reference image should be present in the result, especially when there are similar contents in the source and reference images. We propose a robust color transfer framework to address these issues. Our method establishes a soft connection between the local color statistics of the source and reference images. All the obvious color features can be presented in the result image, as well as the spatial distribution of the reference color pattern.

Extraction and re-rendering of real materials contribute greatly to various image-based applications. As one of the key properties in modeling the appearance of an object, materials mainly capture the effects caused by light transport. Therefore, understanding the characteristics of a complex material from a single photograph and transferring it to an object in another image is a very challenging problem. In this paper, we present a novel framework to transfer real translucent materials such as fruits and flowers between single images. We define a group of information that models the relevant attributes during the extraction and transfer process. Once we extract this information from both the source and target images, we can easily produce a realistic photograph of an object with target-like materials and suitable shading effects in the source environment.

As a linear blending method, cage-based deformation is widely used in various applications of image and geometry processing. In most cases, especially in the interactive mode, deformation based on embedded cages does not work well, as some of the coefficients are not continuous and make the deformation discontinuous, producing a “spring up” phenomenon. However, it is common to deform the ROI (Region of Interest) while keeping local parts untouched or only slightly adjusted. In this paper, we design a scheme to solve this problem. A multicage can be generated manually or automatically, and the image deformation can be adjusted intelligently according to the local cage shape to preserve important details. Moreover, we do not need to track each pixel’s position relative to the multicage: all pixels go through the same process, which saves a lot of time. We also design a packing method for the cage coordinates that packs all the necessary coefficients into one texture. A vertex shader can be used to accelerate the deformation process, leading to real-time deformation even for large images.
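At its core, cage-based deformation is a linear blend: each point is expressed once as a weighted combination of cage vertices, and deforming the cage moves the point through the same weights. A minimal sketch assuming the per-point weights (e.g. mean-value or harmonic coordinates) have already been computed:

```python
def deform(weights, cage):
    """Linear cage-based deformation.
    weights: per-point lists of per-cage-vertex coefficients (each sums to 1);
    cage: deformed cage vertex positions. Returns the deformed points."""
    return [tuple(sum(w * c[k] for w, c in zip(ws, cage))
                  for k in range(len(cage[0])))
            for ws in weights]
```

Because the weights are fixed, moving the cage is the only per-frame cost, which is exactly why the coefficients can be packed into a texture and evaluated in a vertex shader.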


This paper presents an effective method that simulates the ink diffusion process with visually plausible effects and real-time performance. Our algorithm updates the dynamic ink volume with a hybrid grid-particle representation: the fluid velocity field is calculated on a low-resolution grid structure, while the highly detailed ink effects are controlled and visualized with the particles. We propose an improved ink rendering method using particle sprites and motion blur techniques. The simulation and rendering processes are efficiently implemented on graphics hardware for interactive frame rates. Compared to traditional simulation methods that treat water and ink as two mixable fluids, our method is simple but effective: it captures various ink effects such as pinned boundaries [Chu and Tai 2005] and filament patterns [Shiny et al. 2010] in real time; it allows easy interaction with the artists; and it includes basic solid-fluid interaction. We believe that our method is attractive for industrial animation and art design.
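The hybrid grid-particle idea can be sketched as particles advected through a bilinearly interpolated grid velocity field; a 2D forward-Euler sketch for illustration (the paper's solver and ink model are more sophisticated):

```python
def bilerp(grid, x, y):
    """Bilinearly interpolate a 2-D grid of (vx, vy) vectors at (x, y);
    grid[i][j] holds the velocity at integer position (i, j)."""
    i, j = int(x), int(y)
    fx, fy = x - i, y - j
    def g(a, b):                       # clamp lookups to the grid bounds
        return grid[min(a, len(grid) - 1)][min(b, len(grid[0]) - 1)]
    v00, v10 = g(i, j), g(i + 1, j)
    v01, v11 = g(i, j + 1), g(i + 1, j + 1)
    return tuple((v00[k] * (1 - fx) + v10[k] * fx) * (1 - fy) +
                 (v01[k] * (1 - fx) + v11[k] * fx) * fy for k in range(2))

def advect(particles, grid, dt):
    """Move ink particles through the grid velocity field (forward Euler)."""
    out = []
    for x, y in particles:
        vx, vy = bilerp(grid, x, y)
        out.append((x + dt * vx, y + dt * vy))
    return out
```

The coarse grid carries the expensive fluid solve while the particles carry the fine ink detail, which is what keeps the combined simulation interactive.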

We cooperate with EADS on geometric representation and FEM.

We cooperate with Sony Corporation on the research of image-based translucent material transfer.

We cooperate with Samsung Advanced Institute of Technology (China) on the research of stereo matching.

We cooperate with CAS-BEGCL Imaging Technology Corporation on fluid simulation, object deformation and realistic rendering.

The objectives of these Programs address Geometry Modeling and Computing, mainly Robustness and Tolerance as well as Geometric Uncertainties.

CAD is an INRIA/Tsinghua University team related to LIAMA (China).

Dr. Fredo Durand (MIT), Pr. J.D. Boissonnat (INRIA) and Pr. Ramanie (Purdue) visited our team this year.

We participate in an international program of the National Natural Science Foundation of China from 2010 to 2013.

Floating-point continuity is clearly a pioneering effort toward solving a well-known open problem. Up to now, almost all geometric modeling toolkits have been based on traditional mathematics. They ignore the fact that computers can only represent a finite set of real numbers.

The central challenge with spline surfaces is to control their continuity where multiple patches join, and to enable different types of sharpness. We are especially excited by a new result that addresses a central problem in spline modeling that has been open for five decades: the variation of continuity across a patch. This is needed, for example, when a crease forms in a smooth area. Because spline surfaces are modeled using a (mostly separable) tensor product of polynomial bases, it is hard to have a different level of continuity on two opposite edges of a patch. We proposed a particularly elegant solution to this challenge by smoothly varying the parametric location of the spline knots. This allows the surface to transition from a configuration where knots overlap (a sharp crease) to one where they are distinct (a smooth join).

Pr. Jean-Claude Paul, Pr. Jun-Hai Yong, Dr. Bin Wang and Dr. Hui Zhang teach at Tsinghua University. Dr. Hui Zhang is the Dean of the Teaching Program of the School of Software.

Pr. Xiaopeng Zhang and Dr. Weiming Dong teach at Graduate University of Chinese Academy of Sciences.

Pr. Jean-Claude Paul was Honorary Chair and Pr. Xiaopeng Zhang was Program Chair of ACM SIGGRAPH VRCAI 2011, the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry, Dec. 11-12, 2011, Hong Kong, China.