2008, Computer Graphics Forum
In this paper we present a pipeline for rendering dynamic 2D/3D line drawings efficiently. Our main goal is to create efficient static renditions and coherent animations of line drawings in a setting where lines can be added, deleted and arbitrarily transformed on-the-fly. Such a dynamic setting enables us to handle interactively sketched 2D line data, as well as arbitrarily transformed 3D line data in a unified manner. We evaluate the proximity of screen projected strokes to simplify them while preserving their continuity. We achieve this by using a special data structure that facilitates efficient proximity calculations in a dynamic setting. This on-the-fly proximity evaluation also facilitates generation of appropriate visibility cues to mitigate depth ambiguities and visual clutter for 3D line data. As we perform all these operations using only line data, we can create line drawings from 3D models without any surface information. We demonstrate the effectiveness and applicability of our approach by showing several examples with initial line representations obtained from a variety of sources: 2D and 3D hand-drawn sketches and 3D salient geometry lines obtained from 3D surface representations.
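The proximity-based simplification described above can be illustrated with a small sketch. This is not the paper's actual data structure; it is a minimal hashed uniform grid (an assumption on my part) that supports the kind of constant-time neighbor queries a dynamic setting needs, merging screen-space stroke points that fall close together:

```python
import math
from collections import defaultdict

def simplify_strokes(points, cell_size=4.0):
    """Cluster screen-space stroke points lying within cell_size of each
    other and replace each cluster by its centroid. A hashed uniform grid
    keeps neighbor lookups O(1) per point, so points can be added or
    removed on the fly -- loosely mirroring the paper's dynamic setting.
    """
    grid = defaultdict(list)            # (cell_x, cell_y) -> point indices
    for i, (x, y) in enumerate(points):
        grid[(int(x // cell_size), int(y // cell_size))].append(i)

    merged, seen = [], set()
    for i, (x, y) in enumerate(points):
        if i in seen:
            continue
        cx, cy = int(x // cell_size), int(y // cell_size)
        cluster = []
        # examine the 3x3 block of grid cells around the point
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid[(cx + dx, cy + dy)]:
                    px, py = points[j]
                    if j not in seen and math.hypot(px - x, py - y) <= cell_size:
                        cluster.append(j)
        seen.update(cluster)
        mx = sum(points[j][0] for j in cluster) / len(cluster)
        my = sum(points[j][1] for j in cluster) / len(cluster)
        merged.append((mx, my))
    return merged
```

A real implementation would preserve stroke connectivity when merging, as the paper requires; this sketch only shows the grid-based proximity query itself.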
Rendering algorithms have tended to restrict themselves to representing the effect of light sources on scenes as could be observed by the human eye. For certain applications, such as teaching surgery and anatomy, more schematic renditions are called for. Such graphics tend to be line-oriented and encode information beyond the effect of light. In the absence of appropriate computer-based tools, such images are almost always drawn by hand by a scientific illustrator. In this paper, we study techniques for rendering what we call rich line drawings. We develop tools for selectively mapping attributes of an object's surfaces onto the lines which depict it. This enables us to render images which encode only those properties needed for the application at hand.
IEEE Transactions on Visualization and Computer Graphics, 2000
Rendering large numbers of dense line bundles in three dimensions is a common need for many visualization techniques, including streamlines and fiber tractography. Unfortunately, depicting the spatial relations inside these line bundles is often difficult but critical for understanding the represented structures. Many approaches have evolved to address this problem by providing special illumination models or tube-like renderings. Although these methods improve the spatial perception of individual lines or related sets of lines, they do not solve the problem for complex spatial relations between dense bundles of lines. In this paper, we present a novel approach that improves the spatial and structural perception of line renderings by providing an ambient occlusion model suited for line rendering in real time.
15th Pacific Conference on Computer Graphics and Applications (PG'07), 2007
We introduce a novel mechanism for creating line drawings from three-dimensional models, which captures the dynamic nature of the drawing process. The approach takes into account the interaction between the moving human hand and the drawing instrument. This is demonstrated as applied to the specific problem of making silhouette drawings from polygonal models. A control system drives a pen by tracking the contour of the polygonal model as projected onto the drawing surface, thus mimicking hand motion. The pen is treated as a physically-based object with momentum, giving the generated lines a smooth hand-drawn quality. Lines are rendered using a ribbon metaphor, where thickness is determined by the twist of the ribbon. The twist angle can be dependent upon various attributes such as perspective depth, the curvature of the line, and the lighting of the model. A number of examples are presented, ranging from tightly controlled drawings to expressive gestural drawings.
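The momentum-driven pen above can be sketched as a simple spring-damper follower. The gains `k` and `damping` are illustrative values of mine, not parameters from the paper; the point is only that a pen with inertia rounds sharp corners in the tracked contour into smooth, hand-drawn-looking curves:

```python
def trace_with_momentum(targets, k=0.2, damping=0.85):
    """Drive a pen toward a sequence of target contour points with a
    spring-damper velocity update. Because the pen carries momentum, it
    lags behind and smooths the target path, much as a moving hand would.
    """
    px, py = targets[0]
    vx = vy = 0.0
    path = [(px, py)]
    for tx, ty in targets[1:]:
        # spring force toward the current target, plus velocity damping
        vx = damping * vx + k * (tx - px)
        vy = damping * vy + k * (ty - py)
        px, py = px + vx, py + vy
        path.append((px, py))
    return path
```

In the paper's setting the targets would come from the projected silhouette of the polygonal model, and line thickness would be derived separately from the ribbon twist.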
Computer Graphics Forum, 2013
Producing traditional animation is a laborious task where the key drawings are first drawn by artists and the inbetween drawings are then created, whether by hand or computer-assisted. Auto-inbetweening of these 2D key drawings by computer is a non-trivial task as 3D depths are missing. An alternative approach is to generate all the drawings by extracting lines directly from animated 3D models frame by frame, concatenating and rendering them together into an animation. However, animation quality generated using this straightforward method suffers from two problems. Firstly, the animation contains unsatisfactory visual artifacts such as line flickering and popping. This is especially pronounced when the lines are extracted using high-order derivatives, such as ridges and valleys, from 3D models represented as triangle meshes. Secondly, there is a lack of temporal continuity, as each drawing is generated without taking its neighboring drawings into consideration. In this paper, we propose an improvement over the straightforward method by transferring the extracted 3D line drawings of each frame into individual 3D lines and processing them along the time domain. Our objective is to minimize the visual artifacts and incorporate the temporal relationships of individual lines throughout the entire animation sequence. This is achieved by creating a corresponding trajectory for each line across frames and applying global optimization on each trajectory. To realize this, we present a fully automatic novel approach, which consists of (1) a line matching algorithm, (2) an optimizing algorithm taking into account the variations in both the number and lengths of 3D lines in each frame, and (3) a robust tracing method for transferring collections of line segments extracted from the 3D models into individual lines. We evaluate our approach on several animated model sequences to demonstrate its effectiveness in producing line drawing animations with temporal coherence.
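The line-matching stage that underpins the per-line trajectories can be illustrated with a toy sketch. This is not the paper's algorithm: it is a greedy nearest-midpoint matcher of my own, assuming lines are given as point lists, shown only to make the idea of frame-to-frame correspondence concrete:

```python
def match_lines(prev_lines, cur_lines, max_dist=5.0):
    """Greedily pair each line of the current frame with the closest
    unmatched line of the previous frame, comparing line midpoints.
    Returns (prev_index, cur_index) pairs; lines farther than max_dist
    from every candidate stay unmatched (they appear or vanish).
    """
    def midpoint(line):
        xs = [p[0] for p in line]
        ys = [p[1] for p in line]
        return sum(xs) / len(xs), sum(ys) / len(ys)

    prev_mids = [midpoint(l) for l in prev_lines]
    taken, pairs = set(), []
    for ci, line in enumerate(cur_lines):
        cx, cy = midpoint(line)
        best, best_d = None, max_dist
        for pi, (px, py) in enumerate(prev_mids):
            d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
            if pi not in taken and d <= best_d:
                best, best_d = pi, d
        if best is not None:
            taken.add(best)
            pairs.append((best, ci))
    return pairs
```

Chaining such matches across all frames yields one trajectory per line, which the paper then smooths by global optimization.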
Proceedings of the International Conference on Computer Graphics Theory and Applications, 2010
In this work we introduce an approach for reconstructing digital 3D models from multiple perspective line drawings. One major goal is to keep the required user interaction simple and minimal, while imposing no constraints on the object's shape. Such a system provides a useful extension for the digitalization of paper-based styling concepts, which today is still a time-consuming process. In the presented method the line drawings are first decomposed into curves, assembling a network of curves. In a second step, the 3D positions of the curve endpoints are determined using multiple sketches and a virtual camera model given by the user. Then the shapes of the 3D curves between the reconstructed 3D endpoints are inferred. This leads to a network of 3D curves, which can be used for first visual evaluations in 3D. Only a little user interaction is needed throughout, and it takes place only in the pre- and post-processing phases. The approach has been applied to multiple sketches, and it is shown to create plausible results within reasonable time.
Smart Graphics, 2007
Abstract. NPR Lenses is an interactive technique for producing expressive non-photorealistic renderings. It provides an intuitive visual interaction tool for illustrators, allowing them to seamlessly apply a large variety of emphasis techniques. Advantages of 3D scene manipulation are ...
We propose a new rendering technique that produces 3-D images with enhanced visual comprehensibility. Shape features can be readily understood if certain geometric properties are enhanced. To achieve this, we develop drawing algorithms for discontinuities, edges, contour lines, and curved hatching. All of them are realized with 2-D image processing operations instead of line tracking processes, so that they can be efficiently combined with conventional surface rendering algorithms. Data about the geometric properties of the surfaces are preserved as Geometric Buffers (G-buffers). Each G-buffer contains one geometric property, such as the depth or the normal vector of each pixel. By using G-buffers as intermediate results, artificial enhancement processes are separated from geometric processes (projection and hidden surface removal) and physical processes (shading and texture mapping), and performed as postprocesses. This permits a user to rapidly examine various combinations of enhancement techniques without excessive recomputation, and easily obtain the most comprehensible image. Our method can be widely applied for various purposes. Several of these, edge enhancement, line drawing illustrations, topographical maps, medical imaging, and surface analysis, are presented in this paper.
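The G-buffer idea can be made concrete with a minimal sketch of one such postprocess: detecting profile edges as depth discontinuities in a depth G-buffer. The buffer layout (a plain list of lists) and the threshold are my own simplifications; the paper's operators are richer and also use normal and object-ID buffers:

```python
def depth_edges(depth, threshold=0.1):
    """Mark pixels whose depth differs from a forward neighbor by more
    than threshold -- a minimal stand-in for a G-buffer discontinuity
    pass, run purely as an image-space postprocess after projection
    and hidden surface removal. `depth` is row-major, one float/pixel.
    """
    h, w = len(depth), len(depth[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):   # forward differences
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and abs(depth[y][x] - depth[ny][nx]) > threshold:
                    edges[y][x] = 1
    return edges
```

Because the pass reads only the stored buffer, changing the threshold or swapping in a different operator requires no re-projection of the scene, which is exactly the benefit the abstract describes.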
Proceedings of the 2001 symposium on Interactive 3D graphics - SI3D '01, 2001
We present a novel drawing system for composing and rendering perspective scenes. Our approach uses a projective 2D representation for primitives rather than a conventional 3D description. This allows drawings to be composed with the same ease as traditional illustrations, while providing many of the advantages of a 3D model. We describe a range of user-interface tools and interaction techniques that give our system its 3D-like capabilities. We provide vanishing point guides and perspective grids to aid in drawing freehand strokes and composing perspective scenes. Our system also has tools for intuitive navigation of a virtual camera, as well as methods for manipulating drawn primitives so that they appear to undergo 3D translations and rotations. We also support automatic shading of primitives using either realistic or non-photorealistic styles. Our system supports drawing and shading of extrusion surfaces with automatic hidden surface removal and highlighted silhouettes. Casting shadows from an infinite light source is also possible with minimal user intervention.
Sketch-based Interfaces and Modeling, 2011
Trace figures are contour drawings of people and objects that capture the essence of scenes without the visual noise of photos or other visual representations. Their focus and clarity make them ideal representations to illustrate designs or interaction techniques. In practice, creating those figures is a tedious task requiring advanced skills, even when creating the figures by tracing outlines based on photos. To mediate the process of creating trace figures, we introduce the open-source tool Esquisse. Informed by our taxonomy of 124 trace figures, Esquisse provides an innovative 3D model staging workflow, with specific interaction techniques that facilitate 3D staging through kinematic manipulation, anchor points and posture tracking. Our rendering algorithm (including stroboscopic rendering effects) creates vector-based trace figures of 3D scenes. We validated Esquisse with an experiment where participants created trace figures illustrating interaction techniques, and results show that participants quickly managed to use and appropriate the tool.
The increasing dominance of spline-based graphic objects in CAD/CAS has drawn great attention to methods for natural and intelligent free-form shape manipulation. We present a novel sketch-based system for the interactive modeling of a variety of free-form 3D objects using just simple spline sketches. Our technique addresses the issues of traditional illustration for depicting 3D subjects, ranging from geometric modeling to progressive refinement. The robust surface interpreters we propose support NURBS surface construction respecting designers' different drawing styles, and a so-called spline-driven deformation technique provides designers with predictable surface editing. In our system the spline strokes are freely sketched in 3D space and controlled by a 3D dragger, which produces a sequence of dynamic deformations helping the user achieve the desired models. The method has been tested with various types of sketches, which are rendered in a 3D environment.
Computer Graphics Forum, 2003
We present a method for rendering 3D models in the traditional line-drawing style used in artistic and scientific illustrations. The goal is to suggest the 3D shape of the objects using a small number of lines drawn with carefully chosen line qualities. The system combines several known techniques into a simple yet effective non-photorealistic line renderer. Feature edges related to the outline and interior of a given 3D mesh are extracted, segmented, and smoothed, yielding chains of lines with varying path, length, thickness, gaps, and enclosures. The paper includes sample renderings obtained for a variety of models. Line economy control, or how many lines to place? Illustrators control the amount of lines to be placed by following the principle that "less in a drawing is not the same as less of a drawing" [3]. Extraneous details are visually eliminated, reducing the subject to simple lines depicting key shape features.
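The feature-edge extraction such renderers start from can be sketched for the outline case. This is the standard silhouette-edge test (an edge shared by a front-facing and a back-facing triangle), not this paper's full pipeline; the orthographic `view_dir` convention and counter-clockwise winding are assumptions of the sketch:

```python
def silhouette_edges(vertices, faces, view_dir):
    """Collect mesh edges shared by one front-facing and one back-facing
    triangle with respect to view_dir. Faces are counter-clockwise index
    triples; a face whose normal points against view_dir faces the viewer.
    """
    def normal(f):
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (vertices[i] for i in f)
        ux, uy, uz = bx - ax, by - ay, bz - az
        vx, vy, vz = cx - ax, cy - ay, cz - az
        return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

    facing, edge_faces = {}, {}
    for fi, f in enumerate(faces):
        n = normal(f)
        facing[fi] = sum(a * b for a, b in zip(n, view_dir)) < 0  # toward viewer
        for i in range(3):
            e = tuple(sorted((f[i], f[(i + 1) % 3])))
            edge_faces.setdefault(e, []).append(fi)

    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]
```

The paper's later stages, chaining these edges into strokes and varying thickness, gaps, and enclosures, operate on the edge list such a test produces.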
2012
Figure 1: Stylized animation of a galloping horse. From left to right: line samples are extracted from a 3D model; active strokes track the samples; brush paths are attached to the strokes and stylized as circular arcs; two more frames of animation exhibit temporal coherence.
2021
We present a method to produce stylized drawings from stereoscopic 3D (S3D) images. Taking advantage of the information provided by the disparity map, we extract object contours and determine their visibility. The discovered contours are stylized and warped to produce an S3D line drawing. Since the produced line drawing can be ambiguous in shape, we add stylized shading to provide monocular depth cues. We investigate using both consistently rendered shading and inconsistently rendered shading in order to determine the importance of lines and shading to depth perception.
Computer Graphics Forum, 2011
We introduce a new technique called Implicit Brushes to render animated 3D scenes with stylized lines in real time with temporal coherence. An Implicit Brush is defined at a given pixel by the convolution of a brush footprint along a feature skeleton; the skeleton itself is obtained by locating surface features in the pixel neighborhood. Features are identified via image-space fitting techniques that extract not only their location but also their profile, which permits distinguishing between sharp and smooth features. Profile parameters are then mapped to stylistic parameters such as brush orientation, size or opacity to give rise to a wide range of line-based styles.
Computer Graphics Forum, 2009
This work introduces a technique for interactive walkthroughs of non-photorealistically rendered (NPR) scenes using 3D line primitives to define architectural features of the model, as well as indicate textural qualities. Line primitives are not typically used in this manner in favor of texture mapping techniques which can encapsulate a great deal of information in a single texture map, and take advantage of GPU optimizations for accelerated rendering. However, texture mapped images may not maintain the visual quality or aesthetic appeal that is possible when using 3D lines to simulate NPR scenes such as hand-drawn illustrations or architectural renderings. In addition, line textures can be modified interactively, for instance changing the sketchy quality of the lines. The technique introduced here extracts feature edges from a model, and using these edges, generates a reduced set of line textures which indicate material properties while maintaining interactive frame rates. A clipping algorithm is presented to enable 3D lines to reside only in the interior of the 3D model without exposing the underlying triangulated mesh. The resulting system produces interactive illustrations with high visual quality that are free from animation artifacts.
ACM Transactions on Graphics, 2009
This paper presents a method for real-time line drawing of deforming objects. Object-space line drawing algorithms for many types of curves, including suggestive contours, highlights, ridges and valleys, rely on surface curvature and curvature derivatives. Unfortunately, these curvatures and their derivatives cannot be computed in real-time for animated, deforming objects. In a preprocessing step, our method learns the mapping from a low-dimensional set of animation parameters (e.g., joint angles) to surface curvatures for a deforming 3D mesh. The learned model can then accurately and efficiently predict curvatures and their derivatives, enabling real-time object-space rendering of suggestive contours and other such curves. This represents an order-of-magnitude speed-up over the fastest existing algorithm capable of estimating curvatures and their derivatives accurately enough for many different types of line drawings. The learned model can generalize to novel animation sequences, and is also very compact, typically requiring a few megabytes of storage at run-time. We demonstrate our method for various types of animated objects, including skeleton-based characters, cloth simulation and blend-shape facial animation, using a variety of non-photorealistic rendering styles.
Computer Graphics Forum, 2001
We present a system for sketching in 3D, which strives to preserve the degree of expression, imagination, and simplicity of use achieved by 2D drawing. Our system directly uses user-drawn strokes to infer the sketches representing the same scene from different viewpoints, rather than attempting to reconstruct a 3D model. This is achieved by interpreting strokes as indications of a local surface silhouette or contour. Strokes thus deform and disappear progressively as we move away from the original viewpoint. They may be occluded by objects indicated by other strokes, or, in contrast, be drawn above such objects. The user draws on a plane which can be positioned explicitly or relative to other objects or strokes in the sketch. Our system is interactive, since we use fast algorithms and graphics hardware for rendering. We present applications to education, design, architecture and fashion, where 3D sketches can be used alone or as an annotation of an existing 3D model.
ACM SIGGRAPH Computer Graphics, 1979
One of the major drawbacks of video display systems for line drawing applications has been the poor image quality they usually produce—“jaggy”, “staircased” line edges, moire patterns in regions of closely spaced lines, even, with some systems, lines disappearing (“falling in”) between pixels. Correcting these effects, with appropriate area-sampling techniques, has generally been too computationally expensive to adopt. A new algorithm is presented which generates precise, smooth images of line drawings and solid polygonal-shaped objects on multi-grey-level pixel-mapped video systems. The method is based on an analysis of boundary conditions at each pixel affected by one or more lines. With this method a number of previously needed steps can be quickly eliminated. The commonality of boundary conditions between adjacent pixels and the coherence of such conditions in a raster-scan ordering of such pixels allows efficient generation of these boundary conditions. A recursive subdivision ...
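The area-sampling idea behind such antialiasing can be illustrated with a sketch. The paper derives pixel coverage analytically from boundary conditions; the supersampled estimate below is only a compact stand-in of mine for the same quantity, the fraction of a pixel covered by a thick line:

```python
def line_coverage(x0, y0, x1, y1, half_width, px, py, samples=4):
    """Approximate the fraction of pixel (px, py) (a unit square) covered
    by a line segment of the given half-width, by testing a samples x
    samples grid of points inside the pixel against the distance to the
    segment. The result is the grey level that smooths the line's edge.
    """
    def dist_to_segment(x, y):
        dx, dy = x1 - x0, y1 - y0
        t = ((x - x0) * dx + (y - y0) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))        # clamp to the segment
        cx, cy = x0 + t * dx, y0 + t * dy
        return ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5

    inside = 0
    for i in range(samples):
        for j in range(samples):
            sx = px + (i + 0.5) / samples   # sample positions within pixel
            sy = py + (j + 0.5) / samples
            if dist_to_segment(sx, sy) <= half_width:
                inside += 1
    return inside / (samples * samples)
```

Pixels fully inside the line get coverage 1.0, pixels far away get 0.0, and boundary pixels get intermediate grey levels, removing the "jaggy" staircase the abstract describes.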
2015
In this paper, we are concerned with the problem of finding a good and homogeneous representation to encode line drawing documents (which may be handwritten). We propose a method that avoids the problems induced by an initial skeletonization step. First, we perform a vectorization of the image that enables a fine description of the drawing using only vectors and quadrilateral primitives. A structural graph is built from these primitives extracted from the initial line drawing image. The objective is to manage attributes of elementary objects so as to describe the spatial relationships (inclusion, junction, intersection, etc.) that exist between the graphics in the images. This representation provides a global view of the drawings. We also highlight the capacity of the representation to evolve and to carry rich semantic information. Finally, we show how an architecture using this structural representation and a mechanism of perceptive cycles can enable a good-quality interpretation of line drawings.