Proceedings of the 2001 Symposium on Interactive 3D Graphics (SI3D '01), 2001
We present a novel drawing system for composing and rendering perspective scenes. Our approach uses a projective 2D representation for primitives rather than a conventional 3D description. This allows drawings to be composed with the same ease as traditional illustrations, while providing many of the advantages of a 3D model. We describe a range of user-interface tools and interaction techniques that give our system its 3D-like capabilities. We provide vanishing point guides and perspective grids to aid in drawing freehand strokes and composing perspective scenes. Our system also has tools for intuitive navigation of a virtual camera, as well as methods for manipulating drawn primitives so that they appear to undergo 3D translations and rotations. We also support automatic shading of primitives using either realistic or non-photorealistic styles. Our system supports drawing and shading of extrusion surfaces with automatic hidden surface removal and highlighted silhouettes. Casting shadows from an infinite light source is also possible with minimal user intervention.
2001
I present a novel drawing system for composing and rendering perspective scenes. The proposed approach uses a projective two-dimensional representation for primitives rather than a conventional three-dimensional description. This representation is based on points that lie on the surface of a unit sphere centered at the viewpoint. It allows drawings to be composed with the same ease as traditional illustrations, while providing many of the advantages of a three-dimensional model. I describe a range of user-interface tools and interaction techniques that give the drawing system its three-dimensional-like capabilities. The system provides vanishing point guides and perspective grids to aid in drawing freehand strokes and composing perspective scenes. The system also has tools for intuitive navigation of a virtual camera, as well as methods for manipulating drawn primitives so that they appear to undergo three-dimensional translations and rotations. The new representation also supports automatic shading of primitives using either realistic or non-photorealistic styles. My system supports drawing and shading of extrusion surfaces with automatic hidden surface removal and emphasized silhouettes. Casting shadows from an infinite light source is also possible with minimal user intervention. I describe a method for aligning a sketch drawn outside the system using its vanishing points, allowing the integration of computer sketching and freehand sketching on paper in an iterative manner. Photographs and scanned drawings are applied to drawing primitives using conventional texture-mapping techniques, thereby enriching drawings and providing another way of incorporating hand-drawn images. I demonstrate the system with a variety of drawings.
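As a hedged sketch of the spherical representation this abstract describes (function names and sample coordinates are invented for illustration, not the thesis's actual interface): each stored point keeps only its direction from the viewpoint, so two scene points on one viewing ray collapse to the same drawing point, exactly as in a perspective sketch.

    import numpy as np

    def to_sphere(p_world, eye):
        """Project a 3D point onto the unit sphere centered at the eye.

        Only the direction from the eye survives, which is exactly the
        2D projective information a perspective drawing needs.
        """
        d = np.asarray(p_world, dtype=float) - np.asarray(eye, dtype=float)
        return d / np.linalg.norm(d)

    # Two points on the same ray from the eye map to the same sphere point,
    # so they are indistinguishable in the drawing.
    eye = np.array([0.0, 0.0, 0.0])
    print(to_sphere([1.0, 2.0, 4.0], eye))
    print(to_sphere([2.0, 4.0, 8.0], eye))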
Computer Graphics Forum, 2001
We present a system for sketching in 3D, which strives to preserve the degree of expression, imagination, and simplicity of use achieved by 2D drawing. Our system directly uses user-drawn strokes to infer the sketches representing the same scene from different viewpoints, rather than attempting to reconstruct a 3D model. This is achieved by interpreting strokes as indications of a local surface silhouette or contour. Strokes thus deform and disappear progressively as we move away from the original viewpoint. They may be occluded by objects indicated by other strokes, or, in contrast, be drawn above such objects. The user draws on a plane which can be positioned explicitly or relative to other objects or strokes in the sketch. Our system is interactive, since we use fast algorithms and graphics hardware for rendering. We present applications to education, design, architecture and fashion, where 3D sketches can be used alone or as an annotation of an existing 3D model.
Proceedings of the 5th …, 2007
Sketch-based Interfaces and Modeling, 2011
Trace figures are contour drawings of people and objects that capture the essence of scenes without the visual noise of photos or other visual representations. Their focus and clarity make them ideal representations to illustrate designs or interaction techniques. In practice, creating those figures is a tedious task requiring advanced skills, even when creating the figures by tracing outlines based on photos. To mediate the process of creating trace figures, we introduce the open-source tool Esquisse. Informed by our taxonomy of 124 trace figures, Esquisse provides an innovative 3D model staging workflow, with specific interaction techniques that facilitate 3D staging through kinematic manipulation, anchor points and posture tracking. Our rendering algorithm (including stroboscopic rendering effects) creates vector-based trace figures of 3D scenes. We validated Esquisse with an experiment where participants created trace figures illustrating interaction techniques, and results show that participants quickly managed to use and appropriate the tool.
2000
tool palette and pull-down menus. Architects and designers use sketches as a primary tool to generate design ideas and to explore alternatives, and numerous computer-based interfaces have played on the concept of "sketch". However, we restrict the notion of sketch to freehand drawing, which we believe helps people to think, to envision, and to recognize properties of the objects with which they are working. SKETCH [3] employs a pen interface to create three-dimensional models, but it uses a simple language of gestures to control a three-dimensional modeler; it does not attempt to interpret freehand drawings. In contrast, our support of 3D world creation using freehand drawing depends on users' traditional understanding of a floor plan representation. Igarashi et al. [4] used a pen interface to drive browsing in a 3D world, by projecting the user's marks onto the ground plane in the virtual world. Our Sketch-3D project extends this approach, investigating an interface that allows direct interpretation of the drawing marks (what you draw is what you get) and serves as a rapid prototyping tool for creating 3D virtual scenes.
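The ground-plane projection used by Igarashi et al. can be illustrated with a small ray-plane intersection; this is a generic sketch (names invented, ground plane assumed to be y = 0), not their implementation:

    import numpy as np

    def mark_to_ground(eye, ray_dir):
        """Intersect a viewing ray with the ground plane y = 0.

        A pen mark on the screen defines a ray from the eye; dropping it onto
        the ground plane yields the 3D position the mark stands for.
        Returns None when the ray is parallel to (or points away from) the ground.
        """
        eye, ray_dir = np.asarray(eye, float), np.asarray(ray_dir, float)
        if abs(ray_dir[1]) < 1e-9:
            return None
        t = -eye[1] / ray_dir[1]
        return eye + t * ray_dir if t > 0 else None

    # A mark below the horizon lands on the ground in front of the viewer.
    print(mark_to_ground([0, 1.7, 0], [0.1, -0.5, 1.0]))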
1999
Most of the work in NPR has been static 2D images or image sequences generated by a batch process. In this part of the course notes we explore interactive NPR through the example of interactive technical illustration [4]. Work that has been done on computer-generated technical illustrations has focused on static images and has not included all of the techniques used to hand draw technical illustrations. We present a paradigm for the display of technical illustrations in a dynamic environment. This display environment includes all of the benefits of computer-generated technical illustrations, such as a clearer picture of shape, structure, and material composition than traditional computer graphics methods. It also takes advantage of the three-dimensional interactive strength of modern display systems. This is accomplished by using new algorithms for real-time drawing of silhouette curves, algorithms which solve a number of the problems inherent in previous methods. We incorporate current non-photorealistic lighting methods, and augment them with new shadowing algorithms based on accepted techniques used by artists and studies carried out in human perception. An interactive NPR system needs the capability to interactively display a custom shading model and edge lines. In addition, this interaction must be possible for complex geometric models. In this part of the course notes we describe a variety of techniques for achieving these goals, and describe the tradeoffs involved in choosing a particular technique.
10.2 Making it Interactive
There are several new issues to address when creating 3D illustrations. Three-dimensional technical illustrations involve an interactive display of the model while preserving the characteristics of technical illustrations. By allowing the user to move the objects in space, more shape information may be available than can be conveyed by 2D images. Interaction provides the user with motion cues to help deal with visual complexity, cues that are missing in 2D images. Also, removing the distracting wireframe lines and displaying just silhouettes, boundaries, and discontinuities will provide shape information without cluttering the screen, as discussed previously in Section 8. The question remains: "How do the 2D illustration rules change for a 3D interactive technical illustration?" Adapting the shading and line conventions presented previously in the course notes is fairly straightforward as long as the line width conventions have frame-to-frame coherence. The more interesting issues depend upon changing the viewer's position versus moving the object. Since there are no protocols in traditional illustration, it may be best to model these 3D illustration conventions on how you would move a real object. This has an effect on how the light changes with respect to the object. The light position can be relative to the object or to the viewer. When looking at a small object in your hand, you turn the object and do not move your head, so the light stays in the same position relative to your eye. However, when you move an object in a modeling program, or when you look at a large part, the view…
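The notes do not fix a particular silhouette algorithm here; a minimal object-space test, assuming per-face normals are available, is that an edge lies on the silhouette when exactly one of its two adjacent faces points toward the viewer:

    import numpy as np

    def is_silhouette_edge(n1, n2, point_on_edge, eye):
        """Classic object-space test: an edge is on the silhouette when exactly
        one of its two adjacent faces is front-facing from the eye."""
        view = np.asarray(eye, float) - np.asarray(point_on_edge, float)
        f1 = np.dot(n1, view) > 0.0   # face 1 front-facing?
        f2 = np.dot(n2, view) > 0.0   # face 2 front-facing?
        return f1 != f2

    # A cube edge between the top face (normal +y) and a side face (normal +x)
    # is a silhouette when the eye sees the top but not that side.
    print(is_silhouette_edge([0, 1, 0], [1, 0, 0], [0.5, 0.5, 0.5], eye=[-2, 3, 0]))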
Computer Graphics Forum, 2008
We present a framework for interactive sketching that allows users to create three-dimensional (3D) architectural models quickly and easily from a source drawing. The sketching process has four steps. (1) The user calibrates a viewing camera by specifying the origin and vanishing points of the drawing. (2) The user outlines surface polygons in the drawing. (3) A 3D reconstruction algorithm uses perceptual constraints to determine the closest visual fit for the polygon. (4) The user can then adjust aesthetic controls to produce several stylistic effects in the scene: a smooth transition between day and night rendering, a horizon knockout effect and entourage figures. The major advantage of our approach lies in the combination of perception-based techniques, which allow us to minimize unnecessary interactions, and a hinging-angle scheme, which shows significant improvement in numerical stability over previous optimization-based 3D reconstruction algorithms. We also demonstrate how our reconstruction algorithm can be extended to work with perspective images, a feature unavailable in previous approaches.
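The abstract does not spell out the calibration step; a standard textbook relation, assuming square pixels and two vanishing points of orthogonal scene directions, recovers the focal length and is sketched here with invented numbers:

    import numpy as np

    def focal_from_vanishing_points(v1, v2, principal_point):
        """Estimate focal length from the vanishing points of two orthogonal
        scene directions: (v1 - c) . (v2 - c) + f^2 = 0 for a camera with
        square pixels and principal point c."""
        a = np.asarray(v1, float) - np.asarray(principal_point, float)
        b = np.asarray(v2, float) - np.asarray(principal_point, float)
        f2 = -np.dot(a, b)
        if f2 <= 0:
            raise ValueError("vanishing points inconsistent with orthogonal directions")
        return np.sqrt(f2)

    # Synthetic example built with focal length 800 and principal point (512, 512):
    # the estimate recovers f = 800.
    print(focal_from_vanishing_points((1312.0, 512.0), (-288.0, 512.0), (512.0, 512.0)))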
2012
With the increasing range of available stereoscopic rendering devices, stereoscopic images and videos are reaching the general public. Pairs of images created digitally from 3D content are easy to tweak and adjust; drawing stereoscopic scenes directly on a sheet of paper, on the other hand, is very hard. In this paper, we focus on such an interactive task, where interactive 3D graphics and novel user interfaces are combined to create stereoscopic drawings from a standard pen-and-paper interface. Our system is based on augmented reality, multitouch, and 3D spatial interaction to enhance interaction with the 3D scene. The projection of the left and right views on the paper guides the users in their stereoscopic drawing task, while maintaining a high level of expressiveness. This work is a first step towards full applications for interactive drawing of 3D stereoscopic images.
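As a rough sketch of the left/right projection that guides such a drawing (an assumed off-axis model with invented eye separation and screen distance, not the paper's actual setup):

    def stereo_project(p, eye_sep=0.065, screen_dist=0.5):
        """Project a 3D point (meters, camera coords, z forward) for the left
        and right eye onto a shared screen plane at z = screen_dist."""
        x, y, z = p
        half = eye_sep / 2.0
        # Each eye sits at (+/-half, 0, 0); the image plane is common to both.
        xl = (x + half) * screen_dist / z - half
        xr = (x - half) * screen_dist / z + half
        yy = y * screen_dist / z
        return (xl, yy), (xr, yy)

    # A point behind the screen plane yields positive (uncrossed) parallax.
    left, right = stereo_project((0.1, 0.0, 2.0))
    print(left, right, "parallax:", right[0] - left[0])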
Geometric Modeling, 2000
We propose a new modeling and rendering system that enables users to construct 3D models with an interface that seems no different from sketching by hand, and that displays models in a sketch-like style, preserving the features of the user's strokes. We call this system 3D SKETCH. To reconstruct 3D objects from sketches, we limit the domain of renderable sketches
Figure: a) hand-drawn image; b) webcam analysis; c) binary image; d) detected corners; e) virtual model representation.
1995
this paper, turns this abstract geometrical solution into a fully rendered image. Both stages are highly interactive, and provide instantaneous feedback. The crucial data structure that passes information from the first stage to the second (called EPix) is described later. Early results provoked a good deal of interest from architects experienced with CAD. With their help, we were able to formulate some likely goals for the two-stage approach:
1. To allow for a more relevant, and economical, alternative to photorealism, sharing instead some of the qualities of painting, drawing and print-making.
2. To facilitate a "hand-held" technique, in place of the deterministic algorithm, enabling an image to be finished interactively in an hour or so.
We have often noticed architects tracing over and re-rendering their computer output by hand. This is not entirely for the sake of the image: the massaging of drawings is an essential stimulus to the architectural imagination - a fact n...
Future Generation Computer Systems, 2005
Most 3D objects in computer graphics are represented as polygonal mesh models. Though techniques like image-based rendering are gaining popularity, a vast majority of applications in computer graphics and animation use such polygonal meshes for representing and rendering 3D objects. High-quality mesh models are usually generated through 3D laser scanning techniques. However, even inexpensive laser scanners cost tens of thousands of dollars, and it is difficult for researchers in computer graphics to buy such systems just for model acquisition. In this paper, we describe a simple model acquisition system built from webcams or digital cameras. This low-cost system gives researchers an opportunity to capture and experiment with reasonably good quality 3D models. Our system uses standard techniques from computer vision and computational geometry to build 3D models.
Virtual Reality Continuum and its Applications in Industry, 2008
In this paper we present a novel 3D animation system using a set of easily manipulable space canvases that support free-hand drawing. Our aim is to preserve traditional free-hand drawing while adding the free viewing and free animation of 3D systems. The system design emphasizes the feeling of "what you see is what you get". In our system a user is allowed to create planar and curved canvases and place them in 3D space. Free-hand strokes are drawn on each canvas, and the canvases organized in space together form a scene. The system combines the intuitiveness of 2D free-hand drawing with the capability of 3D manipulation of strokes, canvases and camera. We demonstrate the usability and efficiency of our system by describing the creation process of several short animation movies.
Computer Graphics Forum, 2008
This paper presents an online personalised non-photorealistic rendering (NPR) technique for 3D models generated from interactively sketched input. This technique has been integrated into a sketch-based modelling system. It lets users interact with computers by drawing naturally, without specifying the number, order, or direction of strokes. After sketches are interpreted as 3D objects, they can be rendered with personalised drawing styles, so that the reconstructed 3D model is presented in a sketchy style similar in appearance to what has been drawn. This technique captures the user's drawing style without using templates or prior knowledge of the sketching style. The personalised rendering style can be applied to both visible and initially invisible geometry. The rendering strokes are intelligently selected from the input sketches and mapped to edges of the 3D object. In addition, non-geometric information such as surface textures can be added to the recognised object in different sketching modes. Together, these integrate sketch-based incremental 3D modelling and NPR into conceptual design.
Proceedings of the 11th Pacific Conference on Computer Graphics and Applications, 2003
Two prominent issues in non-photorealistic rendering (NPR) are extracting feature points for placing strokes and maintaining frame-to-frame coherence and density of these feature points. Most existing NPR systems address these two issues by operating directly on objects, which can be not only expensive but also dependent on the representation. We present an interactive non-photorealistic rendering system, INSPIRE, which performs feature extraction both in image space, on intermediately rendered images, and in object space, on models of various representations, e.g., point, polygon, or hybrid models, without needing connectivity information. INSPIRE performs a two-step rendering process. The first step resembles traditional rendering with slight modifications; after it extracts feature points and their 3D properties in image and/or object space, the scene is often more efficient to render. The second step renders only these feature points, either by directly drawing simple primitives or by additionally performing texture mapping to obtain different NPR styles. In the second step, strategies are developed to preserve frame-to-frame coherence in animation. Because the computational overheads are small and all operations run in vertex and pixel shaders on widely available programmable graphics hardware, INSPIRE achieves interactive NPR rendering in most styles of existing NPR systems, while offering more flexibility in model representation and compromising little on rendering speed.
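INSPIRE's exact detectors are not given in the abstract; one minimal image-space stand-in, assuming a depth buffer is available, flags depth discontinuities as feature pixels:

    import numpy as np

    def depth_edges(depth, threshold=0.05):
        """Image-space feature extraction: mark pixels where the depth buffer
        jumps, a cheap, representation-independent silhouette detector."""
        gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
        gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
        return (gx + gy) > threshold

    # A flat background at depth 1.0 with a square object at depth 0.5:
    # feature pixels appear around the square's boundary.
    d = np.ones((8, 8)); d[2:6, 2:6] = 0.5
    print(depth_edges(d).astype(int))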
Proceedings of the 1999 Symposium on Interactive 3D Graphics (SI3D '99), 1999
A rendering is an abstraction that favors, preserves, or even emphasizes some qualities while sacrificing, suppressing, or omitting other characteristics that are not the focus of attention. Most computer graphics rendering activities have been concerned with photorealism, i.e., trying to emulate an image that looks like a high-quality photograph. This laudable goal is useful and appropriate in many applications, but not in technical illustration, where elucidation of structure and technical information is the preeminent motivation. This calls for a different kind of abstraction, in which technical communication is central, but art and appearance are still essential instruments toward this end. Work that has been done on computer-generated technical illustrations has focused on static images, and has not included all of the techniques used to hand draw technical illustrations. A paradigm for the display of technical illustrations in a dynamic environment is presented. This display environment includes all of the benefits of computer-generated technical illustrations, such as a clearer picture of shape, structure, and material composition than traditional computer graphics methods. It also includes the three-dimensional interactive strength of modern display systems. This is accomplished by using new algorithms for real-time drawing of silhouette curves, algorithms which solve a number of the problems inherent in previous methods. We incorporate current non-photorealistic lighting methods, and augment them with new shadowing algorithms based on accepted techniques used by artists and studies carried out in human perception. This paper, all of the images, and an MPEG video clip are available at
Smart Graphics, 2007
NPR Lenses is an interactive technique for producing expressive non-photorealistic renderings. It provides an intuitive visual interaction tool for illustrators, allowing them to seamlessly apply a large variety of emphasis techniques. Advantages of 3D scene manipulation are ...
Computer Graphics Forum, 1988
Traditional interactive drawing programs adopt a bottom-up approach, allowing the user to construct a picture by the use of discrete tools, for example, lines, circles, rectangles, and so on. This paper presents a different approach, which allows users to construct graphical objects by stretching and cutting existing objects. The representation is simply implemented, based on a ring of cubic Bezier curves and use of the de Casteljau algorithm.
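The de Casteljau algorithm the paper relies on is just repeated linear interpolation; a small generic sketch (not the paper's data structures) of evaluating one cubic segment of such a ring:

    def de_casteljau(ctrl, t):
        """Evaluate a cubic (or any-degree) Bezier curve at parameter t by
        repeated linear interpolation; the intermediate points are also what
        splits the curve in two for stretching/cutting operations."""
        pts = [tuple(p) for p in ctrl]
        while len(pts) > 1:
            pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                   for p, q in zip(pts, pts[1:])]
        return pts[0]

    # Midpoint of one cubic segment of the ring: (2.0, 1.5) for these controls.
    print(de_casteljau([(0, 0), (1, 2), (3, 2), (4, 0)], 0.5))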
Rendering algorithms have tended to restrict themselves to representing the effect of light sources on scenes as could be observed by the human eye. For certain applications, like teaching surgery and anatomy, somewhat more schematic renditions are called for. Such graphics tend to be line-oriented and encode other information than just the effect of light. In the absence of appropriate computer-based tools, such images are practically always drawn by hand by a scientific illustrator. In this paper, we study techniques for rendering what we call rich line drawings. We develop tools for selectively mapping attributes of the surfaces of an object onto the lines which depict it. This enables us to render images which encode only those properties which are needed for the application at hand.
This project discusses interactive non-photorealistic rendering techniques, split into two sections: outlining methods and shading methods. Three outlining methods were implemented: stencilling, front-face culling and ink-sketching. Stencilling uses the stencil buffer to create a mask for drawing the outline. Front-face culling uses edge localisation methods to draw the silhouette as well as the outline. Ink-sketching builds on front-face culling to make the edges look like they have been sketched with an ink pen. The second section discusses three shading methods: cell shading, 'simple' crosshatching and 'fine' crosshatching. The cell shading method uses 1D textures to make the shading of the model discrete. 'Simple' crosshatching uses textures to shade the polygons. 'Fine' crosshatching moves beyond current research and refines the 'simple' crosshatching to make it more accurate on models with low polygon counts. All of the work presented in this paper is designed to run in real time, at 24 to 60 frames per second.
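The 1D-texture trick amounts to quantizing the diffuse lighting term through a small lookup table; a CPU-side sketch of the idea (band values invented, not the project's actual textures):

    import numpy as np

    def cell_shade(normal, light_dir, levels=(0.2, 0.5, 1.0)):
        """Quantize the diffuse term into a few discrete bands, mimicking the
        1D-texture lookup used for cell shading on the GPU."""
        n = np.asarray(normal, float); n /= np.linalg.norm(n)
        l = np.asarray(light_dir, float); l /= np.linalg.norm(l)
        diffuse = max(np.dot(n, l), 0.0)
        idx = min(int(diffuse * len(levels)), len(levels) - 1)
        return levels[idx]

    # Normals at grazing, mid, and full illumination fall into the three bands.
    for n in ([1, 0, 0.2], [0, 1.5, 1], [0, 0, 1]):
        print(cell_shade(n, light_dir=[0, 0, 1]))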