2012, Computer Graphics Forum
This paper presents an analytic formulation for anti-aliased sampling of 2D polygons and 3D polyhedra. Our framework allows the exact evaluation of the convolution integral with a linear function defined on the polytopes. The filter is a spherically symmetric polynomial of any order, supporting approximations to refined variants such as the Mitchell-Netravali filter family. This enables high-quality rasterization of triangles and tetrahedra with linearly interpolated vertex values to regular and non-regular grids. A closed form solution of the convolution is presented and an efficient implementation on the GPU using DirectX and CUDA C is described.
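One small ingredient of such analytic rasterization can be illustrated directly: the integral of a linearly interpolated function over a triangle has a simple closed form, the triangle area times the mean of the vertex values. The sketch below (function and variable names are our own, not the paper's) shows only that ingredient, not the full filtered convolution with polynomial kernels:

```python
# Closed-form integral of a linear interpolant over a 2D triangle.
# Illustrative toy; the paper's framework handles polynomial filters
# of any order on polygons and polyhedra.

def tri_area(a, b, c):
    # Unsigned area from the 2D cross product of two edge vectors.
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])) / 2.0

def integrate_linear(a, b, c, fa, fb, fc):
    """Exact integral over triangle abc of the linear interpolant of fa, fb, fc."""
    return tri_area(a, b, c) * (fa + fb + fc) / 3.0
```

For example, integrating f(x, y) = x over the unit right triangle gives 1/6, which the formula reproduces exactly.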
2000
Very large polygonal models, used in a growing number of graphics applications today, are routinely generated by a variety of methods, such as surface reconstruction algorithms from 3D scanned data, isosurface construction algorithms from volumetric data, and photogrammetric methods from aerial photography. In this report we provide an overview of several closely related methods, developed during the last few years, to smooth, denoise, edit, compress, transmit, and animate very large polygonal models.
Proceedings of the 16th International Meshing Roundtable, 2008
Many modern research areas face the challenge of meshing level sets of sampled scalar functions. While many algorithms focus on ensuring geometric qualities of the output mesh, recent attention has been paid to building topologically accurate Delaunay conforming meshes of any level set from such volumetric data.
2004
Many applications dealing with geometry acquisition and processing produce polygonal meshes that carry artifacts like discretization noise. While there are many approaches to remove the artifacts by smoothing or filtering the mesh, they are not tailored to any specific application subject to certain restrictive objectives. We show how to incorporate smoothing schemes based on the general Laplacian approximation to satisfy all those objectives at the same time for the results of flow simulation in the application field of car manufacturing. In the presented application setting, the major restrictions come from the bounding volume of the flow simulation, the so-called installation space. In particular, clean mesh regions (without noise) should not be smoothed, while at the same time the installation space must not be violated by the smoothing of the noisy mesh regions. Additionally, aliasing effects at the boundary between clean and noisy mesh regions must be prevented. To address the f...
Various 3D acquisition, analysis, visualization and compression approaches sample surfaces of 3D shapes in a uniform fashion, without any attempt to align the samples with the sharp edges and corners of the original shape. Consequently, the interpolating triangle meshes chamfer these sharp features and thus exhibit a relatively large error in their vicinity. We introduce here two new filters that restore a large fraction of the straight or curved sharp edges missed by feature-insensitive sampling processes: (1) EdgeSharpener restores ...
ACM Transactions on Mathematical Software
Interpolation is a foundational concept in scientific computing and is at the heart of many scientific visualization techniques. There is usually a trade-off between the approximation capabilities of an interpolation scheme and its evaluation efficiency. For many applications, it is important for a user to navigate their data in real time. In practice, evaluation efficiency outweighs any incremental improvements in reconstruction fidelity. We first analyze, from a general standpoint, the use of compact piece-wise polynomial basis functions to efficiently interpolate data that is sampled on a lattice. We then detail our automatic code generation framework on both CPU and GPU architectures. Specifically, we propose a general framework that can produce a fast evaluation scheme by analyzing the algebro-geometric structure of the convolution sum for a given lattice and basis function combination. We demonstrate the utility and generality of our framework by providing fast implementations...
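The convolution sum the abstract refers to can be illustrated in 1D with a compact cubic B-spline basis: only the four basis functions whose support contains the evaluation point contribute. This is a hedged sketch with illustrative names, using the samples directly as coefficients (quasi-interpolation) rather than prefiltering them as a true interpolant would:

```python
# 1D sketch of a lattice convolution sum f(x) = sum_k c[k] * B(x - k)
# with the compact cubic B-spline basis (support [-2, 2]).

def cubic_bspline(t):
    # Centered cubic B-spline, piecewise polynomial of degree 3.
    t = abs(t)
    if t < 1.0:
        return (4.0 - 6.0 * t * t + 3.0 * t ** 3) / 6.0
    if t < 2.0:
        return (2.0 - t) ** 3 / 6.0
    return 0.0

def reconstruct(samples, x):
    """Evaluate the convolution sum at x, touching only the 4 non-zero terms."""
    total = 0.0
    for k in range(int(x) - 1, int(x) + 3):
        if 0 <= k < len(samples):
            total += samples[k] * cubic_bspline(x - k)
    return total
```

Because the cubic B-spline forms a partition of unity, reconstructing constant data returns the constant exactly; the framework in the paper generalizes this sum to other lattices and basis functions.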
2004
In a virtual sculpting environment, we manipulate objects as a set of volume elements (voxels). In order to start the sculpture process from a polygonal object, we have to discretize this object as a set of voxels. This step is called voxelization. Several voxelization methods have already been proposed, but none matches all of our criteria.
ACM Transactions on Graphics, 1996
This article introduces quadrature prefiltering, an accurate, efficient, and fairly simple algorithm for prefiltering polygons for scanline rendering. It renders very high-quality images at reasonable cost, strongly suppressing aliasing artifacts. For equivalent RMS error, quadrature prefiltering is significantly faster than either uniform or jittered supersampling. Quadrature prefiltering is simple to implement and space-efficient; it needs only a small two-dimensional lookup table, even when computing non-radially-symmetric filter kernels. Previous algorithms have required either three-dimensional tables or a restriction to radially symmetric filter kernels. Though only slightly more complicated to implement than the widely used box prefiltering method, quadrature prefiltering can generate images with far less visible aliasing artifacts.
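The contrast between prefiltering and supersampling can be sketched in a toy setting: for a half-plane edge and a box filter, the pixel coverage integral has an exact form, while point sampling only approximates it. The names below are illustrative, not from the paper, and real quadrature prefiltering tabulates more general per-edge filter integrals:

```python
# Toy comparison: coverage of the unit pixel [y0, y0+1] by the half-plane
# y <= c, computed by point sampling versus the exact box-filter integral.

def coverage_supersampled(c, y0, n=4):
    """Approximate coverage from n evenly spaced point samples."""
    hits = sum(1 for i in range(n) if y0 + (i + 0.5) / n <= c)
    return hits / n

def coverage_analytic(c, y0):
    """Exact box-filtered coverage: length of [y0, y0+1] lying below y = c."""
    return min(max(c - y0, 0.0), 1.0)
```

With the edge at c = 0.3, the analytic answer is 0.3 while four samples can only report 0.25; the analytic integral removes this quantization, which is the essence of prefiltering.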
Computer-Aided Design, 2004
We present an efficient algorithm to approximate the swept volume (SV) of a complex polyhedron along a given trajectory. Given the boundary description of the polyhedron and a path specified as a parametric curve, our algorithm enumerates a superset of the boundary surfaces of SV. It consists of ruled and developable surface primitives, and the SV corresponds to the outer boundary of their arrangement. We approximate this boundary by using a five-stage pipeline. This includes computing a bounded-error approximation of each surface primitive, computing unsigned distance fields on a uniform grid, classifying all grid points using fast marching front propagation, iso-surface reconstruction, and topological refinement. We also present a novel and fast algorithm for computing the signed distance of surface primitives as well as a number of techniques based on surface culling, fast marching level-set methods and rasterization hardware to improve the performance of the overall algorithm. We analyze different sources of error in our approximation algorithm and highlight its performance on complex models composed of thousands of polygons. In practice, it is able to compute a bounded-error approximation in tens of seconds for models composed of thousands of polygons sweeping along a complex trajectory.
2022 International Conference on Cyber-Physical Social Intelligence (ICCSI), 2022
A fast algorithm is established for generating first-order geometrically differentiable surfaces for arbitrary polygonal meshes using parametric bicubic spline interpolation. It can be applied to any polygonal mesh, since a quadrilateralization preprocessing step can be performed if not all polygons of the given mesh are quadrilaterals. If needed, the resulting first-order geometrically differentiable surface can provide point clouds of any desired density based on the available polygonal meshes. Various examples are demonstrated.
Proceedings of the 9th International Meshing …, 2000
We present an effective and easy-to-implement angle-based smoothing scheme for triangular, quadrilateral, and tri-quad mixed meshes. For each mesh node, our algorithm compares all pairs of adjacent angles incident to the node and adjusts these angles so that they become equal in the case of a triangular or quadrilateral mesh, or form the ideal ratio in the case of a tri-quad mixed mesh. The size and shape quality of the mesh after this smoothing algorithm is much better than after Laplacian smoothing. The proposed method is superior to Laplacian smoothing in that it reduces the risk of generating inverted elements and increases the uniformity of element sizes, yet its computational cost is much lower than that of optimization-based smoothing. To demonstrate the effectiveness of the algorithm, we compared the errors in approximating a given analytical surface by a set of bi-linear patches corresponding to a mesh with Laplacian smoothing and to a mesh with the proposed smoothing method. The experiments show that a mesh smoothed with our method has roughly 20% less approximation error.
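A sketch of the kind of angle-based update the abstract describes, under our own reading for the 2D triangular case (names and details are assumptions, not the paper's code): for each neighbor, the node is rotated about that neighbor so the connecting edge bisects the incident angle pair, and the candidate positions are averaged:

```python
import math

def _norm(a):
    # Wrap an angle to (-pi, pi].
    while a > math.pi:
        a -= 2.0 * math.pi
    while a <= -math.pi:
        a += 2.0 * math.pi
    return a

def angle_based_smooth(v, ring):
    """One angle-based smoothing step for an interior node v of a 2D
    triangular mesh, given its ring neighbors in counter-clockwise order."""
    candidates = []
    n = len(ring)
    for j in range(n):
        nj = ring[j]
        prev_, next_ = ring[j - 1], ring[(j + 1) % n]
        a_prev = math.atan2(prev_[1] - nj[1], prev_[0] - nj[0])
        a_next = math.atan2(next_[1] - nj[1], next_[0] - nj[0])
        a_v = math.atan2(v[1] - nj[1], v[0] - nj[0])
        # Rotate v about nj by half the difference of the two incident
        # angles, so edge nj-v bisects the angle prev_-nj-next_.
        theta = _norm(_norm(a_next - a_v) - _norm(a_v - a_prev)) / 2.0
        dx, dy = v[0] - nj[0], v[1] - nj[1]
        c, s = math.cos(theta), math.sin(theta)
        candidates.append((nj[0] + c * dx - s * dy, nj[1] + s * dx + c * dy))
    return (sum(p[0] for p in candidates) / n,
            sum(p[1] for p in candidates) / n)
```

On a regular hexagonal ring, a node at the center is a fixed point of this update, while a displaced node is pulled back toward the symmetric position; unlike Laplacian smoothing, the update is driven by angles rather than positions alone.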
Mathematical Notes of the Academy of Sciences of the USSR, 1975
ArXiv, 2021
Interpolation is a fundamental technique in scientific computing and is at the heart of many scientific visualization techniques. There is usually a trade-off between the approximation capabilities of an interpolation scheme and its evaluation efficiency. For many applications, it is important for a user to be able to navigate their data in real time. In practice, the evaluation efficiency (or speed) outweighs any incremental improvements in reconstruction fidelity. In this two-part work, we first analyze from a general standpoint the use of compact piece-wise polynomial basis functions to efficiently interpolate data that is sampled on a lattice. In the sequel, we detail how we generate efficient implementations via automatic code generation on both CPU and GPU architectures. Specifically, in this paper, we propose a general framework that can produce a fast evaluation scheme by analyzing the algebro-geometric structure of the convolution sum for a given lattice and basis function ...
Figure 1: An input coarse mesh (left) is subjected to tessellation using Curved PN Triangles (middle) and Phong Tessellation (right).
Over the past decade, graphics processing units (GPUs) have dominated the world of interactive computer graphics and visualization technology. Over the years, sophisticated tools and programming interfaces (such as the OpenGL and DirectX APIs) have greatly facilitated the work of developers, because these frameworks provide all the fundamental visualization algorithms. Our research target is to develop a traditional CPU-based pure-software rasterizer. Although it currently has many drawbacks compared with GPUs, modern multi-core systems and the diversity of platforms may require such implementations. In this paper we deal with triangle rasterization, the cornerstone of the rendering process. We present a new model optimization as well as improvements and combinations of existing techniques that dramatically improve the performance of the well-known half-space rasterization algorithm. The presented techniques are applicable in areas such as graphics editors and even computer games.
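The half-space algorithm mentioned above tests each candidate pixel against the three edge functions of the triangle. A minimal, unoptimized sketch (illustrative names; counter-clockwise winding and pixel-center sampling assumed) looks like this:

```python
# Minimal half-space triangle rasterizer: a pixel is covered when all
# three edge functions are non-negative for a counter-clockwise triangle.
# The optimizations the paper discusses (incremental edge stepping,
# tiling, etc.) are omitted for clarity.

def edge(ax, ay, bx, by, px, py):
    # Twice the signed area of (a, b, p); >= 0 means p lies on or left of a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(v0, v1, v2):
    """Return the set of integer pixels whose centers lie in the CCW triangle."""
    xs = [v[0] for v in (v0, v1, v2)]
    ys = [v[1] for v in (v0, v1, v2)]
    covered = set()
    for y in range(int(min(ys)), int(max(ys)) + 1):
        for x in range(int(min(xs)), int(max(xs)) + 1):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            if (edge(*v0, *v1, px, py) >= 0 and
                edge(*v1, *v2, px, py) >= 0 and
                edge(*v2, *v0, px, py) >= 0):
                covered.add((x, y))
    return covered
```

Production rasterizers evaluate the same three edge functions but update them incrementally while stepping across the bounding box, which is where most of the speedups the paper describes apply.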
IEEE Transactions on Visualization and Computer Graphics, 2005
Various acquisition, analysis, visualization and compression approaches sample surfaces of 3D shapes in a uniform fashion, without any attempt to align the samples with sharp edges or to adapt the sampling density to the surface curvature. Consequently, triangle meshes that interpolate these samples usually chamfer sharp features and exhibit a relatively large error in their vicinity. We present two new filters that improve the quality of these re-sampled models. EdgeSharpener restores the sharp edges by splitting the chamfer edges and forcing the new vertices to lie on intersections of planes extending the smooth surfaces incident upon these chamfers. Bender refines the resulting triangle mesh using an interpolating subdivision scheme that preserves the sharpness of the recovered sharp edges while bending their polyline approximations into smooth curves. A combined Sharpen&Bend post-processing significantly reduces the error produced by feature-insensitive sampling processes. For example, we have observed that the mean-squared distortion introduced by the SwingWrapper remeshing-based compressor can often be reduced by 80% executing EdgeSharpener alone after decompression. For models with curved regions, this error may be further reduced by an additional 60% if we follow the EdgeSharpening phase by Bender.
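The geometric core that EdgeSharpener relies on, intersecting the planes that extend the smooth surfaces around a chamfer, reduces to solving a small linear system: each surface contributes a plane n·x = d, and a restored corner is their common point. This is an illustrative reduction (three planes, Cramer's rule, our own names) rather than the paper's full procedure, which also handles the two-plane edge case:

```python
# Intersection point of three planes n_i . x = d_i via Cramer's rule.
# Assumes the three normals are linearly independent (det != 0).

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def intersect_planes(planes):
    """planes: list of three ((nx, ny, nz), d) pairs. Returns [x, y, z]."""
    n = [list(p[0]) for p in planes]
    d = [p[1] for p in planes]
    D = det3(n)
    point = []
    for col in range(3):
        m = [row[:] for row in n]        # replace one column with d
        for r in range(3):
            m[r][col] = d[r]
        point.append(det3(m) / D)
    return point
```

For example, the axis-aligned planes x = 1, y = 2, z = 3 intersect at (1, 2, 3).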
Graphical Models and Image Processing, 1999
We present an unstructured triangular mesh generation algorithm that approximates a set of mutually nonintersecting simple trimmed polynomial parametric surface patches within a user specified geometric tolerance. The proposed method uses numerically robust interval geometric representations/computations and also addresses the problem of topological consistency (homeomorphism) between the exact geometry and its approximation. Those are among the most important outstanding issues in geometry approximation problems. We also extract important differential geometric features of input geometry for use in the approximation. Our surface tessellation algorithm is based on the unstructured Delaunay mesh approach which leads to an efficient adaptive triangulation. A robust decision criterion is introduced to prevent possible failures in the conventional Delaunay triangulation. To satisfy the prescribed geometric tolerance, an adaptive node insertion algorithm is employed and furthermore, an efficient method to compute a tight upper bound of the approximation error is proposed. Unstructured triangular meshes for free-form surfaces frequently involve triangles with high aspect ratio and, accordingly, result in ill-conditioned meshing. Our proposed algorithm constructs 2D triangulation domains which sufficiently preserve the shape of triangles when mapped into 3D space and, furthermore, the algorithm provides an efficient method that explicitly controls the aspect ratio of the triangular elements.
Proceedings Computer Animation '98 (Cat. No.98EX169), 1998
Smoothing techniques are essential in producing high-quality renderings from polygonal objects that are described with a minimal amount of geometric information. In order to remove the "polygonal" aspect of rough polygonal meshes, several techniques are available, such as shading or interpolation techniques.
2015
In a virtual sculpting environment, we manipulate objects as a set of volume elements (voxels). In order to start the sculpture process from a polygonal object, we have to discretize this object as a set of voxels. This step is called voxelization. Several voxelization methods have already been proposed, but none matches all of our criteria. In this paper, we propose a practical approach that satisfies these criteria, based on an optimized ray casting. The method provides a voxelization of the inner space of a polygonal object, not only of its surface. It is designed to work with closed objects, which may contain holes or tunnels. Even though our goal is not real-time conversion, it is fast enough to voxelize objects made of several thousand triangles within a few seconds. As voxelization is a 3D sampling process, it entails aliasing problems. Our method allows a reduction of aliasing by using a low-cost oversampling in order to compute grey-scale values for the voxels...
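A 2D analogue of the parity-based filling that such a ray-casting voxelization performs can be sketched as follows (illustrative names, our own reduction; the 3D version casts a ray along each voxel column and intersects it with the mesh triangles instead of polygon edges):

```python
# 2D analogue of parity ray casting: a ray is cast through each row of
# cell centers, and cells are marked "inside" between successive
# polygon-boundary crossings (even-odd rule).

def fill_polygon(poly, width, height):
    """Rasterize the interior of a closed 2D polygon into a binary grid."""
    grid = [[0] * width for _ in range(height)]
    for row in range(height):
        y = row + 0.5                         # ray through the cell centers
        xs = []
        for i in range(len(poly)):
            (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % len(poly)]
            if (y0 <= y) != (y1 <= y):        # edge crosses the ray
                xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        for a, b in zip(xs[0::2], xs[1::2]):  # fill between crossing pairs
            for col in range(width):
                if a <= col + 0.5 <= b:
                    grid[row][col] = 1
    return grid
```

The even-odd parity is what makes the method fill the inner space, not just the surface, and it tolerates holes and tunnels because every boundary crossing flips the state.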
Electronics Letters, 2007
Closed-form solutions for lowpass filtering of 3D digital geometry are presented. Conventional signal processing is difficult to apply, since 3D digital geometry does not usually satisfy the prerequisites of regular sampling and global parameterisation. By representing digital geometry as a level set of a 3D scalar field fitted with Gaussians and polynomials, closed-form solutions of the convolution with Gaussian smoothing kernels are derived, and lowpass filtering is achieved analytically without the pitfalls of other approaches.
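The kind of closed form such an approach exploits can be shown in 1D: convolving a Gaussian of standard deviation s1 with a Gaussian smoothing kernel of width s2 yields a Gaussian of width sqrt(s1^2 + s2^2), so no numerical integration is needed. A hedged sketch (the letter itself works with Gaussian-plus-polynomial fits of a 3D scalar field, not this toy):

```python
import math

def gaussian(x, s):
    # Normalized 1D Gaussian with standard deviation s.
    return math.exp(-x * x / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))

def smoothed_gaussian(x, s1, s2):
    """Closed-form convolution (g_s1 * g_s2)(x): a Gaussian of width hypot(s1, s2)."""
    return gaussian(x, math.hypot(s1, s2))
```

Checking against a brute-force numerical convolution confirms the identity, which is why fitting the field with Gaussians makes analytic lowpass filtering tractable.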
2018
Image convolution with a filtering mask underlies several image analysis operations, motivated both by its mathematical foundations and by the straightforward way the discrete convolution can be computed on a grid-like domain. Extending the convolution operation to a mesh manifold is a challenging task due to the irregular structure of mesh connections. In this paper, we propose a computational framework that allows convolutional operations on the mesh. It relies on the idea of ordering the facets of the mesh so that a shift-like operation can be derived. Experiments performed with several filter masks (Sobel, Gabor, etc.) show state-of-the-art results in 3D relief-pattern retrieval on the SHREC'17 dataset. We also provide evidence that the proposed framework can enable convolution and pooling-like operations, as needed for extending Convolutional Neural Networks to 3D meshes.