2003
Abstract: Very large triangle meshes, i.e., meshes composed of millions of faces, are becoming common in many applications. Obviously, processing, rendering, transmission, and archiving of these meshes are not simple tasks. Mesh simplification and LOD management are a rather mature technology that, in many cases, can efficiently manage complex data. However, only a few available systems can manage meshes of truly huge size: RAM capacity is often a severe bottleneck.
2003
In this paper we show how out-of-core mesh processing techniques can be adapted to perform their computations based on the new processing sequence paradigm, using mesh simplification as an example. We believe that this processing concept will also prove useful for other tasks, such as parameterization, remeshing, or smoothing, for which currently only in-core solutions exist. A processing sequence represents a mesh as a particular interleaved ordering of indexed triangles and vertices. This representation allows streaming very large meshes through main memory while maintaining information about the visitation status of edges and vertices. At any time, only a small portion of the mesh is kept in-core, with the bulk of the mesh data residing on disk. Mesh access is restricted to a fixed traversal order, but full connectivity and geometry information is available for the active elements of the traversal. This provides seamless and highly efficient out-of-core access to very large meshes for algorithms that can adapt their computations to this fixed ordering. The two abstractions that are naturally supported by this representation are boundary-based and buffer-based processing. We illustrate both abstractions by adapting two different simplification methods to perform their computation using a prototype of our mesh processing sequence API. Both algorithms benefit from using processing sequences in terms of improved quality, more efficient execution, and smaller memory footprints.
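To make the idea concrete, below is a minimal sketch (not the authors' actual API) of a processing-sequence style reader in C++: triangles stream through in a fixed order, and a vertex stays in-core only until its last referencing triangle has been consumed. The assumption that each vertex record carries its total reference count is hypothetical and only serves to illustrate when a vertex can be evicted.

#include <array>
#include <cstddef>
#include <unordered_map>

struct StreamVertex {
    float x, y, z;
    int   remaining_refs;   // unread triangles that still use this vertex
};

class ProcessingSequence {
public:
    // Hypothetical input format: the total reference count tells the reader
    // when the vertex may be finalized and evicted from main memory.
    void addVertex(int id, float x, float y, float z, int total_refs) {
        active_[id] = {x, y, z, total_refs};
    }

    // Consume one triangle of the fixed traversal order.  Geometry and
    // connectivity of its three (still active) vertices are available here.
    template <class Visitor>
    void addTriangle(const std::array<int, 3>& v, Visitor&& visit) {
        visit(active_.at(v[0]), active_.at(v[1]), active_.at(v[2]));
        for (int id : v) {                       // finalize fully-used vertices
            if (--active_.at(id).remaining_refs == 0)
                active_.erase(id);               // bulk of the mesh stays on disk
        }
    }

    std::size_t inCoreVertices() const { return active_.size(); }

private:
    std::unordered_map<int, StreamVertex> active_;  // small in-core working set
};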
The SQuad data structure represents the connectivity of a triangle mesh by its "S table" of about 2 rpt (integer references per triangle). Yet it allows for a simple implementation of expected constant-time, random-access operators for traversing the mesh, including in-order traversal of the triangles incident upon a vertex. SQuad is more compact than the Corner Table (CT), which stores 6 rpt, and than the recently proposed SOT, which stores 3 rpt. However, in-core access is generally faster in CT than in SQuad, and SQuad requires rebuilding the S table if the connectivity is altered. The storage reduction and memory coherence opportunities it offers may help to reduce the frequency of page faults and cache misses when accessing elements of a mesh that does not fit in memory. We provide the details of a simple algorithm that builds the S table and of an optimized implementation of the SQuad operators.
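For reference, the following sketch shows the 6-rpt Corner Table baseline that SQuad compresses; the operators are the ones both structures must answer in (expected) constant time. SQuad's roughly 2-rpt S table itself is not reproduced here, and the swing convention shown is one common variant, not necessarily the paper's.

#include <vector>

struct CornerTable {
    std::vector<int> V;   // corner -> vertex index     (3 rpt)
    std::vector<int> O;   // corner -> opposite corner  (3 rpt), -1 on boundary

    static int tri (int c) { return c / 3; }                     // triangle of a corner
    static int next(int c) { return 3 * (c / 3) + (c + 1) % 3; } // next corner in triangle
    static int prev(int c) { return 3 * (c / 3) + (c + 2) % 3; } // previous corner

    int vertex  (int c) const { return V[c]; }
    int opposite(int c) const { return O[c]; }

    // Swing around the vertex of corner c to a corner of the next incident
    // triangle (in-order traversal of the triangles incident upon a vertex).
    int swing(int c) const {
        int o = O[next(c)];
        return (o < 0) ? -1 : next(o);
    }
};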
2020
Unstructured meshes are used in a variety of disciplines to represent simulations and experimental data. Scientists who want to increase accuracy of simulations by increasing resolution must also increase the size of the resulting dataset. However, generating and processing a extremely large unstructured meshes remains a barrier. Researchers have published many parallel Delaunay triangulation (DT) algorithms, often focusing on partitioning the initial mesh domain, so that each rectangular partition can be triangulated in parallel. However, the common problems for this method is how to merge all triangulated partitions into a single domain-wide mesh or the significant cost for communication the sub-region borders. We devised a novel algorithm-Triangulation of Independent Partitions in Parallel (TIPP) to deal with very large DT problems without requiring inter-processor communication while still guaranteeing the Delaunay criteria. The core of the algorithm is to find a set of independent partitions such that the circumcircles of triangles in one partition do not enclose any vertex in other partitions. For this reason, this set of independent partitions can be triangulated in parallel without affecting each other. The results of mesh generation is the large unstructured meshes including vertex index and vertex coordinate files which introduce a new challenge-locality. Partitioning unstructured meshes to improve locality is a key part of our own approach. Elements that were widely scattered in the original dataset are grouped together, speeding data access. For further improve unstructured mesh partitioning, we also described our new approach Direct Load which mitigates the challenges of unstructured meshes by maximizing the proportion of useful data retrieved during each read from disk, which in turn reduces the total number of read operations, boosting performance.
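The independence criterion above rests on the standard Delaunay in-circle test. The sketch below shows that predicate only (positive when point d lies strictly inside the circumcircle of the counter-clockwise triangle a, b, c); it is not TIPP's partitioning logic, and a production implementation would use exact arithmetic (e.g., Shewchuk's robust predicates) rather than plain doubles.

#include <array>

using Pt = std::array<double, 2>;

// In-circle determinant: > 0 when d is inside the circumcircle of CCW triangle (a, b, c).
double inCircle(const Pt& a, const Pt& b, const Pt& c, const Pt& d) {
    const double ax = a[0] - d[0], ay = a[1] - d[1];
    const double bx = b[0] - d[0], by = b[1] - d[1];
    const double cx = c[0] - d[0], cy = c[1] - d[1];
    return (ax * ax + ay * ay) * (bx * cy - by * cx)
         - (bx * bx + by * by) * (ax * cy - ay * cx)
         + (cx * cx + cy * cy) * (ax * by - ay * bx);
}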
We propose Zipper, a compact representation of incidence and adjacency for manifold triangle meshes with fixed connectivity. Zipper uses on average only 6 bits per triangle, can be constructed in linear space and time, and supports all standard random-access and mesh traversal operators in constant time. Similarly to the previously proposed LR (Laced Ring) approach, the Zipper construction reorders vertices and triangles along a nearly Hamiltonian cycle called the ring. The 4.4x storage reduction of Zipper over LR results from three contributions: (1) For most triangles, Zipper stores a 2-bit delta (plus three additional bits) rather than a full 32-bit reference. (2) Zipper modifies the ring to reduce the number of exceptional triangles. (3) Zipper encodes the remaining exceptional triangles using 2.5x less storage. In spite of these large savings in storage, we show that Zipper offers comparable performance to LR and other data structures in mesh processing applications. Zipper may also serve as a compact indexed format for rendering meshes, and hence is valuable even in applications that do not require adjacency information.
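The sketch below illustrates only the bit-packing idea behind replacing 32-bit references with 2-bit per-triangle deltas; the actual Zipper layout (the extra per-triangle bits, exception encoding, and ring rewriting) is not reproduced, and the class name is purely illustrative.

#include <cstddef>
#include <cstdint>
#include <vector>

class DeltaArray {
public:
    void push(uint8_t delta) {                 // delta in [0, 3]
        const std::size_t bit = n_ * 2;
        if (bit % 64 == 0) words_.push_back(0);
        words_[bit / 64] |= uint64_t(delta & 0x3) << (bit % 64);
        ++n_;
    }
    uint8_t operator[](std::size_t i) const {  // constant-time random access
        const std::size_t bit = i * 2;
        return (words_[bit / 64] >> (bit % 64)) & 0x3;
    }
private:
    std::vector<uint64_t> words_;              // 32 deltas per 64-bit word
    std::size_t n_ = 0;
};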
Computer Graphics Forum, 2012
The constantly increasing complexity of polygonal models in interactive applications poses two major problems. First, the number of primitives that can be rendered at real-time frame rates is currently limited to a few million. Second, fewer than 45 million triangles (with vertices and normals) can be stored per gigabyte. Although rendering time can be reduced using level-of-detail (LOD) algorithms, which represent a model at different complexity levels, these often even increase memory consumption. Out-of-core algorithms solve this problem by transferring the data currently required for rendering from external devices. Compression techniques are commonly used because of the limited bandwidth. The main problem of compression and decompression algorithms is that they offer only coarse-grained random access. A similar problem occurs in view-dependent LOD techniques: because of the interdependency of split operations, the adaptation rate is reduced, leading to visible popping artefacts during fast movements. In this paper, we propose a novel algorithm for real-time view-dependent rendering of gigabyte-sized models. It is based on a neighbourhood-dependency-free progressive mesh data structure. Using a per-operation compression method, it is suitable for parallel random-access decompression and out-of-core memory management without storing decompressed data.
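As a rough illustration of what "per-operation" encoding implies, the record below shows a fixed-size, independently decodable vertex-split record of the kind a dependency-free progressive mesh could use for parallel random-access decompression. Field names and widths are assumptions for illustration, not the paper's actual format.

#include <cstdint>

struct VertexSplit {
    uint32_t parent;          // vertex to split
    int16_t  dx, dy, dz;      // quantized offset of the new vertex
    uint16_t left_neighbor;   // local indices delimiting the new triangles
    uint16_t right_neighbor;
};
static_assert(sizeof(VertexSplit) <= 16, "records stay small and fixed-size");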
2002
Abstract: The growing availability of massive models and the inability of most existing visualization tools to work with them require efficient new methods for massive mesh simplification. In this paper, we present a completely adaptive, virtual-memory-based simplification algorithm for large polygonal datasets. The algorithm is an enhancement of RSimp [2], enabling out-of-core simplification without reducing the output quality of the original algorithm.
Journal of Information Science and Engineering, 2006
Traditional iterative-contraction-based polygonal mesh simplification (PMS) algorithms usually require enormous amounts of main memory when processing large meshes. On the other hand, fast out-of-core algorithms based on the grid re-sampling scheme usually produce low-quality output. In this paper, we propose a novel cache-based approach to large polygonal mesh simplification. The new approach introduces the
Current mesh compression schemes encode triangles and vertices in an order derived from systematically traversing the connectivity graph. These schemes struggle with gigabyte-sized mesh input where the construction and the usage of the data structures that support topological traversal queries become I/O-inefficient and require large amounts of temporary disk space. Furthermore, they expect the entire mesh as input. Since meshes cannot be compressed until their generation is complete, they have to be stored at least once in uncompressed form. We radically depart from the traditional approach to mesh compression and propose a scheme that incrementally encodes a mesh in the order it is given to the compressor, using only minimal memory resources. This makes the compression process essentially transparent to the user and practically independent of the mesh size. This is especially beneficial for compressing large meshes, where previous approaches spend significant memory, disk, and I/O resources on pre-processing, whereas our scheme starts compressing after receiving the first few triangles.
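The sketch below shows only the shape of such an incremental interface: the mesh is handed to the compressor element by element while it is being generated, so it never needs to be stored uncompressed. The class and method names are assumptions, and the raw fwrite calls merely stand in for the paper's actual connectivity prediction and entropy coding.

#include <array>
#include <cstdio>

class StreamingCompressor {
public:
    explicit StreamingCompressor(std::FILE* out) : out_(out) {}

    // Called as soon as the generating application produces a vertex.
    int addVertex(const std::array<float, 3>& p) {
        // A real coder would predict and entropy-code the position here.
        std::fwrite(p.data(), sizeof(float), 3, out_);
        return nextVertex_++;
    }

    // Called as soon as a triangle referencing already-seen vertices exists.
    void addTriangle(int v0, int v1, int v2) {
        // A real coder would encode connectivity relative to recent vertices.
        const int idx[3] = {v0, v1, v2};
        std::fwrite(idx, sizeof(int), 3, out_);
    }

private:
    std::FILE* out_;
    int nextVertex_ = 0;
};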
International Journal of Modelling and Simulation, 2005
IEEE Transactions on Visualization and Computer Graphics, 2014
Fig. 1. Grouper represents a triangle mesh as groups of vertices and triangles stored as fixed-size records, most of which encode two adjacent triangles and one incident vertex. A VTT group (tan: 93%) represents one vertex and two adjacent triangles incident upon it; a VT group (blue: 3%) represents one vertex and one incident triangle; a T group (red: 4%) represents one triangle; and a V group (black: 1%) represents one vertex. Thick edges separate groups; thin edges separate triangles within the same group.
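One way to picture the four group types from the caption is as a tagged, fixed-size record; the layout below is an assumption for illustration only, not Grouper's actual encoding. The point it conveys is that the dominant VTT records amortize one vertex over two adjacent triangles.

#include <cstdint>

enum class GroupType : uint8_t { VTT, VT, T, V };

struct Group {
    GroupType type;           // which of the four group kinds this record is
    float     x, y, z;        // vertex position (unused for a pure T group)
    uint32_t  corner[4];      // references to neighboring groups/corners;
                              // how many are meaningful depends on the type
};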
Proceedings of the 21st International Meshing Roundtable, 2013
Proceedings of Visualization 2001 (VIS '01), 2001
Proceedings Computer Graphics International 2000
IEEE Transactions on Visualization and Computer Graphics, 1999
Proceedings of the ACM on Computer Graphics and Interactive Techniques, 2018
Indian Conference on Computer Vision, Graphics & Image Processing, 2002
IEEE Sixth International Symposium on Multimedia Software Engineering, 2004
Computer Graphics Forum, 2012
Computer graphics and interactive techniques in Australasia and South East Asia, 2003
Proceedings of the Twelfth Workshop on Algorithm Engineering and Experiments (ALENEX), 2010