2012, Computer Graphics Forum
The constantly increasing complexity of polygonal models in interactive applications poses two major problems. First, the number of primitives that can be rendered at real-time frame rates is currently limited to a few million. Second, fewer than 45 million triangles (with vertices and normals) can be stored per gigabyte. Although rendering time can be reduced using level-of-detail (LOD) algorithms, which represent a model at different complexity levels, these often even increase memory consumption. Out-of-core algorithms solve this problem by transferring the data currently required for rendering from external devices. Compression techniques are commonly used because of the limited bandwidth. The main problem with compression and decompression algorithms is that they offer only coarse-grained random access. A similar problem occurs in view-dependent LOD techniques: because of the interdependency of split operations, the adaptation rate is reduced, leading to visible popping artefacts during fast movements. In this paper, we propose a novel algorithm for real-time view-dependent rendering of gigabyte-sized models. It is based on a neighbourhood-dependency-free progressive mesh data structure. Using a per-operation compression method, it is suitable for parallel random-access decompression and out-of-core memory management without storing decompressed data.
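To make the random-access idea concrete, below is a minimal Python sketch (not the paper's actual format) of a fixed-size, per-operation split record: since every encoded operation occupies the same number of bytes, operation i can be located and decoded independently of all others, which is what enables parallel decompression. The field layout, sizes, and names are illustrative assumptions.

```python
import struct

# Hypothetical fixed-size record for one vertex-split operation: every record
# occupies RECORD_SIZE bytes, so operation i lives at byte offset
# i * RECORD_SIZE and can be decoded without touching its neighbours.

RECORD_SIZE = 16  # bytes per operation (assumed layout, not the paper's)

def encode_split(parent_id, quantized_delta, ring_mask):
    """Pack one split: 4-byte parent index, three 2-byte quantized position
    deltas, and a 2-byte bitmask for the fan triangles moved to the new vertex."""
    dx, dy, dz = quantized_delta
    return struct.pack("<IhhhHxxxx", parent_id, dx, dy, dz, ring_mask)

def decode_split(blob, i):
    """Random-access decode of operation i: no other record is read."""
    off = i * RECORD_SIZE
    parent_id, dx, dy, dz, ring_mask = struct.unpack_from("<IhhhH", blob, off)
    return parent_id, (dx, dy, dz), ring_mask

ops = b"".join(encode_split(p, (p, -p, 2 * p), 0b1010) for p in range(4))
print(decode_split(ops, 2))  # -> (2, (2, -2, 4), 10)
```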
The complexity of polygonal models is growing faster than the ability of graphics hardware to render them in real time. If a scene contains many models and textures, it is often also not possible to store the entire geometry in graphics memory. A common way to deal with such models is to use multiple levels of detail (LODs), which represent a model at different complexity levels. With view-dependent progressive meshes it is possible to render complex models in real time, but the whole progressive model must fit into graphics memory. To solve this problem, out-of-core algorithms have to be used to load mesh data from external storage devices. Hierarchical level-of-detail (HLOD) algorithms are a common solution, but they have numerous disadvantages. In this paper, we combine the advantages of view-dependent progressive meshes and HLODs by proposing a new algorithm for real-time view-dependent rendering of huge models. Using a spatial hierarchy, we extend parallel view-dependent progressive meshes to support out-of-core rendering. In addition, we present a compact data structure for progressive meshes, optimized for parallel GPU processing and out-of-core memory management.
IEEE Transactions on …, 2003
Very large triangle meshes, i.e. meshes composed of millions of faces, are becoming common in many applications. Obviously, processing, rendering, transmission, and archival of these meshes are not simple tasks. Mesh simplification and LOD management are a rather mature technology that in many cases can efficiently manage complex data. But only a few available systems can manage meshes characterized by a huge size: RAM size is often a severe bottleneck. In this paper we present a data structure called Octree-based External Memory Mesh (OEMM). It supports external-memory management of complex meshes, dynamically loading into main memory only the selected sections and preserving data consistency during local updates. The functionalities implemented on this data structure (simplification, detail preservation, mesh editing, visualization, and inspection) can be applied to huge triangle meshes on low-cost PC platforms. The time overhead due to the external memory management is affordable. Results of tests of our system on complex meshes are presented.
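A toy sketch of the octree bucketing underlying such a structure: vertices are binned into the leaf cells of a uniform grid (the leaf level of a complete octree), so an editing operation needs only the handful of cells it touches to be resident in main memory. The dict standing in for per-leaf files is an assumption for illustration.

```python
import numpy as np

def octree_cells(vertices, bbox_min, bbox_max, depth):
    """Assign every vertex to a leaf cell of a uniform 2**depth grid, i.e. the
    leaf level of a complete octree over the bounding box. In an external-memory
    setting each cell's contents would live in its own file on disk."""
    n = 1 << depth
    scale = n / (bbox_max - bbox_min)
    ijk = np.clip(((vertices - bbox_min) * scale).astype(int), 0, n - 1)
    cells = {}
    for idx, key in enumerate(map(tuple, ijk)):
        cells.setdefault(key, []).append(idx)
    return cells  # cell -> list of vertex indices (stand-in for on-disk blocks)

pts = np.random.rand(1000, 3)
cells = octree_cells(pts, np.zeros(3), np.ones(3), depth=2)
print(len(cells), "leaf cells;", sum(map(len, cells.values())), "vertices")
```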
Generating subdivision surfaces from polygonal meshes requires the complete topological information of the original mesh in order to find the neighbouring faces and vertices used in the subdivision computations. Normally, winged-edge-type data structures are used to maintain such information about a mesh. For rendering meshes, most of the topological information is irrelevant, and winged-edge-type data structures are inefficient due to their extensive use of dynamic data structures. A standard approach is the extraction of a rendering mesh from the winged-edge-type data structure, thereby increasing the memory footprint significantly. We introduce a mesh data structure that is efficient for both tasks: creating subdivision surfaces as well as fast rendering. The new data structure maintains full topological information in an efficient and easily accessible manner, with all information necessary for rendering optimally suited for current graphics hardware. This is possible by dis...
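One way to picture such a dual-purpose structure, sketched below under the assumption of a triangle-only mesh: the flat index buffer that the GPU consumes directly also drives a one-time construction of per-vertex one-ring adjacency, giving subdivision code its neighbourhood queries without a pointer-heavy winged-edge structure. Names are illustrative.

```python
def build_one_rings(num_vertices, triangles):
    """triangles: flat list [a0,b0,c0, a1,b1,c1, ...], exactly as it would be
    sent to the GPU. Returns, per vertex, the sorted list of adjacent vertices,
    which is the neighbourhood query subdivision schemes need."""
    rings = [set() for _ in range(num_vertices)]
    for t in range(0, len(triangles), 3):
        a, b, c = triangles[t:t + 3]
        rings[a].update((b, c))
        rings[b].update((a, c))
        rings[c].update((a, b))
    return [sorted(r) for r in rings]

# Two triangles sharing the edge (1, 2):
tris = [0, 1, 2,  2, 1, 3]
print(build_one_rings(4, tris))  # vertices 1 and 2 each see three neighbours
```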
Highly detailed geometric models are rapidly becoming commonplace in computer graphics. These models, often represented as complex triangle meshes, challenge rendering performance, transmission bandwidth, and storage capacities. This paper introduces the progressive mesh (PM) representation, a new scheme for storing and transmitting arbitrary triangle meshes. This efficient, lossless, continuous-resolution representation addresses several practical problems in graphics: smooth geomorphing of level-of-detail approximations, progressive transmission, mesh compression, and selective refinement. In addition, we present a new mesh simplification procedure for constructing a PM representation from an arbitrary mesh. The goal of this optimization procedure is to preserve not just the geometry of the original mesh, but more importantly its overall appearance as defined by its discrete and scalar appearance attributes such as material identifiers, color values, normals, and texture coordinates. We demonstrate construction of the PM representation and its applications using several practical models.
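A toy illustration of the two PM primitives (not Hoppe's exact encoding): an edge collapse merges vertex u into v and drops triangles that become degenerate; the stored record lets the inverse vertex split restore them, so the mesh can be replayed at any resolution between the base mesh and the original.

```python
def edge_collapse(triangles, u, v):
    """Collapse u -> v; return the coarser mesh and an undo record."""
    removed = [t for t in triangles if u in t]
    kept = [t for t in triangles if u not in t]
    for tri in removed:
        t = tuple(v if x == u else x for x in tri)
        if len(set(t)) == 3:          # skip triangles collapsed to an edge
            kept.append(t)
    return kept, (u, v, removed)

def vertex_split(triangles, record):
    """Undo a collapse: drop the remapped triangles, restore the originals."""
    u, v, removed = record
    remapped = {tuple(v if x == u else x for x in t) for t in removed}
    return [t for t in triangles if t not in remapped] + removed

tris = [(0, 1, 2), (1, 3, 2), (1, 4, 3)]
coarse, rec = edge_collapse(tris, u=1, v=2)
print(coarse)                        # [(2, 4, 3)]
print(vertex_split(coarse, rec))     # the original three triangles
```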
IEEE Visualization, 2002. VIS 2002., 2002
Figure 1: Locality and continuity-preserving vertex sequences at different resolutions: (a) n=2,048 vertices. (b) After 1,024 edge collapses, n=1,024 vertices. (c) After another 512 edge collapses, n=512 vertices.
Proceedings Visualization, 2001. VIS '01., 2001
We present an algorithm that uses partitioning and gluing to compress large triangular meshes which are too complex to fit in main memory. The algorithm is based largely on existing mesh compression algorithms, most of which require an 'in-core' representation of the input mesh. Our solution is to partition the mesh into smaller submeshes and compress these submeshes separately using existing mesh compression techniques. Since a direct partition of the input mesh is out of the question, we instead partition a simplified mesh and use the partition of the simplified model to obtain a partition of the original model. In order to recover the full connectivity, we present a simple scheme for encoding/decoding the boundary structure resulting from the mesh partition. When compressing large models with few singular vertices, a negligible portion of the compressed output is devoted to gluing information. On desktop computers, we have run experiments on models with millions of vertices, which could not be compressed using standard compression software packages, and have observed compression ratios as high as 17 to 1 using our technique.
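A small sketch of the partition-and-glue idea: cut the triangle set into submeshes, re-index each one locally so it can be compressed independently, and keep a table of the boundary (glue) vertices shared across cuts so the pieces can be stitched back together after decompression. The data layout is an illustrative assumption, not the paper's encoding.

```python
def partition(triangles, part_of):
    """part_of[t] gives the submesh id of triangle t. Returns, per submesh,
    (local_triangles, local_to_global) plus the set of glue vertices."""
    subs = {}
    for t, tri in enumerate(triangles):
        subs.setdefault(part_of[t], []).append(tri)
    owner, glue, pieces = {}, set(), {}
    for pid, tris in subs.items():
        l2g, g2l, local = [], {}, []
        for tri in tris:
            for v in tri:
                if v not in g2l:                 # first local occurrence
                    g2l[v] = len(l2g)
                    l2g.append(v)
                    if v in owner and owner[v] != pid:
                        glue.add(v)              # shared across the cut
                    owner.setdefault(v, pid)
            local.append(tuple(g2l[v] for v in tri))
        pieces[pid] = (local, l2g)
    return pieces, glue

tris = [(0, 1, 2), (1, 3, 2), (3, 4, 2)]
pieces, glue = partition(tris, part_of=[0, 0, 1])
print(sorted(glue))  # vertices on the boundary between the pieces: [2, 3]
```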
2002
In this paper we present a novel approach for rendering large datasets in a view-dependent manner. In a typical view-dependent rendering framework, an appropriate level of detail is selected and sent to the graphics hardware for rendering at each frame. In our approach, we have successfully managed to speed up both the selection of the level of detail and the rendering of the selected levels. We accelerate the selection of the appropriate level of detail by not scanning active nodes that do not contribute to the incremental update of the selected level of detail. Our idea is based on imposing a spatial subdivision over the view-dependence trees data structure, which allows spatial tree cells to refine and merge during real-time rendering to comply with changes in the active-nodes list. The rendering of the selected level of detail is accelerated by using vertex arrays. To cope with dynamic changes in the selected levels of detail, we use multiple small vertex arrays whose sizes depend on the memory of the graphics hardware. These vertex arrays are attached to the active cells of the spatial tree and represent the active nodes of these cells. The vertex arrays, which are sent to the graphics hardware at each frame, merge and split in response to changes in the cells of the spatial tree.
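A hedged sketch of the per-cell vertex arrays: each spatial cell keeps its active vertices packed in fixed-capacity arrays that can be handed to the GPU as-is, and an array is split when a cell's active set outgrows the cap (small neighbours could likewise be merged). The capacity and layout are assumptions for illustration.

```python
import numpy as np

CAP = 4  # vertices per array; real sizes would follow graphics-memory limits

def pack_cell(positions, active_ids):
    """Split one cell's active vertices into arrays of at most CAP vertices,
    each a contiguous float32 block ready for upload as a vertex array."""
    arrays = []
    for i in range(0, len(active_ids), CAP):
        chunk = active_ids[i:i + CAP]
        arrays.append(np.asarray([positions[j] for j in chunk], np.float32))
    return arrays

pos = np.random.rand(10, 3)
arrays = pack_cell(pos, active_ids=[0, 2, 3, 5, 7, 8])
print([a.shape for a in arrays])  # [(4, 3), (2, 3)]
```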
Very large irregular-grid volume datasets are typically represented as tetrahedral meshes and require substantial disk I/O and rendering computation. One effective way to reduce this demanding resource requirement is compression. Previous research showed how rendering and decompression of a losslessly compressed irregular-grid dataset can be integrated into a one-pass computation. This work advances the state of the art one step further by showing that a losslessly compressed irregular volume dataset can be simplified while it is being decompressed, and that simplification, decompression, and rendering can again be integrated into a pipeline that requires only a single pass through the dataset. In particular, this rendering pipeline can exploit a multi-resolution representation to maintain interactivity on a given hardware/software platform by automatically adjusting the amount of rendering computation that can be afforded, performing so-called time-critical rendering. As a pro...
Proceedings Visualization 2000. VIS 2000 (Cat. No.00CH37145), 2000
Very large irregular-grid datasets are represented as tetrahedral meshes and may incur significant disk I/O overhead in the rendering process. An effective way to alleviate this overhead is to reduce the I/O bandwidth requirement through compression. Existing tetrahedral mesh compression algorithms focus only on compression efficiency and cannot be readily integrated into the mesh rendering process; they thus demand that a compressed tetrahedral mesh be decompressed before it can be rendered into a 2D image. This paper presents an integrated tetrahedral mesh compression and rendering algorithm called Gatun, which allows compressed tetrahedral meshes to be rendered incrementally as they are being decompressed, thus forming an efficient irregular-grid rendering pipeline. Both the compression and rendering algorithms in Gatun exploit the same local connectivity information among adjacent tetrahedra, and thus can be tightly integrated into a unified implementation framework. Our tetrahedral compression algorithm is specifically designed to facilitate integration with an irregular-grid renderer without any compromise in compression efficiency. A unique performance advantage of Gatun is its ability to reduce the run-time memory footprint by releasing memory allocated to tetrahedra as early as possible. As a result, Gatun is able to decrease rendering time by a factor of 2 for very large tetrahedral meshes whose size exceeds the amount of physical memory. At the same time, Gatun's smaller working set and better access locality improve rendering performance by up to 30%, even when the input tetrahedral mesh is entirely memory-resident.
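A minimal sketch of the one-pass pipeline idea: a generator decodes one tetrahedron at a time, the renderer consumes it immediately, and the decoded data becomes garbage-collectable right after use, so the working set stays nearly constant regardless of mesh size. The record format and render step are stand-ins, not Gatun's actual algorithms.

```python
def decompress(record):
    return {"cell": record}                  # placeholder for entropy decoding

def render(tet):
    pass                                     # placeholder for cell projection

def decode_stream(records):
    for rec in records:                      # e.g. read incrementally from disk
        yield decompress(rec)                # one tetrahedron, decoded lazily

for tet in decode_stream(range(1_000_000)):
    render(tet)                              # `tet` is released after this call
```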
IEEE Transactions on Visualization and Computer Graphics, 2005
We present a novel approach for interactive view-dependent rendering of massive models. Our algorithm combines view-dependent simplification, occlusion culling, and out-of-core rendering. We represent the model as a clustered hierarchy of progressive meshes (CHPM). We use the cluster hierarchy for coarse-grained selective refinement and the progressive meshes for fine-grained local refinement. We present an out-of-core algorithm for computation of a CHPM that includes cluster decomposition, hierarchy generation, and simplification. We introduce novel cluster dependencies in the preprocess to generate crack-free, drastic simplifications at runtime. The clusters are used for LOD selection, occlusion culling, and out-of-core rendering. We add a frame of latency to the rendering pipeline to fetch newly visible clusters from disk and avoid stalls. The CHPM reduces the refinement cost of view-dependent rendering by more than an order of magnitude compared to a vertex hierarchy. We have implemented our algorithm on a desktop PC. We can render massive CAD, isosurface, and scanned models, consisting of tens to hundreds of millions of triangles, at 15-35 frames per second with little loss in image quality.
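A hedged sketch of coarse-grained refinement over such a cluster hierarchy: a cluster is refined while its object-space error projects too large on screen, but only if its children are already resident; otherwise the coarser cluster is drawn this frame and the children are queued for the fetch thread, which mirrors the frame-of-latency behaviour described above. The fields, error projection, and thresholds are illustrative assumptions, not the paper's exact criteria.

```python
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_clusters(root, eye, tol, resident, fetch_queue):
    selected, stack = [], [root]
    while stack:
        c = stack.pop()
        err = c["error"] / max(distance(c["center"], eye), 1e-6)
        kids = c["children"]
        if err > tol and kids and all(k["id"] in resident for k in kids):
            stack.extend(kids)            # refine into resident children
        else:
            selected.append(c)            # draw at this (possibly coarse) LOD
            if err > tol and kids:
                fetch_queue.extend(k["id"] for k in kids)  # ready next frame
    return selected

leaf_a = {"id": 1, "error": 0.1, "center": (0, 0, 0), "children": []}
leaf_b = {"id": 2, "error": 0.1, "center": (1, 0, 0), "children": []}
root = {"id": 0, "error": 5.0, "center": (0.5, 0, 0),
        "children": [leaf_a, leaf_b]}
queue = []
sel = select_clusters(root, eye=(0, 0, 2), tol=0.5,
                      resident={0, 1}, fetch_queue=queue)
print([c["id"] for c in sel], queue)  # [0] [1, 2]: drawn coarse, fetch queued
```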
Proceedings Shape Modeling Applications, 2004., 2004
Multiresolution geometry streaming has been well studied in recent years. The client can progressively visualize a triangle mesh from the coarsest resolution to the finest one while a server successively transmits detail information. However, the streaming order of the detail data usually depends only on geometric importance, since the streaming essentially replays a mesh simplification process backwards. Consequently, the resolution of the model changes globally during streaming, even if the client does not want to download detail information for parts that are invisible from a given viewpoint.
Eurographics Workshop on Parallel Graphics and Visualization, 2012
In this research we tackle the problem of rendering complex models that are created using implicit primitives, blending operators, affine transformations, and constructive solid geometry in a design environment that organizes all of these in a scene-graph data structure called the BlobTree. We propose a fast, scalable, parallel polygonization algorithm for BlobTrees that takes advantage of multicore processors and the SIMD optimization techniques available on modern architectures. Efficiency is achieved through the use of spatial data structures and SIMD optimizations for BlobTree traversal and the computation of mesh vertices and other attributes. Our solution delivers interactive visualization for modeling systems based on the BlobTree scene graph.
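A hedged sketch of the data-parallel flavour of such polygonization: the implicit field of a blend of point primitives is evaluated for a whole batch of grid samples at once, with numpy's vectorization standing in for SIMD lanes. The falloff function and blend are generic soft-object choices, not the paper's BlobTree operators.

```python
import numpy as np

def field(points, centres, radii):
    """Sum-blend of point primitives with a smooth polynomial falloff,
    evaluated batch-wise. points: (N, 3); centres: (M, 3); radii: (M,)."""
    d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    t = np.clip(d2 / radii[None, :] ** 2, 0.0, 1.0)
    return ((1.0 - t) ** 3).sum(axis=1)     # zero outside each primitive

pts = np.random.rand(4096, 3)
centres = np.array([[0.3, 0.5, 0.5], [0.7, 0.5, 0.5]])
vals = field(pts, centres, np.array([0.25, 0.25]))
inside = vals > 0.5                          # iso-level picks the surface
print(inside.sum(), "of", len(pts), "samples inside the iso-surface")
```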
2003
In this paper we show how out-of-core mesh processing techniques can be adapted to perform their computations based on the new processing sequence paradigm, using mesh simplification as an example. We believe that this processing concept will also prove useful for other tasks, such as parameterization, remeshing, or smoothing, for which currently only in-core solutions exist. A processing sequence represents a mesh as a particular interleaved ordering of indexed triangles and vertices. This representation allows streaming very large meshes through main memory while maintaining information about the visitation status of edges and vertices. At any time, only a small portion of the mesh is kept in-core, with the bulk of the mesh data residing on disk. Mesh access is restricted to a fixed traversal order, but full connectivity and geometry information is available for the active elements of the traversal. This provides seamless and highly efficient out-of-core access to very large meshes for algorithms that can adapt their computations to this fixed ordering. The two abstractions that are naturally supported by this representation are boundary-based and buffer-based processing. We illustrate both abstractions by adapting two different simplification methods to perform their computation using a prototype of our mesh processing sequence API. Both algorithms benefit from using processing sequences in terms of improved quality, more efficient execution, and smaller memory footprints.
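A small sketch of the processing-sequence idea: triangles and vertices arrive interleaved in a fixed traversal order, a vertex is finalized once no later triangle references it, and only unfinalized vertices stay in-core. The event tuples below are an assumed serialization, not the actual API of the paper's prototype.

```python
def stream(events):
    """events: ('v', id, position), ('t', a, b, c), or ('final', id)."""
    incore = {}
    for ev in events:
        if ev[0] == 'v':
            incore[ev[1]] = ev[2]              # vertex becomes active
        elif ev[0] == 't':
            tri = [incore[i] for i in ev[1:]]  # full geometry is available
            process(tri)
        else:
            del incore[ev[1]]                  # finalized: evict from memory
    return incore                              # empty if the stream is well-formed

def process(tri):
    pass  # e.g. accumulate simplification quadrics for the active region

events = [('v', 0, (0, 0)), ('v', 1, (1, 0)), ('v', 2, (0, 1)),
          ('t', 0, 1, 2), ('final', 0), ('v', 3, (1, 1)),
          ('t', 1, 2, 3), ('final', 1), ('final', 2), ('final', 3)]
print(stream(events))  # {} -> every vertex was evicted as soon as possible
```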
2008
Maximal compression, efficient transmission, and fast rendering of geometric models is a complex problem that has attracted attention from several research areas. Stripification algorithms are normally used to speed up the rendering of geometric models, because they reduce the number of vertices sent to the graphics pipeline by exploiting the fact that adjacent triangles share an edge. In this paper, we present a new compression algorithm based on stripification of geometric models that enables progressive visualization of the models during transmission, because our algorithm encodes and decodes the geometry and the connectivity of the model in an interwoven fashion. The main purpose is to store object files as strip files on the server, enabling faster transmission and display of the models on the client side. Our compression algorithm achieves compression ratios above 40:1 over ASCII-encoded formats, and the triangle strips improve rendering performance.
Current mesh compression schemes encode triangles and vertices in an order derived from systematically traversing the connectivity graph. These schemes struggle with gigabyte-sized mesh input where the construction and the usage of the data structures that support topological traversal queries become I/O-inefficient and require large amounts of temporary disk space. Furthermore they expect the entire mesh as input. Since meshes cannot be compressed until their generation is complete, they have to be stored at least once in uncompressed form. We radically depart from the traditional approach to mesh compression and propose a scheme that incrementally encodes a mesh in the order it is given to the compressor using only minimal memory resources. This makes the compression process essentially transparent to the user and practically independent of the mesh size. This is especially beneficial for compressing large meshes, where previous approaches spend significant memory, disk, and I/O resources on pre-processing, whereas our scheme starts compressing after receiving the first few triangles.
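A toy version of compress-as-you-go: vertices are coded as deltas to the previously seen vertex and triangles as back-references into a bounded window of recent vertices, so memory use is O(window) no matter how large the mesh grows. The format and window size are illustrative assumptions, not the paper's scheme.

```python
WINDOW = 1024  # a real coder would emit an escape code beyond this range

def compress(elements):
    """elements: ('v', (x, y, z)) or ('t', (i, j, k)) in generation order."""
    last, count = (0, 0, 0), 0
    for kind, payload in elements:
        if kind == 'v':
            yield ('dv', tuple(p - l for p, l in zip(payload, last)))
            last, count = payload, count + 1
        else:  # 't': indices become distances back from the newest vertex
            backrefs = tuple(count - 1 - i for i in payload)
            assert max(backrefs) < WINDOW, "would need an escape code"
            yield ('dt', backrefs)

coded = list(compress([('v', (10, 10, 10)), ('v', (11, 10, 10)),
                       ('v', (11, 11, 10)), ('t', (0, 1, 2))]))
print(coded)   # position deltas plus one triangle as small back-references
```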
International Journal of Modelling and Simulation, 2005
Multimedia Systems, 2006
Real-time interactive 3D multimedia applications such as 3D computer games and virtual reality (VR) have become prominent multimedia applications in recent years. In these applications, both visual fidelity and the degree of interactivity are usually crucial to the success or failure of a deployment. Although visual fidelity can be increased by using more polygons to represent an object, this incurs a higher rendering cost and adversely affects rendering efficiency. To balance visual quality against rendering efficiency, a set of level-of-detail (LOD) meshes has to be generated in advance. In this paper, we propose a highly efficient polygonal mesh simplification algorithm that is capable of generating a set of high-quality discrete LOD meshes in linear run time. The new algorithm adopts memoryless vertex quadric computation and suggests the use of a constant-size replacement-selection min-heap, pipelined simplification, two-stage optimization, and a new hole-filling scheme, which enable it to generate very high-quality LOD meshes using a relatively small amount of main memory in linear run time.
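A compact sketch of the quadric idea at the heart of such simplifiers: each face contributes the outer product of its plane equation, a vertex's quadric is the sum over its faces, and the cost of collapsing edge (u, v) to position p is p^T(Qu + Qv)p in homogeneous coordinates. The heap and pipelining machinery from the abstract is omitted here.

```python
import numpy as np

def face_quadric(p0, p1, p2):
    """4x4 fundamental quadric of the triangle's supporting plane."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    plane = np.append(n, -np.dot(n, p0))   # [a, b, c, d] with ax+by+cz+d = 0
    return np.outer(plane, plane)

def collapse_cost(Qu, Qv, target):
    """Squared plane-distance error of collapsing (u, v) to `target`."""
    h = np.append(target, 1.0)             # homogeneous target position
    return h @ (Qu + Qv) @ h

p0, p1, p2 = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0]])
Q = face_quadric(p0, p1, p2)
print(collapse_cost(Q, Q, np.array([0., 0, 0.5])))  # 0.5 units off-plane -> 0.5
```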
SPIE Proceedings, 2005
This paper provides an overview of the state-of-the-art techniques recently developed within the emerging field of dynamic mesh compression. Static encoders, wavelet-based schemes, PCA-based approaches, differential temporal and spatio-temporal predictive techniques, and clustering-based representations are presented, analyzed, and objectively compared in terms of compression efficiency, algorithmic and computational aspects, and offered functionalities (such as progressive transmission, scalable rendering, and field of applicability). The proposed comparative study reveals that: (1) clustering-based approaches offer the best compromise between compression performance and computational complexity; (2) PCA-based representations are highly efficient on long animated sequences (i.e. with a number of mesh vertices much smaller than the number of frames), at the price of a prohibitive computational complexity of the encoding process; (3) spatio-temporal Dynapack predictors provide simple yet effective predictive schemes that outperform simple predictors such as those considered within the interpolator compression node adopted by MPEG-4 within the AFX standard; (4) wavelet-based approaches, which provide the best compression performance for static meshes, again show good results here, with the additional advantage of a fully progressive representation, but suffer from applicability limited to large meshes with at least several thousand vertices per connected component.

1. INTRODUCTION. Dynamic 3D content is becoming an ever more present feature of today's multimedia applications, extensively exploited in games, virtual and augmented reality systems, industrial simulation, and 3D CGI (computer-generated imagery) animation films, which have recently enjoyed worldwide success. Within this context, the new economic challenges concern the elaboration and seamless integration of efficient 3D representation technologies. Besides the traditional compression-efficiency requirement, such dynamic 3D representations should enable new functionalities, such as real-time visualization on multiple terminals, progressive transmission over various networks, and scalable rendering with multiple levels of detail (LODs). The progressive transmission functionality concerns the adaptation of the bitstream to different fixed or mobile communication networks with various bandwidths: the decoder can start displaying a coarse approximation of the animation sequence once some baseline, minimal information is received, and a refinement process is then applied to this coarse representation to obtain progressively finer LODs and gradually improve the visual quality of the decoded animation sequence. The scalable rendering functionality concerns the adaptation of the bitstream to terminals of various capabilities (e.g., desktop computers, laptops, PDAs, mobile phones) with different memory and computational capacities, under real-time visualization constraints: here, decimation-based approaches [15] are generally adopted to obtain simplified representations that are visually close to the original and can be rendered at low computational cost. Within this challenging applicative framework, the issue of elaborating efficient compression methodologies for memory-consuming 3D animated meshes becomes of crucial importance.
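As a concrete illustration of point (2), here is a small numpy sketch of PCA-based animation compression: stack each frame's flattened vertex positions into a row, keep the top-k principal components, and store only the (frames x k) coefficients plus the k basis vectors, which is compact when the vertex count far exceeds what k retains. The random data, k, and SVD route are assumptions for illustration.

```python
import numpy as np

def pca_compress(frames, k):
    """frames: (F, 3N) matrix of per-frame flattened vertex positions."""
    mean = frames.mean(axis=0)
    U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)
    coeffs = U[:, :k] * S[:k]                 # (F, k) per-frame coefficients
    return mean, Vt[:k], coeffs               # basis is (k, 3N)

def pca_decompress(mean, basis, coeffs):
    return coeffs @ basis + mean

rng = np.random.default_rng(0)
anim = rng.random((20, 300))                  # 20 frames, 100 vertices x 3
mean, basis, coeffs = pca_compress(anim, k=5)
err = np.abs(pca_decompress(mean, basis, coeffs) - anim).max()
print(coeffs.shape, basis.shape, f"max reconstruction error {err:.3f}")
```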
Computer Animation and Virtual Worlds, 2009
This paper proposes a novel approach for mono-resolution 3D mesh compression, called TFAN (Triangle Fan-based compression). TFAN treats meshes of arbitrary topologies in a unified manner, i.e., manifold or not, oriented or not, while offering linear computational complexity (with respect to the number of mesh vertices) for both the encoding and decoding algorithms. In addition, the TFAN compressed representation is optimized for real-time decoding applications. In order to validate the proposed approach, two databases have been considered for experimentation. The first is the MPEG-4 test set, which includes over 3500 general-purpose manifold meshes. The second, related to the French national project SEMANTIC-3D, includes over 4000 computer-aided design (CAD) meshes of highly irregular, non-manifold topologies. In both cases, the TFAN approach outperforms existing techniques such as MPEG-4/3DMC (3D Mesh Coding) or Touma and Gotsman, with decoding times lower by an order of magnitude at equivalent or even better levels of compression efficiency (≈10% in bitrate). In addition, when applied to non-manifold 3D data, the compression performance is significantly enhanced (6-30% gain in bitrate). Due to its high compression performance, the TFAN approach has recently been adopted for ISO standardization within the framework of the MPEG-4/AFX standard.
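A hedged sketch of fan-based connectivity decoding (inspired by, but not identical to, TFAN): each fan is a centre vertex plus an ordered ring, expanding to len(ring) - 1 triangles, so decoding is a single linear pass over the fan list.

```python
def decode_fans(fans):
    """fans: list of (centre_vertex, [ring vertices in order])."""
    triangles = []
    for centre, ring in fans:
        for a, b in zip(ring, ring[1:]):     # consecutive ring pairs
            triangles.append((centre, a, b))
    return triangles

# Two fans describing a small patch around vertices 0 and 3:
print(decode_fans([(0, [1, 2, 3, 4]), (3, [2, 5, 4])]))
# -> [(0, 1, 2), (0, 2, 3), (0, 3, 4), (3, 2, 5), (3, 5, 4)]
```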
Computer-aided Design, 2000
Triangle strips are a widely used hardware-supported data-structure to compactly represent and efficiently render polygonal meshes. In this paper we survey the efficient generation of triangle strips as well as their variants. We present efficient algorithms for partitioning polygonal meshes into triangle strips. Triangle strips have traditionally used a buffer size of two vertices. In this paper we also study the impact of larger buffer sizes and various queuing disciplines on the effectiveness of triangle strips. View-dependent simplification has emerged as a powerful tool for graphics acceleration in visualization of complex environments. However, in a view-dependent framework the triangle mesh connectivity changes at every frame making it difficult to use triangle strips. In this paper we present a novel data-structure, Skip Strip, that efficiently maintains triangle strips during such view-dependent changes. A Skip Strip stores the vertex hierarchy nodes in a skip-list-like manner with path compression. We anticipate that Skip Strips will provide a road-map to combine rendering acceleration techniques for static datasets, typical of retained-mode graphics applications, with those for dynamic datasets found in immediate-mode applications.
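A greedy stripification sketch to make the basic mechanism concrete: walk from triangle to triangle across shared edges, emitting one new vertex per step, and start a new strip whenever the walk dies. Real strippers, including those surveyed above, use smarter heuristics and larger vertex buffers.

```python
def edge_map(tris):
    """Map each undirected edge to the triangles containing it."""
    adj = {}
    for t, (a, b, c) in enumerate(tris):
        for e in ((a, b), (b, c), (c, a)):
            adj.setdefault(frozenset(e), []).append(t)
    return adj

def stripify(tris):
    adj, used, strips = edge_map(tris), set(), []
    for start in range(len(tris)):
        if start in used:
            continue
        a, b, c = tris[start]
        strip, free_edge = [a, b, c], (b, c)
        used.add(start)
        while True:
            nxt = [t for t in adj[frozenset(free_edge)] if t not in used]
            if not nxt:
                break                        # walk died: close this strip
            t = nxt[0]
            new = next(v for v in tris[t] if v not in free_edge)
            strip.append(new)                # one new vertex per triangle
            used.add(t)
            free_edge = (strip[-2], new)
        strips.append(strip)
    return strips

# A fan of three triangles turns into a single five-vertex strip:
print(stripify([(0, 1, 2), (1, 3, 2), (2, 3, 4)]))  # [[0, 1, 2, 3, 4]]
```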