Computer Graphics Forum, 2000
In this paper, we propose a novel external-memory algorithm to support view-dependent simplification for datasets much larger than main memory. In the preprocessing phase, we use a new spanned sub-meshes simplification technique to build view-dependence trees I/O-efficiently, which preserves the correct edge-collapsing order and thus assures the run-time image quality. We further process the resulting view-dependence trees to build the meta-node trees, which facilitate the run-time level-of-detail rendering and are kept on disk. During run-time navigation, we keep in main memory only the portions of the meta-node trees that are necessary to render the current level of detail, plus some prefetched portions that are likely to be needed in the near future. The prefetching prediction takes advantage of the nature of the run-time traversal of the meta-node trees, and is both simple and accurate. We also employ implicit dependencies for preventing incorrect foldovers, as well as main-memory buffer management and a parallel-process scheme to separate the disk accesses from the navigation operations, all in an integrated manner. The experiments show that our approach scales well with respect to the main memory size available, with encouraging preprocessing and run-time rendering speeds and without sacrificing the image quality.
Fourth International Conference on Virtual Reality and Its Applications in Industry, 2004
Hierarchical levels of details (HLODs) have proven to be an efficient way to visualize complex environments and models even in an out-of-core system. Large objects are partitioned into a spatial hierarchy and on each node a level of detail is generated for efficient view-dependent rendering. To ensure correct matching between adjacent nodes in the hierarchy care has to be taken to prevent cracks along the cuts. This either leads to severe simplification constraints at the cuts and thus to a significantly higher number of triangles or the need for a costly runtime stitching of these nodes. In this paper we present an out-of-core visualization algorithm that overcomes this problem by filling the cracks generated by the simplification algorithm with appropriately shaded fat borders. Furthermore, several minor yet important improvements of previous approaches are made. This way we come up with a simple nevertheless efficient view-dependent rendering technique which allows for the natural incorporation of state-of-the-art culling, simplification, compression and prefetching techniques.
2002
We present an algorithm for end-to-end out-of-core simplification and view-dependent visualization of large surfaces. The method consists of three phases: (1) memory-insensitive simplification; (2) memory-insensitive construction of a level-of-detail hierarchy; and (3) run-time, output-sensitive, view-dependent rendering and navigation of the mesh.
Computer Graphics Forum, 1999
We propose a technique for performing view-dependent geometry and topology simplifications for level-of-detail-based renderings of large models. The algorithm proceeds by preprocessing the input dataset into a binary tree, the view-dependence tree of general vertex-pair collapses. A subset of the Delaunay edges is used to limit the number of vertex pairs considered for topology simplification. Dependencies to avoid mesh foldovers in manifold regions of the input object are stored in the view-dependence tree in an implicit fashion. We have observed that this not only reduces the space requirements by a factor of two, but also highly localizes the memory accesses at run time. The view-dependence tree is used at run time to generate the triangles for display. We also propose a cubic-spline-based distance metric that can be used to unify the geometry and topology simplifications by considering the vertex positions and normals in an integrated manner.
2001
We present a new framework for generic and adaptive memoryless surface simplification. We show that many existing simplification techniques based on the edge collapse / vertex split operations differ only in terms of memory-resident data used to improve running performance. By removing the need for this memory we are able to implement multiple simplification techniques on the same platform. Our generic platform can be used as a tool for the generation and evaluation of custom error metrics. We present two new error metrics designed using our generic framework. We also present a novel batched ordering technique based on the generic simplification framework, which allows for adaptive simplification and automatic level-of-detail generation.
2002
The growing availability of massive models and the inability of most existing visualization tools to work with them require efficient new methods for massive mesh simplification. In this paper, we present a completely adaptive, virtual-memory-based simplification algorithm for large polygonal datasets. The algorithm is an enhancement of RSimp [2], enabling out-of-core simplification without reducing the output quality of the original algorithm.
1999
View-dependent simplification has emerged as a powerful tool for graphics acceleration in visualization of complex environments. However, view-dependent simplification techniques have not been able to take full advantage of the underlying graphics hardware. Specifically, triangle strips are a widely used hardware-supported mechanism to compactly represent and efficiently render static triangle meshes. However, in a view-dependent framework, the triangle mesh connectivity changes at every frame making it difficult to use triangle strips. In this paper we present a novel data-structure, Skip Strip, that efficiently maintains triangle strips during such view-dependent changes. A Skip Strip stores the vertex hierarchy nodes in a skip-list-like manner with path compression. We anticipate that Skip Strips will provide a road-map to combine rendering acceleration techniques for static datasets, typical of retained-mode graphics applications, with those for dynamic datasets found in immediate-mode applications.
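The path-compression aspect of the Skip Strip idea can be illustrated with a union-find-style sketch (a simplified reading of the abstract, not the authors' data structure; all names are illustrative): each node records which vertex it has been collapsed into, and lookups flatten the pointer chain so strips referencing collapsed vertices resolve quickly.

```python
class SkipStripNode:
    """Hypothetical Skip Strip node: tracks which vertex this one currently
    maps to after edge collapses; lookups apply path compression."""
    def __init__(self, vertex_id):
        self.vertex_id = vertex_id
        self.parent = None  # set when this vertex is collapsed into another

def active_vertex(node):
    # Follow collapse pointers to the representative (active) vertex.
    root = node
    while root.parent is not None:
        root = root.parent
    # Compress the path so later lookups are near-constant time.
    while node.parent is not None:
        node.parent, node = root, node.parent
    return root

# Example: collapse v2 into v1, then v1 into v0; a strip referencing v2
# resolves to v0, and the pointer chain is flattened along the way.
v0, v1, v2 = (SkipStripNode(i) for i in range(3))
v2.parent = v1
v1.parent = v0
assert active_vertex(v2) is v0
assert v2.parent is v0  # path compressed
```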
Visualization and Data Analysis 2014, 2013
As visualization is applied to larger data sets residing in more diverse hardware environments, visualization frameworks need to adapt. Rendering techniques are currently a major limiter since they tend to be built around central processing with all of the geometric data present. This is not a fundamental requirement of information visualization. This paper presents Abstract Rendering (AR), a technique for eliminating the centralization requirement while preserving some forms of interactivity. AR is based on the observation that pixels are fundamentally bins, and that rendering is essentially a binning process on a lattice of bins. By providing a more flexible binning process, the majority of rendering can be done with the geometric information stored out-of-core. Only the bin representations need to reside in memory. This approach enables: (1) rendering on large datasets without requiring large amounts of working memory, (2) novel and useful control over image composition, (3) a direct means of distributing the rendering task across processes, and (4) high-performance interaction techniques on large datasets. This paper introduces AR in a theoretical context, provides an overview of an implementation, and discusses how it has been applied to large-scale data visualization problems.
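The "pixels are bins" observation can be sketched in a few lines (an illustrative toy, not the paper's API): stream geometry, possibly from out-of-core storage, into a fixed grid of counts; only the grid has to stay in memory, and a later shading pass maps bin values to colors.

```python
# Minimal binning renderer: aggregate 2D points into a width x height
# grid of counts. 'points' may be any iterable, e.g. a generator that
# reads from disk, so the full dataset never needs to be resident.
def bin_points(points, width, height, x_range, y_range):
    counts = [[0] * width for _ in range(height)]
    (x0, x1), (y0, y1) = x_range, y_range
    for x, y in points:
        col = min(int((x - x0) / (x1 - x0) * width), width - 1)
        row = min(int((y - y0) / (y1 - y0) * height), height - 1)
        counts[row][col] += 1
    return counts  # a separate "shading" pass turns counts into pixels

grid = bin_points([(0.1, 0.1), (0.9, 0.9), (0.95, 0.85)],
                  width=2, height=2, x_range=(0, 1), y_range=(0, 1))
# grid[1][1] now holds the two points in the upper-right quadrant
```

Because bins are simple additive aggregates, grids computed by separate processes can be merged by element-wise addition, which is what makes the rendering task easy to distribute.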
Proceedings of the 31th International Conference on Computer Graphics and Vision. Volume 2, 2021
Rendering of large scenes in external memory is one of the most important problems of computer graphics, with applications in such areas as CAD/CAM/CAE, geoinformatics, project management, scientific visualization, virtual and augmented reality, computer games and animation. Unlike static scenes, for which a number of effective approaches have been proposed, in particular levels of detail (LOD), rendering of large dynamic scenes with a given level of tolerance and trustworthiness remains a big challenge. This paper discusses and investigates the possibility of using the previously proposed method of hierarchical dynamic levels of detail (HDLOD) for conservative rendering of dynamic scenes with a deterministic nature of events in external memory. A series of computational experiments proves the feasibility and effectiveness of the method for real industrial scenes when employed in combination with memory-management techniques.
2002
In this paper we present a novel approach for rendering large datasets in a view-dependent manner. In a typical view-dependent rendering framework, an appropriate level of detail is selected and sent to the graphics hardware for rendering at each frame. In our approach, we have successfully managed to speed up both the selection of the level of detail and the rendering of the selected levels. We accelerate the selection of the appropriate level of detail by not scanning active nodes that do not contribute to the incremental update of the selected level of detail. Our idea is based on imposing a spatial subdivision over the view-dependence trees data structure, which allows spatial tree cells to refine and merge during real-time rendering to comply with the changes in the active-nodes list. The rendering of the selected level of detail is accelerated by using vertex arrays. To cope with the dynamic changes in the selected levels of detail we use multiple small vertex arrays whose sizes depend on the memory of the graphics hardware. These multiple vertex arrays are attached to the active cells of the spatial tree and represent the active nodes of these cells. These vertex arrays, which are sent to the graphics hardware at each frame, merge and split with respect to the changes in the cells of the spatial tree.
IEEE Transactions on Visualization and Computer Graphics, 2005
We present a novel approach for interactive view-dependent rendering of massive models. Our algorithm combines view-dependent simplification, occlusion culling, and out-of-core rendering. We represent the model as a clustered hierarchy of progressive meshes (CHPM). We use the cluster hierarchy for coarse-grained selective refinement and progressive meshes for fine-grained local refinement. We present an out-of-core algorithm for computation of a CHPM that includes cluster decomposition, hierarchy generation, and simplification. We introduce novel cluster dependencies in the preprocess to generate crack-free, drastic simplifications at runtime. The clusters are used for LOD selection, occlusion culling, and out-of-core rendering. We add a frame of latency to the rendering pipeline to fetch newly visible clusters from the disk and avoid stalls. The CHPM reduces the refinement cost of view-dependent rendering by more than an order of magnitude as compared to a vertex hierarchy. We have implemented our algorithm on a desktop PC. We can render massive CAD, isosurface, and scanned models, consisting of tens or a few hundred million triangles, at 15-35 frames per second with little loss in image quality.
IEEE Transactions on …, 2003
Very large triangle meshes, i.e. meshes composed of millions of faces, are becoming common in many applications. Obviously, processing, rendering, transmission and archival of these meshes are not simple tasks. Mesh simplification and LOD management are a rather mature technology that in many cases can efficiently manage complex data. But only a few available systems can manage meshes characterized by a huge size: RAM size is often a severe bottleneck. In this paper we present a data structure called Octree-based External Memory Mesh (OEMM). It supports external-memory management of complex meshes, loading dynamically into main memory only the selected sections and preserving data consistency during local updates. The functionalities implemented on this data structure (simplification, detail preservation, mesh editing, visualization and inspection) can be applied to huge triangle meshes on low-cost PC platforms. The time overhead due to the external memory management is affordable. Results of the tests of our system on complex meshes are presented.
2003
In this paper we show how out-of-core mesh processing techniques can be adapted to perform their computations based on the new processing sequence paradigm, using mesh simplification as an example. We believe that this processing concept will also prove useful for other tasks, such as parameterization, remeshing, or smoothing, for which currently only in-core solutions exist. A processing sequence represents a mesh as a particular interleaved ordering of indexed triangles and vertices. This representation allows streaming very large meshes through main memory while maintaining information about the visitation status of edges and vertices. At any time, only a small portion of the mesh is kept in-core, with the bulk of the mesh data residing on disk. Mesh access is restricted to a fixed traversal order, but full connectivity and geometry information is available for the active elements of the traversal. This provides seamless and highly efficient out-of-core access to very large meshes for algorithms that can adapt their computations to this fixed ordering. The two abstractions that are naturally supported by this representation are boundary-based and buffer-based processing. We illustrate both abstractions by adapting two different simplification methods to perform their computation using a prototype of our mesh processing sequence API. Both algorithms benefit from using processing sequences in terms of improved quality, more efficient execution, and smaller memory footprints.
IEEE Transactions on Visualization and …, 2002
This paper describes a general framework for out-of-core rendering and management of massive terrain surfaces. The two key components of this framework are: view-dependent refinement of the terrain mesh; and a simple scheme for organizing the terrain data to improve coherence and reduce the number of paging events from external storage to main memory. Similar to several previously proposed methods for view-dependent refinement, we recursively subdivide a triangle mesh defined over regularly gridded data using longest-edge bisection. As part of this single, per-frame refinement pass, we perform triangle stripping, view frustum culling, and smooth blending of geometry using geomorphing. Meanwhile, our refinement framework supports a large class of error metrics, is highly competitive in terms of rendering performance, and is surprisingly simple to implement. Independent of our refinement algorithm, we also describe several data layout techniques for providing coherent access to the terrain data. By reordering the data in a manner that is more consistent with our recursive access pattern, we show that visualization of gigabyte-size data sets can be realized even on low-end, commodity PCs without the need for complicated and explicit data paging techniques. Rather, by virtue of dramatic improvements in multilevel cache coherence, we rely on the built-in paging mechanisms of the operating system to perform this task. The end result is a straightforward, simple-to-implement, pointerless indexing scheme that dramatically improves the data locality and paging performance over conventional matrix-based layouts.
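The recursive longest-edge bisection scheme can be sketched as follows (an illustrative toy, not the paper's implementation; the error callback and parameters are assumptions): a right triangle (apex, left, right) over the height field splits at its hypotenuse midpoint until a screen-space error estimate falls below the tolerance or the maximum depth, i.e. the grid resolution, is reached.

```python
# Longest-edge bisection: for these right isosceles triangles the
# longest edge is always the hypotenuse (left, right), so each split
# inserts its midpoint and recurses into the two child triangles.
def refine(apex, left, right, depth, error, tol, out):
    if depth == 0 or error(apex, left, right) <= tol:
        out.append((apex, left, right))  # emit leaf triangle
        return
    mid = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
    refine(mid, apex, left, depth - 1, error, tol, out)   # first child
    refine(mid, right, apex, depth - 1, error, tol, out)  # second child

# Toy run: refine one tile of a height field uniformly for four levels
# (the constant error forces splitting down to the depth limit).
tris = []
refine((0, 0), (4, 0), (0, 4), depth=4,
       error=lambda *t: 1.0, tol=0.0, out=tris)
# 2^4 = 16 congruent triangles exactly covering the original tile
```

In a real system the error callback would combine the geometric error stored per vertex with the viewing distance, which is what makes the refinement view-dependent.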
To ensure correct matching between adjacent nodes in the hierarchy, care has to be taken to prevent cracks along the cuts. This either leads to severe simplification constraints at the cuts and thus to a significantly higher number of triangles, or to the need for a costly runtime stitching of these nodes. In this paper we present an out-of-core visualization algorithm that overcomes this problem by filling the cracks generated by the simplification algorithm with appropriately shaded fat borders. Furthermore, several minor yet important improvements of previous approaches are made. This way we come up with a simple yet efficient view-dependent rendering technique which allows for the natural incorporation of state-of-the-art culling, simplification, compression and prefetching techniques, leading to real-time rendering performance of the overall system. Several examples demonstrate the efficiency of our approach.
Computer Graphics Forum, 2012
The constantly increasing complexity of polygonal models in interactive applications poses two major problems. First, the number of primitives that can be rendered at real-time frame rates is currently limited to a few million. Second, fewer than 45 million triangles, with vertices and normals, can be stored per gigabyte. Although the rendering time can be reduced using level-of-detail (LOD) algorithms, which represent a model at different complexity levels, these often even increase memory consumption. Out-of-core algorithms solve this problem by transferring the data currently required for rendering from external devices. Compression techniques are commonly used because of the limited bandwidth. The main problem of compression and decompression algorithms is that they allow only coarse-grained random access. A similar problem occurs in view-dependent LOD techniques: because of the interdependency of split operations, the adaptation rate is reduced, leading to visible popping artefacts during fast movements. In this paper, we propose a novel algorithm for real-time view-dependent rendering of gigabyte-sized models. It is based on a neighbourhood-dependency-free progressive mesh data structure. Using a per-operation compression method, it is suitable for parallel random-access decompression and out-of-core memory management without storing decompressed data.
The complexity of polygonal models is growing faster than the ability of graphics hardware to render them in real-time. If a scene contains many models and textures, it is often also not possible to store the entire geometry in the graphics memory. A common way to deal with such models is to use multiple levels of detail (LODs), which represent a model at different complexity levels. With view-dependent progressive meshes it is possible to render complex models in real time, but the whole progressive model must fit into graphics memory. To solve this problem out-of-core algorithms have to be used to load mesh data from external data devices. Hierarchical level of detail (HLOD) algorithms are a common solution for this problem, but they have numerous disadvantages. In this paper, we combine the advantages of view-dependent progressive meshes and HLODs by proposing a new algorithm for real-time view-dependent rendering of huge models. Using a spatial hierarchy we extend parallel view-dependent progressive meshes to support out-of-core rendering. In addition we present a compact data structure for progressive meshes, optimized for parallel GPU-processing and out-of-core memory management.
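A compact, fixed-size record layout is what makes a progressive mesh amenable to out-of-core streaming and parallel GPU processing. The following sketch shows one hypothetical encoding of a vertex-split operation (the field choice is an assumption for illustration, not the format used by any of the papers above):

```python
import struct

# Hypothetical 24-byte vertex-split record: parent vertex index and two
# affected-face indices as uint32, plus the new vertex position as three
# float32s. Fixed size means record i lives at byte offset 24 * i, so
# splits can be fetched from disk by direct offset, without parsing.
SPLIT = struct.Struct("<3I3f")

def pack_split(parent, face_a, face_b, position):
    return SPLIT.pack(parent, face_a, face_b, *position)

record = pack_split(7, 12, 13, (0.5, 1.0, -2.0))
assert len(record) == SPLIT.size == 24
parent, fa, fb, x, y, z = SPLIT.unpack(record)
```

Fixed-offset addressing is also what allows many records to be decoded independently and in parallel, in the spirit of the random-access decompression discussed above.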
International Journal of Modelling and Simulation, 2005
Journal of Information Science and Engineering, 2006
Traditional iterative-contraction-based polygonal mesh simplification (PMS) algorithms usually require enormous amounts of main memory when processing large meshes. On the other hand, fast out-of-core algorithms based on the grid re-sampling scheme usually produce low-quality output. In this paper, we propose a novel cache-based approach to large polygonal mesh simplification. The new approach introduces the