2014, Computer Graphics Forum
Figure 1: Frames from real-time rendering of the animated supernova data set (432³ × 60, float, 18 GB), compressed using sparse coding of voxel blocks [GIM12] (block size 6, sparsity 4) at 0.10 bps, PSNR 46.57 dB. The full compressed dataset (184 MB) fits in GPU memory and is available for low-latency local and transient decoding during rendering. Data made available by Dr.
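As a rough illustration of the decoding step the caption refers to, the sketch below reconstructs one voxel block from a sparse code: each block stores a few dictionary indices plus coefficients, and decoding is just a weighted sum of the selected atoms. The dictionary size and all names here are made up for illustration; [GIM12] defines the actual scheme.

```python
import numpy as np

# Hypothetical sketch of sparse-coded block decoding (not the paper's code).
BLOCK = 6           # block edge length in voxels, as in the caption
ATOM = BLOCK ** 3   # flattened atom length: 216 samples
SPARSITY = 4        # non-zero coefficients per block, as in the caption

rng = np.random.default_rng(0)
# Stand-in for a dictionary learned offline at compression time.
dictionary = rng.standard_normal((1024, ATOM)).astype(np.float32)

def decode_block(indices, coeffs):
    """Reconstruct one voxel block as a weighted sum of dictionary atoms."""
    flat = coeffs @ dictionary[indices]   # (SPARSITY,) @ (SPARSITY, ATOM) -> (ATOM,)
    return flat.reshape(BLOCK, BLOCK, BLOCK)

idx = rng.integers(0, 1024, SPARSITY)                  # stored per block
w = rng.standard_normal(SPARSITY).astype(np.float32)   # stored per block
print(decode_block(idx, w).shape)  # (6, 6, 6)
```

Because each block decodes independently with a handful of fused multiply-adds, this kind of scheme maps well to per-frame GPU decoding.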
2013
Great advancements in commodity graphics hardware have favored GPU-based volume rendering as the main adopted solution for interactive exploration of rectilinear scalar volumes on commodity platforms. Nevertheless, long data transfer times and GPU memory size limitations are often the main limiting factors, especially for massive, time-varying, or multi-volume visualization, or for networked visualization on emerging mobile devices. To address this issue, a variety of level-of-detail data representations and compression techniques have been introduced. In order to improve capabilities and performance over the entire storage, distribution, and rendering pipeline, the encoding/decoding process is typically highly asymmetric: systems should ideally compress at data production time and decompress on demand at rendering time. Compression and level-of-detail pre-computation do not have to adhere to real-time constraints and can be performed off-line for high quality results. In co...
The wide majority of current state-of-the-art compressed GPU volume renderers are based on block-transform coding, which is susceptible to blocking artifacts, particularly at low bit rates. In this paper we address this problem for the first time by introducing a specialized deferred filtering architecture that works on block-compressed data and includes a novel deblocking algorithm. The architecture efficiently performs high-quality shading of massive datasets by closely coordinating visibility- and resolution-aware adaptive data loading with GPU-accelerated per-frame data decompression, deblocking, and rendering. A thorough evaluation including quantitative and qualitative measures demonstrates the performance of our approach on large static and dynamic datasets, including a massive 512⁴ turbulence simulation (256 GB), which is aggressively compressed to less than 2 GB so that it can be fully uploaded to the graphics board and explored in real time during animation.
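To convey the blocking problem this paper targets, here is a minimal, purely illustrative deblocking pass (the paper's deferred filtering architecture is considerably more sophisticated): voxels on either side of each block boundary are blended toward their common mean along one axis.

```python
import numpy as np

def deblock_axis(volume, block=8, strength=0.5):
    """Blend voxel layers adjacent to each block boundary along axis 0."""
    out = volume.astype(np.float32).copy()
    for b in range(block, out.shape[0], block):
        left, right = out[b - 1].copy(), out[b].copy()
        mean = 0.5 * (left + right)
        out[b - 1] = (1 - strength) * left + strength * mean
        out[b] = (1 - strength) * right + strength * mean
    return out

vol = np.random.rand(32, 32, 32).astype(np.float32)
smoothed = deblock_axis(vol)  # apply per axis for full deblocking
```

A real deblocking filter would also adapt its strength to the local signal so that genuine edges are not smoothed away.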
2016
The sheer size of volume data sampled on a regular grid requires efficient lossless and lossy compression algorithms that allow for on-the-fly decompression during rendering. While all hardware-assisted approaches are based on fixed-bit-rate block truncation coding, they suffer from degradation in regions of high variation while wasting space in homogeneous areas. On the other hand, vector quantization approaches using texture hardware achieve an even distribution of error over the entire volume at the cost of storing overlapping blocks or bricks. However, these approaches suffer from severe blocking artifacts that need to be smoothed over during rendering. In contrast to existing approaches, we propose to build a lossy compression scheme on top of a state-of-the-art lossless compression approach built on non-overlapping bricks by combining it with straightforward vector quantization. Due to efficient caching and load balancing, the rendering performance of our approach improves with...
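A minimal sketch of the vector-quantization component described above, assuming bricks are flattened to vectors and matched against a pre-trained codebook by nearest neighbor; the brick and codebook sizes are illustrative.

```python
import numpy as np

def vq_encode(bricks, codebook):
    """Replace each flattened brick by the index of its nearest codebook entry."""
    # bricks: (N, D), codebook: (K, D); brute-force nearest neighbor
    d2 = ((bricks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1).astype(np.uint8)   # fixed rate: one byte per brick

def vq_decode(indices, codebook):
    """Decoding is a single table lookup per brick."""
    return codebook[indices]

rng = np.random.default_rng(1)
codebook = rng.random((256, 64)).astype(np.float32)  # 256 entries, 4x4x4 bricks
bricks = rng.random((1000, 64)).astype(np.float32)
recon = vq_decode(vq_encode(bricks, codebook), codebook)
```

The one-lookup decode is what makes vector quantization attractive for on-the-fly decompression during rendering.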
2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 2016
In rendering, textures usually consume more graphics memory than the geometry. This is especially true when rendering regularly sampled volume data, as the geometry is a single box. In addition, volume rendering suffers from the curse of dimensionality: every time the resolution doubles, the number of projected pixels is multiplied by four but the amount of data is multiplied by eight. Data compression is thus mandatory even with the increasing amount of memory available on today's GPUs. Existing compression schemes are either lossy or do not allow on-the-fly random access to the volume data while rendering. Both of these properties are, however, important for high-quality direct volume rendering. In this paper, we propose a lossless compression and caching strategy that allows random access and decompression on the GPU using a compressed volume object.
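The curse-of-dimensionality claim is easy to verify with a few lines of arithmetic: each doubling of the resolution quadruples the projected pixels but multiplies the voxel count by eight, so the data-to-pixels ratio doubles at every level.

```python
# Voxels per projected pixel for a cubic volume viewed at native resolution.
for level in range(4):
    n = 256 * 2 ** level      # edge length at this refinement level
    pixels = n * n            # projected image samples
    voxels = n ** 3           # volume samples
    print(f"{n:5d}^3 volume: {voxels // pixels:5d} voxels per projected pixel")
```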
2011 15th International Conference on Information Visualisation, 2011
Real-time rendering of static volumetric data is generally known to be a memory- and compute-intensive process. With the advance of graphics hardware, especially GPUs, it is now possible to do this on desktop computers. However, with the evolution of real-time CT and MRI technologies, volumetric rendering poses an even bigger challenge. The first problem is how to reduce the data transmission between main memory and graphics memory. The second is how to efficiently take advantage of the temporal redundancy that exists in time-varying volumetric data. We propose an optimized compression scheme that exploits the temporal as well as the spatial redundancy of time-varying volumetric data. The compressed data is then transmitted to graphics memory and directly rendered by the GPU, significantly reducing the data transfer between main memory and graphics memory.
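One simple way to picture the temporal-redundancy idea (illustrative only; the paper's scheme is an optimized variant of this principle) is delta encoding against the previous time step: differences are near zero wherever the data changes slowly, so they compress far better than the raw frames.

```python
import numpy as np

def delta_encode(frames):
    """Key frame first, then per-voxel differences between consecutive steps."""
    return [frames[0]] + [frames[t] - frames[t - 1] for t in range(1, len(frames))]

def delta_decode(deltas):
    """Accumulate differences to recover every time step."""
    frames = [deltas[0]]
    for d in deltas[1:]:
        frames.append(frames[-1] + d)
    return frames

frames = [np.random.rand(64, 64, 64).astype(np.float32) for _ in range(4)]
assert np.allclose(delta_decode(delta_encode(frames))[-1], frames[-1])
```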
2004
…at NVIDIA for an invaluable 3-month internship in 2003. Thanks must also go to David Kirk at NVIDIA and Michael Dogget at ATI for always supplying me with the latest graphics hardware, drivers, and bug fixes. Since working alone is impossible nowadays, I would like to thank my colleagues and the other people I had the opportunity to work with: Stefan Gumhold for suggesting wavelet compression for animated volume data and his support at the very beginning of my work; Michael Meissner for asking me whether I would like to do volume rendering on the graphics card (a GeForce3 at that time); Michael Wand for helping me with the ideas of multiresolution rendering and writing the paper together with me; Martin Kraus for his ideas on rendering of tetrahedral meshes; Stefan Roettger for further ideas on that topic; Johannes Hirche for his displacement mapping algorithm and other valuable discussions; Armin Kanitsar for the Christmas Tree data set, the Christmas Tree case study, and the best case study award; and Gunter Knittel for always telling me what I still can't do on the graphics card. I would also like to thank two students who contributed to this thesis: Julius Gonser for his efficient cache implementation for multiresolution volume rendering, and Andreas Schieber for his tetrahedral sorting algorithm.

Most of this work was funded by the Sonderforschungsbereich (SFB) 382 of the German Research Council (DFG). During the final period, this work was sponsored by the NVIDIA Fellowship Program 2003-2004. I would like to thank Melanie Künzel for always believing in me, Stefan Kimmerle, Sven Fleck, and Johannes Hirche for proof-reading, and finally my friends and my family for their never-ending support.

An important technique for interactively exploring these data sets is direct volume rendering. To achieve interactive frame rates, general-purpose graphics hardware is used nowadays. This approach has two advantages. First, the central processing unit (CPU) is free for other processing, such as on-the-fly decompression. Second, the graphics processing unit (GPU) is by now a lot faster when it comes to volume rendering. To store volume data on disc, usually either the raw data or a simply compressed …

1.1.4 Structured Grids

Structured grids are usually encountered in physical simulations where both regular connectivity and adaptation to local detail are necessary. While physical simulations mostly use curvilinear grids (see Figure 1.3), i.e. regular grids that have been deformed with a continuous function, structured grids can always be described as regular grids with arbitrary sample point positions. In order to render these grids using graphics hardware, the grid cells have to be traversed in either front-to-back or back-to-front order and processed individually. Since the structure is regular and the data set consists only of hexahedral cells, the sorting can be done very efficiently. The rendering of a single cell can either be done directly or by splitting each hexahedron into 5 or 6 tetrahedra.

Introducing higher-order lighting terms, e.g. scattering, increases the realism of the resulting images but also dramatically increases the rendering cost. Beyond the increased realism, very few additional details are revealed, so most rendering schemes only use a local light source for shading each voxel.
$\kappa_g(s, \vec{n}) = g_a(s) + g_d(s)\,l_d(\vec{n}) + g_s(s)\,l_s(\vec{n})$

The dependency of $g_d(s)$ on $\vec{n}$ can also be replaced by any other nearly linear mapping, e.g. fake shading or even a very smooth cube map or light map. The only
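The term above can be evaluated directly. In the sketch below, $g_a$, $g_d$, and $g_s$ are assumed to be 1D transfer-function lookups over the scalar value $s$, and $l_d$, $l_s$ are taken as standard diffuse and specular factors; the specular exponent and all helper names are illustrative, not from the thesis.

```python
import numpy as np

def shade(s, n, light_dir, half_vec, g_a, g_d, g_s):
    """Evaluate g_a(s) + g_d(s) * l_d(n) + g_s(s) * l_s(n) for one voxel."""
    n = n / np.linalg.norm(n)                         # normal from the gradient
    l_d = max(float(np.dot(n, light_dir)), 0.0)       # diffuse factor
    l_s = max(float(np.dot(n, half_vec)), 0.0) ** 32  # specular factor (exponent assumed)
    return g_a(s) + g_d(s) * l_d + g_s(s) * l_s

# Example with trivial transfer functions:
z = np.array([0.0, 0.0, 1.0])
print(shade(0.7, z, z, z,
            g_a=lambda s: 0.1 * s, g_d=lambda s: s, g_s=lambda s: 0.2))
```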
Parallel Computing, 2005
We describe a system for the texture-based direct volume visualization of large data sets on a PC cluster equipped with GPUs. The data is partitioned into volume bricks in object space, and the intermediate images are combined to a final picture in a sort-last approach. Hierarchical wavelet compression is applied to increase the effective size of volumes that can be handled. An adaptive rendering mechanism takes into account the viewing parameters and the properties of the data set to adjust the texture resolution and number of slices. We discuss the specific issues of this adaptive and hierarchical approach in the context of a distributed memory architecture and present corresponding solutions. Furthermore, our compositing scheme takes into account the footprints of volume bricks to minimize the costs of framebuffer reads, network communication, and blending. A detailed performance analysis is provided for several network, CPU, and GPU architectures, and scaling characteristics of the parallel system are discussed. For example, our tests on an 8-node AMD64 cluster with InfiniBand show a rendering speed of 6 frames per second for a 2048×1024×1878 data set on a 1024² viewport.
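The footprint optimization mentioned above can be pictured as follows (an illustrative reconstruction, not the paper's code): project a brick's eight corners with the current model-view-projection matrix and restrict framebuffer reads and blending to the resulting screen rectangle.

```python
import numpy as np

def brick_footprint(corners_world, mvp, width, height):
    """Screen-space bounding rectangle of a brick's eight corners.

    Assumes all corners lie in front of the camera (w > 0).
    """
    h = np.c_[corners_world, np.ones(8)] @ mvp.T   # to homogeneous clip space
    ndc = h[:, :2] / h[:, 3:4]                     # perspective divide
    px = (ndc * 0.5 + 0.5) * np.array([width, height])
    lo = np.clip(np.floor(px.min(axis=0)), 0, [width, height]).astype(int)
    hi = np.clip(np.ceil(px.max(axis=0)), 0, [width, height]).astype(int)
    return lo, hi   # read back and blend only this rectangle
```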
Proceedings. Visualization '97 (Cat. No. 97CB36155), 1997
Volumetric data sets require enormous storage capacity even at moderate resolution levels. The excessive storage demands not only stress the capacity of the underlying storage and communications systems, but also seriously limit the speed of volume rendering due to data movement and manipulation. A novel volumetric data visualization scheme is proposed and implemented in this work that renders 2D images directly from compressed 3D data sets. The novelty of this algorithm is that rendering is performed on the compressed representation of the volumetric data without pre-decompression. As a result, the overheads associated with both data movement and rendering processing are significantly reduced. The proposed algorithm generalizes previously proposed whole-volume frequency-domain rendering schemes by first dividing the 3D data set into subcubes, transforming each subcube to a frequency-domain representation, and applying the Fourier Projection Theorem to produce the projected 2D images according to given viewing angles. Compared to the whole-volume approach, the subcube-based scheme not only achieves higher compression efficiency by exploiting local coherency, but also improves the quality of the resulting rendered images because it approximates the occlusion effect on a subcube-by-subcube basis.
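For an axis-aligned view, the subcube version of the Fourier Projection Theorem is easy to verify in a few lines: the $k_z = 0$ plane of a subcube's 3D FFT is exactly the 2D FFT of its projection along $z$, so an X-ray style image follows from one slice extraction plus a 2D inverse FFT. This is a sketch of the principle only; the paper handles arbitrary viewing angles and the occlusion approximation.

```python
import numpy as np

cube = np.random.rand(32, 32, 32)        # one subcube of the volume

spectrum = np.fft.fftn(cube)             # computed once, at compression time
central_slice = spectrum[:, :, 0]        # k_z = 0 plane for a view along z
projection = np.fft.ifft2(central_slice).real

# The extracted slice reproduces the line integrals along z exactly.
assert np.allclose(projection, cube.sum(axis=2))
```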
Computer Graphics Forum, 2001
Volume data sets resulting from, e.g., computerized tomography (CT) or magnetic resonance (MR) imaging modalities require enormous storage capacity even at moderate resolution levels. Such large files may require compression for processing in CPU memory, which, however, comes at the cost of decoding times and some loss in reconstruction quality with respect to the original data. For many typical volume visualization applications (rendering of volume slices, subvolumes of interest, or isosurfaces) only a part of the volume data needs to be decoded. Thus, efficient compression techniques are needed that provide random access and rapid decompression of arbitrary parts of the volume data. We propose a technique which is block based and operates in the wavelet-transformed domain. We report performance results which compare favorably with previously published methods, yielding large reconstruction quality gains of about 6 to 12 dB in PSNR for a 512³ volume extracted from the Visible Human data set. In terms of compression, our algorithm achieves six times the compression of the previous state-of-the-art block-based coder at a given PSNR quality.
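A self-contained sketch of block-based coding in the wavelet-transformed domain, with a plain Haar transform standing in for the paper's filter: each block is transformed and quantized on its own, so any block can be decoded without touching the rest of the volume, which is what provides random access.

```python
import numpy as np

def haar1d(x, axis):
    """One level of averages and differences along one axis."""
    even = x.take(range(0, x.shape[axis], 2), axis)
    odd = x.take(range(1, x.shape[axis], 2), axis)
    return np.concatenate([(even + odd) / 2, (even - odd) / 2], axis)

def ihaar1d(x, axis):
    """Invert haar1d: even = a + d, odd = a - d."""
    n = x.shape[axis] // 2
    a, d = x.take(range(n), axis), x.take(range(n, 2 * n), axis)
    out = np.empty_like(x)
    sl = [slice(None)] * x.ndim
    sl[axis] = slice(0, None, 2); out[tuple(sl)] = a + d
    sl[axis] = slice(1, None, 2); out[tuple(sl)] = a - d
    return out

def encode_block(block, step=0.01):
    for ax in range(3):
        block = haar1d(block, ax)
    return np.round(block / step).astype(np.int16)  # quantization: the lossy step

def decode_block(q, step=0.01):
    block = q.astype(np.float32) * step
    for ax in range(3):
        block = ihaar1d(block, ax)
    return block

b = np.random.rand(16, 16, 16).astype(np.float32)
print(np.abs(decode_block(encode_block(b)) - b).max())  # small reconstruction error
```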