2007, 2007 IEEE Symposium on 3D User Interfaces
We developed three distinct two-handed selection techniques for volumetric data visualizations that use splat-based rendering. Two techniques are bimanual asymmetric, where each hand has a different task. One technique is bimanual symmetric, where each hand has the same task. These techniques were then evaluated based on accuracy, completion times, TLX workload assessment, overall comfort and fatigue, ease of use, and ease of learning. Our results suggest that the bimanual asymmetric selection techniques are best used when performing gross selection for potentially long periods of time and for cognitively demanding tasks. However, when optimum accuracy is needed, the bimanual symmetric technique was best for selection.
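To picture the kind of selection such techniques perform, the sketch below shows a simple symmetric variant in which each hand contributes one 3D cursor position and the splats falling inside the sphere they span are selected. The function name and the sphere-shaped selection volume are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def select_splats_bimanual_symmetric(splat_positions, left_hand, right_hand):
    """Select splats inside the sphere whose diameter spans the two hand cursors.

    Hypothetical sketch: the actual techniques may use different selection
    volumes and hand roles.
    """
    left_hand = np.asarray(left_hand, dtype=float)
    right_hand = np.asarray(right_hand, dtype=float)
    center = (left_hand + right_hand) / 2.0                 # midpoint between hands
    radius = np.linalg.norm(right_hand - left_hand) / 2.0   # half the hand distance
    dist = np.linalg.norm(splat_positions - center, axis=1)
    return dist <= radius                                    # boolean mask over splats

# Example: 10,000 random splats, hands 0.4 units apart
splats = np.random.rand(10000, 3)
mask = select_splats_bimanual_symmetric(splats, (0.3, 0.5, 0.5), (0.7, 0.5, 0.5))
print(f"Selected {mask.sum()} of {len(splats)} splats")
```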
Proceedings of Seventh Annual IEEE Visualization '96, 1996
This paper describes a minimally immersive interactive system for visualization of multivariate volumetric data. The system, SFA, uses glyph-based volume rendering which does not suffer the initial costs of isosurface rendering or voxel-based volume rendering, while offering the capability of viewing the entire volume. Glyph rendering also allows the simultaneous display of multiple data values per volume location. Two-handed interaction using three-space magnetic trackers and stereoscopic viewing are combined to produce a minimally immersive volumetric visualization system that enhances the user's three-dimensional perception of the data. We describe the usefulness of this system for visualizing volumetric scalar and vector data. SFA allows the three-dimensional volumetric visualization, manipulation, navigation, and analysis of multivariate, time-varying volumetric data, increasing the quantity and clarity of the information conveyed from the visualization system.
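A hedged sketch of the glyph-mapping idea: each volume location contributes one glyph whose visual attributes encode several data values simultaneously. The specific mapping below (size from one scalar, grayscale color from another) is an assumption for illustration; SFA supports richer mappings.

```python
import numpy as np

def glyphs_from_multivariate_volume(positions, scalar_a, scalar_b):
    """Map two data values per volume location to glyph attributes.

    Hypothetical mapping: scalar_a drives glyph size, scalar_b drives a
    grayscale color.
    """
    size = 0.1 + 0.9 * (scalar_a - scalar_a.min()) / np.ptp(scalar_a)
    gray = (scalar_b - scalar_b.min()) / np.ptp(scalar_b)
    color = np.stack([gray, gray, gray], axis=1)   # RGB per glyph
    return [{"pos": p, "size": s, "color": c}
            for p, s, c in zip(positions, size, color)]

# Example: a small 4x4x4 grid with two synthetic fields
grid = np.stack(np.meshgrid(*[np.arange(4)] * 3, indexing="ij"), axis=-1).reshape(-1, 3)
a = np.linalg.norm(grid, axis=1)        # e.g. a density-like field
b = grid[:, 2].astype(float)            # e.g. a temperature-like field
glyphs = glyphs_from_multivariate_volume(grid, a, b)
print(len(glyphs), glyphs[0])
```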
Proceeding Visualization '91, 1991
Interactive direct visualization of 3D data requires fast update rates and the ability to extract regions of interest from the surrounding data. Parallel volume rendering yields rates that make interactive control of image viewing possible for the first time. We have achieved rates as high as 15 frames per second by trading some function for speed, while volume rendering with a full complement of ramp classification capabilities is performed at 1.4 frames per second. These speeds have made the combination of region selection with volume rendering practical for the first time. Semantic-driven selection, rather than geometric clipping, has proven to be a natural means of interacting with 3D data. Internal organs in medical data or other regions of interest can be built from preprocessed region primitives. We have applied the resulting combined system to real 3D medical data with encouraging results. The ideas presented are not limited to our platform, but can be generalized to include most parallel architectures. We present lessons learned from writing fast volume renderers and from applying image processing techniques to viewing volumetric data.
2006
Abstract Visualization of volumetric datasets is common in many fields and has been an active area of research in the past two decades. In spite of developments in volume visualization techniques, interacting with large datasets still demands research efforts due to perceptual and performance issues. The support of graphics hardware for texture-based visualization allows efficient implementation of rendering techniques that can be combined with interactive sculpting tools to enable interactive inspection of 3D datasets.
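The sculpting idea can be pictured as a per-voxel visibility mask that interactive tools carve into; the mask then modulates opacity during texture-based rendering. The spherical carving tool below is an assumption used only to illustrate the approach, not the paper's implementation.

```python
import numpy as np

def carve_sphere(visibility_mask, center, radius):
    """Set mask voxels inside a sphere to 0 so they are skipped at render time.

    Hypothetical sculpting tool: a real implementation would update a 3D
    texture on the GPU rather than a NumPy array.
    """
    zi, yi, xi = np.indices(visibility_mask.shape)
    dist2 = (xi - center[0]) ** 2 + (yi - center[1]) ** 2 + (zi - center[2]) ** 2
    visibility_mask[dist2 <= radius ** 2] = 0
    return visibility_mask

# Example: carve a hole into a 64^3 volume mask
mask = np.ones((64, 64, 64), dtype=np.uint8)
carve_sphere(mask, center=(32, 32, 32), radius=10)
print("Voxels removed:", int((mask == 0).sum()))
```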
International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering, 2007
Conventional human-computer interfaces for the exploration of volume datasets employ the mouse as an input device. Specifying an oblique orientation for a cross-sectional plane through the dataset using such interfaces requires an indirect approach involving a combination of actions that must be learned by the user. In this paper we propose a new interface model that aims to provide ...
Lecture Notes in Computer Science, 2014
Visualization enables scientists to transform data in its raw form into a visual form that facilitates discoveries and insights. Although there are advantages to displaying inherently 3-dimensional (3D) data in immersive environments, those advantages are hampered by the challenges involved in selecting volumes of that data for exploration or analysis. Selection involves the user identifying a set of points for a specific task. This paper presents preliminary data collection on natural user actions for volume selection. It also presents a research agenda outlining an extension to volume selection classification, as well as challenges in designing components for direct selection of volumes of data points.
Analyzing medical volume datasets requires interactive visualization so that users can extract anatomo-physiological information in real time. Conventional volume rendering systems rely on 2D input devices, such as mice and keyboards, which are known to hamper 3D analysis as users often struggle to obtain the desired orientation, which is only achieved after several attempts. In this paper, we address which 3D analysis tasks are better performed with 3D hand cursors operating on a touchless interface compared to 2D input devices running on a conventional WIMP interface. The main goals of this paper are to explore the capabilities of (simple) hand gestures to facilitate sterile manipulation of 3D medical data on a touchless interface, without resorting to wearables, and to evaluate the surgical feasibility of the proposed interface with senior surgeons (N = 5) and interns (N = 2). To this end, we developed a touchless interface controlled via hand gestures and body postures to rapidly rotate and position medical volume images in three dimensions, where each hand acts as an interactive 3D cursor. User studies were conducted with laypeople, while informal evaluation sessions were carried out with senior surgeons, radiologists and professional biomedical engineers. Results demonstrate its usability: the proposed touchless interface improves spatial awareness and supports more fluent interaction with the 3D volume than traditional 2D input devices, as it requires fewer attempts to achieve the desired orientation by avoiding the composition of several cumulative rotations, which is typically necessary in WIMP interfaces. However, tasks requiring precision, such as clipping plane visualization and tagging, are best performed with mouse-based systems due to noise, incorrect gesture detection and problems in skeleton tracking that need to be addressed before tests in real medical environments can be performed.
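One way to picture the hand-as-3D-cursor rotation control is to derive an incremental rotation from the displacement of a tracked hand between frames, as in the axis-angle sketch below. The mapping and function name are hypothetical and are not the interface described in the paper.

```python
import numpy as np

def rotation_from_hand_motion(prev_hand, curr_hand, pivot):
    """Return a 3x3 rotation matrix from the motion of one hand cursor
    around a pivot (e.g. the volume center), using Rodrigues' formula.

    Hypothetical mapping for illustration only.
    """
    v0 = np.asarray(prev_hand, float) - pivot
    v1 = np.asarray(curr_hand, float) - pivot
    v0 /= np.linalg.norm(v0)
    v1 /= np.linalg.norm(v1)
    axis = np.cross(v0, v1)
    sin_a = np.linalg.norm(axis)
    cos_a = np.clip(np.dot(v0, v1), -1.0, 1.0)
    if sin_a < 1e-8:                      # no appreciable motion
        return np.eye(3)
    k = axis / sin_a
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + sin_a * K + (1 - cos_a) * (K @ K)

R = rotation_from_hand_motion((1, 0, 0), (0.9, 0.1, 0), pivot=np.zeros(3))
print(np.round(R, 3))
```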
2011
Radiologists from all application areas are trained to read slice-based visualizations of 3D medical image data. Despite the numerous examples of sophisticated three-dimensional renderings, especially all variants of direct volume rendering, such methods are often considered not very useful by radiologists who prefer slice-based visualization. Just recently there have been attempts to bridge this gap between 2D and 3D renderings. These attempts include specialized techniques for volume picking that result in repositioning slices. In this paper, we present a new volume picking technique that, in contrast to previous work, does not require pre-segmented data or metadata. The positions picked by our method are solely based on the data itself, the transfer function and, most importantly, on the way the volumetric rendering is perceived by viewers. To demonstrate the usefulness of the proposed method we apply it for automatically repositioning slices in an abdominal MRI scan, a data set from a flow simulation and a number of other volumetric scalar fields. Furthermore we discuss how the method can be implemented in combination with various different volumetric rendering techniques.
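A common way to implement perception-based volume picking is to march along the view ray under the picked pixel, composite opacity with the current transfer function, and pick the depth where the accumulated opacity rises most sharply. The sketch below illustrates that idea under assumed names; it is not the authors' exact criterion.

```python
import numpy as np

def pick_depth_along_ray(samples, transfer_function):
    """Pick a sample index on a ray through the volume.

    samples: scalar values sampled along the ray (front to back).
    transfer_function: maps a scalar value to an opacity in [0, 1].
    Returns the index where accumulated opacity grows fastest, i.e. where
    the viewer most likely perceives the visible structure.
    Hypothetical criterion for illustration.
    """
    alpha = np.array([transfer_function(s) for s in samples])
    remaining = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    contribution = remaining * alpha            # per-sample visibility
    accumulated = np.cumsum(contribution)
    jumps = np.diff(accumulated, prepend=0.0)
    return int(np.argmax(jumps))

# Example: a ray that crosses air (0.0) and then tissue (0.8)
ray = np.concatenate([np.zeros(20), np.full(20, 0.8)])
tf = lambda v: min(1.0, max(0.0, (v - 0.3) * 2.0))   # simple ramp opacity
print("Picked sample index:", pick_depth_along_ray(ray, tf))
```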
Interactive data exploration and manipulation are often hindered by dataset sizes. For 3D data, this is aggravated by occlusion, important adjacencies, and entangled patterns. Such challenges make visual interaction via common filtering techniques hard. We describe a set of real-time multi-dimensional data deformation techniques that aim to help users easily select, analyze, and eliminate spatial and data patterns. Our techniques allow animation between view configurations, semantic filtering and view deformation. Any data subset can be selected at any step along the animation. Data can be filtered and deformed to reduce occlusion and ease complex data selections. Our techniques are simple to learn and implement, flexible, and real-time interactive with datasets of tens of millions of data points. We demonstrate our techniques on three domain areas: 2D image segmentation and manipulation, 3D medical volume exploration, and astrophysical exploration.
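A minimal sketch of one such deformation, assuming a radial "push away from focus" displacement that temporarily spreads points out so occluded neighbors become visible and selectable; the falloff shape and parameters are illustrative assumptions, not the paper's method.

```python
import numpy as np

def radial_deform(points, focus, strength=0.5, radius=1.0):
    """Push points near a focus location outward to reduce occlusion.

    Displacement falls off linearly with distance and vanishes beyond
    'radius'. Animating 'strength' from 0 to its target value gives a
    smooth transition between the original and deformed views.
    Hypothetical deformation for illustration.
    """
    offset = points - focus
    dist = np.linalg.norm(offset, axis=1, keepdims=True)
    dist = np.maximum(dist, 1e-9)                       # avoid division by zero
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)    # 1 at focus, 0 at radius
    return points + (offset / dist) * falloff * strength

pts = np.random.rand(1_000_000, 3).astype(np.float32)   # a large point set
deformed = radial_deform(pts, focus=np.array([0.5, 0.5, 0.5], np.float32), strength=0.3)
print(deformed.shape)
```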
3D User Interfaces (3DUI'06)
This paper presents a novel system for interactive visualization and manipulation of medical datasets for surgery planning based on a hybrid VR / Tablet PC user interface. The goal of the system is to facilitate efficient visual inspection and correction of surface models generated by automated segmentation algorithms based on x-ray computed tomography scans, needed for planning surgical resections of liver tumors. Factors like the quality of the visualization, nature of the dataset and interaction efficiency strongly influence system design decisions, in particular the design of the user interface, input devices and interaction techniques, leading to a hybrid setup. Finally, a user study is presented, which characterizes the system in terms of method efficiency and usability.
1993
A fast display method for volumetric data sets is presented, which involves a slice-based method for extracting the potentially visible voxels to represent the visible surfaces. For a given viewing direction, the number of visible voxels can be trimmed further by culling most of the voxels not visible from that direction. The entire 3D array of voxels is also present for invasive operations and direct access to interior structures. This approach has been integrated on a low-cost graphic engine as an interactive system for craniofacial surgical planning that is currently in clinical use.
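One simple way to extract "potentially visible" voxels is to keep only occupied voxels that have at least one empty 6-neighbor, since fully interior voxels can never be seen from outside. The sketch below assumes a binary occupancy volume; the paper's slice-based extraction and view-dependent culling are more elaborate.

```python
import numpy as np

def boundary_voxels(occupancy):
    """Return a mask of occupied voxels with at least one empty 6-neighbor.

    occupancy: boolean 3D array (True = voxel belongs to the object).
    Hypothetical visibility pre-pass for illustration.
    """
    padded = np.pad(occupancy, 1, constant_values=False)
    has_empty_neighbor = np.zeros_like(occupancy, dtype=bool)
    core = (slice(1, -1),) * 3
    for axis in range(3):
        for shift in (-1, 1):
            neighbor = np.roll(padded, shift, axis=axis)[core]
            has_empty_neighbor |= ~neighbor
    return occupancy & has_empty_neighbor

# Example: a solid 20^3 cube inside a 32^3 volume keeps only its shell
vol = np.zeros((32, 32, 32), dtype=bool)
vol[6:26, 6:26, 6:26] = True
shell = boundary_voxels(vol)
print("Occupied:", vol.sum(), "Potentially visible:", shell.sum())
```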
Medical Imaging 2004: Visualization, Image-Guided Procedures, and Display, 2004
This work presents a set of tools developed to provide 3D visualization and interaction with large volumetric data that relies on recent programmable capabilities of consumer-level graphics cards. We are exploiting the programmable control of calculations performed by the graphics hardware for generating the appearance of each pixel on the screen to develop real-time, interactive volume manipulation tools. These tools allow real-time modification of visualization parameters, such as color and opacity classification or the selection of a volume of interest, extending the benefit of hardware acceleration beyond display, namely for computation of voxel visibility. Three interactive tools are proposed: a cutting tool that allows the selection of a convex volume of interest, an eraser-like tool to eliminate non-relevant parts of the image and a digger-like tool that allows the user to eliminate layers of a 3D image. To interactively apply the proposed tools on a volume, we make use of well-known user interaction techniques, such as those used in 2D painting systems. Our strategy is to minimize the effort involved in learning the tools. Finally, we illustrate the potential application of the conceived tools for preoperative planning of liver surgery and for liver vascular anatomy study. Preliminary results concerning system performance and image quality and resolution are presented and discussed.
International Congress Series, 2003
With modern CT scanners, radiologists are facing an ever increasing number of images that cannot be reviewed on a slice-by-slice basis. During the past years, volume rendering has developed into an interesting alternative for reading large medical data volumes. Due to increasing computer power and the development of dedicated acceleration hardware, it can now be realized as a real-time system on standard personal computers at reasonable cost. However, the specification of transfer functions needed to visualize features of interest is still a difficult task [W. Schroeder, C. Bajaj, G. Kindlmann, H. Pfister, 2000. The Transfer Function Bake-Off. IEEE Visualization Conference]. A fast and simple technique for setting transfer functions is crucial for clinical routine work. We present a novel, interactive graphical user interface to deal with this problem.
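A 1D transfer function is essentially a lookup table from intensity to color and opacity; an interactive GUI lets the radiologist reshape that table quickly. Below is a minimal sketch of such a table built from a few control points; the helper name and the trapezoid preset are illustrative assumptions, not the paper's interface.

```python
import numpy as np

def build_transfer_function(control_points, num_entries=256):
    """Build an RGBA lookup table from (intensity, r, g, b, a) control points.

    Intensities are normalized to [0, 1]; values between control points are
    linearly interpolated. Hypothetical helper for illustration.
    """
    pts = sorted(control_points)
    xs = np.array([p[0] for p in pts])
    rgba = np.array([p[1:] for p in pts], dtype=float)
    grid = np.linspace(0.0, 1.0, num_entries)
    table = np.stack([np.interp(grid, xs, rgba[:, c]) for c in range(4)], axis=1)
    return table

# Example: a trapezoid that renders mid intensities as opaque reddish tissue
lut = build_transfer_function([
    (0.00, 0, 0, 0, 0.0),
    (0.30, 1, 0, 0, 0.0),
    (0.45, 1, 0, 0, 0.8),
    (0.65, 1, 1, 1, 0.8),
    (0.80, 1, 1, 1, 0.0),
    (1.00, 0, 0, 0, 0.0),
])
print(lut.shape, lut[128])   # RGBA at intensity ~0.5
```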
2011
Volumetric visualization has many practical applications, particularly in medical imaging. Usability of volumetric visualization algorithms depends on available means to select areas of interest in volumetric data that are to be visualized. A simple and sufficient method is a one-dimensional transfer function that assigns colours to intensity values of the data. A drawback of this method is that it produces visual artefacts in specific cases. In this paper we propose a volumetric visualization method that overcomes this drawback by using filtration with a volumetric ray casting algorithm. Our method enables users to use simple transfer functions with significant visual artefact reduction.
Visual quality of volume rendering for medical imagery strongly depends on the underlying transfer function. Conventional Windows–Icons–Menus–Pointer interfaces typically require the user to browse a lengthy catalog of predefined transfer functions or to painstakingly refine the transfer function by clicking and dragging several independent handles. To make the standard design process less difficult and tedious, this paper proposes novel interactions on a sketch-based interface that supports the design of 1D transfer functions via touch gestures to directly control voxel opacity and easily assign colors. Users can select different types of transfer function shapes, including ramp functions, freehand curve drawing, and slider bars similar to those of a mixing table. An assorted array of thumbnails provides an overview of the data when editing the transfer function. User performance is evaluated by comparing the time and effort necessary to complete a number of tests with sketch-based and conventional interfaces. Users were able to more rapidly explore and understand volume data using the sketch-based interface, as the number of design iterations necessary to obtain a desirable transfer function was reduced. In addition, informal evaluation sessions carried out with professionals (two senior radiologists, a general surgeon and two scientific illustrators) provided valuable feedback on how suitable the sketch-based interface is for illustration, patient communication and medical education.
IEEE Transactions on Visualization and Computer Graphics, 2016
Fig. 1: Left: The top shows the final rendering while the bottom part provides a dynamically generated image gallery of the data domain. Overview and details of the data domain are presented on demand by zooming and panning in the gallery using pinch and swipe touch gestures. Pressing a gallery image enables the user to edit opacity and color of the respective data range. Right: In edit mode, the feature is highlighted in the top by decreasing the opacity of everything else, which was found to be essential for novice users. The bottom area enables intuitive editing of opacity and color using a nonlinear mapping function and a simplified color picker.
Visualization and Computer Graphics, IEEE Transactions on, 2005
We present a user interface, based on parallel coordinates, that facilitates exploration of volume data. By explicitly representing the visualization parameter space, the interface provides an overview of rendering options and enables users to easily explore different parameters. Rendered images are stored in an integrated history bar that facilitates backtracking to previous visualization options. Initial usability testing showed clear agreement between users and experts of various backgrounds (usability, graphic design, ...
The Journal of Supercomputing, 2010
In this paper, we present an approach to interactive out-of-core volume data exploration that has been developed to augment the existing capabilities of the LhpBuilder software, a core component of the European project LHDL (http://www.biomedtown.org/biomed_town/lhdl). The requirements relate to importing, accessing, visualizing and extracting a part of a very large volume dataset by interactive visual exploration. Such datasets contain billions of voxels and, therefore, several gigabytes are required just to store them, which quickly surpass the virtual address limit of current 32-bit PC platforms. We have implemented a hierarchical, bricked, partition-based, out-of-core strategy to balance the usage of main and external memories. A new indexing scheme is introduced, which permits the use of a multiresolution bricked volume layout with minimum overhead and also supports fast data compression. Using the hierarchy constructed in a pre-processing step, we generate a coarse approximation that provides a preview using direct volume visualization for large-scale datasets. A user can interactively explore the dataset by specifying a region of interest (ROI), which further generates a much more accurate data representation inside the ROI. If even more precise accuracy is needed inside the ROI, nested ROIs are used. The software has been constructed using the Multimod Application Framework, a VTK-based system; however, the approach can be adopted for the other systems in a straightforward way. Experimental results show that the user can interactively explore large volume datasets such as the Visible Human Male/Female (with file sizes of 3.15/12.03 GB, respectively) on a commodity graphics platform, with ease.
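The bricked, multiresolution layout can be illustrated with a small indexing helper that maps a voxel coordinate to a brick identifier and an offset inside that brick, which is what an out-of-core loader needs to fetch only the bricks covering a region of interest. The brick size, layout order and function names below are assumptions for illustration, not the indexing scheme of the paper.

```python
import numpy as np

BRICK = 32  # assumed edge length of a cubic brick, in voxels

def voxel_to_brick(voxel, volume_dims, brick=BRICK):
    """Map an (x, y, z) voxel coordinate to (brick_id, offset_within_brick).

    Bricks are laid out in row-major order over the brick grid, so brick_id
    can index directly into an on-disk table of brick file offsets.
    Hypothetical indexing scheme for illustration.
    """
    voxel = np.asarray(voxel)
    dims = np.asarray(volume_dims)
    bricks_per_axis = -(-dims // brick)                 # ceiling division
    brick_coord, offset = np.divmod(voxel, brick)
    brick_id = (brick_coord[2] * bricks_per_axis[1] * bricks_per_axis[0]
                + brick_coord[1] * bricks_per_axis[0]
                + brick_coord[0])
    return int(brick_id), tuple(offset)

def bricks_for_roi(roi_min, roi_max, volume_dims, brick=BRICK):
    """Return the set of brick ids intersecting an axis-aligned ROI (inclusive bounds)."""
    lo = [c // brick for c in roi_min]
    hi = [c // brick for c in roi_max]
    ids = set()
    for bz in range(lo[2], hi[2] + 1):
        for by in range(lo[1], hi[1] + 1):
            for bx in range(lo[0], hi[0] + 1):
                ids.add(voxel_to_brick((bx * brick, by * brick, bz * brick),
                                       volume_dims, brick)[0])
    return ids

# Example: which bricks of a 512^3 volume cover a small ROI?
print(bricks_for_roi((100, 100, 100), (160, 130, 110), (512, 512, 512)))
```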
Proceedings of the 1997 symposium on Interactive 3D graphics - SI3D '97, 1997
On many scales, volume data sets often lack the resolution to allow automatic network segmentation, from blood vessels to molten rock channels, even where humans can see the network clearly. We introduce a precise and dextrous environment for users to transform perception into data, using a stereoscopic view for easy identification of vessels. Our tools exploit the reach-in, hand-immersion ergonomics of the Virtual Workbench to allow sustained productive work, and 3D-textured subvolumes to allow interactive vessel tracing in real time.
This paper presents Direct Volume Rendering (DVR) improvement strategies, which provide new opportunities for scientific and medical visualization that are not available to the same degree in analogous systems: 1) multi-volume rendering in a single space of up to 3 volumetric datasets defined in different coordinate systems, with sizes of up to 512x512x512 16-bit values; 2) performing the above process in real time on a middle-class GPU, e.g. an nVidia GeForce GTS 250 with 512 MB; 3) a custom bounding mesh for more accurate selection of the desired region in addition to the clipping bounding box; 4) simultaneous usage of a number of visualization techniques, including shaded Direct Volume Rendering via 1D or 2D transfer functions, multiple semi-transparent discrete iso-surface visualization, MIP, and MIDA. The paper discusses how the new properties affect the implementation of the DVR. In the DVR implementation we use such optimization strategies as early ray termination and empty space skipping. The clipping ability is also used as an empty space skipping approach to improve rendering performance. We use random ray start position generation and subsequent frame accumulation in order to reduce rendering artifacts. The rendering quality can also be improved by on-the-fly tri-cubic filtering during the rendering process. Our framework supports 4 different stereoscopic visualization modes. Finally, we outline the visualization performance in terms of frame rates for different visualization techniques on different graphics cards.
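To make the two named optimizations concrete, the sketch below shows a front-to-back compositing loop for one ray that terminates early once accumulated opacity is nearly saturated and skips samples flagged as empty. The array layout, threshold and function name are assumptions for illustration of the general technique, not the paper's GPU implementation.

```python
import numpy as np

def composite_ray(opacities, colors, empty_mask, termination_threshold=0.99):
    """Front-to-back compositing with empty space skipping and early ray
    termination. All arrays run front to back along one ray.

    Hypothetical CPU sketch of what a DVR fragment shader would do.
    """
    accum_color = np.zeros(3)
    accum_alpha = 0.0
    for i in range(len(opacities)):
        if empty_mask[i]:                          # empty space skipping
            continue
        a = opacities[i]
        accum_color += (1.0 - accum_alpha) * a * colors[i]
        accum_alpha += (1.0 - accum_alpha) * a
        if accum_alpha >= termination_threshold:   # early ray termination
            break
    return accum_color, accum_alpha

# Example: 100 samples, the first 40 flagged empty, the rest semi-opaque tissue
n = 100
vals = np.linspace(0.0, 1.0, n)
opac = np.where(vals > 0.4, 0.2, 0.0)
cols = np.tile([1.0, 0.8, 0.7], (n, 1))
empty = vals <= 0.4
color, alpha = composite_ray(opac, cols, empty)
print(np.round(color, 3), round(alpha, 3))
```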