2021
Detecting whether collisions occur between objects is usually required to study their interaction, increasing the realism of virtual environments. Collision detection between polygonal objects has been widely studied, and more recently some studies have addressed collisions between volume objects. This work introduces collision detection between volume datasets and polygonal objects. Such mixed scenes appear naturally in many applications, such as surgery simulation and volume editing. To detect collisions, the volume dataset is first represented as a single 3D texture. A mapping from eye space to volume space is then established, such that each mesh fragment has a 3D texture coordinate. The collision is verified per fragment during the rasterization stage. We use the OpenGL® occlusion query extension to count the number of mesh fragments colliding with the volume. Our tests show that up to 3800 volume-mesh pairs may be evaluated in one second.
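The per-fragment test described above can be sketched on the CPU. This is an illustrative sketch only, not the authors' GPU implementation: all names (`count_colliding_fragments`, `eye_to_volume`) are hypothetical, and the occlusion-query counting is replaced by an explicit counter.

```python
def count_colliding_fragments(fragments_eye, eye_to_volume, volume, alpha_threshold=0.0):
    """CPU sketch of the per-fragment collision test: each mesh fragment
    (given in eye space) is mapped into the volume's texture space and
    counts as colliding when it samples an opaque voxel. On the GPU the
    count would come from an occlusion query over surviving fragments."""
    nx, ny, nz = len(volume), len(volume[0]), len(volume[0][0])
    hits = 0
    for p in fragments_eye:
        # 4x4 homogeneous transform from eye space into [0,1)^3 texture space
        x, y, z = p
        q = [sum(eye_to_volume[r][c] * v for c, v in enumerate((x, y, z, 1.0)))
             for r in range(4)]
        tc = [q[i] / q[3] for i in range(3)]
        if any(t < 0.0 or t >= 1.0 for t in tc):
            continue  # fragment falls outside the volume's texture space
        # nearest-neighbour voxel lookup (GPU sampling would interpolate)
        i, j, k = int(tc[0] * nx), int(tc[1] * ny), int(tc[2] * nz)
        if volume[i][j][k] > alpha_threshold:  # opaque voxel => collision
            hits += 1
    return hits
```

The `alpha_threshold` parameter stands in for the opacity classification the transfer function would perform.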
International Journal of Creative Interfaces and Computer Graphics, 2012
Collision detection has been studied for scenes containing only polygonal objects (surfaces) or only volumes. With the evolution of graphics hardware, surfaces and volumes can be rendered together, posing new challenges for the area of collision detection. Along these lines, the authors propose the first approach for volume-surface collision detection with GPU support. A mapping from surface space to texture space is established, such that each mesh fragment has a 3D texture coordinate. The volume-surface collision is tested in the fragment shader by verifying whether a surface fragment is texturized with an opaque voxel. The OpenGL® occlusion query extension is used to count the number of mesh fragments colliding with the volume. Since one surface can be texturized with multiple volume textures, the authors' approach extends naturally to discarding collisions between one surface and several volumes in a single pass, with minimal impact on rendering time. The authors' tests re...
Proceedings of the 11th National Science and Technology Conference on Fundamental and Applied Information Technology Research, 2018
Collision detection is an important component of many applications in computer graphics. Klosowski proposed the k-DOP bounding volume, a convex polytope whose facets are determined by halfspaces whose outward normals come from a small fixed set of k orientations. However, the cost of creating the bounding volumes and of collision detection with k-DOPs is high. To achieve high accuracy with a k-DOP, the number of facets must be very large; this is the biggest drawback of k-DOP methods. In this paper, we focus on the properties of the Bounding Volume Hierarchy (BVH) and on estimating the degree of the tree in order to increase the efficiency of collision detection between rigid bodies in virtual environments; test results showed efficacy with the framework for Animation of virtual Characters in Three dimensions (fACT).
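The k-DOP idea referred to above can be illustrated with a minimal sketch: a k-DOP stores the min/max projection intervals of a model along a small fixed set of axis directions, and two k-DOPs built over the same axes can only collide if their intervals overlap on every axis. Function names are hypothetical, not Klosowski's implementation.

```python
def kdop_from_points(points, axes):
    """Build a DOP as (min, max) projection intervals along each axis.
    Each axis contributes two parallel bounding halfspaces, so three
    coordinate axes give the familiar 6-DOP (an axis-aligned box);
    adding diagonal directions tightens the fit at the cost of more
    facets, which is the accuracy/cost trade-off noted above."""
    dop = []
    for a in axes:
        proj = [sum(p_i * a_i for p_i, a_i in zip(p, a)) for p in points]
        dop.append((min(proj), max(proj)))
    return dop

def kdops_overlap(d1, d2):
    """Two DOPs over the same axes are disjoint as soon as one axis
    separates their projection intervals; otherwise they *may* collide
    and an exact test on the underlying geometry would follow."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(d1, d2))
```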
IEEE Transactions on Visualization and Computer Graphics, 1999
In Volume Graphics, objects are represented by arrays or clusters of sampled 3D data. A volumetric object representation is necessary in computer modeling whenever interior structure affects an object's behavior or appearance. However, existing volumetric representations are not sufficient for modeling the behaviors expected in applications such as surgical simulation, where interactions between both rigid and deformable objects and the cutting, tearing, and repairing of soft tissues must be modeled in real time. 3D voxel arrays lack the sense of connectivity needed for complex object deformation, while finite element models and mass-spring systems require substantially reduced geometric resolution for interactivity and cannot be easily cut or carved interactively. This paper discusses a linked volume representation that enables physically realistic modeling of object interactions such as collision detection, collision response, 3D object deformation, and interactive object modification by carving, cutting, tearing, and joining. The paper presents a set of algorithms that allow interactive manipulation of linked volumes that have more than an order of magnitude more elements and considerably more flexibility than existing methods. Implementation details, results from timing tests, and measurements of material behavior are presented.
2011
We present a novel culling algorithm to perform fast and robust continuous collision detection between deforming volume meshes. This includes a continuous separating axis test that can conservatively check whether two volume meshes overlap during a given time interval. In addition, we present efficient methods to eliminate redundant elementary tests between the features (e.g., vertices, edges, and faces) of volume elements (e.g., tetrahedra, hexahedra, triangular prisms, etc.). Our approach is applicable to various deforming meshes, including those with changing topologies, and efficiently computes the first time of contact. We are able to perform inter-object and intra-object collision queries in models represented with tens of thousands of volume elements at interactive rates on a single CPU core. Moreover, we observe more than an order of magnitude performance improvement over prior methods.
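The continuous separating-axis idea can be sketched under the assumption that vertex positions are interpolated linearly over the query interval (the paper's actual test handles full volume elements and many candidate axes; all names here are hypothetical):

```python
def project_interval(verts, axis):
    """Min/max projection of a vertex set onto a fixed axis."""
    proj = [sum(v_i * a_i for v_i, a_i in zip(v, axis)) for v in verts]
    return min(proj), max(proj)

def separated_on_axis(a0, a1, b0, b1, axis):
    """Continuous separating-axis check for linearly interpolated vertex
    positions over t in [0, 1]. a0/a1 (b0/b1) are the vertex lists of
    element A (B) at t = 0 and t = 1. For every vertex pair (i, j) the
    gap p_Bj(t) - p_Ai(t) is linear in t, so if A's interval lies
    entirely below B's at both endpoints (or entirely above at both),
    every pairwise gap keeps its sign and the elements stay separated
    on this axis for the whole interval."""
    amin0, amax0 = project_interval(a0, axis)
    amin1, amax1 = project_interval(a1, axis)
    bmin0, bmax0 = project_interval(b0, axis)
    bmin1, bmax1 = project_interval(b1, axis)
    below = amax0 < bmin0 and amax1 < bmin1   # A stays below B
    above = amin0 > bmax0 and amin1 > bmax1   # A stays above B
    return below or above
```

A pair that is not separated on any tested axis would then go on to an exact elementary test (or a first-time-of-contact computation).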
Proceedings of the Computer Graphics International 2010 Conference, 2010
2000
Point cloud models are a common shape representation for several reasons. Three-dimensional scanning devices are widely used nowadays, and points are an attractive primitive for rendering complex geometry. Nevertheless, there is not much literature on collision detection for point cloud models. This paper presents a novel collision detection algorithm for point cloud models. The scene graph is divided into voxels. The objects of each voxel are organized in R-tree hierarchies of Axis-Aligned Bounding Boxes to group neighboring points and quickly filter out parts of objects that do not interact with other models. The proposed algorithm also uses Overlapping Axis-Aligned Bounding Boxes to improve the performance of the collision detection process. Points derived from laser-scanned data are typically not segmented and can have arbitrary spatial resolution, thus introducing computational and modeling issues. We address these issues, and results show that the proposed collision detection algorithm effectively finds intersections between point cloud models, since it is able to reduce the number of bounding volume checks and updates.
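The filtering principle can be shown in a deliberately simplified form: group points into leaves, bound each leaf with an AABB, and only test point pairs whose leaf boxes overlap. This is a flat stand-in for the R-tree hierarchy described above, and every name (`leaf_boxes`, `clouds_intersect`, `tol`) is a hypothetical helper, not the authors' code.

```python
def leaf_boxes(points, leaf_size=8):
    """Group consecutive points into leaves and bound each with an AABB.
    A real R-tree would nest such boxes hierarchically and cluster by
    spatial proximity rather than input order."""
    leaves = []
    for s in range(0, len(points), leaf_size):
        chunk = points[s:s + leaf_size]
        lo = [min(p[i] for p in chunk) for i in range(3)]
        hi = [max(p[i] for p in chunk) for i in range(3)]
        leaves.append((lo, hi, chunk))
    return leaves

def clouds_intersect(cloud_a, cloud_b, tol=1e-2):
    """Point clouds have no surface, so 'intersection' is taken here as
    two points closer than a tolerance. Only point pairs from leaves
    with overlapping (tolerance-inflated) boxes are tested, filtering
    out distant parts of the models early."""
    for lo_a, hi_a, pts_a in leaf_boxes(cloud_a):
        for lo_b, hi_b, pts_b in leaf_boxes(cloud_b):
            if any(lo_a[i] > hi_b[i] + tol or lo_b[i] > hi_a[i] + tol
                   for i in range(3)):
                continue  # leaf boxes disjoint: skip all point pairs
            for p in pts_a:
                for q in pts_b:
                    if sum((p[i] - q[i]) ** 2 for i in range(3)) <= tol * tol:
                        return True
    return False
```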
Proceedings of the sixth international conference on 3D Web technology - Web3D '01, 2001
Volume visualization has become an invaluable visualization tool. A wide variety of data sets coming from medical applications (e.g. MRI, CT or 3D ultrasound) or geological sensory information are represented as structured volume grids. In many applications it is favorable to access the data sets over the net and explore the volume on a typical PC. Modern graphics hardware makes volume rendering at interactive rates possible. However, protocols for the exchange of 3D graphics content such as VRML97 are not equipped to process volume data. This paper presents an approach using 2D/3D textures and standard rendering hardware, which allows real-time rendering of volume and polygonal data in VRML applications. The proposed environment enables the user to navigate through, and interact with, the VRML scene, combining volume and surface model data sets.
Medical Image Analysis, 1998
Surgical simulation has many applications in medical education, surgical training, surgical planning and intra-operative assistance. However, extending current surface-based computer graphics methods to model phenomena such as the deformation, cutting, tearing or repairing of soft tissues poses significant challenges for real-time interactions. This paper discusses the use of volumetric methods for modeling complex anatomy and tissue interactions. New techniques are introduced that use volumetric methods for modeling soft-tissue deformation and tissue cutting at interactive rates. An initial prototype for simulating arthroscopic knee surgery is described which uses volumetric models of the knee derived from 3-D magnetic resonance imaging, visual feedback via real-time volume and polygon rendering, and haptic feedback provided by a force-feedback device.
1993
Volume rendering is a title often ambiguously used in science. One meaning often quoted is 'to render any three dimensional volume data set'; however, this categorisation also contains "surface rendering". Surface rendering is a technique for visualising a geometric representation of a surface from a three dimensional volume data set. A more correct definition of volume rendering would only incorporate the direct visualisation of volumes, without the use of intermediate surface geometry representations. Hence we state: 'Volume Rendering is the direct visualisation of any three dimensional volume data set, without the use of an intermediate geometric representation for isosurfaces'; 'Surface Rendering is the visualisation of a surface, from a geometric approximation of an isosurface, within a volume data set'; where an isosurface is a surface formed from a cross connection of data points, within a volume, of equal value or density. This paper is an overview of both surface rendering and volume rendering techniques. Surface rendering mainly consists of contouring lines over data points and triangulations between contours. Volume rendering methods consist of ray casting techniques that allow the ray to be cast from the viewing plane into the object, with the transparency, opacity and colour calculated for each cell; the rays are often cast until an opaque object is 'hit' or the ray exits the volume.
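The per-ray accumulation described at the end of that abstract is standard front-to-back alpha compositing with early ray termination; a minimal sketch, with all names hypothetical and colour reduced to a single scalar channel for brevity:

```python
def composite_ray(samples, opacity_cutoff=0.99):
    """Front-to-back compositing along one ray. Each sample is a
    (color, alpha) pair produced by classifying the cell the ray
    passes through. The ray terminates early once accumulated
    opacity approaches 1, i.e. once an opaque object is 'hit';
    otherwise accumulation continues until the ray exits the volume."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # contribution attenuated by what is in front
        alpha += (1.0 - alpha) * a
        if alpha >= opacity_cutoff:
            break  # remaining samples are occluded
    return color, alpha
```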
In this paper we present a novel haptic rendering method for the exploration of volumetric data. It addresses a recurring flaw in almost all related approaches, where the manipulated object, when moved too quickly, can pass through or end up inside an obstacle. Additionally, either a specific topological structure for the collision objects is needed, or extra speed-up data structures must be prepared. These issues can make such methods difficult to use in practice. Our approach was designed to be free of such drawbacks. An improved version of the method presented here does not have the issues of the original method, namely oscillations of the interaction point and wrong friction force in some cases. It uses the ray casting technique for collision detection and a path finding approach for rigid collision response. The method operates directly on voxel data and does not use any precalculated structures, but uses an implicit surface representation generated on the fly. This means that a virtual scene may be either dynamic or static. Additionally, the presented approach has a nearly constant time complexity independent of data resolution.
1997
Surgical simulation has many applications in education and training, surgical planning, and intra-operative assistance. However, extending current surface-based computer graphics methods to model phenomena such as the deformation, cutting, tearing, or repairing of soft tissues poses significant challenges for real-time interactions. In this paper, the use of volumetric methods for modeling complex anatomy and tissue interactions is introduced. New techniques for modeling soft tissue deformation and tissue cutting at interactive rates are detailed. In addition, an initial prototype for simulating arthroscopic knee surgery that has resulted from an ongoing collaboration is described. Volumetric models for the knee simulator were derived from 3D Magnetic Resonance Imaging. Visual and haptic feedback is provided to the user via real-time volume and polygon rendering and a force feedback device.
Teleoperators and Virtual Environments, 1998
We propose an accurate collision detection algorithm for use in virtual reality applications. The algorithm works for three-dimensional graphical environments where multiple objects, represented as polyhedra (boundary representation), are undergoing arbitrary motion (translation and rotation). The algorithm can be used directly for both convex and concave objects, and objects can be deformed (nonrigid) during motion. The algorithm works efficiently by first reducing the number of face pairs that need to be checked accurately for interference, localizing possible collision regions using bounding box and spatial subdivision techniques. Face pairs that remain after this pruning stage are then accurately checked for interference. The algorithm is efficient, simple to implement, and does not require any memory-intensive auxiliary data structures to be precomputed and updated. The performance of the proposed algorithm is compared directly against other existing algorithms, e.g., the separating plane algorithm, octree update method, and distance-based method. Results are given to show the efficiency of the proposed method in a general environment.
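The broad-phase pruning described above can be sketched with a uniform grid: hash each face's bounding box into grid cells and emit only face pairs that share a cell. This is an assumption-laden stand-in (the paper's exact subdivision scheme may differ), and all names, including the `cell` size parameter, are hypothetical.

```python
from collections import defaultdict
from itertools import product

def face_aabb(face):
    """Axis-aligned bounding box of one face's vertex list."""
    return ([min(v[i] for v in face) for i in range(3)],
            [max(v[i] for v in face) for i in range(3)])

def cells_of(lo, hi, cell):
    """All grid cells touched by the box [lo, hi]."""
    ranges = [range(int(lo[i] // cell), int(hi[i] // cell) + 1) for i in range(3)]
    return product(*ranges)

def candidate_face_pairs(faces_a, faces_b, cell=1.0):
    """Broad phase: only face pairs whose AABBs share a grid cell
    survive; every surviving pair would then undergo an exact
    face-face interference test in the narrow phase."""
    grid = defaultdict(list)
    for idx, face in enumerate(faces_a):
        lo, hi = face_aabb(face)
        for key in cells_of(lo, hi, cell):
            grid[key].append(idx)
    pairs = set()
    for jdx, face in enumerate(faces_b):
        lo, hi = face_aabb(face)
        for key in cells_of(lo, hi, cell):
            for idx in grid[key]:
                pairs.add((idx, jdx))
    return pairs
```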
Proceedings Computer Animation 1999, 1999
We present a simple method for performing real-time collision detection in a virtual surgery environment. The method relies on the graphics hardware for testing the interpenetration between a virtual deformable organ and a rigid tool controlled by the user. The method makes it possible to take into account the motion of the tool between two consecutive time steps. For our specific application, the new method runs about a hundred times faster than the well-known oriented-bounding-box tree method.
Mathematics and Visualization, 2016
We describe a method for combining and visualizing a set of overlapping volume images with high resolution but limited spatial extent. Our system combines the calculation of a registration metric with ray casting for direct volume rendering into a combined operation performed on the graphics processing unit (GPU). The combined calculation reduces memory traffic, increases rendering frame rate, and makes possible interactive-speed, user-supervised, semi-automatic combination of many component volume images. For volumes that do not overlap any other imaged volume, the system uses contextual information provided in the form of an overall 2D background image to calculate a registration metric.
If two closed polygonal objects with outfacing normals intersect each other there exist one or more lines that intersect these objects at at least two consecutive front or back facing object points. In this work we present a method to efficiently detect these lines using depth-peeling and simple fragment operations. Of all polygons only those having an intersection with any of these lines are potentially colliding. Polygons not intersected by the same line do not intersect each other. We describe how to find all potentially colliding polygons and the potentially colliding pairs using a mipmap hierarchy that represents line bundles at ever increasing width. To download only potentially colliding polygons to the CPU for polygon-polygon intersection testing, we have developed a general method to convert a sparse texture into a packed texture of reduced size. Our method exploits the intrinsic strength of GPUs to scan convert large sets of polygons and to shade billions of fragments at interactive rates. It neither requires a bounding volume hierarchy nor a pre-processing stage, so it can efficiently deal with very large and deforming polygonal models. The particular design makes the method suitable for applications where geometry is modified or even created on the GPU.
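The sparse-to-packed texture conversion mentioned above is, in essence, stream compaction: an exclusive prefix sum over validity flags assigns each valid cell its slot in the packed output. A CPU sketch with hypothetical names follows; on the GPU this would run as a parallel scan over the texture.

```python
def pack_sparse(cells, is_valid):
    """Compact a sparse sequence: keep only the cells for which
    is_valid holds, preserving order. The exclusive prefix sum over
    the 0/1 validity flags gives each valid cell its contiguous
    output index, which is what lets only the potentially colliding
    polygons be downloaded to the CPU at reduced size."""
    flags = [1 if is_valid(c) else 0 for c in cells]
    offsets, total = [], 0
    for f in flags:            # exclusive prefix sum of the flags
        offsets.append(total)
        total += f
    packed = [None] * total
    for c, f, o in zip(cells, flags, offsets):
        if f:
            packed[o] = c      # scatter each valid cell to its slot
    return packed
```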
2012 International Conference on Cyberworlds, 2012
Haptic exploration adds an additional dimension to working with 3D data: a sense of touch. This is especially useful in areas such as medical simulation, training and pre-surgical planning, as well as in museum display, sculpting, CAD, military applications, assistive technology for the blind and visually impaired, entertainment and others. There exist different surface- and voxel-based haptic rendering methods. An unaddressed practical problem for almost all of them is that no guarantees for collision detection can be given and/or that a special topological structure of objects is required. Here we present a novel and robust approach based on employing the ray casting technique for collision detection, which does not have the aforementioned drawbacks while guaranteeing nearly constant time complexity independent of data resolution. This is especially important for such delicate procedures as pre-operation planning. The collision response in the presented prototype system is rigid and operates on voxel data, and no precalculation is needed. Additionally, our collision response uses an implicit surface representation "on the fly", which can be used with dynamically changing objects.
International Journal of Technology, 2019
2006
This paper presents a method for fast approximate collision detection between 3D models S undergoing rigid-body motion, based on a bounding volume known as the oriented convex polyhedron R(S). By enclosing 3D models tightly, the accuracy of detected collisions can be enhanced. It is known that the large number of void areas belonging to any 3D bounding volume B(S) can affect the accuracy of a collision detection system. Therefore, a way to compute R(S) using the intersection of a set of halfspaces is described. The directions of these halfspaces are generated by calculating a covariance matrix. To develop the tightest R(S), the quality of abutting corners is improved by implementing the Tribox Bounds method. To detect collisions between R(S), a straightforward approach of simply checking their interval pairs in the local space system is performed. The proposed approach was implemented, and a number of comparisons with other B(S) in terms of time and recorded collisions were performed. From the conducted tests, R(S) performs well and may be a possible choice for detecting collisions of 3D models undergoing rigid-body motion.
2010
Volume rendering is a well known visualization technique for volumetric scalar data. Several publications have already described the acceleration of this technique using the graphics processing unit (GPU). This enables real-time viewpoint manipulation which, in turn, facilitates interpretation. Stereo, where two slightly different images are generated for each eye, has proved to be a valuable addition to this, mostly because, apart from shading and motion parallax, stereo is one of the most important depth cues available in this context. We have written a fully object-oriented C++ framework that contains software for both GPU-accelerated volume rendering and stereo image generation. Using this, the benefits of stereo are presented in a qualitative fashion. One example shows how it helps doctors with the examination of CT scans; a second illustrates the benefits for other applications, here the analysis of volume data originating from materials science simulations.