2015
A system is described to remotely perform complex geometry processing on arbitrarily large triangle meshes. A distributed network of servers provides both the software and hardware necessary to undertake the computations, while the overall execution is managed by a central engine that both invokes appropriate Web services and handles the data transmission. Nothing more than a standard web browser needs to be installed on the client machine hosting the input mesh. The user interface allows users to build complex pipelines by stacking geometric algorithms and by controlling their execution through conditions and cycles. Besides the technological contribution, an innovative mesh transfer protocol is described to treat large datasets whose transmission across scattered servers may represent a bottleneck. Also, efficiency and effectiveness are guaranteed thanks to a novel divide-and-conquer approach that the engine exploits to partition large meshes into smaller pieces, each delivered to a ded...
This paper discusses the implementation of a distributed geometry for parallel mesh generation, involving dynamic load balancing and hence dynamic re-partitioning of the geometry. A novel approach is described for improving the efficiency of the distributed geometry interface when dealing with irregularly shaped mesh partitions.
Algorithms
We propose a new strategy for the parallelization of mesh processing algorithms. Our main contribution is the definition of distributed combinatorial maps (called n-dmaps), which allow us to represent the topology of big meshes by splitting them into independent parts. Our mathematical definition ensures the global consistency of the meshes at their interfaces. Thus, an n-dmap can be used to represent a mesh, to traverse it, or to modify it with different mesh processing algorithms. Moreover, an nD mesh with a huge number of elements can be handled, which is not possible with a sequential approach and a regular data structure. We illustrate the benefits of our solution by presenting a parallel adaptive subdivision method for 3D hexahedral meshes, implemented in a distributed version. We report space and time performance results that demonstrate the suitability of our approach for parallel processing of huge meshes.
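A minimal sketch of the idea of splitting a mesh into independent parts while keeping the interfaces consistent; the `Dart`, `RemoteRef`, and `MeshPart` names and field layout are assumptions for illustration, not the paper's actual n-dmap definition.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// One half-edge-like "dart"; -1 marks a neighbour that lives on another part.
struct Dart {
    std::int64_t next   = -1;  // next dart around the same face (local index)
    std::int64_t beta2  = -1;  // opposite dart, or -1 if it is remote
    std::int64_t vertex = -1;  // vertex this dart originates from
};

// Reference to a dart owned by another process (assumed indirection).
struct RemoteRef {
    int          owner_rank;   // process owning the dart
    std::int64_t remote_id;    // dart index inside that owner's part
};

// One independent part of the global mesh, plus the interface bookkeeping
// needed to keep the parts globally consistent.
struct MeshPart {
    std::vector<Dart> darts;                                // local topology
    std::unordered_map<std::int64_t, RemoteRef> interface;  // local dart -> remote twin
};
```

Traversals stay purely local until they reach a dart listed in `interface`, at which point the owning process is asked to continue; this is the property that lets each part be processed independently.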
2011
Scientists commonly turn to supercomputers or clusters of workstations with hundreds (even thousands) of nodes to generate meshes for large-scale simulations. Parallel mesh generation software is then used to decompose the original mesh generation problem into smaller sub-problems that can be solved (meshed) in parallel. The size of the final mesh is limited by the amount of aggregate memory of the parallel machine. Also, requesting many compute nodes on a shared computing resource may result in a long wait, far surpassing the time it takes to solve the problem. These two problems (i.e., insufficient memory when computing on a small number of nodes, and long waiting times when using many nodes from a shared computing resource) can be addressed by using out-of-core algorithms. These are algorithms that keep most of the dataset out-of-core (i.e., outside of memory, on disk) and load only a portion in-core (i.e., into memory) at a time. We explored two approaches to out-of-core comp...
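A minimal sketch of the general out-of-core pattern described here, keeping the bulk of the data on disk and streaming a fixed-size portion through memory; the file name, binary layout, and chunked loop are assumptions for illustration, not the authors' implementation.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

struct Vertex { double x, y, z; };

// Stream a vertex file through a small in-core buffer: the bulk of the data
// stays on disk and only `chunk` vertices are resident at any time.
// The flat binary layout of the input file is assumed for illustration.
void process_out_of_core(const char* path, std::size_t chunk) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return;
    std::vector<Vertex> buf(chunk);
    std::size_t n;
    while ((n = std::fread(buf.data(), sizeof(Vertex), chunk, f)) > 0) {
        for (std::size_t i = 0; i < n; ++i) {
            // per-vertex work goes here (e.g. bounding-box update, smoothing pass)
        }
    }
    std::fclose(f);
}
```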
Advances in Engineering Software, 2013
This work describes a technique for generating two-dimensional triangular meshes using distributed-memory parallel computers, based on a master/slaves model. This technique uses a coarse quadtree to decompose the domain and a serial advancing-front technique to generate the mesh in each subdomain concurrently. In order to advance the front into a neighboring subdomain, each subdomain is shifted in a Cartesian direction, and the same advancing-front approach is performed on the shifted subdomain. This shift-and-remesh procedure is applied repeatedly until no more mesh can be generated, shifting the subdomains in different directions each turn. A finer quadtree is also employed in this work to help estimate the processing load associated with each subdomain. This load estimation technique produces results that accurately represent the number of elements to be generated in each subdomain, leading to proper runtime prediction and to a well-balanced algorithm. The meshes generated with the parallel technique have the same quality, within acceptable limits, as those generated serially. Although the presented approach is two-dimensional, the idea can be easily extended to three dimensions.
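A hedged sketch of the load-estimation idea: sum a per-leaf element estimate of the finer quadtree over each subdomain. The density model (leaf area divided by the square of a local element size) and the function names are assumptions, not the paper's exact formula.

```cpp
#include <vector>

struct Cell { double x, y, size; };   // a leaf of the finer estimation quadtree

// Estimate the number of elements a subdomain will produce by summing a
// per-cell estimate over the fine quadtree leaves it contains.
// `local_element_size` is a hypothetical sizing function.
double estimate_load(const std::vector<Cell>& leaves,
                     double (*local_element_size)(double x, double y)) {
    double load = 0.0;
    for (const Cell& c : leaves) {
        double h = local_element_size(c.x, c.y);
        load += (c.size * c.size) / (h * h);   // expected triangles in this leaf
    }
    return load;
}
```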
Computer-aided Design, 2011
In recent years, the concept of distributed systems has been applied in industry to enable cooperative work and to collect distributed information. A distributed geometrical modeling system has been developed to share functions among systems on the network with a peer-to-peer (P2P) structure. The systems linked on the network can use the functions of other systems, and such a network can perform operations concurrently using other systems, which saves time. Key to constructing this kind of distributed CAD system is the transfer of three-dimensional (3D) CAD model data and access to the locations of providers. The developed system shares functions between client (requester) and server (provider) roles as a P2P system, and is constructed using a CAD kernel and COM/DCOM technology. Simple operations, such as Boolean operations, obtaining properties, triangulation, and tessellation, have been performed and tested with the developed system. The developed system has been evaluated against stand-alone and client-server systems for simple operations based on two criteria. In the first evaluation, the processing times of simple operations are compared among the systems: the stand-alone system is faster than the others. In the second evaluation, the systems are overloaded and the processing times are compared: the developed system is faster than the others.
2012
Remote access to large meshes has been the subject of study for several years. In this paper we propose a contribution to the problem of remote mesh viewing, working with triangular meshes. After a study of existing methods of remote viewing, we propose a visualization approach based on a client-server architecture, in which almost all operations are performed on the
2012
Dealing with large simulations is a growing challenge. Ideally, for well-parallelized software prepared for high performance, the problem-solving capability depends on the available hardware resources. In practice, however, several technical details reduce the scalability of the system and prevent the effective use of such software for large problems. In this work we describe the solutions implemented in order to obtain a scalable system to solve and visualize large-scale problems. The present work is based on the Kratos MultiPhysics [1] framework in combination with the GiD [2] pre- and post-processor. The applied techniques are verified by the CFD simulation and visualization of a wind tunnel problem with more than 100 million elements on our in-house cluster at CIMNE.
1996
A method is outlined for optimising graph partitions which arise in mapping unstructured mesh calculations to parallel computers. The method employs a combination of iterative techniques to both evenly balance the workload and minimise the number and volume of interprocessor communications.
1995
We give an overview of some strategies for mapping unstructured meshes onto processor grids. Sample results show that the mapping can make a considerable difference to the communication overhead in the parallel solution time, particularly as the number of processors increases.
2008
The paper presents MeshLab, an open source, extensible mesh processing system that has been developed at the Visual Computing Lab of ISTI-CNR with the help of tens of students. We describe the MeshLab architecture, its main features, and its design objectives, discussing the strategies that have been used to support its development. Various examples of the practical use of MeshLab in research and professional contexts are reported to show the capabilities of the presented system.
Parallel computing is commonly used in computational fluid dynamics. For distributed-memory machines, currently the most common architecture for large parallel machines, mesh partitioning is used to distribute the workload. In iterative codes, the cost of synchronising the partition borders is the dominant factor in determining parallel performance, so optimal synchronisation is of utmost concern. MPI, the de facto programming model for these machines, has traditionally offered multiple ways to implement this synchronisation; in practice, however, inefficiencies in the MPI libraries usually limited portable applications to a single method. Advances in MPI implementations, interconnect networks and processor technology have now changed the situation. In this work, we re-evaluate boundary synchronisation. The communication pattern arising from the partitioning of different 2D and 3D meshes is studied and used as input to a number of MPI synchronisation methods, both on distributed-memory and on shared-memory machines.
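A minimal sketch of one common way to implement this boundary synchronisation, a nonblocking point-to-point halo exchange; the neighbour lists and buffer layout are assumptions, and the paper compares several such variants rather than prescribing this one.

```cpp
#include <mpi.h>
#include <vector>

// Exchange partition-boundary values with every neighbouring rank using
// nonblocking point-to-point calls, then wait for all transfers to finish.
void halo_exchange(const std::vector<int>& neighbours,
                   std::vector<std::vector<double>>& send_buf,
                   std::vector<std::vector<double>>& recv_buf,
                   MPI_Comm comm) {
    std::vector<MPI_Request> reqs;
    reqs.reserve(2 * neighbours.size());
    for (std::size_t i = 0; i < neighbours.size(); ++i) {
        MPI_Request r;
        MPI_Irecv(recv_buf[i].data(), (int)recv_buf[i].size(), MPI_DOUBLE,
                  neighbours[i], 0, comm, &r);
        reqs.push_back(r);
        MPI_Isend(send_buf[i].data(), (int)send_buf[i].size(), MPI_DOUBLE,
                  neighbours[i], 0, comm, &r);
        reqs.push_back(r);
    }
    MPI_Waitall((int)reqs.size(), reqs.data(), MPI_STATUSES_IGNORE);
}
```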
Large-scale simulation is moving towards sustained teraflop rates. This computing power allows the most cutting-edge large-scale resources to solve large and challenging problems in science and engineering. For finite element simulations, a significant challenge is how to generate meshes with billions of nodes and elements and how to deliver such meshes to the processors of such large-scale systems. In this work, we discuss strategies ranging from parametric mesh generators, suitable for simple geometries, to general approaches using octree-based meshes for immersed boundary geometries.
Computing Systems in Engineering, 1995
TOP/DOMDEC is an interactive software package for mesh partitioning and parallel processing. It offers several state-of-the-art graph decomposition algorithms in a user-friendly environment. Generated mesh partitions can be smoothed and optimized for minimum interface size and maximum load balance using one of several non-deterministic optimization algorithms. TOP/DOMDEC also provides real-time means for assessing a priori the quality of a mesh partition and for discriminating between different partitioning algorithms.
prace-ri.eu
The main goal of this project is to develop a parallel tetrahedral mesh generator based on existing sequential mesh generation software. The Netgen mesh generator was used as the sequential component, due to its availability as LGPL open source software and its wide user base. Parallel mesh generation routines were developed using the MPI libraries and the C++ language. The parallel mesh generation algorithms developed proceed by first decomposing the whole geometry into a number of sub-geometries sequentially on a master node, and then meshing each sub-geometry in parallel on multiple processors. Three methods were implemented. The first decomposes the CAD geometry and produces conforming surface sub-meshes that are sent to other processors for volume mesh generation. The second and third methods are refinement-based methods that also make use of the CAD geometry information. Advantages and disadvantages of each method are discussed. Parallel repartitioning also needs to be done in the first method; to facilitate distributed element movements in parallel, a migration algorithm that utilizes an "owner updates" rule was developed. Timing results obtained on the Curie supercomputer are presented. In particular, the results show that by using a refinement-based method, one can generate meshes of over a billion elements in under a minute.
1994
We outline the philosophy behind a new method for solving the graph-partitioning problem which arises in mapping unstructured mesh calculations to parallel computers. The method, encapsulated in a software tool, JOSTLE, employs a combination of techniques including the Greedy algorithm to give an initial partition, together with some powerful optimisation heuristics. A clustering technique is additionally employed to speed up the whole process.
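A hedged sketch of a seed-and-grow greedy initial partition of a mesh graph, in the spirit of the Greedy-algorithm step mentioned above; the function, data layout, and fill-up rule for leftover vertices are illustrative assumptions, not JOSTLE's actual code.

```cpp
#include <queue>
#include <vector>

// Grow nparts partitions of roughly equal size over a mesh (dual) graph by
// breadth-first expansion from seed vertices: a simplified greedy initial
// partition, intended to be improved afterwards by optimisation heuristics.
std::vector<int> greedy_partition(const std::vector<std::vector<int>>& adj, int nparts) {
    const int n = (int)adj.size();
    std::vector<int> part(n, -1);
    const int target = (n + nparts - 1) / nparts;
    int next_seed = 0;
    for (int p = 0; p < nparts; ++p) {
        while (next_seed < n && part[next_seed] != -1) ++next_seed;
        if (next_seed == n) break;
        std::queue<int> frontier;
        frontier.push(next_seed);
        part[next_seed] = p;
        int size = 1;
        while (!frontier.empty() && size < target) {
            int v = frontier.front(); frontier.pop();
            for (int w : adj[v]) {
                if (part[w] == -1 && size < target) {
                    part[w] = p;
                    frontier.push(w);
                    ++size;
                }
            }
        }
    }
    // any vertex left unassigned (disconnected remainder) joins the last part
    for (int v = 0; v < n; ++v) if (part[v] == -1) part[v] = nparts - 1;
    return part;
}
```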
Lecture Notes in Computational Science and Engineering
Parallel mesh generation is a relatively new research area at the boundary between two scientific computing disciplines: computational geometry and parallel computing. In this chapter we present a survey of parallel unstructured mesh generation methods. Parallel mesh generation methods decompose the original mesh generation problem into smaller subproblems which are meshed in parallel. We organize the parallel mesh generation methods in terms of two basic attributes: (1) the sequential technique used for meshing the individual subproblems and (2) the degree of coupling between the subproblems. This survey shows that, without compromising the stability of parallel mesh generation methods, it is possible to develop parallel meshing software using off-the-shelf sequential meshing codes. However, more research is required for the efficient use of state-of-the-art codes that can scale from emerging chip multiprocessors (CMPs) to clusters built from CMPs.
2003
In this paper we show how out-of-core mesh processing techniques can be adapted to perform their computations based on the new processing sequence paradigm, using mesh simplification as an example. We believe that this processing concept will also prove useful for other tasks, such as parameterization, remeshing, or smoothing, for which currently only in-core solutions exist. A processing sequence represents a mesh as a particular interleaved ordering of indexed triangles and vertices. This representation allows streaming very large meshes through main memory while maintaining information about the visitation status of edges and vertices. At any time, only a small portion of the mesh is kept in-core, with the bulk of the mesh data residing on disk. Mesh access is restricted to a fixed traversal order, but full connectivity and geometry information is available for the active elements of the traversal. This provides seamless and highly efficient out-of-core access to very large meshes for algorithms that can adapt their computations to this fixed ordering. The two abstractions that are naturally supported by this representation are boundary-based and buffer-based processing. We illustrate both abstractions by adapting two different simplification methods to perform their computation using a prototype of our mesh processing sequence API. Both algorithms benefit from using processing sequences in terms of improved quality, more efficient execution, and smaller memory footprints.
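A toy illustration of the streaming idea behind processing sequences: vertices enter memory when announced, triangles are processed in the fixed stream order, and a vertex is evicted as soon as the stream declares it finalized. The class name, explicit finalization calls, and map-based bookkeeping are assumptions for clarity, not the paper's API.

```cpp
#include <array>
#include <cstdint>
#include <unordered_map>

struct Vertex { float x, y, z; };

// A toy "processing sequence" consumer: only the active window of vertices
// is kept in-core while triangles stream past in a fixed traversal order.
class StreamConsumer {
public:
    void add_vertex(std::int64_t id, Vertex v) { active_[id] = v; }

    void add_triangle(const std::array<std::int64_t, 3>& tri) {
        // full geometry of the three corners is available while they are active
        (void)active_.at(tri[0]); (void)active_.at(tri[1]); (void)active_.at(tri[2]);
        // ... run the actual processing (e.g. a simplification decision) here
    }

    void finalize_vertex(std::int64_t id) { active_.erase(id); }  // never used again

    std::size_t in_core() const { return active_.size(); }        // stays small

private:
    std::unordered_map<std::int64_t, Vertex> active_;
};
```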
IEEE Transactions on Visualization and Computer Graphics, 2014
Fig. 1. Grouper represents a triangle mesh as groups of vertices and triangles stored as fixed-size records, most of which encode two adjacent triangles and one incident vertex. A VTT group (tan: 93%) represents one vertex and two adjacent triangles incident upon it; a VT group (blue: 3%) represents one vertex and one incident triangle; a T group (red: 4%) represents one triangle; and a V group (black: 1%) represents one vertex. Thick edges separate groups; thin edges separate triangles within the same group.
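An illustrative tagged fixed-size record mirroring the four group kinds named in the caption; the field widths and layout below are assumptions for illustration, not Grouper's actual encoding.

```cpp
#include <cstdint>

// Tag selecting one of the four group kinds from the caption:
// VTT (one vertex + two adjacent triangles incident upon it),
// VT (one vertex + one incident triangle), T (one triangle), V (one vertex).
enum class GroupKind : std::uint8_t { VTT = 0, VT = 1, T = 2, V = 3 };

// Fixed-size record; the union keeps every variant the same size.
struct GroupRecord {
    GroupKind kind;
    union {
        struct { float xyz[3]; std::uint32_t t0[3], t1[3]; } vtt; // vertex + 2 triangles
        struct { float xyz[3]; std::uint32_t t0[3]; }        vt;  // vertex + 1 triangle
        struct { std::uint32_t t0[3]; }                      tri; // lone triangle
        struct { float xyz[3]; }                             v;   // lone vertex
    } data;
};
```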
1998
The Multi-Triangulation (MT) is a general framework for managing the Level-of-Detail in large triangle meshes, which we have introduced in our previous work. In this paper, we describe an efficient implementation of an MT based on vertex decimation. We present general techniques for querying an MT, which are independent of a specific application, and which can be applied for solving problems, such as selective refinement, windowing, point location, and other spatial interference queries. We describe alternative data structures for encoding an MT, which achieve different trade-offs between space and performance. Experimental results are discussed.
Proceedings of the ACM on Computer Graphics and Interactive Techniques, 2018
Due to its flexibility, compute mode is becoming more and more attractive as a way to implement many of the algorithms that form part of a state-of-the-art rendering pipeline. A key problem commonly encountered in graphics applications is streaming vertex and geometry processing. In a typical triangle mesh, the same vertex is referenced six times on average. To avoid redundant computation during rendering, a post-transform cache is traditionally employed to reuse vertex processing results. However, such a vertex cache can generally not be implemented efficiently in software and does not scale well as parallelism increases. We explore alternative strategies for reusing per-vertex results on-the-fly during massively-parallel software geometry processing. Given an input stream divided into batches, we analyze the effectiveness of sorting, hashing, and intra-thread-group communication for identifying and exploiting local reuse potential. We design and present four vertex reuse strategies tailored...
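A CPU-side sketch of the hashing idea for per-batch vertex reuse: within one batch of triangle indices, each global vertex index is mapped to a single batch-local slot so per-vertex work runs once per unique vertex. The structure and function names are assumptions; the paper itself targets massively parallel GPU thread groups rather than this sequential form.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Deduplicate vertex references within one batch: each unique global index
// gets one batch-local slot, and triangles are rewritten against those slots.
struct Batch {
    std::vector<std::uint32_t> unique_vertices;  // global indices, processed once each
    std::vector<std::uint32_t> local_indices;    // 3 per triangle, into unique_vertices
};

Batch build_batch(const std::vector<std::uint32_t>& tri_indices) {
    Batch b;
    std::unordered_map<std::uint32_t, std::uint32_t> slot_of;  // global -> local slot
    b.local_indices.reserve(tri_indices.size());
    for (std::uint32_t g : tri_indices) {
        auto it = slot_of.find(g);
        if (it == slot_of.end()) {
            std::uint32_t slot = (std::uint32_t)b.unique_vertices.size();
            slot_of.emplace(g, slot);
            b.unique_vertices.push_back(g);
            b.local_indices.push_back(slot);
        } else {
            b.local_indices.push_back(it->second);  // reuse already-processed vertex
        }
    }
    return b;
}
```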