2006, International Journal of Computational Geometry & Applications
We develop a mathematical framework for describing local features of a geometric object—such as the edges of a square or the apex of a cone—in terms of algebraic topological invariants. The main tool is the construction of a "tangent complex" for an arbitrary geometrical object, generalising the usual tangent bundle of a manifold. This framework can be used to develop algorithms for automatic feature location. We give several examples of applying such algorithms to geometric objects represented by point-cloud data sets.
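To make the idea of locating local features from point-cloud samples concrete, the sketch below uses a simple PCA-based proxy rather than the paper's tangent-complex invariants: for each sample point, the eigenvalue spectrum of the local covariance indicates whether the neighbourhood looks like a smooth 2D patch or like an edge or apex. The function name and the choice of score are illustrative assumptions, not the authors' construction.

```python
# Hypothetical PCA-based stand-in for feature detection on a sampled surface
# (not the paper's tangent-complex construction).
import numpy as np
from scipy.spatial import cKDTree

def feature_scores(points, k=20):
    """points: (n, 3) array sampled from a surface; returns a per-point score."""
    tree = cKDTree(points)
    scores = np.empty(len(points))
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        # eigenvalues of the local covariance, largest first, normalised
        evals = np.linalg.eigvalsh(nbrs.T @ nbrs)[::-1]
        evals = evals / evals.sum()
        # on a smooth surface patch the third eigenvalue is ~0; near an edge
        # or an apex it grows, so it serves as a crude "feature" score
        scores[i] = evals[2]
    return scores
```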
Acta Numerica, 2014
In this paper we discuss the adaptation of the methods of homology from algebraic topology to the problem of pattern recognition in point cloud data sets. The method is referred to as persistent homology, and has numerous applications to scientific problems. We discuss the definition and computation of homology in the standard setting of simplicial complexes and topological spaces, then show how one can obtain useful signatures, called barcodes, from finite metric spaces, thought of as sampled from a continuous object. We present several different cases where persistent homology is used, to illustrate the different ways in which the method can be applied.
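As a concrete illustration of barcodes, here is a minimal sketch (ours, not the survey's code) of the degree-0 barcode of a finite metric space: in the Vietoris-Rips filtration every point is born at scale 0 and components merge along minimum-spanning-tree edges, so a Kruskal-style union-find recovers the bars directly.

```python
# Degree-0 persistence barcode of a point cloud; a simplified sketch.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def h0_barcode(points):
    d = squareform(pdist(points))
    n = len(points)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    bars = []
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj        # two components merge at scale w ...
            bars.append((0.0, w))  # ... so one degree-0 bar dies here
    bars.append((0.0, np.inf))     # the component that never dies
    return bars
```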
Keywords: shape comparison, size function, natural pseudo-distance, persistent homology module, Čech homology, shape occlusion.
Journal of Physics: Conference Series, 2007
Scientific datasets obtained by measurement or produced by computational simulations must be analyzed to understand the phenomenon under study. The analysis typically requires a mathematically sound definition of the features of interest and robust algorithms to identify these features, compute statistics about them, and often track them over time. Because scientific datasets often capture phenomena with multi-scale behaviour and almost always contain noise, the definitions and algorithms must be designed with sufficient flexibility and care to allow multi-scale analysis and noise removal. In this paper, we present some recent work on topological feature extraction and tracking, with applications in molecular analysis, combustion simulation, and structural analysis of porous materials.
ACM Computing Surveys, 2008
Differential topology, and specifically Morse theory, provides a suitable setting for formalizing and solving several problems related to shape analysis. The fundamental idea behind Morse theory is that of combining the topological exploration of a shape with quantitative measurement of geometrical properties provided by a real function defined on the shape. The added value of approaches based on Morse theory is the possibility of adopting different functions as shape descriptors according to the properties and invariants that one wishes to analyze. In this sense, Morse theory allows one to construct a general framework for shape characterization, parametrized with respect to the mapping function used, and possibly the space associated with the shape. The mapping function plays the role of a lens through which we look at the properties of the shape, and different functions provide different insights.
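A small sketch of this Morse-theoretic viewpoint (ours, not taken from the survey): on a triangle mesh, a vertex can be classified with respect to a scalar mapping function f by counting sign changes of f around its cyclically ordered one-ring, assuming distinct function values.

```python
# Classify a mesh vertex with respect to a scalar function f; illustrative only.
def classify_vertex(f_center, f_ring):
    """f_ring: values of f on the one-ring neighbours, in cyclic order."""
    signs = [1 if fv > f_center else -1 for fv in f_ring]
    # count sign changes around the (closed) ring
    changes = sum(1 for a, b in zip(signs, signs[1:] + signs[:1]) if a != b)
    if changes == 0:
        return "minimum" if signs[0] > 0 else "maximum"
    if changes == 2:
        return "regular"
    return "saddle"   # 4 or more sign changes
```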
2007
We present a computational method for extracting simple descriptions of high dimensional data sets in the form of simplicial complexes. Our method, called Mapper, is based on the idea of partial clustering of the data guided by a set of functions defined on the data. The proposed method is not dependent on any particular clustering algorithm, i.e. any clustering algorithm may be used with Mapper. We implement this method and present a few sample applications in which simple descriptions of the data present important information about its structure.
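A minimal Mapper-style sketch under simplifying assumptions (a one-dimensional filter, uniformly overlapping intervals, single-linkage clustering with a fixed cut distance); the names and parameters below are ours, not those of the authors' implementation.

```python
# Simplified Mapper: cover the filter range, cluster each preimage, build the nerve.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def mapper_graph(X, filter_values, n_intervals=10, overlap=0.3, eps=0.5):
    lo, hi = filter_values.min(), filter_values.max()
    length = (hi - lo) / n_intervals
    nodes, edges = [], set()
    membership = {}                     # node id -> set of data indices
    for i in range(n_intervals):
        a = lo + i * length - overlap * length
        b = lo + (i + 1) * length + overlap * length
        idx = np.where((filter_values >= a) & (filter_values <= b))[0]
        if len(idx) == 0:
            continue
        if len(idx) == 1:
            labels = np.array([1])
        else:
            labels = fcluster(linkage(X[idx], method="single"),
                              t=eps, criterion="distance")
        for c in np.unique(labels):
            node = len(nodes)
            nodes.append(node)
            membership[node] = set(idx[labels == c])
    # nerve: connect two nodes whenever their clusters share a data point
    for u in nodes:
        for v in nodes:
            if u < v and membership[u] & membership[v]:
                edges.add((u, v))
    return nodes, sorted(edges)
```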
2008 19th International Conference on Pattern Recognition, 2008
In this paper, we design linear-time algorithms to recognize and determine topological invariants, such as the genus and homology groups, of objects in 3D. These properties can be used to identify patterns in 3D image recognition; they already have many applications, and are expected to have more, in 3D medical image analysis. Our method is based on cubical images with direct adjacency, also called (6,26)-connectivity images in discrete geometry. Using the fact that there are only six types of local surface points in 3D and a discrete version of the well-known Gauss-Bonnet theorem in differential geometry, we first determine the genus of a closed 2D connected component (a closed digital surface). Then, we use Alexander duality to obtain the homology groups of a 3D object in 3D space. This idea can be extended to general simplicially decomposed manifolds or cell complexes in 3D.
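For the special case of a closed, connected, orientable triangulated surface, the genus already follows from the Euler characteristic, as the short sketch below illustrates; the paper's digital-surface and Gauss-Bonnet machinery is considerably more general than this.

```python
# Genus of a closed, connected, orientable triangulated surface:
# chi = V - E + F and genus = (2 - chi) / 2.  A simplified illustration.
def genus(triangles):
    """triangles: list of vertex-index triples describing a closed surface."""
    vertices = {v for t in triangles for v in t}
    edges = {frozenset(e) for t in triangles
             for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0]))}
    chi = len(vertices) - len(edges) + len(triangles)
    return (2 - chi) // 2

# e.g. the boundary of a tetrahedron (a sphere) has genus 0
assert genus([(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]) == 0
```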
Proc. Sympos. Point-Based Graphics, 2004
This paper tackles the problem of computing topological invariants of geometric objects in a robust manner, using only point cloud data sampled from the object. It is now widely recognised that this kind of topological analysis can give qualitative information about data sets which is not readily available by other means. In particular, it can be an aid to visualisation of high dimensional data. Standard simplicial complex constructions for approximating the topological type of the underlying space (such as Čech, Rips, or α-shape complexes) produce simplicial complexes whose vertex set has the same size as the underlying set of point cloud data. Such constructions are sometimes still tractable, but are wasteful (of computing resources) since the homotopy types of the underlying objects are generally realisable on much smaller vertex sets. We obtain smaller complexes by choosing a set of 'landmark' points from our data set, and then constructing a "witness complex" on this set using ideas motivated by the usual Delaunay complex in Euclidean space. The key idea is that the remaining (non-landmark) data points are used as witnesses to the existence of edges or simplices spanned by combinations of landmark points. Our construction generalises the topology-preserving graphs of Martinetz and Schulten [MS94] in two directions. First, it produces a simplicial complex rather than a graph. Secondly, it actually produces a nested family of simplicial complexes, which represent the data at different feature scales, suitable for calculating persistent homology [ELZ00, ZC04]. We find that in addition to the complexes being smaller, they also provide (in a precise sense) a better picture of the homology, with less noise, than the full-scale constructions using all the data points. We illustrate the use of these complexes in qualitatively analyzing a data set of 3 × 3 pixel patches studied by David Mumford et al. [LPM03].
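One ingredient that is easy to illustrate is landmark selection; the sketch below implements maxmin ("sequential farthest point") selection, one of the schemes commonly used with witness complexes, and leaves the construction of the witness complex itself aside.

```python
# Maxmin landmark selection; a simplified sketch, not the paper's code.
import numpy as np

def maxmin_landmarks(points, n_landmarks, seed=0):
    rng = np.random.default_rng(seed)
    landmarks = [rng.integers(len(points))]          # random first landmark
    # distance from every point to its nearest chosen landmark
    dmin = np.linalg.norm(points - points[landmarks[0]], axis=1)
    while len(landmarks) < n_landmarks:
        nxt = int(np.argmax(dmin))                   # farthest point so far
        landmarks.append(nxt)
        dmin = np.minimum(dmin, np.linalg.norm(points - points[nxt], axis=1))
    return landmarks
```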
Topological invariants are very useful in various areas related to digital images and geometric modelling. In this paper, we study the simplicial homology groups of certain minimal simple closed surfaces, extend an earlier definition of the Euler characteristic of a digital image, and show how to compute the Euler characteristic of several digital surfaces.
2009
[Figure captions: an F-35 fighter jet represented by 5 landmarks, and an image of the jet generated by the focal point projection (both modified from [12]).]
We present TopMesh, a tool for extracting topological information from non-manifold three-dimensional objects with parts of non-uniform dimensions. The boundary of such objects is discretized as a mesh of triangles and of dangling edges, the latter representing one-dimensional parts of the object. The geometrical and topological information extracted includes the number of elements in the mesh, the number of non-manifold singularities, and the Betti numbers, which characterize the topology of an object independently of the discretization of its boundary. TopMesh also computes a decomposition of the mesh into connected parts of uniform dimension, into edge-connected components formed by triangles, and into oriented edge-connected sub-meshes. We describe the functionalities of TopMesh and the algorithms implementing them.
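For readers unfamiliar with Betti numbers, the following sketch (unrelated to TopMesh's actual implementation) computes them for a small simplicial complex over Z/2 from boundary-matrix ranks: beta_k = (#k-simplices) - rank(d_k) - rank(d_{k+1}).

```python
# Betti numbers over Z/2 via boundary-matrix ranks; a simplified sketch.
import numpy as np
from itertools import combinations

def rank_mod2(M):
    M = M.copy() % 2
    rank, row = 0, 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(row, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[row, pivot]] = M[[pivot, row]]            # bring pivot row up
        for r in range(M.shape[0]):
            if r != row and M[r, col]:
                M[r] = (M[r] + M[row]) % 2           # eliminate mod 2
        row += 1
        rank += 1
    return rank

def betti_numbers(simplices, top_dim):
    """simplices: dict mapping dimension k to a list of sorted vertex tuples."""
    ranks = {}
    for k in range(1, top_dim + 1):
        rows = {s: i for i, s in enumerate(simplices.get(k - 1, []))}
        cols = simplices.get(k, [])
        D = np.zeros((len(rows), len(cols)), dtype=np.int8)
        for j, s in enumerate(cols):
            for face in combinations(s, k):          # all (k-1)-faces of s
                D[rows[face], j] = 1
        ranks[k] = rank_mod2(D) if D.size else 0
    ranks[top_dim + 1] = 0
    return [len(simplices.get(k, [])) - ranks.get(k, 0) - ranks[k + 1]
            for k in range(top_dim + 1)]

# e.g. the boundary of a triangle (a circle) has Betti numbers [1, 1]
print(betti_numbers({0: [(0,), (1,), (2,)], 1: [(0, 1), (0, 2), (1, 2)]}, 1))
```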
IET Image Processing, 2017
Holes, tunnels, and cavities of two-dimensional (2D) and 3D objects are concise topological features used for object representation and recognition. In this study, the authors represent any cubical tessellation (regular or not) of 2D and 3D objects and deal with the extraction and localisation of these features using a homology-based approach. The cubical tessellation of the objects is translated into an algebraic language suitable for building a reduced cell complex structure. Extracting the homology information is equivalent to estimating the ranks of the homology groups of the reduced complex, while localisation means reconstructing the object's cycles from the generators of the homology groups. The reduction of the cell complex leads to an efficient algorithm, and several objects can be analysed simultaneously by the algorithm conceived in this approach. The algorithm is validated on 2D and 3D binary images.
IEEE Transactions on Knowledge and Data Engineering, 1998
A set of topological invariants for relations between lines embedded in 2-dimensional Euclidean space is given. The set of invariants is proven to be necessary and sufficient to characterize topological equivalence classes of binary relations between simple lines. The topology of arbitrarily complex geometric scenes is described with a variation of the same set of invariants. Polynomial-time algorithms are given to assess the topological equivalence of two scenes. The relevance of identifying such a set of invariants, together with efficient algorithms, stems from spatial database systems, where a model for describing topological relations between planar features is sought.
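Closely related machinery is available in off-the-shelf GIS libraries; the toy example below uses shapely's DE-9IM predicates to classify the relation between two simple lines. This is illustrative only and is not the invariant set defined in the paper.

```python
# Topological relation between two simple lines via shapely's DE-9IM predicates.
from shapely.geometry import LineString

a = LineString([(0, 0), (2, 2)])
b = LineString([(0, 2), (2, 0)])

print(a.relate(b))      # the 9-intersection matrix as a DE-9IM string
print(a.crosses(b))     # True: the interiors meet in a single point
print(a.touches(b))     # False
```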
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001
We propose a new computational method for segmenting topological subdimensional point-sets in scalar images of arbitrary spatial dimensions. The technique is based on calculating the homotopy class defined by the gradient vector in a subdimensional neighborhood around every image point. This neighborhood is defined as the linear envelope spanned by a given subdimensional vector frame. In the simplest case, where the rank of this frame is maximal, we obtain a technique for localizing the critical points, i.e., extrema and saddle points. We consider, in particular, the important case of frames formed by an arbitrary number of the principal directions of the Hessian with the largest absolute eigenvalues. The method then segments positive and negative ridges as well as other types of critical surfaces of different dimensionalities. The signs of the eigenvalues associated with the principal directions provide a natural labeling of the critical subsets. The result, in general, is a constructive definition of a hierarchy of point-sets of different dimensionalities linked by inclusion relations. Because of its explicit computational nature, the method gives a fast way to segment height ridges or edges in different applications. The defined topological point-sets are connected manifolds and, therefore, our method provides a tool for geometrical grouping using only local measurements. We have demonstrated the grouping properties of our construction by presenting two different cases where an extra image coordinate is introduced. In one example, we considered image analysis in the framework of the linear scale-space concept, where the topological properties are gradually simplified through the scale parameter; this scale parameter can be taken as an additional coordinate. In the second example, a local orientation parameter was used for grouping and segmenting elongated structures.
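In the maximal-rank case the method reduces to locating and labelling critical points; the sketch below does this for a 2D image using gradient near-zeros and Hessian eigenvalue signs. It is a simplified stand-in for the paper's homotopy-class computation, and the tolerance and finite differences are our choices.

```python
# Locate and label critical points of a smooth 2D scalar image; illustrative only.
import numpy as np

def critical_points(img, grad_tol=1e-2):
    gy, gx = np.gradient(img)            # derivatives along rows, then columns
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    points = []
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            if np.hypot(gx[y, x], gy[y, x]) > grad_tol:
                continue                 # gradient not (near) zero: skip
            H = np.array([[gxx[y, x], gxy[y, x]],
                          [gyx[y, x], gyy[y, x]]])
            l1, l2 = np.linalg.eigvalsh(H)   # ascending eigenvalues
            kind = ("minimum" if l1 > 0 else
                    "maximum" if l2 < 0 else "saddle")
            points.append((y, x, kind))
    return points
```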
We propose a novel algorithm that computes a skeletal graph and thus captures the topology of an object. Topology is an important attribute of an object, describing how different parts of its surface are connected to each other. The method is based on capturing the topology of a modified Reeb graph by tracking the critical points of a distance function. The algorithm for constructing the distance-function-based skeletal graph follows directly from the Morse lemma, which states that a change in the topology of a level set of a Morse function occurs only at its critical levels. The distance function is used to construct the skeletal graph, and the approach employs Morse theory to obtain translation-, rotation-, and scale-invariant skeletal graphs.
Keypoints and Local Descriptors of Scalar Functions on 2D Manifolds
"This paper addresses the problem of describing surfaces using local features and descriptors. While methods for the detection of interest points in images and their description based on local image features are very well understood, their extension to discrete manifolds has not been well investigated. We provide a methodological framework for analyzing real-valued functions defined over a 2D manifold, embedded in the 3D Euclidean space, e.g., photometric information, local curvature, etc. Our work is motivated by recent advancements in multiple-camera reconstruction and image-based rendering of 3D objects: there is a growing need for describing object surfaces, matching two surfaces, or tracking them over time. Considering polygonal meshes, we propose a new methodological framework for the scale-space representations of scalar functions defined over such meshes. We propose a local feature detector (MeshDOG) and region descriptor (MeshHOG). Unlike the standard image features, the proposed surface features capture both the local geometry of the underlying manifold and the scale-space differential properties of the real-valued function itself. We provide a thorough experimental evaluation. The repeatability of the feature detector and the robustness of feature descriptor are tested, by applying a large number of deformations to the manifold or to the scalar function."
ArXiv, 2021
We introduce a linear dimensionality reduction technique preserving topological features via persistent homology. The method is designed to find a linear projection L that preserves the persistence diagram of a point cloud X, via simulated annealing. The projection L induces a set of canonical simplicial maps from the Rips (or Čech) filtration of X to that of LX. In addition to the distance between persistence diagrams, the projection induces a map between filtrations, called a filtration homomorphism. Using the filtration homomorphism, one can measure the difference between the shapes of two filtrations by directly comparing simplicial complexes with respect to quasi-isomorphism (μ_quasi-iso) or strong homotopy equivalence (μ_equiv). These measures quantify what portion of the corresponding simplicial complexes is quasi-isomorphic or homotopy equivalent, respectively. We validate the effectiveness of our framework with simple examples.
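A rough sketch of the search loop, assuming the gudhi library for Rips persistence and bottleneck distance; the homology degree, step sizes, and cooling schedule below are our illustrative choices, not the authors'.

```python
# Simulated annealing over linear projections, scored by a persistence-diagram
# distance; a sketch of the idea only, not the paper's implementation.
import numpy as np
import gudhi

def diagram(X, dim=1, max_edge=2.0):
    st = gudhi.RipsComplex(points=X, max_edge_length=max_edge) \
             .create_simplex_tree(max_dimension=dim + 1)
    st.persistence()
    return st.persistence_intervals_in_dimension(dim)

def anneal_projection(X, target_dim=2, n_steps=200, temp=1.0, seed=0):
    rng = np.random.default_rng(seed)
    ref = diagram(X)
    L = rng.normal(size=(X.shape[1], target_dim))
    cost = gudhi.bottleneck_distance(ref, diagram(X @ L))
    for step in range(n_steps):
        cand = L + rng.normal(scale=0.1, size=L.shape)   # random perturbation
        c = gudhi.bottleneck_distance(ref, diagram(X @ cand))
        t = temp * (1 - step / n_steps)                  # linear cooling
        if c < cost or rng.random() < np.exp(-(c - cost) / max(t, 1e-9)):
            L, cost = cand, c                            # accept the move
    return L, cost
```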
2012
Defining high-level features, detecting them, tracking them, and deriving quantities based on them is an integral aspect of modern data analysis and visualization. In combustion simulations, for example, burning regions, which are characterized by high fuel consumption, are a possible feature of interest. Detecting these regions makes it possible to derive statistics about their size and track them over time. However, features of interest in scientific simulations are extremely varied, making it challenging to develop cross-domain feature definitions. Topology-based techniques offer an extremely flexible means for general feature definitions and have proven useful in a variety of scientific domains. This paper provides a brief introduction to topological structures such as the contour tree and the Morse-Smale complex and shows how to apply them to define features in different science domains such as combustion. The overall goal is to provide an overview of these powerful techniques and to start a discussion of how they can aid in the analysis of astrophysical simulations.
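As a taste of the underlying machinery, the sketch below tracks superlevel-set components of a scalar field on a graph with a union-find sweep, recording where components (e.g. candidate burning regions) appear and where they merge; a full contour-tree or Morse-Smale-complex computation involves considerably more than this.

```python
# Sweep a scalar field from high to low values and record component births/merges.
import numpy as np

def superlevel_merge_events(values, neighbours):
    """values: per-vertex scalars; neighbours[v]: vertices adjacent to v."""
    order = np.argsort(values)[::-1]            # sweep from highest to lowest
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    births, merges = [], []
    for v in order:
        parent[v] = v
        roots = {find(u) for u in neighbours[v] if u in parent}
        if not roots:
            births.append(v)                    # a new component: local maximum
        else:
            for r in roots:
                parent[r] = v                   # attach existing components to v
            if len(roots) > 1:
                merges.append((v, len(roots)))  # several components merge here
    return births, merges
```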
2008
A novel mathematical framework inspired by Morse theory is introduced for the topological characterization of triangles in 2D meshes; it is useful for applications involving the creation of mesh models of objects whose geometry is not known a priori. The framework guarantees precise control of the topological changes introduced by triangle insertion/removal operations and enables the definition of intuitive high-level operators for managing the mesh while preserving its topological integrity.
2012
In this thesis, we address the effective representation of arbitrary shapes, called non-manifold shapes, discretized through simplicial complexes, and we introduce a set of tools for their modeling and analysis. Specifically, we propose two dimension-independent data structures for simplicial complexes in arbitrary dimensions. The first contribution is the Incidence Simplicial (IS) data structure, based on the incidence relations for simplices of consecutive dimensions. The second contribution is the Generalized Indexed Data Structure with Adjacencies (IA∗), based on the adjacency relations for top simplices. The IS and IA∗ data structures are compact, support efficient navigation, and exhibit a small overhead, if restricted to manifolds. In the literature, there are several topological data structures for cell and simplicial complexes, thus a framework targeted to their fast prototyping is a valuable tool. Here, we introduce the dimension-independent and extensible Mangrove Topological Data Structure (Mangrove TDS) framework. This framework describes any data structure through a graph-based representation, which we call a mangrove. In this thesis, we provide extensive experimental comparisons for several data structures implemented in the Mangrove TDS framework, including the IS and IA∗ data structures. At the same time, we complete the definition of several data structures, previously proposed in the literature. In the second part of the thesis, we decompose any non-manifold shape into almost manifold parts in order to deal with its intrinsic complexity. We consider a dimension-independent decomposition of a non-manifold shape, called Manifold-Connected Decomposition (MC-Decomposition), previously investigated only for two- and three-dimensional complexes. Here, we propose several graph-based representations of such a decomposition, which can be combined with any topological data structure. We provide experimental comparisons about building times and storage costs of these data structures. Recently, the computation of topological invariants, like the simplicial homology, has drawn much attention in several applications. Here, we design and implement the dimension-independent and modular Mayer-Vietoris (MV) algorithm, which exploits the MC-Decomposition for computing the simplicial homology of a non-manifold simplicial shape in arbitrary dimensions. The MV algorithm offers an elegant way for computing the homology of any simplicial complex from the homology of its MC-components and of their intersections.
Journal of Mathematical Imaging and …, 2003
Combining implicit polynomials and algebraic invariants for representing and recognizing complicated objects proves to be a powerful technique. In this paper, we explore the findings of the classical theory of invariants for the calculation of algebraic invariants of implicit curves and surfaces, a theory largely disregarded in the computer vision community owing to lingering skepticism. The symbolic method of the classical theory is described, and its results are extended and implemented as an algorithm for computing algebraic invariants of projective, affine, and Euclidean transformations. A list of affine invariants of 4th-degree implicit polynomials generated by the proposed algorithm is presented along with the corresponding symbolic representations, and their use in recognizing objects represented by implicit polynomials is illustrated through experiments. An affine-invariant fitting algorithm is also proposed and its performance is studied.
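As a toy illustration of algebraic invariance (far simpler than the paper's symbolic method for 4th-degree polynomials), the quantities below, computed from a conic's coefficient matrix, are unchanged by Euclidean motions.

```python
# Classical Euclidean invariants of a conic a x^2 + b xy + c y^2 + d x + e y + f = 0:
# I = a + c, J = det of the 2x2 quadratic block, and det of the full 3x3 matrix.
import numpy as np

def conic_invariants(a, b, c, d, e, f):
    Q = np.array([[a,     b / 2, d / 2],
                  [b / 2, c,     e / 2],
                  [d / 2, e / 2, f    ]])
    return Q[0, 0] + Q[1, 1], np.linalg.det(Q[:2, :2]), np.linalg.det(Q)

# the unit circle and the same circle translated by (3, 4) give equal values
print(conic_invariants(1, 0, 1, 0, 0, -1))
print(conic_invariants(1, 0, 1, -6, -8, 24))
```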