2009, Advances in Data Analysis and Classification
General clustering deals with weighted objects and fuzzy memberships. We investigate the group- or object-aggregation-invariance properties possessed by the relevant functionals (effective number of groups or objects, centroids, dispersion, mutual object-group information, etc.). The classical squared Euclidean case can be generalized to non-Euclidean distances, as well as to non-linear transformations of the memberships, yielding the c-means clustering algorithm as well as two presumably new procedures, convex and pairwise convex clustering. Cluster stability and aggregation-invariance of the optimal memberships associated with the various clustering schemes are examined as well.
Fuzzy Sets and Systems, 1998
Possibilistic clustering is increasingly seen as a suitable means of overcoming the limitations resulting from the constraints imposed by the fuzzy C-means algorithm. By studying the metric derived from the covariance matrix, we obtain a membership function and an objective function for both the Mahalanobis and the Euclidean distance. Applying the theoretical results with the Euclidean distance, we obtain a new algorithm, called fuzzy-minimals, which detects the possible prototypes of the groups in a sample. We illustrate the new algorithm with several examples.
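Several of the abstracts above build on the standard fuzzy C-means alternation between membership and centroid updates. As context, here is a minimal sketch of that core loop (not the fuzzy-minimals algorithm itself; the fuzzifier `m`, iteration count, and random initialization are illustrative choices):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means sketch: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # FCM constraint: rows sum to 1
    for _ in range(n_iter):
        W = U ** m                                # fuzzified memberships
        V = (W.T @ X) / W.sum(axis=0)[:, None]    # membership-weighted centroids
        # squared Euclidean distances, guarded against division by zero
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12
        # u_ki = 1 / sum_j (d2_ki / d2_kj)^(1/(m-1))
        U = 1.0 / ((d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1))).sum(axis=2)
    return U, V
```

The possibilistic variants discussed here relax exactly the row-sum-to-one constraint enforced in the initialization and membership update above.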
IEEE International Conference on Fuzzy Systems
In everyday life, the number of groups of similar objects that we visually perceive is deeply constrained by how far we are from the objects and also by the direction from which we approach them. Based on this metaphor, in this work we present a generalization of partitional clustering aiming at the inclusion into the clustering process of both the distance and the direction of the point of observation towards the dataset. This is done by incorporating a new term in the objective function, accounting for the distance between the clusters' prototypes and the point of observation. It is a well-known fact that the chosen number of partitions has a major effect on objective-function-based partitional clustering algorithms, conditioning both the level of granularity of the data grouping and the capability of the algorithm to accurately reflect the underlying structure of the data. Thus the correct choice of the number of clusters is essential for any successful application of such algorithms…
Kernelized Fuzzy C-Means clustering is an attempt to improve the performance of the conventional Fuzzy C-Means clustering technique. This technique, in which a kernel-induced distance function is used as the similarity measure instead of the Euclidean distance of the conventional algorithm, has recently earned popularity among the research community. Like conventional Fuzzy C-Means, however, it also suffers from inconsistency in its performance, because here too the initial centroids are obtained from randomly initialized membership values of the objects. Our present work proposes a new method that applies the Subtractive clustering technique of Chiu as a preprocessor to Kernelized Fuzzy C-Means clustering. With this method we aim not only to remove the inconsistency of Kernelized Fuzzy C-Means but also to handle situations where the number of clusters is not predetermined. We also provide a comparison of our method with the Subtractive clustering technique of Chiu and with Kernelized Fuzzy C-Means clustering, using two validity measures, namely the Partition Coefficient and the Clustering Entropy.
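The kernel-induced distance mentioned above replaces the Euclidean distance with a distance in the kernel's feature space. For a Gaussian kernel this has a particularly simple closed form, sketched below (the kernel choice and bandwidth `sigma` are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def kernel_distance_sq(x, v, sigma=1.0):
    """Kernel-induced squared distance in feature space:
    ||phi(x) - phi(v)||^2 = K(x,x) - 2 K(x,v) + K(v,v),
    which for a Gaussian kernel K (where K(x,x) = 1) reduces to 2 (1 - K(x,v))."""
    k = np.exp(-np.sum((x - v) ** 2) / (2.0 * sigma ** 2))
    return 2.0 * (1.0 - k)
```

Note that this distance is bounded above by 2, which damps the influence of outliers relative to the unbounded Euclidean distance.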
The distance measure is an important criterion in any clustering algorithm. This paper shows how fuzzy clustering results can be improved by introducing a weighting factor into the inter-object distance measures. New weighted versions of four well-known distance measures are considered. These distances are tested, using the fuzzy c-means algorithm, on three datasets. Experimental results show that the introduced weighting factor leads to a significant improvement over the standard unweighted distances.
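The abstract does not specify how its weighting factor is defined, but a common form of weighted distance attaches a weight to each feature. A minimal sketch of a per-feature weighted Euclidean distance, as one plausible instance of the idea:

```python
import numpy as np

def weighted_euclidean(x, y, w):
    """Euclidean distance with a per-feature weighting factor w >= 0.
    With w = 1 for every feature this reduces to the standard distance."""
    return float(np.sqrt(np.sum(w * (x - y) ** 2)))
```

Setting a feature's weight to zero removes it from the distance entirely, while weights above one emphasize it, which is how such factors reshape the clusters FCM finds.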
2013 Fifth International Conference on Advanced Computing (ICoAC), 2013
Clustering is an important facet of explorative data mining and finds extensive use in several fields. In this paper, we propose an extension of the classical Fuzzy C-Means clustering algorithm. The proposed algorithm, abbreviated as VFC, adopts a multi-dimensional membership vector for each data point instead of the traditional, scalar membership value defined in the original algorithm. The membership vector for each point is obtained by considering each feature of that point separately and obtaining individual membership values for the same. We also propose an algorithm to efficiently allocate the initial cluster centers close to the actual centers, so as to facilitate rapid convergence. Further, we propose a scheme to achieve crisp clustering using the VFC algorithm. The proposed, novel clustering scheme has been tested on two standard data sets in order to analyze its performance. We also examine the efficacy of the proposed scheme by analyzing its performance on image segmentation examples and comparing it with the classical Fuzzy C-means clustering algorithm.
Pattern Recognition, 2014
Relational fuzzy c-means (RFCM) is an algorithm for clustering objects represented by pairwise dissimilarity values in a dissimilarity data matrix D. RFCM is the dual of the fuzzy c-means (FCM) object-data algorithm when D is a Euclidean matrix. When D is not Euclidean, RFCM can fail to execute if it encounters negative relational distances. To overcome this problem we can Euclideanize the relation D prior to clustering. There are different ways to Euclideanize D, such as the β-spread transformation. In this article we compare five methods for Euclideanizing D. The quality of the transformed matrix is judged by the ability of RFCM to discover the apparent cluster structure of the objects underlying D. The subdominant ultrametric transformation is a clear winner, producing much better partitions than the other four methods. This leads to a new algorithm which we call the improved RFCM (iRFCM).
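Two ingredients of the above can be sketched concretely: testing whether a (squared) dissimilarity matrix is Euclidean via the classical double-centering criterion, and the β-spread transformation, which adds a constant to every off-diagonal entry. This is a sketch of those two steps only, not of iRFCM:

```python
import numpy as np

def is_euclidean(D, tol=1e-9):
    """A matrix D of squared dissimilarities is Euclidean iff
    -0.5 * H D H is positive semidefinite, where H = I - (1/n) 11^T."""
    n = D.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    eig = np.linalg.eigvalsh(-0.5 * H @ D @ H)
    return bool(eig.min() > -tol)

def beta_spread(D, beta):
    """Beta-spread transformation: add beta to all off-diagonal entries of D.
    A sufficiently large beta always makes the result Euclidean."""
    n = D.shape[0]
    return D + beta * (np.ones((n, n)) - np.eye(n))
```

Since H annihilates the all-ones matrix, the β-spread shifts the eigenvalues of -0.5·H·D·H upward by β/2 on the centered subspace, which is why a large enough β repairs any non-Euclidean D.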
Procedia Computer Science, 2014
With the advancement of information processing technology in recent years, larger and more complicated data have appeared, and a method to deal with such data is required. Cluster analysis, or clustering, is one solution. Clustering methods deal with two types of data: data consisting of objects and their attributes, and data consisting of pairwise similarities between objects. The latter, similarity data, is treated in this study. The purpose of clustering similarity data is to obtain a clustering result based on the similarity scaling among objects. However, when the data are complex, the given similarity data do not always have the similarity-scaling structure assumed by the clustering method. Therefore, in this paper, a fuzzy clustering method is proposed that yields a clear classification for complex data, by relating the similarity data to the obtained clustering result and considering the relative structure over all clusters. Considering the relative structure of the belongingness to clusters gives more specific information about the objects and improves the belongingness.
We explore an approach to possibilistic fuzzy c-means clustering that avoids a severe drawback of the conventional approach, namely that the objective function is truly minimized only if all cluster centers are identical. Our approach is based on the idea that this undesired property can be avoided if we introduce a mutual repulsion of the clusters, so that they are forced away from each other. In our experiments we found that in this way we can combine the partitioning property of the probabilistic fuzzy c-means algorithm with the advantages of a possibilistic approach with respect to the interpretation of the membership degrees.
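The possibilistic membership degrees discussed above are typically of the Krishnapuram–Keller form, in which each cluster's membership depends only on the distance to that cluster's center and a per-cluster scale, so rows need not sum to one. A minimal sketch (the scale parameter `eta` and fuzzifier `m` are illustrative):

```python
import numpy as np

def possibilistic_membership(d2, eta, m=2.0):
    """Possibilistic (typicality) membership:
    u = 1 / (1 + (d^2 / eta)^(1/(m-1))).
    Unlike probabilistic FCM, memberships are independent per cluster
    and do not sum to 1 across clusters."""
    return 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1)))
```

Because each center's objective term is independent of the others, nothing in this formulation prevents all centers from collapsing onto the same optimum, which is exactly the degeneracy the cluster-repulsion term above is designed to counteract.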
1993
Cluster analysis has been playing an important role in pattern recognition, image processing, and time series analysis. The majority of existing clustering algorithms depend on initial parameters and on assumptions about the underlying data structure. In this paper a fuzzy method of mode separation is proposed. The method addresses the task of multi-modal partition through a sequence of locally sensitive searches guided by a stochastic gradient ascent procedure, and addresses the cluster validity problem through a global partition performance criterion. The algorithm is computationally efficient and provided good results when tested on a number of simulated and real data sets. Key words: clustering analysis; fuzzy clustering; mode separation; cluster validity