Journal of Process Management. New Technologies
In fuzzy clustering, unlike hard clustering, an object may, depending on its membership values, belong entirely to one cluster or partially to several clusters. Among the many fuzzy clustering techniques, Bezdek's fuzzy c-means and the Gustafson-Kessel algorithm are well known; they use the Euclidean distance and the Mahalanobis distance, respectively, as the measure of similarity. We applied these two techniques to a dataset of individual differences consisting of fifty feature vectors of dimension three. Using several validity measures, we examined the performance of the two techniques from three aspects: first, by initializing the membership values of the feature vectors from each of the three features in turn; second, by changing the number of predefined clusters; and third, by changing the size of the dataset.
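The abstract above refers to Bezdek's fuzzy c-means with the Euclidean distance; a minimal sketch of its alternating membership/center updates is given below. The data array X, the fuzzifier m, and the iteration count are illustrative placeholders, not values from the study (the Gustafson-Kessel variant would additionally replace the Euclidean distance with a cluster-specific Mahalanobis distance).

```python
# Minimal fuzzy c-means sketch; parameters are illustrative, not from the paper.
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features) data; c: number of clusters; m: fuzzifier > 1."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                     # memberships of each point sum to 1
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]          # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-10  # Euclidean distances
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        U = 1.0 / ratio.sum(axis=2)                       # standard FCM membership update
    return U, V
```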
International Journal of Computer Applications, 2014
Fuzzy logic is an organized, mathematical way of handling inherently imprecise concepts through membership functions, which allow membership to a certain degree expressed as a value in the interval [0, 1]. It has found application in numerous problem domains, including fuzzy clustering and pattern recognition. In this paper we introduce fuzzy logic and fuzzy clustering, together with their applications and benefits. A case analysis of several fuzzy clustering algorithms has been carried out, showing that some of the available algorithms have difficulties at the cluster borders when handling the challenges posed by collections of natural data. Two fuzzy clustering algorithms, fuzzy c-means and the Gustafson-Kessel algorithm, are analyzed in detail.
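As a small illustration of the membership functions mentioned above, a triangular membership function maps a crisp value to a degree in [0, 1]; the breakpoints below are arbitrary examples, not taken from the paper.

```python
# Illustrative only: a triangular membership function with arbitrary breakpoints a, b, c.
def triangular(x, a=0.0, b=5.0, c=10.0):
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge

print(triangular(2.5))   # 0.5 -> partial membership
print(triangular(5.0))   # 1.0 -> full membership
```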
Lecture Notes in Computer Science, 2003
In this paper, a comparative analysis of the mixed-type variable fuzzy c-means (MVFCM) and the fuzzy c-means using dissimilarity functions (FCMD) algorithms is presented. Our analysis is focused in the dissimilarity function and the way of calculating the centers (or representative objects) in both algorithms.
International Journal of Performability Engineering, 2006
Performance testing of an algorithm is necessary to ascertain its applicability to real data and to develop software around it. Clustering of a data set can be either fuzzy (with vague boundaries between clusters) or crisp (with well-defined, fixed boundaries). The present work focuses on the performance of several similarity-based fuzzy clustering algorithms: three methods are developed, each with three different approaches. In the first method, cluster centers are chosen at the data points with minimum entropy (probability) values [10]; in the second, at the points with maximum total similarity; and in the third, a ratio of dissimilarity to similarity determines the cluster centers. The methods and approaches are compared on three standard data sets, IRIS, WINES, and OLITOS. Experimental results show that the entropy-based method generates better-quality clusters, at the cost of slightly more computation. Finally, the best sets of clusters are mapped to 2-D using a self-organizing map (SOM) for visualization.
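One plausible reading of the entropy-based first method is sketched below; the exponential similarity kernel and the parameters alpha and beta are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: pick cluster centers at points of minimum total entropy.
# The similarity kernel exp(-alpha*d) and the threshold beta are assumptions.
import numpy as np

def entropy_centers(X, n_centers=3, alpha=0.5, beta=0.7):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    S = np.clip(np.exp(-alpha * d), 1e-12, 1 - 1e-12)      # pairwise similarity in (0, 1)
    E = -(S * np.log2(S) + (1 - S) * np.log2(1 - S)).sum(axis=1)  # entropy of each point
    centers, active = [], np.ones(len(X), dtype=bool)
    while len(centers) < n_centers and active.any():
        i = np.flatnonzero(active)[np.argmin(E[active])]    # least-entropy remaining point
        centers.append(i)
        active &= S[i] < beta                               # drop points too similar to it
    return X[centers]
```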
International Journal of Engineering Research, 2014
In data mining, clustering techniques group objects with similar characteristics into the same cluster, while objects with different characteristics are placed in different clusters. Clustering approaches fall into two categories: hard clustering and soft clustering. In hard clustering the data are divided so that each data item belongs to exactly one cluster. Soft clustering, also known as fuzzy clustering, forms clusters in which a data element can belong to more than one cluster, with membership levels indicating the degree to which it belongs to each cluster.
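A toy illustration of the hard versus soft assignments described above (all numbers are made up):

```python
# Hard labels assign each object to exactly one cluster; fuzzy memberships
# spread each object across clusters with degrees that sum to 1 per object.
import numpy as np

hard_labels = np.array([0, 0, 1, 2])          # one cluster per object

fuzzy_memberships = np.array([                 # rows: objects, columns: clusters
    [0.90, 0.08, 0.02],
    [0.70, 0.25, 0.05],
    [0.10, 0.85, 0.05],
    [0.05, 0.15, 0.80],
])
assert np.allclose(fuzzy_memberships.sum(axis=1), 1.0)  # degrees per object sum to 1
```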
International Journal of Computer Applications, 2012
Fuzzy clustering techniques handle the fuzzy relationships among the data points and between the data points and the cluster centers (which may be termed cluster fuzziness). Distance measures, in turn, are needed to quantify this fuzziness. These two parameters govern the quality of the clusters and the run time. Visualizing multidimensional data clusters in lower dimensions is another important research area for revealing the hidden patterns within the clusters. This paper investigates the effects of cluster fuzziness and of three distance measures, Manhattan distance (MH), Euclidean distance (ED), and Cosine distance (COS), on the fuzzy c-means (FCM) and fuzzy k-nearest neighbor (FkNN) clustering techniques, applied to the Iris and extended Wine data. The quality of the clusters is assessed based on (i) the data discrepancy factor (DDF, proposed in this study), (ii) cluster size, (iii) compactness, (iv) distinctiveness, (v) execution time, and (vi) the cluster fuzziness (m) value. The study observes that FCM handles cluster fuzziness better than FkNN, and that the MH distance measure yields the best clusters with both FCM and FkNN. Finally, the best clusters are visualized using a self-organizing map (SOM).
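For reference, the three distance measures compared in the study can be written out as follows for two feature vectors; this is a minimal sketch independent of the paper's implementation, and the sample vectors are arbitrary.

```python
# Manhattan, Euclidean, and Cosine distances between two feature vectors.
import numpy as np

def manhattan(x, y):
    return np.abs(x - y).sum()

def euclidean(x, y):
    return np.sqrt(((x - y) ** 2).sum())

def cosine_distance(x, y):
    return 1.0 - (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

x, y = np.array([5.1, 3.5, 1.4]), np.array([6.3, 2.9, 5.6])   # arbitrary example vectors
print(manhattan(x, y), euclidean(x, y), cosine_distance(x, y))
```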
Fuzzy Optimization and Decision Making, 2008
Clustering algorithms divide a dataset into a set of classes/clusters such that similar data objects are assigned to the same cluster. When the boundary between clusters is ill defined, so that the same data object may belong to more than one class, the notion of fuzzy clustering becomes relevant. In this setting, each datum belongs to a given class with a membership grade between 0 and 1. The most prominent fuzzy clustering algorithm is the fuzzy c-means introduced by Bezdek (Pattern Recognition with Fuzzy Objective Function Algorithms, 1981), a fuzzification of the k-means or ISODATA algorithm. Several research issues have been raised regarding both the objective function to be minimized and the optimization constraints, which help to identify the proper cluster shape (Jain et al., ACM Computing Surveys 31, 1999). This paper addresses clustering by evaluating the distance of fuzzy sets in a feature space. In particular, the fuzzy clustering optimization problem is reformulated when the distance is given in terms of a divergence distance, which builds a bridge to the notion of probabilistic distance. This leads to a modified fuzzy clustering that implicitly involves the variance-covariance of the input terms. The optimal solution of the underlying optimization problem is determined, and its existence and uniqueness are demonstrated. The performance of the algorithm is assessed through two numerical applications: the former involves clustering of Gaussian membership functions and the latter tackles the well-known Iris dataset. Comparisons with standard fuzzy c-means (FCM) are evaluated and discussed.
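As one concrete example of the divergence-distance family the abstract refers to, the closed-form Kullback-Leibler divergence between two univariate Gaussian membership functions is shown below; the paper's exact divergence is not specified in the abstract, so treat this as an assumption.

```python
# Closed-form KL divergence between two univariate Gaussians, one example of a
# "divergence distance" between Gaussian membership functions (illustrative only).
import math

def kl_gauss(mu1, s1, mu2, s2):
    """KL( N(mu1, s1^2) || N(mu2, s2^2) ); note it is not symmetric."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

# A symmetrized version is often used when a distance-like quantity is needed.
d = 0.5 * (kl_gauss(0.0, 1.0, 1.0, 2.0) + kl_gauss(1.0, 2.0, 0.0, 1.0))
```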
Applications of Fuzzy Sets Theory; pp. 195-202, 2007
Different clustering algorithms are based on different similarity or distance measures (e.g. Euclidean distance, Minkowski distance, Jaccard coefficient, etc.). The Jarvis-Patrick clustering method uses the number of common neighbors among the k-nearest neighbors of objects to disclose the clusters. The main drawback of this algorithm is that its parameters define a too-crisp cutting criterion, so it is difficult to determine a good parameter set. In this paper we give an extension of the similarity measure of the Jarvis-Patrick algorithm. This extension is carried out in two ways: (i) fuzzification of one of the parameters, and (ii) spreading of the scope of the other parameter. The suggested fuzzy similarity measure can be applied in various forms and in different clustering and visualization techniques (e.g. hierarchical clustering, MDS, VAT). We give some application examples to illustrate the efficiency of the proposed fuzzy similarity measure in clustering. These examples show that clustering techniques based on the proposed fuzzy similarity measure are able to detect clusters with different sizes, shapes, and densities, and that outliers are also detectable.
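Below is a minimal sketch of the shared-nearest-neighbor count that the classical Jarvis-Patrick criterion, and hence the fuzzy extension above, builds on; the neighborhood size k is an illustrative parameter.

```python
# Jarvis-Patrick style shared-nearest-neighbor counts (illustrative sketch).
import numpy as np

def shared_neighbor_counts(X, k=5):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    knn = np.argsort(d, axis=1)[:, :k]                   # k nearest neighbors of each point
    sets = [set(row) for row in knn]
    n = len(X)
    shared = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            shared[i, j] = shared[j, i] = len(sets[i] & sets[j])   # common neighbors
    # Classical JP links i and j if each is in the other's list and shared >= a threshold.
    return shared
```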
Pattern Recognition, 1999
Fuzzy cluster-validity criteria aim to evaluate the quality of fuzzy c-partitions produced by fuzzy clustering algorithms. Many functions have been proposed. Some methods use only the properties of the fuzzy membership degrees to evaluate partitions; other techniques combine the properties of the membership degrees with the structure of the data. In this paper a new heuristic method based on the combination of two functions is proposed. The quality of a clustering is measured by a fuzzy compactness/separation ratio. The first function calculates this ratio from the geometrical properties and the membership degrees of the data; the second evaluates it using only the properties of the membership degrees. Four numerical examples illustrate its use as a validity functional, and its effectiveness is compared with some existing cluster-validity criteria.
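As a hedged example of a compactness-to-separation ratio for fuzzy partitions, the widely used Xie-Beni index is sketched below; it is one common instance of this kind of criterion, not the paper's own two-function method.

```python
# Xie-Beni validity index: fuzzy compactness divided by cluster separation.
import numpy as np

def xie_beni(X, U, V, m=2.0):
    """X: data (n, p); U: memberships (n, c); V: cluster centers (c, p)."""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)     # squared point-center distances
    compactness = ((U ** m) * d2).sum()
    dv2 = ((V[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)    # squared center-center distances
    np.fill_diagonal(dv2, np.inf)
    separation = len(X) * dv2.min()
    return compactness / separation       # smaller values indicate better partitions
```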
IEEE Annual Meeting of the Fuzzy Information, 2004. Processing NAFIPS '04., 2004
In many applications the objects to be clustered are described by quantitative as well as qualitative features. A variety of algorithms has been proposed for unsupervised classification when fuzzy partitions and descriptive cluster prototypes are desired. However, most of these methods are designed for data sets whose variables are measured on the same scale type (only categorical, or only metric). We propose a new fuzzy clustering approach based on a probabilistic distance measure, thereby avoiding a major drawback of existing methods: their tendency to favor one type of attribute.