Deep learning is a cutting-edge image processing method that is still relatively new but produces reliable results. A variety of deep learning approaches are employed for leaf disease detection and categorization. Tomatoes are among the most popular vegetables and appear in kitchens across cuisines in various forms. After potato and sweet potato, tomato is the third most widely produced crop, and India is the world's second-largest tomato producer. However, many diseases affect the quality and quantity of tomato crops. This article presents a deep-learning-based strategy for crop disease detection, using a Convolutional Neural Network (CNN) for disease detection and classification. The model contains two convolutional and two pooling layers. Experimental results show that the proposed model outperformed the pre-trained InceptionV3, ResNet152, and VGG19 models, achieving 98% training accuracy and 88.17% testing accuracy.
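For illustration, a minimal Keras sketch of a two-convolution, two-pooling CNN of the kind the abstract describes. The filter counts, kernel sizes, 128x128 input, and 10-class output are assumptions for the sketch, not the authors' exact configuration.

```python
# Minimal sketch of a two-convolution, two-pooling CNN classifier.
# Layer widths, input size, and class count are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),           # RGB leaf image (assumed size)
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),                 # first pooling layer
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),                 # second pooling layer
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),      # one unit per disease class (assumed)
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```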
Advances in Intelligent Systems and Computing, 2016
A smart city is an aspiration of a city's various stakeholders. We believe that social media can serve as a real-time data source that helps stakeholders realize this dream. In this paper, we analyze real-time data provided by Twitter in order to empower citizens by keeping them updated about what is happening around the city. We applied several clustering approaches, namely k-means, hierarchical agglomerative (HA) clustering, and LDA topic modeling, to a Twitter stream and report results with purity 0.476, normalized mutual information (NMI) 0.3835, and F-measure 0.54. We conclude that HA-Ward substantially outperforms k-means and LDA. We also observe that the results are not impressive overall, indicating the need to design a dedicated feature-based clustering algorithm. We identify various tasks for mining microblogs in the ambit of the smart city, such as event detection, geo-tagging, and clustering city areas based on users' on-the-ground activity.
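For reference, a short sketch of the evaluation the abstract reports: purity and NMI for a clustering against gold labels, using scikit-learn. The tiny label arrays are placeholders, not data from the paper.

```python
# Purity and NMI for a clustering of labeled items (e.g., tweets).
import numpy as np
from sklearn.metrics import normalized_mutual_info_score
from sklearn.metrics.cluster import contingency_matrix

def purity(y_true, y_pred):
    """Fraction of points assigned to the majority class of their cluster."""
    cm = contingency_matrix(y_true, y_pred)
    return np.sum(np.amax(cm, axis=0)) / np.sum(cm)

y_true = np.array([0, 0, 1, 1, 2, 2])   # gold topic labels (placeholder)
y_pred = np.array([0, 0, 1, 2, 2, 2])   # cluster assignments (placeholder)

print("purity:", purity(y_true, y_pred))
print("NMI:   ", normalized_mutual_info_score(y_true, y_pred))
```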
Advances in Intelligent Systems and Computing, 2017
The original version of the book was inadvertently published with an incorrect author name, "J. Jayaram Reddy", which has been corrected to "A. Jayaram Reddy".
International Journal of Intelligent Systems and Applications, 2019
In most clustering algorithms, the initial centroids are assigned randomly, which affects both the final outcome and the number of iterations required. Another common limitation is the use of Euclidean distance as the measure of similarity between data points, which is handicapped when the input data are not linearly separable. The purpose of this paper is to combine suitable techniques so that both of these problems are handled, leading to efficient algorithms. For the initial assignment of centroids we use the Firefly and Fuzzy Firefly algorithms, and we replace the Euclidean distance with kernels (Gaussian and hyper-tangent), leading to hybridized versions. For the experimental analysis we use five images from different domains as input, with two efficiency measures, the Davies-Bouldin index (DB) and the Dunn index (D), for comparison. Tabular values, their graphical representations, and output images are provided to support the claims. The analysis demonstrates the superiority of the optimized algorithms over their existing counterparts. We also find that the hyper-tangent kernel with the Rough Intuitionistic Fuzzy C-Means algorithm using the Fuzzy Firefly algorithm produces the best results and converges much faster. Medical, satellite, or geographical images can thus be analyzed more efficiently using the proposed optimized algorithms, which are expected to play an important role in image segmentation and analysis.
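As a sketch of the kernel substitution described above, the snippet below computes a kernel-induced distance in place of the Euclidean one. The hyper-tangent form follows a common formulation in the kernelized c-means literature; sigma is a tunable width parameter, not a value taken from the paper.

```python
# Kernel-induced distance as a drop-in replacement for Euclidean distance.
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / sigma ** 2)

def hypertangent_kernel(x, y, sigma=1.0):
    # One common hyper-tangent formulation; note K(x, x) = 1.
    return 1.0 - np.tanh(-np.sum((x - y) ** 2) / sigma ** 2)

def kernel_distance(x, y, kernel):
    # For kernels with K(x, x) = 1 this reduces to sqrt(2 * (1 - K(x, y))).
    return np.sqrt(kernel(x, x) + kernel(y, y) - 2.0 * kernel(x, y))

x, y = np.array([0.2, 0.4]), np.array([0.9, 0.1])
print(kernel_distance(x, y, gaussian_kernel))
print(kernel_distance(x, y, hypertangent_kernel))
```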
International Journal of Advanced Information Technology, 2012
Data anonymization techniques enable the publication of detailed information while protecting the privacy of sensitive information in the data against a variety of attacks. Anonymized data describes a set of possible worlds that include the original data. Generalization and suppression have been the most commonly used techniques for achieving anonymization. Algorithms to protect privacy in the publication of set-valued data were developed by Terrovitis et al. [16]. The concept of k-anonymity was introduced by Samarati and Sweeney [15], requiring that every tuple has at least (k-1) tuples identical with it. This concept was modified in [16] to introduce k^m-anonymity, which limits the effects of data dimensionality and relies on generalization instead of suppression. To handle this problem, two heuristic algorithms, the DA-algorithm and the AA-algorithm, were developed by them; these algorithms provide near-optimal solutions in many cases. In this paper, we improve DA so that undesirable duplicates are not generated and the anonymized data can be displayed in FP-tree form. We illustrate the efficiency of our proposed algorithm through suitable examples.
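To make the k^m-anonymity condition concrete, here is a minimal checker: every itemset of at most m items that occurs in the data must appear in at least k records. The toy transactions are illustrative, not data from the paper.

```python
# Check k^m-anonymity of set-valued data: each observed itemset of size
# <= m must be supported by at least k records. (Itemsets that never
# occur match no record and need no protection.)
from itertools import combinations
from collections import Counter

def is_km_anonymous(transactions, k, m):
    support = Counter()
    for t in transactions:
        for size in range(1, m + 1):
            for combo in combinations(sorted(set(t)), size):
                support[combo] += 1
    return all(count >= k for count in support.values())

data = [{"a", "b"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
print(is_km_anonymous(data, k=2, m=1))  # True: every single item occurs >= 2 times
print(is_km_anonymous(data, k=2, m=2))  # False: the pair (a, c) occurs only once
```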
Advances in Intelligent Systems and Computing, 2013
Data anonymization techniques enable the publication of detailed information while protecting the privacy of sensitive information in the data against a variety of attacks. Anonymized data describes a set of possible worlds that include the original data. Generalization and suppression have been the most commonly used techniques for achieving anonymization. Algorithms to protect privacy in the publication of set-valued data were developed by Terrovitis et al. [16]. The concept of k-anonymity was introduced by Samarati and Sweeney [15], requiring that every tuple has at least (k-1) tuples identical with it. This concept was modified in [16] to introduce k^m-anonymity, which limits the effects of data dimensionality and relies on generalization instead of suppression. To handle this problem, two heuristic algorithms, the DA-algorithm and the AA-algorithm, were developed by them; these algorithms provide near-optimal solutions in many cases. In this paper, we improve DA so that undesirable duplicates are not generated, and we display the anonymized data using FP-growth. We illustrate the efficiency of our proposed algorithm through suitable examples.
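This entry additionally displays the anonymized data via FP-growth, so here is a toy sketch of the underlying FP-tree structure: transactions sharing a prefix are merged into one path, with a count per node. Real FP-growth orders items by descending frequency; the lexicographic sort here is a simplification, and this is not the paper's implementation.

```python
# Build and print a toy FP-tree: shared prefixes merge, nodes carry counts.
class FPNode:
    def __init__(self, item):
        self.item, self.count, self.children = item, 0, {}

def build_fp_tree(transactions):
    root = FPNode(None)
    for t in transactions:
        node = root
        for item in sorted(t):            # fixed item order merges shared prefixes
            node = node.children.setdefault(item, FPNode(item))
            node.count += 1
    return root

def show(node, depth=0):
    if node.item is not None:
        print("  " * depth + f"{node.item}:{node.count}")
        depth += 1
    for child in node.children.values():
        show(child, depth)

show(build_fp_tree([{"a", "b"}, {"a", "b"}, {"a", "c"}]))
# a:3
#   b:2
#   c:1
```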
International Journal of Rough Sets and Data Analysis, 2019
One of the extensions of the basic rough set model introduced by Pawlak in 1982 is the notion of rough sets on fuzzy approximation spaces, which is based upon a fuzzy proximity relation defined over a universe. As is well known, an equivalence relation provides a granularization of the universe on which it is defined. However, a single relation defines only a single granularization; to handle multiple granularities over a universe simultaneously, two notions of multigranulation, optimistic and pessimistic, have been introduced. The notion of multigranulation over fuzzy approximation spaces was introduced recently, in 2018. Topological properties of rough sets are an important characteristic, which along with the accuracy measure forms the two facets of rough set application, as noted by Pawlak. In this article, the authors introduce topological properties of multigranular rough sets on fuzzy approximation spaces and study them.
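For reference, the standard two-granulation approximations over equivalence relations, following Qian et al.; this is background the abstract assumes, not the paper's fuzzy-approximation-space definitions, which replace the classes [x]_R by those induced by the fuzzy proximity relation at a chosen cut level.

```latex
% Optimistic (O) and pessimistic (P) two-granulation approximations
% for equivalence relations R_1, R_2 on a universe U.
\underline{R_1 + R_2}^{\,O}(X) = \{\, x \in U : [x]_{R_1} \subseteq X \ \text{or}\ [x]_{R_2} \subseteq X \,\}
\underline{R_1 + R_2}^{\,P}(X) = \{\, x \in U : [x]_{R_1} \subseteq X \ \text{and}\ [x]_{R_2} \subseteq X \,\}
% Upper approximations follow by duality, e.g.
\overline{R_1 + R_2}^{\,O}(X) = \bigl( \underline{R_1 + R_2}^{\,O}(X^{c}) \bigr)^{c}
```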
We propose a framework for the study of covering-based rough set approximations. Three equivalent formulations of the classical rough sets are examined using equivalence relations, partitions, and σ-algebras, respectively. They suggest the element-based, granule-based, and subsystem-based definitions of approximation operators. Covering-based rough sets are systematically investigated by generalizing these formulations and definitions. A covering of a universe of objects is used to generate different neighborhood operators, neighborhood systems, coverings, and subsystems of the power set of the universe. These are in turn used to define different types of generalized approximation operators. Within the proposed framework, we review and discuss covering-based approximation operators according to the element-based, granule-based, and subsystem-based definitions.
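As one concrete instance of the element-based definition, the sketch below takes each object's neighborhood to be the intersection of the covering blocks containing it and reads the lower and upper approximations off those neighborhoods. The tiny universe and covering are illustrative only, and this is just one of the operator families the framework covers.

```python
# Element-based covering approximations: N(x) is the intersection of
# all blocks of the covering that contain x.
def neighborhood(x, covering):
    return set.intersection(*[K for K in covering if x in K])

def lower_approx(X, universe, covering):
    return {x for x in universe if neighborhood(x, covering) <= X}

def upper_approx(X, universe, covering):
    return {x for x in universe if neighborhood(x, covering) & X}

U = {1, 2, 3, 4}
C = [{1, 2}, {2, 3}, {3, 4}]          # a covering of U (blocks may overlap)
X = {2, 3}
print(lower_approx(X, U, C))          # {2, 3}: their neighborhoods lie inside X
print(upper_approx(X, U, C))          # {1, 2, 3, 4}: every neighborhood meets X
```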