Papers by Bhaskar Nuthanakanti

Lung cancer is the number one cause of cancer deaths in both men and women in the U.S. and worldwide. Early detection of lung cancer is a challenging problem because of the structure of the cancer cells, most of which overlap one another. Image processing techniques are widely used for early detection and treatment staging, and for the prediction of lung cancer the identification of genetic as well as environmental factors is very important in developing novel methods of lung cancer prevention. In cancer tumours such as lung cancer, the time taken to discover the abnormality in the target images is a critical factor. For the prediction of lung cancer we consider significant patterns and their corresponding weightage and score using a decision tree algorithm, and a lung cancer prediction system is developed using these significant patterns. In the proposed system, Histogram Equalization is used for preprocessing of the images and for the feature extraction process, and a neural network classifier checks whether the state of the patient is normal or abnormal. If lung cancer is successfully detected and predicted in its early stages, the need for many treatment options is reduced, the risk of invasive surgery is lowered and the survival rate increases. The proposed lung cancer detection and prediction system is therefore easy, cost effective and time saving, and produces promising results. Early detection and prediction of lung cancer thus plays a vital role in the diagnosis process and increases the survival rate of patients.
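A minimal sketch of the preprocessing-plus-classification pipeline described above, assuming grayscale CT slices on disk with binary normal/abnormal labels; the file handling and the simple intensity-histogram features are illustrative placeholders, not the paper's exact feature set:

```python
# Sketch: histogram-equalization preprocessing + neural-network classifier
# for normal/abnormal lung CT slices. Paths, image size and features are
# placeholders; the paper's exact feature extraction is not reproduced here.
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def preprocess(path):
    """Load a CT slice as grayscale and apply histogram equalization."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (128, 128))
    return cv2.equalizeHist(img)

def extract_features(img):
    """Illustrative features: normalized intensity histogram of the slice."""
    hist = cv2.calcHist([img], [0], None, [64], [0, 256]).ravel()
    return hist / (hist.sum() + 1e-8)

def train(paths, labels):
    """paths: list of image files; labels: 0 = normal, 1 = abnormal."""
    X = np.array([extract_features(preprocess(p)) for p in paths])
    y = np.array(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    return clf
```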
Early detection of lung cancer is the most promising way to improve patient survival, and it can be achieved using Computed Tomography (CT), which is the preferred technique for detecting lung nodules compared with other imaging modalities. With the advancement of medical technology, Computer Aided Detection (CAD) schemes have been developed [2]; they provide higher accuracy and performance. Here, lung CT images are taken as input, and a support vector machine (SVM) based analysis helps doctors interpret the images. This paper presents a study of the automatic detection of lung cancer nodules by different methods.
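A hedged sketch of the SVM step applied to pre-computed nodule-candidate feature vectors; the feature extraction and data loading are assumed to happen elsewhere, and the surveyed papers each use their own feature sets:

```python
# Sketch: SVM classification of candidate lung-nodule feature vectors.
# X holds per-candidate descriptors (e.g. shape/texture features) and
# y the labels (1 = nodule, 0 = non-nodule); both are assumed precomputed.
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def evaluate_svm(X, y):
    # Standardize features, then fit an RBF-kernel SVM.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    scores = cross_val_score(model, X, y, cv=5)
    print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
    return model.fit(X, y)
```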

Face recognition has been a fast-growing, challenging and interesting area in real-time applications. A large number of face recognition algorithms have been developed over the last decades. In this paper an attempt is made to comprehensively review a wide range of methods used for face recognition, including PCA, LDA, ICA, SVM, Gabor wavelets, soft computing tools such as ANN, and various hybrid combinations of these techniques. The review examines these methods against the parameters that challenge face recognition, such as illumination, pose variation and facial expressions. Face recognition is an important part of the human perception system and is a routine task for humans, while building a similar computational model of face recognition remains difficult. A computational model contributes not only theoretical insights but also many practical applications, such as automated crowd surveillance, access control, design of human-computer interfaces (HCI), content-based image database management, criminal identification and so on. The earliest work on face recognition can be traced back at least to the 1950s in psychology [1] and to the 1960s in the engineering literature [2]. Some of the earliest studies include work on facial expression of emotions by Darwin [3], but research on automatic machine recognition of faces started in the 1970s [4], after the seminal work of Kanade [5]. In 1995, a review paper [6] gave a thorough survey of the face recognition technology of the time [7]; at that point, video-based face recognition was still in a nascent stage. During the past decades, face recognition has received increased attention and has advanced technically, and many commercial systems for still face recognition are now available. Recently, significant research efforts have been focused on video-based face modeling/tracking, recognition and system integration; new databases have been created and evaluations of recognition techniques on these databases have been carried out. Face recognition has now become one of the most active applications of pattern recognition, image analysis and understanding.

II. FACE RECOGNITION ALGORITHMS. (A) Principal Component Analysis (PCA): PCA plays a vital role in a face recognition system by reducing the dimensionality of the original data, which paves the way for producing accurate results and better recognition. Important features of the face such as the nose, eyes and cheeks can then be recognized easily, based on the uncorrelated linear set of values (the obtained principal components) derived from the correlated variables.
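To make the PCA step concrete, here is a minimal eigenfaces-style sketch, assuming a matrix of aligned, flattened face images; the dataset, image size and number of components are placeholders rather than details from any system reviewed above:

```python
# Minimal eigenfaces-style PCA sketch: project aligned face images onto the
# leading principal components and identify a probe by nearest neighbour
# in that reduced subspace.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train_eigenfaces(faces, labels, n_components=50):
    """faces: (n_samples, height*width) array of flattened, aligned face images."""
    pca = PCA(n_components=n_components, whiten=True).fit(faces)
    knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(faces), labels)
    return pca, knn

def identify(pca, knn, probe):
    """probe: one flattened face image; returns the predicted identity label."""
    return knn.predict(pca.transform(probe.reshape(1, -1)))[0]
```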

Cloud computing is emerging as a data-interactive paradigm in which clients' data are stored remotely on a cloud server. Cloud services offer great convenience for users to enjoy on-demand cloud applications without considering local infrastructure limitations. During data access, different users may be in a collaborative relationship, and data sharing then becomes significant for achieving productive benefits. Existing security solutions mainly focus on authentication to ensure that a user's private data cannot be accessed illegally, but they neglect a subtle privacy issue that arises when a user challenges the cloud server to request data sharing from other users: the access request itself may reveal the user's privacy, regardless of whether or not the data access permission is obtained. In this paper, we propose a shared authority based privacy-preserving authentication protocol (SAPA) to address this security issue for cloud storage. In SAPA, shared access authority is achieved by an anonymous access request matching mechanism with security and privacy considerations (e.g., authentication, data anonymity, user privacy and forward security); attribute-based access control is adopted so that a user can only access its own data fields; and proxy re-encryption is applied by the cloud server to provide data sharing among multiple users. Meanwhile, the universal composability (UC) model is used to prove that SAPA theoretically has design correctness. The results indicate that the proposed protocol, realizing privacy-preserving data access authority sharing, is attractive for multi-user collaborative cloud applications.
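As a rough illustration of the proxy re-encryption ingredient only, below is a textbook BBS98-style toy over a tiny prime group; the parameters are deliberately insecure, and this is not the SAPA construction itself, merely a sketch of how a semi-trusted cloud server can transform a ciphertext for one user into a ciphertext for another without ever seeing the plaintext (requires Python 3.8+ for modular inverses via pow):

```python
# Toy BBS98-style proxy re-encryption over a small prime-order group.
# NOT the paper's protocol and NOT secure parameters; illustration only.
import secrets

p = 2039            # small safe prime: p = 2q + 1
q = (p - 1) // 2    # prime order of the subgroup generated by g
g = 4               # generator of the order-q subgroup

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)                       # (secret key, public key)

def encrypt(pk, m):
    r = secrets.randbelow(q - 1) + 1
    return (m * pow(g, r, p)) % p, pow(pk, r, p)   # (m * g^r, pk^r)

def rekey(sk_a, sk_b):
    # Re-encryption key b/a mod q, handed to the (semi-trusted) cloud server.
    return (sk_b * pow(sk_a, -1, q)) % q

def reencrypt(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, p)                      # pk_a^r  ->  pk_b^r

def decrypt(sk, ct):
    c1, c2 = ct
    return (c1 * pow(pow(c2, pow(sk, -1, q), p), -1, p)) % p

sk_a, pk_a = keygen()
sk_b, pk_b = keygen()
ct = encrypt(pk_a, 42)                       # owner encrypts a toy message to herself
ct_b = reencrypt(rekey(sk_a, sk_b), ct)      # cloud re-encrypts it for user B
assert decrypt(sk_b, ct_b) == 42
```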

We consider Big Data as an emerging trend, and the need for Big Data mining is arising in quite a lot of domains. Driven by real-world applications and initiatives of national funding agencies, managing Big Data has proven to be a challenging and highly compelling task. In typical data mining systems, the mining procedures require computationally intensive units for data analysis. A computing platform therefore needs efficient access to at least two types of resources: data and computing processors. For Big Data mining, since the data scale is far beyond what a single personal computer can hold, a typical Big Data processing framework relies on cluster computers with a high-performance computing platform, with the data mining task deployed by running parallel programming tools on a large number of computing nodes. For an intelligent learning database system to handle Big Data, the key is to scale to the remarkably large volume of data and to provide treatments for the characteristics described by the HACE theorem. In our work we put forward a HACE-based characterization of the features of the Big Data revolution and study a model of Big Data processing from the data mining perspective. The conceptual view of the Big Data processing framework includes three tiers: data accessing and computing (Tier I), data privacy and domain knowledge (Tier II), and Big Data mining algorithms (Tier III).
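As a small, single-machine illustration of the parallel-deployment idea, the sketch below counts item frequencies over data partitions with a worker pool and merges the partial results; a real Big Data framework would distribute this same map-reduce pattern across cluster nodes rather than local processes:

```python
# Map-reduce style sketch: each worker counts item frequencies in its own
# data partition (map), and the partial counts are merged (reduce).
from collections import Counter
from multiprocessing import Pool

def count_partition(records):
    """Map step: item frequencies within one data partition."""
    counts = Counter()
    for record in records:
        counts.update(record)
    return counts

def parallel_item_counts(partitions, workers=4):
    with Pool(workers) as pool:
        partials = pool.map(count_partition, partitions)
    total = Counter()                 # reduce step: merge partial counts
    for part in partials:
        total.update(part)
    return total

if __name__ == "__main__":
    data = [[("a", "b"), ("b", "c")], [("a",), ("a", "c")]]   # toy partitions
    print(parallel_item_counts(data, workers=2))
```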
Our work considers the joint problem of packet scheduling and self-localization in an underwater acoustic sensor network with randomly distributed nodes. While much of the research has focused on underwater localization techniques, no work has examined how the anchors should transmit their packets to the sensor nodes. Regarding packet scheduling, our purpose is to minimize the localization time, and to do this we consider two packet transmission approaches: a collision-free scheme and a collision-tolerant scheme. The collision-tolerant scheme requires a shorter time for localization than the collision-free one for a similar probability of localization. Except for the average energy consumed by the anchors, the collision-tolerant method has many advantages.
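A very loose Monte Carlo sketch of the collision-tolerant idea, under simplifying assumptions not taken from the paper (equal packet lengths, uniformly random transmission times, a packet lost whenever it overlaps another, and a node "localized" once it cleanly receives at least three anchor packets); the paper's channel and propagation model is far richer:

```python
# Toy Monte Carlo estimate of localization probability under a
# collision-tolerant (random-access) anchor transmission scheme.
import random

def localization_prob(n_anchors=10, window=10.0, pkt_len=0.5, need=3, trials=20000):
    success = 0
    for _ in range(trials):
        starts = sorted(random.uniform(0.0, window - pkt_len) for _ in range(n_anchors))
        clean = 0
        for i, s in enumerate(starts):
            # A packet is collision-free iff it does not overlap its neighbours
            # in start-time order (sufficient when all packets have equal length).
            left_ok = i == 0 or starts[i - 1] + pkt_len <= s
            right_ok = i == n_anchors - 1 or s + pkt_len <= starts[i + 1]
            if left_ok and right_ok:
                clean += 1
        if clean >= need:
            success += 1
    return success / trials

print(localization_prob())
```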
The major aim of this paper is to solve the problem of multi-keyword ranked search over encrypted cloud data (MRSE) while preserving strict system-wise privacy in the cloud computing paradigm. Data owners are motivated to outsource their complex data management systems from local sites to the commercial public cloud for greater flexibility and economic savings.
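To show only the ranking principle behind multi-keyword ranked search, the sketch below represents documents and queries as keyword vectors and ranks documents by inner product (a coordinate-matching style score); the encryption layer that MRSE schemes apply on top of these vectors is deliberately omitted, and the dictionary and documents are made-up examples:

```python
# Plaintext sketch of multi-keyword ranking by inner product of keyword vectors.
import numpy as np

def keyword_vector(keywords, dictionary):
    return np.array([1.0 if w in keywords else 0.0 for w in dictionary])

def ranked_search(doc_keywords, query, dictionary, top_k=3):
    docs = np.array([keyword_vector(kw, dictionary) for kw in doc_keywords])
    scores = docs @ keyword_vector(query, dictionary)   # matched-keyword counts
    order = np.argsort(scores)[::-1][:top_k]
    return [(int(i), float(scores[i])) for i in order]

dictionary = ["cloud", "privacy", "search", "index", "rank"]
docs = [{"cloud", "privacy"}, {"search", "rank", "cloud"}, {"index"}]
print(ranked_search(docs, {"cloud", "rank"}, dictionary, top_k=2))
```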
During data transmission between a source and a destination in a computer network, the data is exposed to external modifications with malicious intentions. In today's world, most means of secure data and code storage and distribution rely on cryptographic schemes such as certificates or encryption keys. Cryptography is widely used to protect sensitive data from unauthorized access and modification while in transit. There are two basic types of cryptography: (i) symmetric key and (ii) asymmetric key algorithms. Symmetric algorithms are the fastest and most commonly used type of encryption; here, a single key is used for both encryption and decryption. There are a few well-known symmetric key algorithms, e.g. DES, IDEA, AES, RC2 and RC4. In this paper, a new symmetric key algorithm is proposed, and the advantages of this new algorithm are explained.
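The single-shared-key pattern described above can be illustrated with an off-the-shelf AES-based construction (Fernet from the `cryptography` package) standing in for the paper's own proposed algorithm, which is not reproduced here:

```python
# Symmetric encryption sketch: one shared key for both encryption and decryption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the single secret shared by both parties
cipher = Fernet(key)

token = cipher.encrypt(b"payload in transit")   # sender encrypts
plaintext = cipher.decrypt(token)               # receiver decrypts with the same key
assert plaintext == b"payload in transit"
```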

The project treats the streaming data warehouse update problem as a scheduling problem in which jobs correspond to the processes that load new data into tables, and the objective is to minimize data staleness over time. The proposed scheduling framework handles the complications encountered by a stream warehouse: view hierarchies and priorities, data consistency, the inability to preempt updates, heterogeneity of update jobs caused by different inter-arrival times and data volumes among different sources, and transient overload. Update scheduling applies to streaming data warehouses, which combine the features of traditional data warehouses and data stream systems. The need for on-line warehouse refreshment introduces several challenges in the implementation of data warehouse transformations, with respect to their execution time and their overhead to the warehouse processes. The problem with the existing approach is that new data may arrive on multiple streams, but there is no mechanism for limiting the number of tables that can be updated simultaneously.
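A toy, non-preemptive scheduler sketching the staleness idea (one loader that, whenever free, picks the pending batch with the largest priority-weighted waiting time); this is only an illustration of the problem setup under made-up data, not the scheduling framework proposed in the paper:

```python
# Toy single-loader, non-preemptive update scheduler for a stream warehouse.
from dataclasses import dataclass

@dataclass
class Batch:
    table: str
    arrival: float   # when the new data arrived
    work: float      # time needed to load it

def schedule(batches, priority):
    pending = sorted(batches, key=lambda b: b.arrival)
    now, order = 0.0, []
    while pending:
        ready = [b for b in pending if b.arrival <= now]
        if not ready:                      # loader idle until the next arrival
            now = pending[0].arrival
            continue
        # Pick the batch whose table has waited longest, weighted by priority.
        best = max(ready, key=lambda b: priority.get(b.table, 1.0) * (now - b.arrival))
        pending.remove(best)
        now += best.work                   # non-preemptive: run to completion
        order.append((best.table, now))
    return order

print(schedule([Batch("clicks", 0.0, 2.0), Batch("orders", 0.5, 1.0),
                Batch("clicks", 1.0, 2.0)], {"orders": 3.0}))
```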

We present a weighted guided image filter (WGIF) to improve filtering and avoid halo artifacts. Previously used local filtering-based edge-preserving smoothing techniques suffer from halo artifacts and other drawbacks. To overcome this problem we introduce an extension called the WGIF. In this paper the WGIF is applied to digital videos to acquire high quality. The method is obtained by incorporating an edge-aware weighting into the existing guided image filter (GIF). It combines the advantages of both global and local smoothing filters in the sense that its complexity is O(N) and it avoids halo artifacts. The output of the WGIF has better visual quality and avoids halo artifacts. The WGIF is used in image enhancement and image haze removal. Keywords: edge-preserving smoothing, weighted guided image filter, edge-aware weighting, detail enhancement, haze removal.
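For reference, here is a sketch of the baseline guided image filter (GIF) that the WGIF extends; the WGIF additionally divides the regularizer eps by an edge-aware weight computed from local variances, which is omitted here, and the file name and constants are illustrative:

```python
# Baseline guided image filter (per-window linear model q = a*I + b),
# applied here as self-guided edge-preserving smoothing plus detail boosting.
import cv2
import numpy as np

def guided_filter(I, p, radius=8, eps=1e-2):
    """I: guidance image, p: input image, both float32 in [0, 1]."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, -1, ksize)   # local box (mean) filter
    mean_I, mean_p = mean(I), mean(p)
    cov_Ip = mean(I * p) - mean_I * mean_p
    var_I = mean(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)            # per-pixel linear coefficients
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)          # smoothed output q

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
smoothed = guided_filter(img, img)        # self-guided edge-preserving smoothing
detail_enhanced = np.clip(img + 4 * (img - smoothed), 0, 1)
```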
The weighted guided image filter (WGIF) can be used to avoid halo artifacts and to improve filtering. In this technique, the WGIF is put forth for capturing high-quality digital videos. The method integrates an edge-aware weighting into the existing guided image filter (GIF) [14]. The approach combines the strengths of local and global smoothing filters: its complexity is O(N) and it avoids halo artifacts. The WGIF is used in image enhancement and image haze removal. Keywords: detail enhancement, image haze removal [7], edge-preserving smoothing [4].
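The haze-removal application mentioned above is commonly built on the dark channel prior, with the coarse transmission map then refined by a (weighted) guided filter. The sketch below shows the dark-channel part only, leaves the transmission map unrefined, and uses an illustrative file name and constants rather than values from the paper:

```python
# Dark-channel-prior dehazing sketch; the coarse transmission map t would
# normally be refined with a GIF/WGIF before recovering the scene radiance.
import cv2
import numpy as np

def dark_channel(img, patch=15):
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(img.min(axis=2), kernel)      # min over channels, then patch

def dehaze(img, omega=0.95, t0=0.1):
    dark = dark_channel(img)
    # Atmospheric light: average colour of the brightest 0.1% dark-channel pixels.
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0).astype(np.float32)
    t = 1.0 - omega * dark_channel(img / A)        # coarse transmission map
    t = np.clip(t, t0, 1.0)[..., None]             # unrefined; WGIF would refine it
    return np.clip((img - A) / t + A, 0.0, 1.0)

hazy = cv2.imread("hazy.png").astype(np.float32) / 255.0
clear = dehaze(hazy)
```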
