Papers by Rahul Parundekar
Despite the increase in the number of linked instances in the Linked Data Cloud in recent times, the absence of links at the concept level has resulted in heterogeneous schemas, challenging the interoperability goal of the Semantic Web. In this paper, we address this problem by finding alignments between concepts from multiple Linked Data sources.
Abstract. In this paper, we present a semantic content-based approach that is employed to study driver preferences for Points of Interest (POIs), e.g., banks, grocery stores, etc., and provide recommendations for new POIs. Initially, logs about the places that the driver visits are collected from the cloud-connected navigation application running in the car. Data about the visited places is gathered from multiple sources and represented semantically in RDF by 'lifting' it.
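As a rough illustration of the 'lifting' step mentioned above, the following minimal Python sketch turns one visited-place log record into RDF triples using rdflib. The ex: namespace, the property names, and the lift_visit helper are hypothetical placeholders, not the vocabulary or pipeline used in the paper.

# Minimal sketch of 'lifting' a visited-place log record into RDF.
# The ex: namespace, class/property names, and this helper are hypothetical,
# not the vocabulary used in the paper.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/poi#")

def lift_visit(visit):
    """Convert one navigation-log record (a dict) into an RDF graph."""
    g = Graph()
    g.bind("ex", EX)
    place = EX[visit["place_id"]]
    g.add((place, RDF.type, EX[visit["category"]]))       # e.g. ex:GroceryStore
    g.add((place, EX["name"], Literal(visit["name"])))
    g.add((place, EX["visitedAt"], Literal(visit["timestamp"])))
    return g

# Example usage with a fabricated log entry
g = lift_visit({"place_id": "p42", "category": "GroceryStore",
                "name": "Corner Market", "timestamp": "2014-06-01T10:30:00"})
print(g.serialize(format="turtle"))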
Abstract. Despite the recent growth in the size of the Linked Data Cloud, the absence of links between the vocabularies of the sources has resulted in heterogeneous schemas. Our previous work tried to find conceptual mappings between two sources and was successful in finding alignments, such as equivalence and subset relations, using the instances that are linked as equal.

The Semantic Web- …, Jan 1, 2009
The work on integrating sources and services in the Semantic Web assumes that the data is either already represented in RDF or OWL or is available through a Semantic Web Service. In practice, there is a tremendous amount of data on the Web that is not available through the Semantic Web. In this paper we present an approach to automatically discover and create new Semantic Web Services. The idea behind this approach is to start with a set of known sources and the corresponding semantic descriptions of those sources and then discover similar sources, extract the data from those sources, build semantic descriptions of those sources, and then turn them into Semantic Web Services. We implemented an end-to-end solution to this problem in a system called Deimos and evaluated the system across five different domains. The results demonstrate that the system can automatically discover, learn semantic descriptions, and build Semantic Web Services with only example sources and their descriptions as input.
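The end-to-end flow summarized above (start from known sources, discover similar ones, extract their data, learn descriptions, and publish services) can be sketched as the pipeline skeleton below. Every function name and data shape is a hypothetical placeholder and does not reflect Deimos's actual interfaces.

# Rough skeleton of the discover -> extract -> describe -> publish pipeline.
# Every function here is a hypothetical placeholder, not the Deimos implementation.

def discover_similar_sources(seed_urls):
    """Find Web sources that look similar to the known seed sources."""
    return []  # placeholder: real discovery would search the Web

def extract_data(source_url):
    """Induce a wrapper for the source and extract structured records."""
    return []  # placeholder: real extraction would invoke and parse the source

def learn_semantic_description(records, seed_description):
    """Describe the source's inputs and outputs in terms of the seed's ontology."""
    return dict(seed_description)  # placeholder

def publish_semantic_web_service(source_url, description):
    """Wrap the described source as a Semantic Web Service; return its endpoint."""
    return source_url + "/sws"  # placeholder

def build_services(seed_urls, seed_description):
    endpoints = []
    for url in discover_similar_sources(seed_urls):
        records = extract_data(url)
        description = learn_semantic_description(records, seed_description)
        endpoints.append(publish_semantic_web_service(url, description))
    return endpoints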

The Semantic Web - ISWC 2010, Jan 1, 2010
The Web of Linked Data is characterized by linking structured data from different sources using equivalence statements, such as owl:sameAs, as well as other types of linked properties. The ontologies behind these sources, however, remain unlinked. This paper describes an extensional approach to generate alignments between these ontologies. Specifically, our algorithm produces equivalence and subsumption relationships between classes from ontologies of different Linked Data sources by exploring the space of hypotheses supported by the existing equivalence statements. We are also able to generate a complementary hierarchy of derived classes within an existing ontology, or generate new classes for a second source where the ontology is not as refined as the first. We demonstrate our approach empirically using Linked Data sources from the geospatial, genetics, and zoology domains. Our algorithm discovered about 800 equivalences and 29,000 subset relationships in the alignment of five source pairs from these domains. Thus, we are able to model one Linked Data source in terms of another by aligning their ontologies and understand the semantic relationships between the two sources.
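A much-simplified sketch of the extensional idea described above: translate the instances of each class in one source into the other source's identifiers via the existing owl:sameAs links, then propose equivalence or subsumption based on how the instance sets overlap. The data structures and the 0.9 threshold below are illustrative assumptions, not the paper's actual hypothesis-exploration algorithm or parameters.

# Simplified sketch of extensional class alignment over owl:sameAs links.
# Data structures and thresholds are illustrative assumptions only.
def align_classes(ext1, ext2, same_as, threshold=0.9):
    """
    ext1, ext2: dict mapping class URI -> set of instance URIs (one per source).
    same_as:    dict mapping source-1 instances -> their linked source-2 instances.
    Returns (class1, class2, relation) candidate alignments.
    """
    alignments = []
    for c1, inst1 in ext1.items():
        # Translate source-1 instances into source-2 identifiers.
        linked = {same_as[i] for i in inst1 if i in same_as}
        if not linked:
            continue
        for c2, inst2 in ext2.items():
            overlap = len(linked & inst2)
            if overlap == 0:
                continue
            p = overlap / len(linked)   # fraction of c1's linked instances inside c2
            q = overlap / len(inst2)    # fraction of c2's instances covered by c1
            if p >= threshold and q >= threshold:
                alignments.append((c1, c2, "equivalent"))
            elif p >= threshold:
                alignments.append((c1, c2, "subset"))
            elif q >= threshold:
                alignments.append((c1, c2, "superset"))
    return alignments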

2010 AAAI Spring Symposium …, Jan 1, 2010
Even though the Linked Data movement is gaining ground, vast amounts of information are only present in the traditional Web of human-readable pages. Data from such sources in the Surface Web and the Deep Web needs to be published as structured data into the Linked Data Web. The work described in this paper links the schema and individuals in the RDF extracted from surface and deep Web sources with the schema and individuals already present in the linked data cloud. To this end, we extend our prior work on automatically generating Semantic Web Services from Web sources. Once we are able to link individuals of the generated Semantic Web Service with the data present in the linked data cloud, we can populate the Linked Data Web with data from Deep Web sources for given domains. Our approach not only integrates known sources from the Deep Web into the Linked Data Web, but also automatically discovers and links previously unknown sources for the same domain. Our techniques can significantly increase the amount of data available in the Linked Data Web.
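As a rough illustration of the individual-linking step mentioned above, the snippet below looks up DBpedia candidates for an extracted record by label, using the public SPARQL endpoint. The query, endpoint, and exact-label matching criterion are simplified assumptions, not the linking method used in the paper.

# Sketch of linking an extracted individual to the Linked Data cloud by
# label lookup against DBpedia. The exact-label match is a simplification.
from SPARQLWrapper import SPARQLWrapper, JSON

def find_dbpedia_candidates(label, limit=5):
    """Return DBpedia URIs whose rdfs:label exactly matches the extracted label."""
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
        SELECT DISTINCT ?s WHERE {{
            ?s rdfs:label "{label}"@en .
        }} LIMIT {limit}
    """)
    results = sparql.query().convert()
    return [b["s"]["value"] for b in results["results"]["bindings"]]

# Example: candidates for an extracted record about a city
print(find_dbpedia_candidates("Los Angeles"))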
April 2015, Volume 6, Number 2 by Rahul Parundekar

International Journal of Web & Semantic Technology (IJWesT), 2018
The Semantic Web aims at representing knowledge about the real world at web scale: things, their attributes, and the relationships among them can be represented as nodes and edges in an inter-linked semantic graph. In the presence of noisy data, as is typical of data on the Semantic Web, a software Agent needs to be able to robustly infer one or more associated actionable classes for the individuals in order to act automatically on it. We model this problem as a multi-label classification task where we want to robustly identify types of the individuals in a semantic graph such as DBpedia, which we use as an exemplary dataset on the Semantic Web. Our approach first extracts multiple features for the individuals using random walks and then performs multi-label classification using fully-connected Neural Networks. Through systematic exploration and experimentation, we identify the effect of hyper-parameters of the feature extraction and the fully-connected Neural Network structure on the classification performance. Our final results show that our method performs better than state-of-the-art inferencing systems like SDtype and SLCN, from which we can conclude that random-walk-based feature extraction of individuals and their multi-label classification using Deep Neural Networks is a promising alternative to these systems for type classification of individuals on the Semantic Web. The main contribution of our work is to introduce a novel approach that allows us to use Deep Neural Networks to identify types of individuals in a noisy semantic graph by extracting features using random walks.
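A compact sketch of the two stages described above: collect predicate/object tokens on random walks from each individual, hash them into a fixed-length feature vector, and train a fully-connected network on the multi-label type matrix. The walk count, walk length, feature hashing, and layer sizes are illustrative assumptions, not the hyper-parameters reported in the paper.

# Sketch: random-walk features over a semantic graph, then multi-label type
# classification with a fully-connected network. All settings are illustrative.
import random
import numpy as np
from sklearn.feature_extraction import FeatureHasher
from sklearn.neural_network import MLPClassifier

def random_walk_features(graph, node, walks=10, length=4):
    """graph: dict node -> list of (predicate, object) edges.
    Returns a bag of predicate/object tokens seen on random walks from node."""
    tokens = {}
    for _ in range(walks):
        current = node
        for _ in range(length):
            edges = graph.get(current)
            if not edges:
                break
            pred, obj = random.choice(edges)
            tokens[pred] = tokens.get(pred, 0) + 1
            tokens[obj] = tokens.get(obj, 0) + 1
            current = obj
    return tokens

def build_dataset(graph, labeled_nodes, type_index, n_features=1024):
    """labeled_nodes: dict node -> set of type URIs; type_index: type URI -> column."""
    hasher = FeatureHasher(n_features=n_features, input_type="dict")
    X = hasher.transform(random_walk_features(graph, n) for n in labeled_nodes)
    Y = np.zeros((len(labeled_nodes), len(type_index)), dtype=int)
    for row, types in enumerate(labeled_nodes.values()):
        for t in types:
            Y[row, type_index[t]] = 1
    return X, Y

# A fully-connected network; scikit-learn's MLPClassifier handles multi-label
# targets when Y is a binary indicator matrix, e.g. clf.fit(X, Y).
clf = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=200)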