Papers by Rowaida Ibrahim

IEEE Xplore, 2022
The Internet's continued growth has resulted in a significant rise in the number of electronic text documents, and grouping these materials into meaningful collections has become crucial. The traditional approach compiled documents based on statistical characteristics, relying on syntactic rather than semantic information. This article introduces a new approach for clustering texts based on their semantic similarity, built on an efficient graph-based technique. Document summaries called synopses are extracted from the Wikipedia and IMDB databases, and the downloaded documents are preprocessed with the NLTK toolkit to make them more convenient to work with. Following that, a vector space is modelled using TFIDF and converted to a numeric TFIDF matrix, and clustering is accomplished using spectral methods. The results are compared with previous work.
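
A minimal sketch of the kind of pipeline this abstract describes, assuming scikit-learn's TfidfVectorizer and SpectralClustering; the synopses and cluster count below are illustrative placeholders, not data from the paper:

# Hypothetical sketch: TF-IDF vectorization followed by spectral clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import SpectralClustering

synopses = [
    "A retired hitman is pulled back in for one last job.",
    "Two friends drive across the country after graduation.",
    "A detective hunts a serial killer in a rain-soaked city.",
]

# Build the numeric TF-IDF matrix from the raw synopses.
tfidf_matrix = TfidfVectorizer(stop_words="english").fit_transform(synopses)

# TF-IDF rows are L2-normalized, so the dot product is cosine similarity,
# which serves as the affinity (graph) matrix for spectral clustering.
affinity = (tfidf_matrix @ tfidf_matrix.T).toarray()
labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)
print(labels)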

Design a Clustering Document based Semantic Similarity System using TFIDF and K-Mean
2021 4th International Iraqi Conference on Engineering Technology and Their Applications (IICETA), 2021
The continuing success of the Internet has led to an enormous rise in the volume of electronic text records, and strategies for grouping these records into coherent collections are increasingly important. Traditional text clustering methods focus on statistical characteristics, using a syntactic rather than semantic notion to perform the clustering. This paper presents a new approach for grouping documents based on textual similarity. Text synopses from the Wikipedia and IMDB datasets are tokenized and stopword-filtered using the NLTK toolkit. A vector space is then created using TFIDF, and the K-mean algorithm carries out the clustering. The results are shown as an interactive website.
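
A hedged sketch of such a TFIDF-plus-K-means system, assuming scikit-learn; the documents are invented, and the per-cluster top terms stand in for the kind of summary an interactive results page might show:

# Hypothetical sketch: TF-IDF vector space clustered by K-means, with the
# highest-weight terms at each centroid reported as a cluster summary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

synopses = [
    "A spy infiltrates a criminal syndicate in Berlin.",
    "A young wizard attends a school of magic.",
    "An agent goes undercover inside a smuggling ring.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(synopses)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()

# Report the three highest-weight terms at each cluster centroid.
for i, centroid in enumerate(km.cluster_centers_):
    top = centroid.argsort()[::-1][:3]
    print(f"cluster {i}:", [terms[j] for j in top])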

Comprehensive Study of Moving from Grid and Cloud Computing Through Fog and Edge Computing towards Dew Computing
2021 4th International Iraqi Conference on Engineering Technology and Their Applications (IICETA), 2021
Dew Computing (DC) is a comparatively young field with a wide range of applications. Technological advances such as fog, edge, and dew computing and distributed intelligence force us to reconsider how traditional Cloud Computing (CC) serves the Internet of Things. A new formulation of dew computing theory is presented in this article, with a revised definition: DC is a software-hardware organization paradigm in the cloud computing environment, in which on-premises servers operate autonomously while collaborating with cloud networks. DC aims to enhance the capabilities of both on-premises and cloud-based applications, and such combinations can give rise to new classes of applications. Information and Communication Technology (ICT) has grown rapidly worldwide, progressing from Grid Computing (GC) through CC and Fog Computing (FC) to the latest Edge Computing (EC) technology. DC technologies, infrastructure, and applications are described, and the newest developments in fog networking, QoE, cloud at the edge, platforms, security, and privacy are reviewed. The dew-cloud architecture is an alternative to the current client-server architecture, with servers located at the two opposite ends. In the absence of an Internet connection, a dew server helps users browse and track their data: data are stored primarily as a local copy on the dew server and synchronized with the master copy in the cloud once the Internet is available. Local dew pages, a local copy of the current website, can be browsed, read, written to, or extended by users. Mapping between different local dew sites is made possible by the dew domain name scheme and dew domain redirection.
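
To make the local-copy/master-copy idea concrete, here is a deliberately simplified, hypothetical sketch (not from the paper); the DewServer/CloudStore names and the sync policy are invented for illustration:

# Hypothetical sketch of dew-cloud synchronization: the dew server keeps a
# local copy usable offline and flushes writes to the cloud master copy
# whenever a connection is available.
class CloudStore:
    """Stands in for the cloud master copy."""
    def __init__(self):
        self.data = {}
        self.online = True

    def push(self, key, value):
        if not self.online:
            raise ConnectionError("no Internet connection")
        self.data[key] = value

class DewServer:
    def __init__(self, cloud):
        self.cloud = cloud
        self.local = {}    # local dew copy, always available
        self.pending = []  # writes made while offline

    def read(self, key):
        return self.local.get(key)     # reads never need the Internet

    def write(self, key, value):
        self.local[key] = value        # local copy is updated first
        self.pending.append((key, value))
        self.sync()

    def sync(self):
        # Flush pending writes to the cloud master copy when reachable.
        while self.pending:
            key, value = self.pending[0]
            try:
                self.cloud.push(key, value)
            except ConnectionError:
                return                 # still offline; retry on next sync()
            self.pending.pop(0)

cloud = CloudStore()
dew = DewServer(cloud)
cloud.online = False
dew.write("page:home", "local dew page")  # succeeds offline
cloud.online = True
dew.sync()                                # master copy catches up
print(cloud.data)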

Asian Journal of Research in Computer Science, 2021
Water is an essential human need and underpins all forms of economic activity; key sectors include agriculture, clean energy, manufacturing, and mining. Water resources are under many pressures: as the population grows, demand for water from competing economic sectors increases, leaving too little water to meet human needs and to maintain the environmental flows that preserve the integrity of our ecosystems. Groundwater is becoming depleted in many areas, bringing current and future generations closer to losing any protection against increasing climate variability. Information technology methods and information and communication technologies (ICT) therefore play an important role in water resources management, limiting the excessive waste of fresh water and helping to control and monitor water pollution. In this paper, we review research that uses the Internet of Things (IoT) as a communication technology to help homeowners and farmers conserve the available water supply while using it, and we also review research aimed at preserving water quality and reducing pollution.

Clustering Document based Semantic Similarity System using TFIDF and K-Mean
IEEE Xplore, 2021
The steady success of the Internet has led to an enormous rise in the volume of electronic text records, and organizing these materials into meaningful bundles has become an increasingly sensitive task. The standard document clustering approach focused on statistical characteristics and clustered using a syntactic rather than semantic notion. This paper provides a new way to group documents based on textual similarity. Text synopses from the Wikipedia and IMDB datasets are tokenized and stopword-filtered using the NLTK toolkit; a vector space is then built with TFIDF and clustered with the K-mean algorithm. Results were obtained for three proposed scenarios: (1) no preprocessing, (2) preprocessing without derivation (stemming), and (3) preprocessing with derivation. For the internal evaluation, good similarity ratios were obtained on the txt-sentoken dataset for all K values, with the best ratio at K = 20. For the external evaluation, purity measures were obtained; the V-measure on txt-sentoken and the accuracy on nltk-Reuters gave the best results in all three scenarios for K = 20. As a subjective evaluation, the maximum time was consumed by the first scenario (no preprocessing) and the minimum time was recorded by the second scenario (excluding derivation).
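
A hedged illustration of the three scenarios, assuming NLTK's PorterStemmer stands in for the derivation step; the two-document corpus is invented, and the paper's actual datasets (txt-sentoken, nltk-Reuters) are not reproduced here:

# Hypothetical sketch of the three preprocessing scenarios described above.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

nltk.download("punkt", quiet=True)      # tokenizer model
nltk.download("stopwords", quiet=True)
stop_words = set(stopwords.words("english"))
stemmer = PorterStemmer()

def prepare(text, scenario):
    if scenario == 1:                   # scenario 1: no preprocessing
        return text
    tokens = [t for t in word_tokenize(text.lower())
              if t.isalpha() and t not in stop_words]
    if scenario == 3:                   # scenario 3: with stemming
        tokens = [stemmer.stem(t) for t in tokens]
    return " ".join(tokens)             # scenario 2: without stemming

docs = ["The hero rescues the hostages.",
        "Rescued hostages thank their hero."]
for scenario in (1, 2, 3):
    X = TfidfVectorizer().fit_transform(prepare(d, scenario) for d in docs)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(scenario, labels)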

Clustering Documents based on Semantic Similarity using HAC and K-Mean Algorithms
2020 International Conference on Advanced Science and Engineering (ICOASE), 2020
The continuing success of the Internet has greatly increased the number of text documents in electronic formats, and techniques for grouping these documents into meaningful collections have become mission-critical. The traditional method of compiling documents relied on statistical features and a syntactic rather than semantic notion of grouping. This article introduces a new method for grouping documents based on semantic similarity. Document summaries are identified from the Wikipedia and IMDB datasets and then stemmed using the NLTK toolkit. A vector space is afterward modeled with TFIDF, and the clustering is performed using the HAC and K-mean algorithms. The results are compared and visualized as an interactive webpage.
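
A minimal, hypothetical sketch of the comparison, assuming scikit-learn's AgglomerativeClustering for the HAC stage; the documents and cluster count are placeholders:

# Hypothetical sketch: one TF-IDF space clustered by HAC and by K-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering, KMeans

docs = [
    "A robot gains consciousness and questions its maker.",
    "An android rebels against the lab that built it.",
    "Two chefs compete in a seaside cooking contest.",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)

# HAC (Ward linkage) needs a dense array.
hac_labels = AgglomerativeClustering(n_clusters=2).fit_predict(X.toarray())

# K-means works directly on the sparse TF-IDF matrix.
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print("HAC:    ", hac_labels)
print("K-means:", km_labels)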

Advances in Science, Technology and Engineering Systems Journal, 2019
Clustering is a branch of data mining that involves grouping similar data into collections known as clusters. Clustering is used in many fields; one important application is intelligent text clustering. Traditional text clustering algorithms grouped documents based on keyword matching, meaning that documents were clustered without any descriptive notion of their content, so dissimilar documents could end up in the same cluster. The key solution to this problem is to cluster documents based on semantic similarity, where documents are grouped by meaning rather than keywords. In this research, fifty papers that use semantic similarity in different fields were reviewed, and thirteen of them, applying semantic similarity to document clustering within the last five years, were selected for deeper study. A comprehensive literature review of all the selected papers is given, along with a comparison of their algorithms, tools, and evaluation methods. Finally, an intensive discussion comparing the works is presented.
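
As a small illustration of meaning-based rather than keyword-based matching (not drawn from the survey itself), NLTK's WordNet interface can score word pairs that share no surface form; the wordnet corpus is assumed to be installed:

# Hypothetical illustration: semantic similarity via WordNet, where "car"
# and "automobile" match strongly despite sharing no keywords.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

car = wn.synsets("car")[0]
automobile = wn.synsets("automobile")[0]
banana = wn.synsets("banana")[0]

print(car.wup_similarity(automobile))  # high: same concept, different words
print(car.wup_similarity(banana))      # low: unrelated concepts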