Papers by International Journal of Artificial Intelligence (IJAIA)

International Journal of Artificial Intelligence & Applications (IJAIA), 2022
Rapid technological growth has made Artificial Intelligence (AI) and the use of robots commonplace in human life. The advancements undertaken to create designs with human-like qualities, or adaptations to society, are elaborated in detail. The increasing manufacture and use of robots for industrial purposes is related to their operating mechanisms. The experiments and laboratory testing of these devices are analysed in the form of tables to show the statistical side of the technology. This report explains the technological aspects and laboratory experiments that have been advanced to increase knowledge of these digital technologies. This study aims to present an overview of two developing technologies, artificial intelligence (AI) and robotics, and their potential applications. Product variety is a primary characteristic of each of these specialties. In addition, they may be described as disruptive, facilitating, and transdisciplinary.

International Journal of Artificial Intelligence & Applications (IJAIA), 2022
The COVID-19 outbreak compelled people from all walks of life to self-quarantine in their houses in order to prevent the virus from spreading. As a result of adhering to exceedingly strict guidelines, many people developed mental illnesses. Because educational institutions were closed at the time, students remained at home in self-quarantine. It is therefore necessary to identify the students who developed mental illnesses during that period. To develop AiPsych, a mobile application-based artificial psychiatrist, we train supervised and deep learning algorithms to predict the mental illness of students during the COVID-19 situation. Our experiments reveal that supervised learning outperforms deep learning, with the Support Vector Machine (SVM) reaching 97% accuracy for mental illness prediction. Random Forest (RF) achieves the best accuracy of 91% for recovery suggestion prediction. Our Android application can be used by parents, educational institutions, or the government to obtain the predicted mental illness status of a student and take proper measures to overcome the situation.
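The paper does not publish its training code; as a rough illustration of the kind of SVM-based supervised prediction it describes, the sketch below trains a linear SVM from scratch via Pegasos-style stochastic sub-gradient descent on the hinge loss. All data, feature meanings and labels here are invented for illustration, not taken from the AiPsych study.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=1000):
    """Train a linear SVM by stochastic sub-gradient descent on the
    hinge loss (Pegasos-style). Labels y must be +1 or -1."""
    random.seed(0)
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in random.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:  # hinge-loss violation: shrink w and step toward x
                w = [(1 - eta * lam) * wj + eta * y[i] * xj
                     for wj, xj in zip(w, X[i])]
                b += eta * y[i]
            else:           # no violation: regularization shrinkage only
                w = [(1 - eta * lam) * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy separable data: two hypothetical screening-score features,
# label +1 = "at risk", -1 = "not at risk".
X = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.9], [0.1, 0.2], [0.2, 0.1], [0.3, 0.2]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
print(all(predict(w, b, x) == t for x, t in zip(X, y)))
```

In practice one would use an off-the-shelf SVM implementation; the sketch only shows the shape of the classification step the abstract refers to.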

International Journal of Artificial Intelligence & Applications (IJAIA), 2022
Forged documents, specifically passports, driving licences and VISA stickers, are used for fraudulent purposes including robbery, theft and many more. Detecting forged characters in such documents is therefore a significantly important and challenging task in digital forensic imaging. Forged character detection faces two big challenges. First, data for forged character detection is extremely difficult to obtain for several reasons, including limited access to data, unlabeled data, or work being done on private data. Second, deep learning (DL) algorithms require labeled data, which poses a further challenge, as obtaining labeled data is tedious, time-consuming, expensive and requires domain expertise. To address these issues, in this paper we propose a novel algorithm that generates three datasets, namely forged characters detection for passport (FCD-P), forged characters detection for driving licence (FCD-D) and forged characters detection for VISA stickers (FCD-V). To the best of our knowledge, we are the first to release these datasets. The proposed algorithm starts by reading plain document images and simulates forging tasks on five different countries' passports, driving licences and VISA stickers. It then keeps the bounding boxes of the forged characters as the labeling process. Furthermore, considering real-world scenarios, we performed selected data augmentation accordingly. Regarding dataset statistics, each dataset consists of 15000 images, each of size 950 x 550. For further research purposes we release our algorithm code and datasets.
International Journal of Artificial Intelligence & Applications (IJAIA), 2022
Process Mining (PM) emerged from business process management but has recently been applied to educational data, where it has been found to facilitate the understanding of the educational process. Educational Process Mining (EPM) bridges the gap between process analysis and data analysis, based on the techniques of model discovery, conformance checking and extension of existing process models. We present a systematic review of the recent and current status of research in the EPM domain, focusing on application domains, techniques, tools and models, to highlight the use of EPM in comprehending and improving educational processes.

International Journal of Artificial Intelligence & Applications (IJAIA), 2022
The facial expression is the first thing we pay attention to when we want to understand a person's state of mind. Thus, the ability to recognize facial expressions automatically is a very interesting research field. In this paper, because of the small size of available training datasets, we propose a novel data augmentation technique that improves performance on the recognition task. We apply geometrical transformations and build GAN models from scratch that are able to generate new synthetic images for each emotion type. On the augmented datasets we then fine-tune pretrained convolutional neural networks with different architectures. To measure the generalization ability of the models, we apply an extra-database protocol: we train models on the augmented versions of the training dataset and test them on two different databases. The combination of these techniques allows us to reach average accuracy values on the order of 85% for the InceptionResNetV2 model.
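The paper's augmentation combines geometrical transformations with GAN-generated images; the GAN part is too large for a short sketch, but the geometric part can be illustrated in a few lines. The "image" below is a toy 2x2 array standing in for a face crop, purely for illustration.

```python
def hflip(img):
    """Horizontal flip: mirror each row of the image."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Generate geometric variants of one training image:
    the original, its mirror, and three rotations."""
    out = [img, hflip(img)]
    r = img
    for _ in range(3):
        r = rotate90(r)
        out.append(r)
    return out

face = [[1, 2],
        [3, 4]]
variants = augment(face)
print(len(variants))  # 5
```

Real pipelines would apply milder transforms (small rotations, shifts, flips) via an image library, but the principle of multiplying each labeled sample into several variants is the same.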

International Journal of Artificial Intelligence & Applications (IJAIA), 2022
Movies are among the most prominent contributors to the global entertainment industry today, and they are among the biggest revenue-generating industries from a commercial standpoint. It is therefore useful to divide films into two categories: successful and unsuccessful. To categorize the movies in this research, a variety of models were utilized, including regression models such as Simple Linear, Multiple Linear, and Logistic Regression, techniques such as SVM and K-Means, Time Series Analysis, and an Artificial Neural Network. The models stated above were compared on a variety of factors, including their accuracy on the training, validation, and testing datasets, the availability of new movie characteristics, and a variety of other statistical metrics. During the course of this study, it was discovered that certain characteristics have a greater impact on the likelihood of a film's success than others. For example, the presence of the action genre may have a significant impact on the forecasts, while another genre, such as sport, may not. The testing dataset for the models and classifiers was taken from the IMDb website for the year 2020. The Artificial Neural Network, with an accuracy of 86 percent, is the best performing model of all the models discussed.

Computer vision plays a crucial role in Advanced Assistance Systems. Most computer vision systems are based on deep Convolutional Neural Network (CNN) architectures. However, running a CNN algorithm demands high computational resources, so methods to speed up computation have become a relevant research issue. Several works on architecture reduction found in the literature have not yet achieved satisfactory results for embedded real-time system applications. This paper presents an alternative approach based on the Multilinear Feature Space (MFS) method, resorting to transfer learning from large CNN architectures. The proposed method uses CNNs to generate feature maps, although it does not work as a complexity reduction approach. After the training process, the generated feature maps are used to create a vector feature space. We use this new vector space to make projections of any new sample in order to classify it. Our method, named AMFC, uses transfer learning from a pre-trained CNN to reduce the classification time of a new sample image, with minimal accuracy loss. Our method uses the VGG-16 model as the base CNN architecture for experiments; however, the method works with any similar CNN model. Using the well-known Vehicle Image Database and the German Traffic Sign Recognition Benchmark, we compared the classification time of the original VGG-16 model with the AMFC method, and our method is, on average, 17 times faster. The fast classification time reduces the computational and memory demands in embedded applications requiring a large CNN architecture.
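AMFC's multilinear projection itself is not spelled out in the abstract; as a hedged stand-in, the sketch below shows the general idea of classifying in a frozen feature space by comparing a new sample against stored per-class statistics, here via a simple nearest-centroid rule over hypothetical 3-dimensional feature vectors. In the real method the features would come from VGG-16 feature maps and the projection is more elaborate.

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sq_dist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(sample, class_features):
    """Nearest-centroid decision in the (frozen) feature space."""
    return min(class_features,
               key=lambda c: sq_dist(sample, centroid(class_features[c])))

# Hypothetical CNN-derived feature vectors per class (invented numbers).
feats = {
    "car":  [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "sign": [[0.1, 0.9, 0.8], [0.0, 0.8, 0.9]],
}
print(classify([0.85, 0.15, 0.05], feats))  # car
```

The speed advantage the paper reports comes from replacing a full forward pass through the classifier head with cheap vector-space operations of this general shape.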
International Journal of Artificial Intelligence & Applications (IJAIA), 2021
Question Answering (QA) is a subfield of Natural Language Processing (NLP) and computer science focused on building systems that automatically answer questions posed by humans in natural language. This survey summarizes the history and current state of the field and is intended as an introductory overview of QA systems. After discussing QA history, this paper summarizes the different approaches to the architecture of QA systems: whether they are closed- or open-domain, and whether they are text-based, knowledge-based, or hybrid systems. Lastly, some common datasets in this field are introduced and different evaluation metrics are discussed.

International Journal of Artificial Intelligence & Applications (IJAIA), 2021
Research on explanation is currently of intense interest, as documented in the DARPA 2021 investments reported by the USA Department of Defense. An emerging theme in explanation techniques research is their application to the improvement of human-system interfaces for autonomous anti-drone or C-UAV defense systems. In the present paper, a novel proposal based on natural language processing technology for explanatory discourse using relations is briefly described. The proposal is based on the use of relations pertaining to the possible malicious actions of an intruding alien drone swarm and the defense decisions proposed by an autonomous anti-drone system. The aim of such an interface is to facilitate the supervision that a user must exercise over an autonomous defense system in order to minimize the risk of wrong mitigation actions and unnecessary expenditure of ammunition.

International Journal of Artificial Intelligence & Applications (IJAIA), 2021
This paper presents the final results of a research project that aimed to construct a tool, aided by Artificial Intelligence through an Ontology with a model trained with Machine Learning and supported by Natural Language Processing, for the semantic search of research projects of the Research System of the University of Nariño. For the construction of NATURE, as this tool is called, a methodology was used that includes the following stages: appropriation of knowledge; installation and configuration of tools, libraries and technologies; collection, extraction and preparation of research projects; and design and development of the tool. The main results of the work were three: a) the complete construction of the Ontology with classes, object properties (predicates), data properties (attributes) and individuals (instances) in Protégé, SPARQL queries with Apache Jena Fuseki, and the respective coding with Owlready2 using Jupyter Notebook with Python within the Anaconda virtual environment; b) the successful training of the model, for which Machine Learning and specifically Natural Language Processing tools such as SpaCy, NLTK, Word2vec and Doc2vec were used, also in Jupyter Notebook with Python within the Anaconda virtual environment and with Elasticsearch; and c) the creation of NATURE by managing and unifying the queries for the Ontology and for the Machine Learning model. The tests showed that NATURE was successful in all the searches performed, with satisfactory results.

International Journal of Artificial Intelligence & Applications (IJAIA), 2021
In this work, a deep CNN-based model is proposed for face recognition. The CNN is employed to extract unique facial features, and a softmax classifier is applied to classify facial images in the fully connected layer of the CNN. Experiments conducted on the Extended Yale B and FERET databases with small batch sizes and a low learning rate showed that the proposed model improves face recognition accuracy. Accuracy rates of up to 96.2% are achieved using the proposed model on the Extended Yale B database. To improve the accuracy rate further, preprocessing techniques such as SQI, HE, LTISN, GIC and DoG are applied before the CNN model. After the application of preprocessing techniques, an improved accuracy of 99.8% is achieved with the deep CNN model on the Extended Yale B database. On the FERET database with frontal faces, before the application of preprocessing techniques, the CNN model yields a maximum accuracy of 71.4%. After applying the above-mentioned preprocessing techniques, the accuracy improves to 76.3%.
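Of the preprocessing techniques listed, histogram equalization (HE) is the easiest to sketch: it stretches a grayscale image's intensity distribution via the cumulative histogram. The tiny 2x2 image and intensity values below are illustrative only, not from the paper's pipeline.

```python
def equalize(img, levels=256):
    """Histogram equalization for a grayscale image given as a 2-D list
    of integer pixel values in [0, levels)."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution of pixel intensities.
    cdf = []
    total = 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(flat)
    cdf_min = next(c for c in cdf if c > 0)
    # Map each intensity so the output CDF is approximately uniform.
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0
           for c in cdf]
    return [[lut[p] for p in row] for row in img]

dark = [[50, 50], [51, 52]]
print(equalize(dark))  # [[0, 0], [128, 255]]
```

A low-contrast patch (all values near 50) is spread across the full 0–255 range, which is the effect HE contributes before the CNN sees the image.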

International Journal of Artificial Intelligence & Applications (IJAIA), 2021
Knowledge graph embedding (KGE) projects the entities and relations of a knowledge graph (KG) into a low-dimensional vector space, and has made steady progress in recent years. Conventional KGE methods, especially translational distance-based models, are trained by discriminating positive samples from negative ones. Most KGs store only positive samples for space efficiency, so negative sampling plays a crucial role in encoding the triples of a KG. The quality of generated negative samples has a direct impact on the performance of the learnt knowledge representation in a myriad of downstream tasks, such as recommendation, link prediction and node classification. We summarize current negative sampling approaches in KGE into three categories: static distribution-based, dynamic distribution-based and custom cluster-based. Based on this categorization we discuss the most prevalent existing approaches and their characteristics. It is hoped that this review can provide some guidelines for new thinking about negative sampling in KGE.
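As a sketch of the simplest category, static distribution-based sampling, the code below corrupts the head or tail of a positive triple with a uniformly drawn entity and filters out known positives (so-called "filtered" negatives). The toy entities and triples are invented for illustration.

```python
import random

def uniform_negatives(triple, entities, known, k=3, seed=0):
    """Static uniform negative sampling: corrupt the head or the tail
    of a positive triple with a random entity, skipping any candidate
    that is itself a known positive triple."""
    rng = random.Random(seed)
    h, r, t = triple
    negs = []
    while len(negs) < k:
        e = rng.choice(entities)
        # Corrupt head or tail with equal probability.
        cand = (e, r, t) if rng.random() < 0.5 else (h, r, e)
        if cand not in known and cand != triple:
            negs.append(cand)
    return negs

known = {("alice", "likes", "jazz"), ("bob", "likes", "rock")}
entities = ["alice", "bob", "jazz", "rock", "carol"]
negs = uniform_negatives(("alice", "likes", "jazz"), entities, known)
print(negs)
```

Dynamic and cluster-based approaches in the survey's other two categories replace the uniform `rng.choice` with distributions that adapt to the model or to entity clusters.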

International Journal of Artificial Intelligence & Applications (IJAIA), 2020
Poor selection of employees can be a first step towards a lack of motivation, poor performance, and high turnover, to name a few consequences. It is no wonder that organizations try to find the best ways to avoid these slippages by finding the best possible person for the job. It is therefore very important to understand the context of the hiring process: to understand which recruiting mistakes are most damaging to the organization, and to reduce the recruiting challenges faced by human resource managers by building their capacity to ensure optimal HR performance. This paper initiates research on how the Contextual Graphs Formalism can be used to improve decision making in the process of hiring potential candidates. An example of a typical procedure for visualization of recruiting phases is presented to show how to add contextual elements and practices in order to communicate the recruitment policy in a concrete and memorable way to both hiring teams and candidates.

International Journal of Artificial Intelligence & Applications (IJAIA), 2021
Core-periphery structures exist naturally in many complex real-world networks, such as social, economic, biological and metabolic networks. Most existing research efforts focus on the identification of a meso-scale structure called the community structure. Core-periphery structures are another equally important meso-scale property of a graph that can help to gain deeper insights about the relationships between different nodes. In this paper, we provide a definition of core-periphery structures suitable for weighted graphs. We further score and categorize these relationships into different types based upon the density difference between the core and periphery nodes. Next, we propose an algorithm called CP-MKNN (Core Periphery-Mutual K Nearest Neighbors) to extract core-periphery structures from weighted graphs using a heuristic node affinity measure called Mutual K-Nearest Neighbors (MKNN). Using synthetic and real-world social and biological networks, we illustrate the effectiveness of the developed core-periphery structures.
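The MKNN affinity at the heart of CP-MKNN can be sketched on a toy weighted graph: two nodes are mutual k-nearest neighbors when each appears among the other's k strongest-weighted neighbors. The graph below and k=2 are illustrative only; the full algorithm additionally scores density differences between core and periphery nodes.

```python
def knn(weights, node, k):
    """The k strongest neighbors of `node` by edge weight."""
    nbrs = sorted(weights[node], key=weights[node].get, reverse=True)
    return set(nbrs[:k])

def mutual_knn_pairs(weights, k=2):
    """Pairs (u, v) where each node is among the other's k nearest
    neighbors -- the MKNN affinity relation."""
    pairs = set()
    for u in weights:
        for v in knn(weights, u, k):
            if u in knn(weights, v, k) and u < v:
                pairs.add((u, v))
    return pairs

# Toy weighted graph: a, b, c form a tightly connected "core",
# d is a weakly attached "periphery" node.
w = {
    "a": {"b": 0.9, "c": 0.8, "d": 0.1},
    "b": {"a": 0.9, "c": 0.7, "d": 0.2},
    "c": {"a": 0.8, "b": 0.7, "d": 0.3},
    "d": {"a": 0.1, "b": 0.2, "c": 0.3},
}
print(sorted(mutual_knn_pairs(w)))  # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```

Note how the mutual relation isolates the dense core {a, b, c}: d lists core nodes among its nearest neighbors, but no core node reciprocates.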

International Journal of Artificial Intelligence & Applications (IJAIA), 2020
One of the most challenging aspects of reasoning, planning, and acting in an agent domain is reasoning about what an agent knows about its environment when planning and acting. Various proposals have addressed this problem using modal, epistemic and other logics. In this paper we explore how to take advantage of the properties of Answer Set Programming for this purpose. Answer Set Programming's non-monotonicity allows us to express causality in an elegant fashion. We begin our discussion by showing how Answer Set Programming can be used to model the frogs problem. We then illustrate how this problem can be represented and solved using these concepts. In addition, our proposal allows us to solve the generalization of this problem, that is, for any number of frogs.

International Journal of Artificial Intelligence & Applications (IJAIA), 2020
This piece of research introduces a purely data-driven, directly reconfigurable, divide-and-conquer on-line monitoring (OLM) methodology for automatically selecting the minimum number of neutron detectors (NDs)-and corresponding neutron noise signals (NSs)-which are currently necessary, as well as sufficient, for inspecting the entire nuclear reactor (NR) in-core area. The proposed implementation builds upon the 3-tuple configuration, according to which three sufficiently pairwise-correlated NSs are capable of on-line (I) verifying each NS of the 3-tuple and (II) endorsing correct functioning of each corresponding ND, implemented herein via straightforward pairwise comparisons of fixed-length sliding time-windows (STWs) between the three NSs of the 3-tuple. A pressurized water NR (PWR) model-developed for H2020 CORTEX-is used for deriving the optimal ND/NS configuration, where (i) the evident partitioning of the 36 NDs/NSs into six clusters of six NDs/NSs each, and (ii) the high cross-correlations (CCs) within every 3-tuple of NSs, endorse the use of a constant pair comprising the two most highly CC-ed NSs per cluster as the first two members of the 3-tuple, with the third member being each remaining NS of the cluster, in turn, thereby computationally streamlining OLM without compromising the identification of either deviating NSs or malfunctioning NDs. Tests on the in-core dataset of the PWR model demonstrate the potential of the proposed methodology in terms of suitability for, efficiency at, as well as robustness in ND/NS selection, further establishing the "directly reconfigurable" property of the proposed approach at every point in time while using one-third only of the original NDs/NSs.
Detection and description of keypoints from an image is a well-studied problem in Computer Vision. Methods such as SIFT, SURF and ORB are computationally very efficient. This paper proposes a solution for a particular case study on object recognition of industrial parts based on hierarchical classification. Reducing the number of instances per classifier leads to better performance, which is precisely what hierarchical classification aims for. We demonstrate that this method performs better than using a single method such as ORB, SIFT or FREAK, despite being somewhat slower.
The International Journal of Artificial Intelligence & Applications (IJAIA) is a bimonthly open access peer-reviewed journal that publishes articles contributing new results in all areas of Artificial Intelligence and its applications. It is an international journal intended for professionals and researchers in all fields of AI, including programmers and software and hardware manufacturers. The journal also aims to publish new attempts in the form of special issues on emerging areas in Artificial Intelligence and applications.

A collaborative system for cataloging sea turtle activity that supports picture/video content demands automated solutions for data classification and analysis. This work assumes that the color characteristics of the carapace are sufficient to classify each species of sea turtle, unlike the traditional method that classifies sea turtles manually based on counting their shell scales and the shape of their head. In particular, the aim of this study is to compare two color-based feature extraction techniques, Color Histograms and Chromaticity Moments, combined with two classification methods, K-Nearest Neighbors (KNN) and Support Vector Machine (SVM), identifying which combination of techniques has a higher effectiveness rate for classifying the five species of sea turtles found along the Brazilian coast. The results showed that the combination of Chromaticity Moments with the KNN classifier presented quantitatively better results for most species of turtles, with a global accuracy value of 0.74 and an accuracy of 100% for the Leatherback sea turtle, while the Color Histograms descriptor proved to be less precise, independent of the classifier. This work demonstrates that it is possible to use a statistical approach to assist the job of a specialist when identifying species of sea turtle.
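The pipelines compared in this study pair a color descriptor with a classifier; the sketch below shows the same shape of pipeline using a coarse RGB color histogram descriptor and a majority-vote KNN, on invented pixel data. (The paper's best descriptor was Chromaticity Moments, which are not reproduced here.)

```python
from collections import Counter

def color_histogram(pixels, bins=4):
    """Coarse normalized RGB histogram: quantize each channel of each
    (r, g, b) pixel into `bins` ranges and count bin occupancy."""
    hist = Counter()
    for r, g, b in pixels:
        hist[(r * bins // 256, g * bins // 256, b * bins // 256)] += 1
    n = len(pixels)
    return {k: v / n for k, v in hist.items()}

def dist(h1, h2):
    """Squared Euclidean distance between two sparse histograms."""
    keys = set(h1) | set(h2)
    return sum((h1.get(k, 0) - h2.get(k, 0)) ** 2 for k in keys)

def knn_predict(sample, labeled, k=3):
    """Majority vote among the k nearest labeled histograms."""
    nearest = sorted(labeled, key=lambda item: dist(sample, item[0]))[:k]
    return Counter(lbl for _, lbl in nearest).most_common(1)[0][0]

# Invented carapace patches: dark pixels vs. greenish pixels.
dark1 = [(20, 20, 30)] * 8
dark2 = [(30, 25, 35)] * 8
light1 = [(80, 150, 90)] * 8
light2 = [(90, 160, 100)] * 8
labeled = [(color_histogram(p), s) for p, s in
           [(dark1, "leatherback"), (dark2, "leatherback"),
            (light1, "green"), (light2, "green")]]
sample = color_histogram([(25, 22, 32)] * 8)
print(knn_predict(sample, labeled, k=3))  # leatherback
```

Swapping the descriptor function while keeping the KNN step is exactly the kind of comparison the study performs across its four descriptor/classifier combinations.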
Objects or structures that are regular take uniform dimensions. Based on the concepts of regular models, our previous research developed a regular ontology that models learning structures in a multiagent system for uniform pre-assessments in a learning environment. This regular ontology led to the modelling of a classified rules learning algorithm that predicts the actual number of rules needed for inductive learning processes and decision making in a multiagent system. But not all processes or models are regular. This paper therefore presents a system of polynomial equations that can estimate and predict the required number of rules of a non-regular ontology model, given some defined parameters.