Talks by Dr. Ajeet Kumar Pandey
Papers by Dr. Ajeet Kumar Pandey

2018 International Conference on Computer Communication and Informatics (ICCCI), 2018
This paper discusses a new topic-model-based approach for opinion mining and sentiment analysis of text reviews posted in web forums or on social media sites, which are mostly unstructured in nature. In recent years, opinions about products, people, events, and other topics of interest have been exchanged in the cloud, and these opinions support decision making when choosing a product or gathering feedback on a topic. Opinion mining and sentiment analysis are related in the sense that opinion mining deals with analyzing and summarizing expressed opinions, whereas sentiment analysis classifies opinionated text as positive or negative. Aspect extraction is a crucial problem in sentiment analysis. The model proposed in the paper uses a topic model for aspect extraction and a support vector machine for sentiment classification of textual reviews. The goal is to automate the process of mining attitudes, opinions, and hidden emotions from text.
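The abstract does not give implementation details, so the following is only a minimal sketch of the general pipeline it describes: an LDA topic model for aspect discovery followed by an SVM sentiment classifier. The tiny corpus, labels, and parameter values are hypothetical.

```python
# Minimal sketch: LDA for aspect/topic discovery + SVM for sentiment classification.
# Corpus, labels, and hyperparameters below are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC

reviews = [
    "The battery life of this phone is excellent",
    "Terrible camera quality, photos look washed out",
    "Great screen and fast processor",
    "The battery drains far too quickly",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative (toy labels)

# 1) Aspect discovery with a topic model (LDA over bag-of-words counts).
counts = CountVectorizer(stop_words="english")
X_counts = counts.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X_counts)
terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-3:]]
    print(f"aspect/topic {k}: {top_terms}")

# 2) Sentiment classification with an SVM over TF-IDF features.
tfidf = TfidfVectorizer(stop_words="english")
X_tfidf = tfidf.fit_transform(reviews)
svm = LinearSVC().fit(X_tfidf, labels)
print(svm.predict(tfidf.transform(["battery life is great"])))
```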
The maximum clique problem (MCP) is to determine a clique of maximum cardinality in a given graph, where a clique is a subgraph in which all pairs of vertices are mutually adjacent. Building on existing surveys, the main goal of this paper is to provide a simplified and comprehensive review of the maximum clique problem, intended to encourage and motivate new researchers in this area. Although capturing the complete literature is beyond the scope of the paper, an attempt is made to cover the most representative papers from similar approaches.
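As a small illustration of the problem statement (not of any particular algorithm from the surveyed literature), the sketch below builds a toy graph and finds a maximum clique by enumerating maximal cliques, which is exponential in the worst case and reflects the NP-hardness of MCP.

```python
# Toy maximum clique example: enumerate maximal cliques and keep the largest.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    (1, 2), (1, 3), (2, 3),                  # triangle {1, 2, 3}
    (2, 4), (2, 5), (3, 4), (3, 5), (4, 5),  # {2, 3, 4, 5} is fully connected
])

# find_cliques yields all maximal cliques; the longest one is a maximum clique.
max_clique = max(nx.find_cliques(G), key=len)
print(max_clique)  # [2, 3, 4, 5] for this toy graph
```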

International Journal of System Assurance Engineering and Management, 2021
A huge amount of unstructured data is posted to the cloud from various sources in the form of feedback and reviews. These reviews need to be classified for many reasons, and sentiment classification is one of them. Sentiment classification of these reviews is quite difficult because they arrive from many sources, so a robust classifier is needed to deal with different data distributions. Traditional supervised machine learning approaches do not work well here because they require retraining when the domain changes. Deep learning techniques can handle these situations, but they are data hungry and computationally expensive. Transfer learning is used in cross-domain sentiment classification to carry features from one domain to another without additional training; moreover, it allows the domains, tasks, and distributions used in training and testing to differ. A transfer learning mechanism is therefore required to transfer sentiment features across domains. This paper presents a transfer learning approach using the pretrained language model ELMo, which helps transfer sentiment features across domains. The model has been tested on text reviews from a Twitter data set and compared with deep learning methods with and without pretraining, and it delivers promising results. The model offers the flexibility to plug parameters into target models, easing domain adaptation and the transfer of sentiment features. It also enables sentiment classifiers to reuse features from an already trained domain, saving time and training cost.
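The paper builds on frozen ELMo embeddings; the sketch below shows only the cross-domain protocol it describes (train a classifier on a source domain, apply it to a target domain without retraining), with a TF-IDF vectorizer standing in for the pretrained encoder so the example stays self-contained. All texts and labels are hypothetical.

```python
# Cross-domain sentiment sketch: fit on a source domain, evaluate on a target
# domain with no retraining. A TF-IDF vectorizer stands in for the frozen
# pretrained encoder (the paper uses ELMo embeddings instead).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Toy source domain (e.g., product reviews) and target domain (e.g., tweets).
source_texts = ["great phone, love it", "awful battery, very poor",
                "excellent value for money", "poor build quality"]
source_labels = [1, 0, 1, 0]
target_texts = ["love this movie", "very poor acting"]
target_labels = [1, 0]

# "Pretrained" shared feature space, frozen after being fitted once.
encoder = TfidfVectorizer().fit(source_texts + target_texts)

clf = LogisticRegression().fit(encoder.transform(source_texts), source_labels)

# Transferred features are reused directly on the target domain.
pred = clf.predict(encoder.transform(target_texts))
print("target-domain accuracy:", accuracy_score(target_labels, pred))
```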

System-level RAMS requirements are usually not sufficient to scale the design effort. The requirements process for a large and complex system typically involves allocating reliability requirements from the system level down to the subsystem and equipment levels. Apportionment of RAMS targets is one of the first phases of the system life cycle and is fundamental to translating the overall system RAMS requirements into RAMS requirements for each subsystem. This work focuses on RAMS management and safety assurance for a metro rail system. Metro rail is one of the most preferred modes of public transport in today's metropolises, and its two key challenges are safety and service availability. Availability and safety are central across the spectrum of RAMS objectives: availability is tightly coupled with reliability, where the key requirement is to make the system failure-free, whereas safety requirements are concerned with making it mishap-free. Both safety and availability...
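The abstract does not spell out a specific allocation formula; as a generic illustration of top-down apportionment, the sketch below splits a system availability target equally across subsystems assumed to be in series. The target value and subsystem names are hypothetical.

```python
# Illustrative top-down apportionment of an availability target to subsystems
# assumed to be in series (A_sys = product of subsystem availabilities A_i).
system_availability_target = 0.999

# Equal apportionment across n series subsystems: A_i = A_sys ** (1 / n).
subsystems = ["rolling stock", "signalling", "power supply", "track"]
n = len(subsystems)
a_i = system_availability_target ** (1 / n)
for name in subsystems:
    print(f"{name}: availability target >= {a_i:.5f}")

# Check: the product of subsystem targets recovers the system target.
print("system:", a_i ** n)
```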

2015 International Conference on Industrial Instrumentation and Control (ICIC), 2015
Due to rapid urbanization, rising travel needs, and improved accessibility, metro rail has become one of the preferred modes of transport for commuters. A safe and reliable metro system meeting specified reliability, availability, maintainability, and safety (RAMS) requirements is of prime concern. RAMS requirements can be combined to derive service punctuality, which can be seen as the core key performance indicator (KPI) of a Mass Rapid Transit System (MRTS). Service punctuality is tightly coupled with the RAM requirements and therefore must be specified carefully. Once the KPI has been specified, the next step is to decide the RAM requirements of the various subsystems so as to meet or exceed the specified KPI target; in other words, the punctuality target is translated into reliability, availability, and maintainability requirements at the system and subsystem levels. This paper presents a top-down RAM apportionment model for generic MRTS systems. The proposed model supports the system-level KPI by translating and apportioning the system-level RAM requirements into requirements for each subsystem, equipment, or component. As a case study, the proposed model has been applied to one of the key metro rail projects in India and its various subsystems. Methods for deriving mean time between failures (MTBF), mean time to repair (MTTR), and percentage availability of all systems are also discussed.
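The MTBF, MTTR, and availability figures the paper derives are linked by the standard steady-state relation A = MTBF / (MTBF + MTTR); the small sketch below applies it with purely illustrative figures, not the case-study values.

```python
# Steady-state (inherent) availability from MTBF and MTTR:
# A = MTBF / (MTBF + MTTR). Figures below are illustrative only.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Inherent availability of a repairable item."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

for name, mtbf, mttr in [("signalling", 5000.0, 2.0),
                         ("rolling stock", 2000.0, 4.0)]:
    print(f"{name}: A = {availability(mtbf, mttr):.5%}")
```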

International Journal of System Assurance Engineering and Management, 2011
Regression testing is one of the important software maintenance activities that lets the software tester ensure the quality and reliability of a modified program. Although regression testing is expensive, it is necessary to validate the software after every modification. To reduce its cost, the software tester may use test case prioritization techniques. One potential goal of prioritization is to increase a test suite's rate of fault detection; an improved rate of fault detection provides earlier feedback on the system and enables earlier debugging. The APFD (Average Percentage of Faults Detected) metric is used to measure the test suite's fault detection rate. This paper presents an integrated test case prioritization approach to increase the test suite's fault detection rate. Three important factors, program change level (PCL), test suite change level (TCL), and test suite size (TS), are considered when prioritizing test cases. The proposed approach is applied to different in-house programs to validate its accuracy. Model results are found to be promising when compared with optimal prioritization, which always yields an upper bound on APFD values.
Keywords: Regression testing · Regression test metrics (RTMs) · Fuzzy inference system (FIS) · Test cases · Software fault · Test case prioritization · APFD metric
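For reference, the standard APFD definition (not the paper's prioritization factors themselves) is APFD = 1 − (TF1 + … + TFm)/(n·m) + 1/(2n), where TFi is the position in the ordered suite of the first test that reveals fault i, n is the number of tests, and m the number of faults. The sketch below computes it for a hypothetical fault-detection matrix.

```python
# APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2 * n)
# where TFi is the 1-based position of the first test detecting fault i.
def apfd(order, detects):
    """order: test ids in execution order; detects: test id -> set of fault ids."""
    faults = set().union(*detects.values())
    n, m = len(order), len(faults)
    tf_sum = 0
    for fault in faults:
        # Position of the first test in the ordering that reveals this fault.
        tf_sum += next(i for i, t in enumerate(order, 1) if fault in detects[t])
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

# Hypothetical fault-detection matrix.
detects = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": {"f1", "f3"}, "t4": set()}
print(apfd(["t2", "t3", "t1", "t4"], detects))  # prioritized order, ~0.79
print(apfd(["t4", "t1", "t2", "t3"], detects))  # a worse order, ~0.46
```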

2012 IEEE 23rd International Symposium on Software Reliability Engineering Workshops, 2012
Automotive electronic control units (ECUs) are usually the most complicated and powerful embedded systems in an automobile. Verification and validation of these ECUs pose challenges due to the exponential growth in the size and complexity of ECU software. This paper provides a new functional complexity metric that can be derived from the specification and can be used to estimate the total number of test cases required. Further, ECUs need to be verified and validated in accordance with their operational profile. Testing with an operational profile is cost effective and gives better reliability confidence; at the same time, operational-profile-based testing can speed up the V&V process by allocating resources in relation to use and criticality. In this paper, the operational profile method is applied, as a case study, to a front and rear fog light ECU. Methods for deriving function points and functional complexity, computing the number of test cases, developing operational profiles, and allocating test cases are presented. Additional benefits are also shown by integrating the functional complexity metric with the operational profile.
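The abstract does not give the metric or the case-study numbers; the sketch below only illustrates the general idea of operational-profile-based allocation, splitting a test budget across operations in proportion to occurrence probability weighted by criticality. The operations, probabilities, weights, and budget are hypothetical.

```python
# Illustrative operational-profile-based test allocation: test budget split in
# proportion to (occurrence probability x criticality weight). Values are
# hypothetical, not the fog light ECU case-study data.
profile = {
    # operation: (occurrence probability, criticality weight)
    "switch front fog light on/off": (0.45, 1.0),
    "switch rear fog light on/off":  (0.30, 1.0),
    "auto-off on ignition off":      (0.15, 2.0),
    "fault indication to cluster":   (0.10, 3.0),
}
total_tests = 200

weights = {op: p * c for op, (p, c) in profile.items()}
norm = sum(weights.values())
for op, w in weights.items():
    print(f"{op}: {round(total_tests * w / norm)} test cases")
```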
Communications in Dependability and …, 2010
This paper discusses a fault prediction model with a specific focus on process-level software metrics during software development. The model helps the software development team optimally allocate resources and achieve more reliable software within time and cost constraints. The ...
csjournals.com
Knowing the faults early during software development helps the software manager optimally allocate resources and achieve more reliable software within time and cost constraints. A model is proposed in this paper to predict the total number of faults before testing using a ...

Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2013
Transportation industries are growing not only in volume but in technology as well. To keep pace with changing business paradigms, automotive manufacturers need to use the latest information technology and tools to make transportation systems economically viable, safe, and reliable. Safety is the most important concern for today's railway systems: various subsystems of a modern rail system are safety critical and, if a failure occurs, could result in loss of life, significant property damage, or damage to the environment. This paper presents a systematic approach to countering risk in such systems by analyzing failure modes and their effects. The automatic door operation subsystem, one of the major safety-critical systems in a metro train, is discussed along with a case study analyzing its various failure modes and their effects. The analysis process, as well as the significance of the different metrics, is also elaborated.
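As a sketch of the kind of ranking such a failure mode and effects analysis produces, the snippet below computes the standard risk priority number, RPN = severity × occurrence × detection, for a few hypothetical door-subsystem failure modes; the ratings are illustrative, not the paper's case-study values.

```python
# FMEA risk priority number: RPN = severity * occurrence * detection
# (1-10 scales). Failure modes and ratings below are hypothetical.
failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("door fails to close on departure", 9, 3, 4),
    ("door opens while train is moving", 10, 2, 3),
    ("obstacle detection sensor fails",  8, 4, 5),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for mode, s, o, d in ranked:
    print(f"RPN={s * o * d:3d}  {mode}")
```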
Studies in Fuzziness and Soft Computing, 2013
Software systems require various changes throughout their lifetime, driven by faults, changes in user requirements, changes in the environment, and so forth. It is very important to ensure that these changes are incorporated properly without any adverse effect on the quality and reliability of the software. In general, changing the software to correct faults or add new functionality can cause existing functionality to break, introducing new faults. Therefore, software is regularly retested after subsequent changes, in the form of regression testing. This chapter presents a reliability-centric test case prioritization approach to detect software faults early during regression testing.
Studies in Fuzziness and Soft Computing, 2013
Software testing (both development and regression) is costly and therefore must be conducted in a planned and systematic way to optimize the overall testing objectives. One of the key objectives is to assure the reliability of the software system, which requires extensive testing. To plan and guide software testing, the first requirement is to determine the number of tests (the test size) and thereafter to allocate the tests to the various functions/modules. This chapter presents a model-based approach to plan, guide, and allocate test cases to achieve a more reliable system.
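The chapter's model is not described in this abstract; as a generic illustration of the two steps it names (deriving a test size, then allocating it to modules), the sketch below sizes the suite from per-module complexity scores and allocates tests proportionally. Module names, scores, and the tests-per-complexity-unit factor are hypothetical.

```python
# Generic illustration: size a test suite from module complexity, then
# allocate test cases proportionally to each module's complexity.
modules = {"login": 8, "billing": 20, "reporting": 12, "admin": 5}  # complexity scores
tests_per_unit = 3  # assumed test cases per complexity unit

total_complexity = sum(modules.values())
total_tests = total_complexity * tests_per_unit
for name, cx in modules.items():
    share = round(total_tests * cx / total_complexity)
    print(f"{name}: {share} of {total_tests} test cases")
```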
Studies in Fuzziness and Soft Computing, 2013
Development of reliable software is challenging because system engineers have to deal with a large number of conflicting requirements such as cost, time, reliability, safety, maintainability, and many more. These days, most software development tasks are performed in a labor-intensive way, which may introduce various faults during development and cause failures later. The impact of these failures ranges from marginal to catastrophic. Therefore, there is a growing need to ensure the reliability of these software systems as early as possible. A model for early prediction of software faults is presented in this chapter.

Studies in Fuzziness and Soft Computing, 2013
Size, complexity, and human dependency on software-based products have grown dramatically during the past decades. Software developers are struggling to deliver reliable software with an acceptable level of quality within a given budget and schedule. One measure of software quality and reliability is the number of residual faults. Therefore, researchers are focusing on identifying the number of faults present in the software, or on identifying the program modules that are most likely to contain faults. Many models have been developed using various techniques; a common approach is software reliability prediction using failure data. Software reliability and quality prediction is highly desired by stakeholders, developers, managers, and end users. Detecting software faults early during development improves reliability and quality in a cost-effective way.
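One widely used model of the failure-data-based kind mentioned here (not necessarily the one used in the chapter) is the Goel-Okumoto NHPP model, in which the expected cumulative number of faults detected by time t is m(t) = a(1 − e^(−bt)). The sketch below evaluates it with illustrative parameters; in practice a and b are estimated from observed failure data.

```python
# Illustrative reliability growth prediction with the Goel-Okumoto NHPP model:
# expected cumulative faults m(t) = a * (1 - exp(-b * t)).
# a = total expected faults, b = detection rate; values here are hypothetical.
import math

a, b = 120.0, 0.05

def expected_faults(t_hours: float) -> float:
    return a * (1 - math.exp(-b * t_hours))

for t in (10, 50, 100):
    m = expected_faults(t)
    print(f"t={t:>3} h  detected ~ {m:6.1f}  residual ~ {a - m:6.1f}")
```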