Papers by Mehmet Sahinoglu
International Journal of Computer Theory and Engineering

International Journal of Computer Theory and Engineering, 2022
A critical step in hypothesis testing at the computer theory and/or engineering decision-making stage is to optimally compute and use type-I (α) and type-II (β) error probabilities. The article's first research objective is to optimize the α and β errors, also known as producer's and consumer's risks, or the risks of false positives (FP) and false negatives (FN), by employing the merits of a game-theoretical framework. To achieve this goal, the cross-products of errors and non-errors model is proposed. The second objective is to apply the proposed model to an industrial manufacturing quality control mechanism, i.e., sequential sampling plans (SSP). The article proposes an alternative technique to prematurely selecting the conventionally pre-specified type-I and type-II error probabilities. It studies the minimax rule for mixed-strategy, two-player, zero-sum games derived by von Neumann and executed by Dantzig's linear programming (LP) algorithm. Further, a one-equation, one-unknown scenario yielding simple algebraic roots validates the computationally intensive LP optimal solutions. The cost and utility constants are elicited through company-specific input data management. The contrasts between conventional and proposed results are favorably illustrated by tables, figures, individual and comparative plots, and Venn diagrams in order to modify and improve the traditionally executed SSP's final decisions. Index Terms: cross-products of errors, minimax rule, accept-reject-continue-terminate, cost and utility.
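To make the game-theoretic step concrete, the sketch below, which is not the article's exact formulation, computes a row player's mixed minimax strategy for a small two-player, zero-sum game by linear programming, in the von Neumann/Dantzig spirit the abstract describes; the payoff matrix is hypothetical, standing in for the company-specific cost and utility constants.

```python
# Minimal sketch: row player's mixed minimax strategy of a zero-sum game via LP.
# The payoff matrix is hypothetical, not taken from the article.
import numpy as np
from scipy.optimize import linprog

payoff = np.array([              # hypothetical payoff matrix A[i, j]:
    [2.0, -1.0],                 # row player's gain when row i meets column j
    [-1.0, 1.0],
])
m, n = payoff.shape

# Variables: x_1..x_m (row mixed strategy) and v (game value).
# Maximize v  <=>  minimize -v, subject to (A^T x)_j >= v for every column j,
# sum(x) = 1, x >= 0, v unrestricted in sign.
c = np.concatenate([np.zeros(m), [-1.0]])
A_ub = np.hstack([-payoff.T, np.ones((n, 1))])   # v - (A^T x)_j <= 0
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[m]
print("optimal mixed strategy:", x, "game value:", v)
```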
9. Metrics for Software Reliability Failure-Count Models in Cyber-Risk
Stopping Rules for Reliability and Security Tests in Cyber‐Risk

Proceedings 4th IEEE International Symposium on High-Assurance Systems Engineering
Testing behavioral models before they are released to the synthesis and logic design phase is a tedious process, to say the least. A common practice is the test-it-to-death approach, in which millions or even billions of vectors are applied and the results are checked for possible bugs. The vectors applied to behavioral models include functional vectors, but a significant portion of the vectors is random in nature, including random combinations of instructions. In this paper, we present and evaluate a stopping rule that can be used to determine when to stop the current testing phase using a given testing technique and move on to the next phase using a different testing technique. We demonstrate the use of the stopping rule on two complex VHDL models that were tested for branch coverage with four different testing phases. We compare the savings and quality of testing both with and without using the stopping rule.
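The abstract does not spell out the stopping rule itself, so the following is only a generic, saturation-style stand-in: stop the current testing phase once recent batches of vectors add negligible new branch coverage. The window size, threshold, and coverage history are hypothetical.

```python
# A generic saturation-style stopping criterion, NOT the paper's actual rule:
# stop the current testing phase when the last `window` batches of test vectors
# have added less than `min_gain` new branch coverage, then move on to the next
# testing technique.
def should_stop(coverage_history, window=5, min_gain=0.001):
    """coverage_history: cumulative branch coverage (0..1) after each batch."""
    if len(coverage_history) <= window:
        return False
    recent_gain = coverage_history[-1] - coverage_history[-1 - window]
    return recent_gain < min_gain

# Hypothetical usage: coverage climbs quickly, then saturates.
history = [0.40, 0.55, 0.63, 0.68, 0.70, 0.705, 0.7052, 0.7053, 0.7053, 0.7053, 0.7053]
print(should_stop(history))   # True once the coverage curve has flattened
```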
Lecture Notes in Computer Science, 2004
A Verilog HDL-based fault simulator for testing embedded cores-based synchronous sequential circuits is proposed in the paper to detect single stuck-line faults. The simulator emulates a typical BIST (built-in self-test) environment with a test-pattern generator that sends its outputs to a CUT (circuit under test), and the output streams from the CUT are fed into a response data analyzer. The fault simulator is suitable for testing sequential circuits described in Verilog HDL. The paper describes in detail the architecture and applications of the fault simulator along with the models of sequential elements used. Results of simulation experiments on ISCAS 89 full-scan sequential benchmark circuits are also provided.
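As a rough software analogue of the BIST loop the paper emulates in Verilog HDL (test-pattern generator feeding a CUT whose responses go to a response analyzer), here is a hedged Python sketch; the LFSR taps, the toy combinational CUT, and the signature register are all hypothetical and are not taken from the paper.

```python
# Generic software emulation of a BIST loop: pattern generator -> CUT -> analyzer.
def lfsr_patterns(seed=0b1001, taps=(3, 2), width=4, count=15):
    """Yield pseudo-random test patterns from a simple Fibonacci LFSR."""
    state = seed
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

def toy_cut(pattern):
    """Stand-in circuit under test: any combinational function of the pattern."""
    return (pattern ^ (pattern >> 1)) & 0xF

def signature(responses, width=4, taps=(3, 0)):
    """Compress the response stream into a signature (serial LFSR-style register)."""
    sig = 0
    for r in responses:
        sig ^= r
        feedback = 0
        for t in taps:
            feedback ^= (sig >> t) & 1
        sig = ((sig << 1) | feedback) & ((1 << width) - 1)
    return sig

golden = signature(toy_cut(p) for p in lfsr_patterns())
print(f"golden signature: {golden:04b}")   # compare against a faulty CUT's signature
```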
Reliability Index Evaluations of Integrated Software Systems (Internet) for Insufficient Software Failure and Recovery Data
Lecture Notes in Computer Science, 2000
High-Assurance Systems, 2004
This paper proposes testability enhancements in architectural design for embedded cores-based system-on-a-chip (SoC). There exist methods to ensure correct SoC functionality in both hardware and software, but one of the most reliable ways to realize this is through the use of design-for-testability approaches. Specifically, applications of built-in self-test (BIST) methodology for testing embedded cores are considered in the ...

Hospital Healthcare Service Risk Assessment and Management with Risk-O-Meter’s Software Metrics for a Field Application
Journal of Integrated Design and Process Science
This applied research paper implements a practical methodology for assessing and improving patient-centered quality of care, in light of the nationwide healthcare quality mandate, and for disseminating and utilizing the results for the “most bang for the buck”. Patient-centered quality-of-care risk assessment and management are inseparable aspects of healthcare in a hospital, yet both are frequently overlooked. In the State of Alabama, a 2004 study by the Kaiser Family Foundation found substantial dissatisfaction with the quality of healthcare, as have other related national reports and managing insurance companies. The primary author’s automated software, Risk-O-Meter (RoM), supported by a simulation analysis to verify the analytical outcomes, will provide a patient-centered metric of hospital healthcare risk, along with risk mitigation advice for vulnerabilities and threats associated with automated management of healthcare quality in a hospital or clinic. The RoM will be demonstrated to assess ...
Achieving the quality of verification for behavioral models with minimum effort
Proceedings IEEE 2000 First International Symposium on Quality Electronic Design (Cat. No. PR00525)
When designing a system at the behavioral level, one of the most important steps to be taken is verifying its functionality before it is released to the logic/PD design phase. Behavioral models may be considered as oracles in industry to test against when the final chip is produced. In this work, we use branch coverage as a measure of the quality of verifying/testing behavioral models. The minimum effort for achieving a given quality level can be realized by using the proposed stopping rule. The stopping rule guides the process to ...
1999 IEEE Aerospace Conference. Proceedings (Cat. No.99TH8403), 1999
When testing software, testers rarely use only one technique to generate tests that, they hope, will fulfill their testing criteria. Malaiya showed that testers switch strategies when testing yield saturates. We present and evaluate a stopping rule that can be used to determine when it is time to switch to a different testing technique, because the current one is not likely to increase criteria fulfillment. We demonstrate use of the stopping rule on a program that is being tested for branch coverage with five different testing techniques. We compare the savings and accuracy of stopping both with and without using the stopping rule.

Procedia - Social and Behavioral Sciences, 2012
With the advent and unprecedented popularity of the now ubiquitous social networking sites such as Google Friend, Facebook, MySpace, and Twitter in the personal sphere, and others such as LinkedIn in business circles, undesirable security and privacy risk issues have come to the forefront as a result of this extraordinarily rapid growth. The most salient issues mainly concern a lack of trustworthiness, namely security and privacy. We address these issues by employing a quantitative approach to assess security and privacy risks for social networks already under pressure from users and policymakers for breaches in both quality and sustainability, and we also demonstrate, using a cost-optimal game-theoretical solution, how to manage and monitor risk. The applicability of this research to diverse fields, from security and privacy to health care, as well as to the currently popular social networks, is an additional asset. A number of real people (not simulated) were interviewed, and the results are discussed. Ramifications of this quantitative risk assessment of privacy and security breaches in social networks are summarized.
Monte Carlo Simulation on Software Mutation Test-Case Adequacy
Computational Statistics, 1992

WIREs Computational Statistics, 2010
This article, beyond presenting a spectrum of network reliability methods studied in the past decades, describes a scalable, innovative ‘overlap technique’ to tackle the reliability evaluation difficulties of large complex networks, which cannot be handled by the straightforward reliability block diagramming (RBD) techniques used for simple parallel-series topologies. Examples show how to apply the overlap algorithm to compute the ingress-egress reliability. Monte Carlo simulations demonstrate the methods discussed. Three versions are illustrated for calculating the directional source-target (s-t) reliability of complex networks using the Java software ERBDC (Exact Reliability Block...): (1) static (time-independent); (2) dynamic (time-dependent), using a versatile Weibull distribution to represent the stages of network components from infancy through the useful-life period to wear-out; and (3) multistate, to include derated behavior beyond the conventional working and nonworking states.
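For the static (time-independent) case, a crude Monte Carlo estimate of directional s-t reliability can be sketched as follows; the network topology and link reliabilities are hypothetical, and the article's overlap technique itself is not reproduced.

```python
# Monte Carlo sketch of source-target (s-t) reliability for a small static network.
import random

# Directed links with their working probabilities (hypothetical bridge-like network).
links = {
    ("s", "a"): 0.9, ("s", "b"): 0.8,
    ("a", "b"): 0.7, ("a", "t"): 0.9,
    ("b", "t"): 0.85,
}

def connected(up_links, source="s", target="t"):
    """Depth-first search over the links that survived this trial."""
    stack, seen = [source], {source}
    while stack:
        node = stack.pop()
        if node == target:
            return True
        for (u, v) in up_links:
            if u == node and v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def st_reliability(trials=100_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        up = [edge for edge, p in links.items() if rng.random() < p]
        hits += connected(up)
    return hits / trials

print("estimated s-t reliability:", st_reliability())
```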
Quantitative risk assessment for dependent vulnerabilities
RAMS '06. Annual Reliability and Maintainability Symposium, 2006.
In real-life scenarios, the components of the big risk picture are interdependent rather than purely independent. Moreover, quantitative risk measurements are needed to objectively compare alternatives and to calculate monetary figures for budgeting to reduce or minimize the existing risk. A detailed treatment of the proposed security-meter, a quantitative risk assessment technique, has recently been studied and published when ...

WIREs Computational Statistics, 2012
Risk analysis, comprising risk assessment and risk management stages, is one of the most popular and challenging topics of our times because security and privacy, as well as availability and usability, culminating in the trustworthiness of cybersystems and cyber information, are at stake. The precautionary need derives from the existence of defenders versus adversaries in an everlasting Darwinian scenario dating back to the early human history of warriors fighting for their sustenance to survive. Fast-forwarding to today's information warfare, whether in networks, healthcare, or national security, the currently dire situation necessitates more than a hand calculator to optimize risk (maximize gains or minimize losses) given prevailing scarce economic resources. This article reviews the previous work completed on the specialized topic of game-theoretic computing, its methods and applications, toward the purpose of quantitative risk assessment and cost-optimal management in many diverse dis...

As more and more complex and sophisticated hardware and software tools become available, complex problems described by consistent mathematical models are successfully approached by numerical simulation: modelling and simulation are present at almost every level in education, research, and production. Numerical "experiments" have predictive value and complement physical experiments. They are unique in providing valuable insights in Gedankenexperiment-class (thought experiment) investigations. This chapter presents numerical simulation results related to a structural optimization problem that arises in systems with gradients and fluxes. Although the discussion concerns the optimal electrical design of photovoltaic systems, it may be extended to a larger class of applications in electrical and mechanical engineering: diffusion and conduction problems. The first concern in simulation is the proper formulation of the physical model of the system under investigation, which should lead to consistent mathematical models, or well-posed problems in the Hadamard sense (Morega, 1998). When available, analytic solutions, even for simplified mathematical models, may outline useful insights into the physics of the processes and may also help in deciding the numerical approach to the solution of more realistic models for the systems under investigation. Homemade and third-party simulation tools are equally useful as long as they are available and provide accurate solutions. Recent technological progress has brought to attention spherical photovoltaic cells (SPVC), known for their capability of capturing light three-dimensionally, not only from direct sunlight but also as diffuse light scattered by clouds or reflected by buildings. This chapter reports the structural optimization of several types of spherical photovoltaic cells (SPVC) by applying the constructal principle to the minimization of their electrical series resistance. A numerically assisted, step-by-step construction of optimal, minimum-series-resistance SPVC ensembles, from the smallest cell (called elemental) to the largest assembly, relying on the minimization of the maximum voltage drop subject to volume (material) constraints, is presented. In this completely deterministic approach, the shapes and structures of the SPVC ensembles are the outcome of the optimization of a volume-to-point access problem imposed as a design request. Specific to constructal theory, the optimal shape (geometry) and structure of both natural and engineered systems are morphed out of their functionality and resources, and of the constraints to which they are subject.
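As a toy illustration of the constructal idea of optimizing shape under a fixed material (volume) constraint, and not the chapter's SPVC model, the sketch below sweeps the aspect ratio of a rectangular elemental domain whose peak voltage drop is modeled by two hypothetical competing terms.

```python
# Toy, hypothetical shape optimization under a fixed-area (material) constraint:
# the peak voltage drop of an elemental domain is modeled as a transverse term
# (~H/L) plus a collection term (~L/H); the sweep finds the minimizing aspect ratio.
import numpy as np

A = 1.0               # fixed elemental area (volume constraint), arbitrary units
c1, c2 = 0.125, 0.5   # hypothetical coefficients of the two competing terms

aspect = np.linspace(0.05, 5.0, 1000)   # H / L
H = np.sqrt(A * aspect)
L = A / H
peak_drop = c1 * H / L + c2 * L / H     # toy two-term model of the maximum voltage drop

best = np.argmin(peak_drop)
print(f"optimal H/L ~ {aspect[best]:.3f}, minimum peak drop ~ {peak_drop[best]:.3f}")
```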

Software Testing, Verification and Reliability, 1997
The 'compound Poisson' (CP) software reliability model was proposed previously by the first-named author for time-between-failure data in terms of CPU seconds, using the 'maximum likelihood estimation' (MLE) method to estimate unknown parameters; hence, CPMLE. However, another parameter estimation technique is proposed under 'nonlinear regression analysis' (NLR) for the compound Poisson reliability model, giving rise to the name CPNLR. It is observed that the CP model, with different parameter estimation methods, produces equally satisfactory or more favourable results compared to the Musa-Okumoto (M-O) model, particularly in the event of grouped or clustered (clumped) software failure data. The sampling unit may be a week, day, or month within which the failures are clumped, as the error-recording facilities dictate in a software testing environment. The proposed CPNLR and CPMLE yield comparatively more favourable results for certain software failure data structures where the frequency distribution of the cluster (clump) size of the software failures per week displays a negative exponential behaviour. Average relative error (ARE), mean squared error (MSE) and average Kolmogorov-Smirnov (K-S Av. Dn) statistics are used as measures of forecast quality for the proposed and competing parameter-estimation techniques in predicting the number of remaining future failures expected to occur until a target stopping time. Comparisons on five different simulated data sets that contain weekly recorded software failures are made to emphasize the advantages and disadvantages of the competing methods by means of chronological prediction plots around the true target value and the zero per cent relative error line. The proposed generalized compound Poisson (MLE and NLR) methods consistently produce more favourable predictions for software failure data with a negative exponential frequency distribution of the failure clump size versus the number of weeks. Otherwise, the popular competing M-O log-Poisson model is a better fit for data with a uniform clump-size distribution, reflecting the log-Poisson effect in which the logarithm of the Poisson equation is constant, hence uniform. The software analyst is urged to perform exploratory data analysis to recognize the nature of the software failure data before favouring a particular reliability estimation method.
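A hedged sketch of the ingredients the abstract names: weekly failure counts simulated from a compound Poisson process (with geometric clump sizes as one plausible reading of the 'negative exponential' clump-size behaviour), and the ARE and MSE forecast-quality measures. All parameters and the illustrative predictions are hypothetical.

```python
# Simulate weekly failure counts from a compound Poisson process and compute the
# ARE and MSE forecast-quality measures named in the abstract.
import numpy as np

rng = np.random.default_rng(7)

def compound_poisson_weeks(n_weeks=20, clump_rate=3.0, clump_p=0.4):
    """Failures per week = sum of a Poisson number of geometric clump sizes."""
    weekly = []
    for _ in range(n_weeks):
        n_clumps = rng.poisson(clump_rate)
        weekly.append(int(rng.geometric(clump_p, size=n_clumps).sum()) if n_clumps else 0)
    return np.array(weekly)

def are(predictions, true_value):
    """Average relative error of remaining-failure predictions vs. the target."""
    predictions = np.asarray(predictions, dtype=float)
    return np.mean(np.abs(predictions - true_value) / true_value)

def mse(predictions, true_value):
    predictions = np.asarray(predictions, dtype=float)
    return np.mean((predictions - true_value) ** 2)

failures = compound_poisson_weeks()
true_remaining = 42                        # hypothetical target value
weekly_predictions = [55, 50, 47, 44, 43]  # hypothetical chronological predictions
print("simulated weekly failures:", failures)
print("ARE:", are(weekly_predictions, true_remaining),
      "MSE:", mse(weekly_predictions, true_remaining))
```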
IEEE Transactions on Instrumentation and Measurement, 2008
The need for information security is self-evident. The pervasiveness of this critical topic requires, primarily, risk assessment and management through quantitative means. To do an assessment, repeated security probes, surveys, and input data measurements must be taken and verified toward the goal of risk mitigation. One can evaluate risk using a probabilistically accurate statistical estimation scheme in a quantitative security meter (SM) model that mimics the events of a breach of security. An empirical study is presented and verified by discrete-event and Monte Carlo simulations. The design improves as more data are collected and updated. Practical aspects of the SM are presented with a real-world example and a risk-management scenario.
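A minimal, hypothetical sketch in the spirit of a security-meter calculation: residual risk accumulated multiplicatively over vulnerability, threat, and lack-of-countermeasure probabilities, scaled by criticality and capital cost, then cross-checked by a simple Monte Carlo simulation as the paper does at larger scale. The probability tree and constants are invented for illustration.

```python
import random

# Hypothetical security-meter style tree: (vulnerability prob, threat prob given
# that vulnerability, prob that the countermeasure fails). Vulnerability probs
# sum to 1 across the tree, and threat probs sum to 1 under each vulnerability.
branches = [
    (0.4, 0.6, 0.3),
    (0.4, 0.4, 0.5),
    (0.6, 0.7, 0.2),
    (0.6, 0.3, 0.4),
]
criticality = 0.8        # hypothetical criticality factor
capital_cost = 100_000   # hypothetical capital cost in dollars

# Analytical figure: residual risk = sum of vulnerability x threat x lack-of-countermeasure.
residual_risk = sum(v * t * lcm for v, t, lcm in branches)
expected_loss = residual_risk * criticality * capital_cost
print("analytical residual risk:", round(residual_risk, 4), "expected loss:", expected_loss)

# Monte Carlo cross-check: per trial, draw the three events on every branch and
# count how many branches breach; the average estimates the residual risk.
rng = random.Random(11)
trials, total = 200_000, 0
for _ in range(trials):
    total += sum(1 for v, t, lcm in branches
                 if rng.random() < v and rng.random() < t and rng.random() < lcm)
print("simulated residual risk:", total / trials)
```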

IEEE Transactions on Instrumentation and Measurement, 2007
This paper aims to develop an approach to test analog and mixed-signal embedded-core-based system-on-chips (SOCs) with built-in hardware. In particular, an oscillation-based built-in self-test (OBIST) methodology for testing analog components in mixed-signal circuits is implemented in this paper. The proposed OBIST structure is utilized for on-chip generation of oscillatory responses corresponding to the analog-circuit components. A major advantage of the OBIST method is that it does not require stimulus generators or complex response analyzers, which makes it suitable for testing analog circuits in mixed-signal SOC environments. Extensive simulation results on sample analog and mixed-signal benchmark circuits and other circuits described by netlists in HSPICE format are provided to demonstrate the feasibility, usefulness, and relevance of the proposed implementations. Index Terms: built-in self-test (BIST), circuit under test (CUT), design-for-testability (DFT), mixed-signal test, oscillation-based BIST (OBIST), system-on-chip (SOC), test-pattern generator (TPG). I. Introduction: Ever-increasing applications of analog and mixed-signal embedded-core-based system-on-chips (SOCs) [1] in recent years have motivated system designers and test engineers to shift their research direction to embrace this particular area of very large-scale integrated circuits and systems and to develop effective test strategies specifically for it. The modern technology of manufacturing high-volume products demands that substantial efforts be directed toward the design, test, and evaluation of prototypes before the start of ...
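The pass/fail core of an oscillation-based test can be sketched as a frequency-tolerance check, since a faulty component shifts or kills the oscillation; the golden frequency, tolerance band, and measured values below are hypothetical and not from the paper.

```python
# Toy OBIST-style verdict: compare the measured oscillation frequency of the
# reconfigured circuit under test against the fault-free (golden) frequency;
# a deviation outside the tolerance band, or no oscillation at all, flags a fault.
def obist_verdict(measured_hz, golden_hz=10_000.0, tolerance=0.05):
    """Return 'pass' if the oscillation frequency is within tolerance of golden."""
    if measured_hz is None or measured_hz <= 0:
        return "fail (no oscillation)"
    deviation = abs(measured_hz - golden_hz) / golden_hz
    return "pass" if deviation <= tolerance else f"fail (deviation {deviation:.1%})"

for f in (10_120.0, 11_500.0, None):     # hypothetical measured responses
    print(f, "->", obist_verdict(f))
```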