2011, 2011 IEEE 22nd International Symposium on Software Reliability Engineering
Stochastic models are often employed to study the dependability of critical systems and assess various hardware and software fault-tolerance techniques. These models take into account the randomness in the events of interest (aleatory uncertainty) and are generally solved at fixed parameter values. However, the parameter values themselves are determined from a finite number of observations and hence have uncertainty associated with them (epistemic uncertainty). This paper discusses methods for computing the uncertainty in output metrics of dependability models due to epistemic uncertainties in the model input parameters. Methods for epistemic uncertainty propagation through dependability models of varying complexity are presented with illustrative examples. The distribution, variance, and expectation of the model output due to epistemic uncertainty in the model input parameters are derived and analyzed to understand their limiting behavior.
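A minimal sketch of the propagation the abstract describes: a failure rate estimated from a finite number of observations carries epistemic uncertainty, which Monte Carlo sampling pushes through a simple steady-state availability model. All parameter values and the Gamma posterior setup below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Aleatory model: steady-state availability of a single repairable unit,
# A = mu / (lambda + mu), normally solved at fixed parameter values.
def availability(lam, mu):
    return mu / (lam + mu)

# Epistemic uncertainty: lambda is estimated from n observed failures.
# With exponential times to failure and a non-informative prior, the
# posterior of lambda is Gamma(n, 1/total_time) (assumed setup).
n_failures = 20
total_time = 2000.0          # cumulative observed operating hours (hypothetical)
mu = 0.5                     # repair rate, taken as known here

lam_samples = rng.gamma(shape=n_failures, scale=1.0 / total_time, size=100_000)
A_samples = availability(lam_samples, mu)

print("point estimate :", availability(n_failures / total_time, mu))
print("mean           :", A_samples.mean())
print("variance       :", A_samples.var())
print("90% interval   :", np.percentile(A_samples, [5, 95]))
```

The point estimate at the fixed value lambda-hat differs from the mean over the epistemic distribution; the gap and the interval width shrink as the number of observations grows, which is the limiting behavior the abstract refers to.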
2011
Systems in critical applications employ various hardware and software fault-tolerance techniques to ensure high dependability. Stochastic models are often used to analyze the dependability of these systems and assess the effectiveness of the fault-tolerance techniques employed. Measures of interest include the performance and performability of systems; examples are the security (probability of a successful attack) of a network routing session, the expected number of jobs in a queueing system with breakdown and repair of servers, and the call handoff probability of a cellular wireless communication cell.
Handbook of Research on Emerging Advancements and Technologies in Software Engineering
In recent years, reliability assessment has become an essential process in system quality assessment. However, software engineering practice for reliability analysis has not yet matured: existing works explicitly apply only a small portion of reliability analysis within a standard software development process. In addition, existing reliability assessments rest on assumptions provided by domain experts, and these assumptions are often exposed to errors. An effective reliability assessment should be based on reliability requirements that can be quantitatively estimated using metrics. Reliability requirements can be visualized using a reliability model. However, existing reliability models are not expressive enough and do not provide a consistent modeling mechanism that allows developers to estimate reliability parameter values. Consequently, reliability estimation using those parameters is usually oversimplified, and inconsistencies can arise between different estimation stages. In this chapter, a new Model-Based Reliability Estimation (MBRE) methodology is developed, consisting of a reliability model and a reliability estimation model. The methodology provides a systematic way to estimate system reliability, emphasizing the reliability model for producing the reliability parameters that will be used by the reliability estimation model. These models are built upon timing properties, which are the primary input for reliability assessment.
2001
In reliability analysis of computer systems, models such as fault trees, Markov chains, and stochastic Petri nets (SPNs) are built to evaluate or predict the reliability of the system. In general, the parameters in these models are obtained from field data, from data on systems with similar functionality, or even by guessing. In this paper, we address the parameter uncertainty problem. First, we review and classify three ways to describe parameter uncertainty in a model: reliability bounds, confidence intervals, and probability distributions. Second, utilizing a second-order approximation and the normal approximation, we propose an analytic method to derive the confidence interval of the system reliability from the confidence intervals of the parameters in the transient solution of Markov models. Then, we study the Monte Carlo simulation method for deriving the uncertainty in the system reliability, and use it to validate our proposed analytic method. Our effort makes reliability prediction more realistic compared with results obtained without uncertainty analysis.
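The two approaches the abstract compares can be sketched on the simplest transient Markov model, R(t) = exp(-lambda*t): a delta-method (first-order normal approximation) interval versus Monte Carlo propagation of the parameter uncertainty. The data values and the Gamma sampling model for lambda-hat below are hypothetical.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# System reliability at mission time t for a one-parameter Markov model:
# R(t) = exp(-lambda * t). lambda is estimated from k failures observed in
# total time T, so lambda_hat is approximately Gamma(k, 1/T) (assumed setup).
k, T, t = 25, 5000.0, 100.0
lam_hat = k / T
se_lam = math.sqrt(k) / T            # normal-approximation standard error

# Delta method: dR/dlambda = -t * R(t), so se_R = t * R(t) * se_lam
R_hat = math.exp(-lam_hat * t)
se_R = t * R_hat * se_lam
ci_delta = (R_hat - 1.96 * se_R, R_hat + 1.96 * se_R)

# Monte Carlo propagation of the same parameter uncertainty
lam_mc = rng.gamma(shape=k, scale=1.0 / T, size=200_000)
R_mc = np.exp(-lam_mc * t)
ci_mc = tuple(np.percentile(R_mc, [2.5, 97.5]))

print("delta-method CI:", ci_delta)
print("Monte Carlo CI :", ci_mc)
```

With a reasonable number of observations the two intervals agree closely, which is the kind of cross-validation between the analytic and simulation methods the paper reports.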
Safe Comp 95, 1995
For safety-critical systems, the required reliability (or safety) is often extremely high. Assessing the system, to gain confidence that the requirement has been achieved, is correspondingly hard, particularly when the system depends critically upon extensive software. In practice, such an assessment is often carried out rather informally, taking account of many different types of evidence: experience of previous, similar systems; evidence of the efficacy of the development process; testing; expert judgement; etc. Ideally, the assessment would allow all such evidence to be combined into a final numerical measure of reliability in a scientifically rigorous way. In this paper we address one part of this problem: we present a means whereby our confidence in a new product can be augmented beyond what we would believe merely from testing that product, by using evidence of the high dependability in operation of previous products. We present some illustrative numerical results that seem to suggest that such experience of previous products, even when these have shown very high dependability in operational use, can improve our confidence in a new product only modestly.
1979 International Workshop on Managing Requirements Knowledge (MARK), 1979
Boehm, Brown, and Lipow 1 have characterized the multidimensional nature of software quality in terms of a hierarchy of attributes. One of the high-level attributes is reliability, which they define qualitatively as the satisfactory performance of intended functions. This definition may be refined to the quantitative statement "probability of failure-free operation in a specified environment for a specified time." A "failure" is an unacceptable departure of program operation from program requirements, where, as in the case of hardware, "unacceptable" must ultimately be defined by the user. The term "fault" will be used to indicate the program defect that causes the failure.
Information Processing and Management of Uncertainty in Knowledge-Based Systems, 2016
In reliability analysis, comparing system reliability is an essential task when designing safe systems. When the failure probabilities of the system components (assumed to be independent) are precisely known, this task is relatively simple to achieve, as system reliabilities are precise numbers. When failure probabilities are ill-known (known to lie in an interval) and we want to have guaranteed comparisons (i.e., declare a system more reliable than another when it is for any possible probability value), there are different ways to compare system reliabilities. We explore the computational problems posed by such extensions, providing first insights about their pros and cons.
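One concrete way to do the guaranteed comparison the abstract describes: because series and parallel reliability are monotone increasing in each component reliability, interval bounds are attained at the interval endpoints, and one design dominates another for every admissible probability value iff its lower bound exceeds the other's upper bound. The interval values below are hypothetical; components are assumed independent.

```python
# Guaranteed comparison of two designs when component reliabilities are only
# known to lie in intervals (illustrative sketch, independent components).

def series_rel(rs):          # all components must work
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel_rel(rs):        # at least one component must work
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

def bounds(structure, intervals):
    # Monotone structure functions: lower/upper bound at interval endpoints.
    lo = structure([a for a, b in intervals])
    hi = structure([b for a, b in intervals])
    return lo, hi

A = [(0.90, 0.95), (0.90, 0.95)]       # two components in series
B = [(0.80, 0.85), (0.80, 0.85)]       # two components in parallel

loA, hiA = bounds(series_rel, A)
loB, hiB = bounds(parallel_rel, B)
print(f"design A (series)   in [{loA:.4f}, {hiA:.4f}]")
print(f"design B (parallel) in [{loB:.4f}, {hiB:.4f}]")

# Guaranteed dominance: B beats A for every admissible value iff loB > hiA.
print("B guaranteed more reliable than A:", loB > hiA)
```

When the intervals overlap, no guaranteed statement is possible, and the paper's more refined comparison criteria come into play.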
2006
There are many probabilistic and statistical approaches to modelling software reliability. Software reliability estimates are used for various purposes: during development, to make the release decision; and after the software has been taken into use, as part of system reliability estimation, as a basis of maintenance recommendations, and further improvement, or a basis of the recommendation to discontinue the use of the software. This report reviews proposed software reliability models, ways to evaluate them, and the role of software reliability estimation. Both frequentist and Bayesian approaches have been proposed. The advantage of Bayesian models is that various important but nonmeasurable factors, such as software complexity, architecture, quality of verification and validation activities, and test coverage are easily incorporated in the model. Despite their shortcomings – excessive data requirements for even modest reliability claims, difficulty of taking relevant nonmeasurable...
1996
A number of software reliability models have been proposed for assessing the reliability of a software system. In this paper, we discuss the time-domain and data-domain approaches to software reliability modeling, and classify the previously reported models into these two classes based on their underlying assumptions. The data-domain models are further classified into fault-seeding and input domain models, while the time-domain models are further classified into homogeneous Markov, non-homogeneous Markov and semi-Markov models. We present some representative models belonging to each of the classes, and then discuss the relative merits and limitations of the time and data-domain approaches.
1991
This report presents the results of the first phase of the ongoing EG&G Idaho, Inc., Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research.
Software Quality Assurance, 2016
Over the past two decades, the study of model-based dependability analysis has gathered significant research interest. Different approaches have been developed to automate and address various limitations of classical dependability techniques to contend with the increasing complexity and challenges of modern safety-critical systems. Two leading paradigms have emerged: one constructs predictive system failure models from component failure models compositionally, using the topology of the system; the other utilizes design models, typically state automata, to explore system behaviour through fault injection. This paper reviews a number of prominent techniques under these two paradigms, and provides insight into their working mechanisms, applicability, strengths and challenges, as well as recent developments within these fields. We also discuss the emerging trends on integrated approaches and advanced analysis capabilities. Lastly, we outline the future outlook for model-based dependability analysis.
Digest of Papers. Twenty-Eighth Annual International Symposium on Fault-Tolerant Computing (Cat. No.98CB36224)
We address problems in modelling the reliability of multiple-version software, and present models intended to improve the understanding of the various ways failure dependence between versions can arise. The previous models, by Eckhardt and Lee and by Littlewood and Miller, described what behaviour could be expected "on average" from a randomly chosen pair of "independently generated" versions. Instead, we address the problem of predicting the reliability of a specific pair of versions. The concept of "variation of difficulty" between situations to which software may be subject is central to the previous models cited. We show that it has even more far-reaching implications than previously found. In particular, we consider the practical implications of two phenomena: varying probabilities of failure over input sub-domains or operating regimes; and positive correlation between successive executions of control software. Our analysis provides some practical advice for regulators, and useful insight into non-intuitive aspects of the failure process of diverse software.
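The "variation of difficulty" idea in the Eckhardt-Lee and Littlewood-Miller models can be made concrete in a few lines: if theta(x) is the probability that a randomly developed version fails on demand x, then the average probability that two independently developed versions both fail is E[theta^2], which exceeds (E[theta])^2 whenever difficulty varies across the input space. The demand profile and theta values below are hypothetical.

```python
import numpy as np

# Four input sub-domains, one of which is much "harder" than the others.
theta = np.array([0.001, 0.001, 0.001, 0.10])   # per-sub-domain failure prob.
p = np.array([0.25, 0.25, 0.25, 0.25])          # demand profile (hypothetical)

p_single = np.sum(p * theta)             # failure prob. of one version
p_both_indep = p_single ** 2             # what naive independence would give
p_both_true = np.sum(p * theta ** 2)     # average for a random pair, E[theta^2]

print("single version        :", p_single)
print("independence would say:", p_both_indep)
print("actual average        :", p_both_true)
```

The actual average joint failure probability is far larger than the independence assumption suggests, which is why "independently generated" versions cannot be assumed to fail independently.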
Software reliability analysis is performed at various stages during the process of engineering software as an attempt to evaluate if the software reliability requirements have been (or might be) met. In this report, I present a summary of some fundamental black-box and white-box software reliability models. I also present some general shortcomings of these models and suggest avenues for further research.
Software Engineering, IEEE …, 1978
This paper examines the most widely used reliability models. The models discussed fall into two categories, the data domain and the time domain. Besides tracing the historical development of the various models, their advantages and disadvantages are analyzed. This includes models based on discrete as well as continuous probability distributions. How well a given model performs its purpose in a specific economic environment will determine the usefulness of the model. Each of the models is examined with actual data as to the applicability of the error finding process.
IEEE Transactions on Software Engineering, 1990
In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Worse, we are not even in a position to be able to decide a priori which of the many models is most suitable in a particular context. Our own recent work has tried to resolve this problem by developing techniques whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, which we call the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. In this paper we show how this can be used to improve reliability predictions in a very general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used. Indeed, although this work arose from the need to address the poor performance of software reliability models, it is likely to have applicability in other areas such as reliability growth modeling for hardware.
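The core of the u-plot can be sketched directly: if a model's one-step-ahead predictive CDF F_i for the i-th inter-failure time is well calibrated, then u_i = F_i(t_i) should look like a sample from Uniform(0,1), and systematic departure from uniformity is what recalibration corrects. The example below uses simulated data with a deliberately miscalibrated model (all rates hypothetical).

```python
import numpy as np

rng = np.random.default_rng(1)

# "True" inter-failure times are exponential with rate 2.0, but the model
# predicts rate 1.0 -- so its predictive CDFs are miscalibrated.
true_rate, model_rate = 2.0, 1.0
times = rng.exponential(1.0 / true_rate, size=500)
u = 1.0 - np.exp(-model_rate * times)        # model's predictive CDF at t_i

# Kolmogorov-Smirnov-style distance of the u's from Uniform(0,1):
# large values indicate the model's predictions need recalibration.
u_sorted = np.sort(u)
grid = np.arange(1, len(u) + 1) / len(u)
ks = np.max(np.abs(u_sorted - grid))
print("KS distance from uniformity:", ks)
```

For this rate mismatch the u's concentrate below 0.5 and the distance is about 0.25; a well-calibrated model would give a value near zero. Recalibration then maps future predictions through the empirical distribution of the u's.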
1994
We consider the dependability of programs of an iterative nature. The dependability of software structures is usually analysed using models that are strongly limited in their realism by the assumptions made to obtain mathematically tractable models and by the lack of experimental data. Among the assumptions made, the independence between the outcomes of successive executions, which is often false, may lead to significant deviations of the result obtained from the real behaviour of the program under analysis. Experiments and theoretical justifications show the existence of contiguous failure regions in the program input space and that, for many applications, the inputs often follow a trajectory of contiguous points in the input space. In this work we present a model in which dependencies among input values of successive iterations are taken into account in studying the dependability of iterative software. We consider also the possibility that repeated, non-fatal failures may together cause mission failure. We evaluate the effects of these different hypotheses on 1) the probability of completing a fixed-duration mission, and 2) a performability measure.
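A minimal version of the dependence effect the abstract studies: model each iteration's outcome as a two-state Markov chain, so a failure raises the probability of failing again (clustered failures from contiguous failure regions), and declare the mission failed once the accumulated non-fatal failures exceed a threshold. All transition probabilities below are hypothetical.

```python
import numpy as np

# Two-state Markov chain over {success=0, failure=1} per iteration.
p_fail_after_ok = 0.01     # P(fail | previous iteration succeeded)
p_fail_after_fail = 0.40   # P(fail | previous iteration failed): clustering

def mission_success_prob(n_iter, max_failures):
    # P(total failures <= max_failures over n_iter iterations), by dynamic
    # programming over (previous state, number of failures so far).
    dp = {0: np.zeros(max_failures + 2), 1: np.zeros(max_failures + 2)}
    dp[0][0] = 1.0   # start in "success" state with 0 failures
    for _ in range(n_iter):
        nxt = {0: np.zeros_like(dp[0]), 1: np.zeros_like(dp[1])}
        for s, p_fail in ((0, p_fail_after_ok), (1, p_fail_after_fail)):
            nxt[0][:-1] += dp[s][:-1] * (1 - p_fail)   # success: count unchanged
            nxt[1][1:]  += dp[s][:-1] * p_fail         # failure: one more failure
        dp = nxt   # mass beyond max_failures is dropped (mission already failed)
    return dp[0][: max_failures + 1].sum() + dp[1][: max_failures + 1].sum()

print("P(mission succeeds):", mission_success_prob(n_iter=100, max_failures=3))
```

Setting p_fail_after_fail equal to p_fail_after_ok recovers the independent-iterations model, so the gap between the two runs quantifies how much the independence assumption distorts the mission-success estimate.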
Software Engineering, IEEE Transactions on, 1985
A number of analytical models have been proposed during the past 15 years for assessing the reliability of a software system. In this paper we present an overview of the key modeling approaches, provide a critical analysis of the underlying assumptions, and assess the limitations and applicability of these models during the software development cycle. We also propose a step-by-step procedure for fitting a model and illustrate it via an analysis of failure data from a medium-sized real-time command and control software system.
2002
Software reliability models (SRMs) are classified into time domain models and counting process models. The time domain model is the stochastic model based on the sequence of inter-failure times. The Jelinski and Moranda model [4] and the Schick and Wolverton model [13] are the most classical models belonging to this class. On the other hand, the counting process models have gained popularity for describing the stochastic behavior of the number of software failures observed in the testing phase. The most well-known and tractable models are non-homogeneous Poisson process (NHPP) models. Goel and Okumoto [3], Yamada, Ohba and Osaki [15], and Musa and Okumoto [9] developed representative NHPP models. These SRMs are based on different debugging scenarios, and can capture qualitatively typical (but not general) reliability growth phenomena observed in the testing phase of software products. It should be noted, however, that SRMs based on past observations may not always be useful in the software testing process, because one cannot catch the global trend of software failure occurrence in the initial testing phase. In other words, a unified modeling framework comprising some typical reliability growth patterns should be developed for robust software reliability assessment. Langberg and Singpurwalla [7] show that several SRMs can be comprehensively viewed by adopting a Bayesian point of view. Miller [8] extends Langberg and Singpurwalla's idea and considers an exponential order statistics model. Raftery [12] and Kuo and Yang [5] investigate the modeling framework based on generalized order statistics (GOS), and discuss several parameter estimation methods from the standpoint of both Bayesian and non-Bayesian statistics. In the GOS modeling framework, the SRMs can be characterized by only the fault detection time distribution. This article proposes phase-type SRMs based on the GOS of software failure data.
To unify some existing SRMs, the phase-type distribution [10], which represents the software fault detection time distribution, is used to represent the GOS of software failure data. Also, we provide a unified estimation method for model parameters in the phase-type SRMs. The usual estimation method, such as maximum likelihood estimation (MLE) based on Newton's method, does not function well in many cases, since phase-type SRMs often have many model parameters. To overcome this problem, we develop EM (expectation-maximization) algorithms [1, 2, 11] to compute the maximum likelihood estimates of the model parameters.
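For context on the NHPP models these phase-type SRMs generalize, here is a sketch of fitting the Goel-Okumoto model, whose mean value function is m(t) = a*(1 - exp(-b*t)). For failure times t_1..t_n observed up to time T, the log-likelihood is n*ln(a*b) - b*sum(t_i) - a*(1 - exp(-b*T)), and a-hat = n / (1 - exp(-b*T)) for any b, leaving a one-dimensional search over b. The failure times below are invented for illustration; a grid search stands in for Newton's method.

```python
import math

# Hypothetical failure times (hours) observed up to T = 300.
times = [8, 20, 35, 51, 70, 95, 120, 160, 210, 290]
T, n, S = 300.0, len(times), sum(times)

def profile_ll(b):
    # Profile log-likelihood: a is replaced by its MLE given b.
    a = n / (1.0 - math.exp(-b * T))
    return n * math.log(a * b) - b * S - a * (1.0 - math.exp(-b * T))

# One-dimensional grid search over b (fine grid from 1e-5 to 0.05).
b_hat = max((i / 1e5 for i in range(1, 5000)), key=profile_ll)
a_hat = n / (1.0 - math.exp(-b_hat * T))
print(f"a_hat={a_hat:.2f}, b_hat={b_hat:.5f}")
print("expected residual faults:", a_hat - n)
```

Here a-hat estimates the total fault content, so a_hat - n is the expected number of faults remaining after testing. Phase-type SRMs replace the exponential detection-time distribution implicit here with a phase-type one, hence the need for EM rather than this simple profile search.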
Handbook of Reliability Engineering
... Siddhartha R. Dalal ... t)/[1 − F(t)]. These models are Markovian but not strongly Markovian, except when F is exponential; minor variations of this case were studied by Jelinski and Moranda [15], Shooman [16], Schneidewind [17], Musa [18], Moranda [19], and Goel and Okomoto ...
IEEE Int'l Symp. Software Reliability Eng, 1991
2009
This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
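A small sketch of the Bayesian side of this approach: each component's reliability gets a Beta posterior from its go/no-go test counts (a uniform Beta(1,1) prior is assumed here), and samples are propagated through the system structure function. The structure (A in series with B parallel C) and the test counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Go/no-go data per component: (successes, failures). Note component "B"
# has seen no failures in relatively few tests -- the case the paper flags
# as sensitive to modeling choices.
tests = {"A": (48, 2), "B": (20, 0), "C": (19, 1)}

# Beta(1 + successes, 1 + failures) posterior under a uniform prior.
samples = {
    name: rng.beta(1 + s, 1 + f, size=100_000)
    for name, (s, f) in tests.items()
}

# System structure: A in series with the parallel pair (B, C).
R_sys = samples["A"] * (1 - (1 - samples["B"]) * (1 - samples["C"]))

print("posterior mean  :", R_sys.mean())
print("90% credible int:", np.percentile(R_sys, [5, 95]))
```

Changing the prior on the zero-failure component B shifts the system result noticeably, illustrating the sensitivity to no-failure components that the abstract reports for both the Classical and Bayesian treatments.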