2011 IEEE 22nd International Symposium on Software Reliability Engineering, 2011
Stochastic models are often employed to study dependability of critical systems and assess various hardware and software fault-tolerance techniques. These models take into account the randomness in the events of interest (aleatory uncertainty) and are generally solved at fixed parameter values. However, the parameter values themselves are determined from a finite number of observations and hence have uncertainty associated with them (epistemic uncertainty). This paper discusses methods for computing the uncertainty in output metrics of dependability models, due to epistemic uncertainties in the model input parameters. Methods for epistemic uncertainty propagation through dependability models of varying complexity are presented with illustrative examples. The distribution, variance and expectation of model output, due to epistemic uncertainty in model input parameters are derived and analyzed to understand their limiting behavior.
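The propagation scheme this abstract describes — sampling epistemically uncertain parameters and pushing each sample through the aleatory model — can be sketched as follows. The exponential reliability model, the Gamma posterior for the failure rate, and all numbers are illustrative assumptions, not taken from the paper.

```python
import random, math

def reliability(lam, t):
    # Aleatory model: probability of failure-free operation up to time t
    # for an exponential time-to-failure with rate lam.
    return math.exp(-lam * t)

# Epistemic uncertainty: the failure rate is estimated from r observed
# failures in total exposure time T, modeled here as a Gamma(r, 1/T)
# posterior (a common conjugate choice; shapes are illustrative).
r, T = 10, 50_000.0          # hypothetical field data: 10 failures in 50,000 h
t_mission = 1_000.0          # mission time of interest

random.seed(1)
samples = sorted(reliability(random.gammavariate(r, 1.0 / T), t_mission)
                 for _ in range(100_000))

mean = sum(samples) / len(samples)
lo = samples[int(0.025 * len(samples))]
hi = samples[int(0.975 * len(samples))]
print(f"E[R] ~ {mean:.4f}, 95% interval ({lo:.4f}, {hi:.4f})")
```

The spread between `lo` and `hi` is the epistemic uncertainty in the output metric; as the number of observations grows, the Gamma posterior concentrates and the interval collapses toward the point estimate, which is the limiting behavior the abstract refers to.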
2011
Systems in critical applications employ various hardware and software fault-tolerance techniques to ensure high dependability. Stochastic models are often used to analyze the dependability of these systems and assess the effectiveness of the fault-tolerance techniques employed. Typical model outputs include measures of performance and performability, such as the probability of security failure (successful attack) of a network routing session, the expected number of jobs in a queueing system with breakdown and repair of servers, and the call handoff probability of a cellular wireless communication cell.
Handbook of Research on Emerging Advancements and Technologies in Software Engineering
In recent years, reliability assessment has become an essential part of system quality assessment. However, software engineering practice for reliability analysis is not yet mature. Existing work explicitly applies only a small portion of reliability analysis within a standard software development process. In addition, existing reliability assessments rest on assumptions provided by domain experts, and these assumptions are often prone to error. An effective reliability assessment should be based on reliability requirements that can be quantitatively estimated using metrics. These reliability requirements can be visualized using a reliability model. However, existing reliability models are not expressive enough and do not provide a consistent modeling mechanism that allows developers to estimate reliability parameter values. Consequently, reliability estimation using those parameters is usually oversimplified, and inconsistencies can arise between different estimation stages. In this chapter, a new Model-Based Reliability Estimation (MBRE) methodology is developed, consisting of a reliability model and a reliability estimation model. The methodology provides a systematic way to estimate system reliability, emphasizing the reliability model for producing the reliability parameters used by the reliability estimation model. Both models are built upon timing properties, the primary input for reliability assessment.
2001
In reliability analysis of computer systems, models such as fault trees, Markov chains, and stochastic Petri nets (SPNs) are built to evaluate or predict the reliability of the system. In general, the parameters of these models are obtained from field data, from data on systems with similar functionality, or even by guessing. In this paper, we address this parameter uncertainty problem. First, we review and classify three ways to describe parameter uncertainty in a model: reliability bounds, confidence intervals, and probability distributions. Second, using a second-order approximation and the normal approximation, we propose an analytic method to derive the confidence interval of system reliability from the confidence intervals of the parameters in the transient solution of Markov models. We then study the Monte Carlo simulation method for deriving the uncertainty in system reliability, and use it to validate the proposed analytic method. This effort makes reliability prediction more realistic than results obtained without uncertainty analysis.
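The Monte Carlo validation step described above can be sketched for the simplest case, a two-state (up/down) Markov model whose transient availability has a closed form. The point estimates, standard errors, and mission time below are illustrative assumptions, not values from the paper.

```python
import random, math

def availability(lam, mu, t):
    # Transient point availability of a two-state Markov model:
    # A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu)*t)
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

# Hypothetical point estimates with standard errors derived from parameter
# confidence intervals via the normal approximation; all numbers illustrative.
lam_hat, lam_se = 1e-3, 1e-4     # failure rate (per hour)
mu_hat,  mu_se  = 1e-1, 1e-2     # repair rate (per hour)
t = 24.0

random.seed(7)
out = []
for _ in range(50_000):
    lam = max(random.gauss(lam_hat, lam_se), 1e-12)  # clip to stay positive
    mu  = max(random.gauss(mu_hat, mu_se), 1e-12)
    out.append(availability(lam, mu, t))

out.sort()
print(f"A({t}) 95% interval: ({out[1250]:.5f}, {out[48750]:.5f})")
```

The empirical 2.5% and 97.5% quantiles of the sampled output give the simulation-based confidence interval against which an analytic approximation could be checked.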
Safe Comp 95, 1995
For safety-critical systems, the required reliability (or safety) is often extremely high. Assessing the system, to gain confidence that the requirement has been achieved, is correspondingly hard, particularly when the system depends critically upon extensive software. In practice, such an assessment is often carried out rather informally, taking account of many different types of evidence: experience of previous, similar systems; evidence of the efficacy of the development process; testing; expert judgement; etc. Ideally, the assessment would allow all such evidence to be combined into a final numerical measure of reliability in a scientifically rigorous way. In this paper we address one part of this problem: we present a means whereby our confidence in a new product can be augmented beyond what we would believe merely from testing that product, by using evidence of the high dependability in operation of previous products. We present some illustrative numerical results that seem to suggest that such experience of previous products, even when these have shown very high dependability in operational use, can improve our confidence in a new product only modestly.
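The general shape of such an argument — not the paper's own formulation — can be sketched in a Bayesian style: evidence from a previous product is encoded as a prior on the new product's probability of failure per demand, then updated with the new product's failure-free test results. All prior strengths, test counts, and targets below are invented for illustration.

```python
import random

# Beta prior on probability of failure per demand; prior Beta(a, b) roughly
# encodes "b failure-free demands' worth" of belief carried over from a
# previous, highly dependable product (an illustrative assumption).
a, b = 1.0, 1000.0
n_tests = 5000           # failure-free tests of the new product itself
p_target = 1e-3          # required bound on failure probability

random.seed(0)
def confidence(a, b, n, p0, m=200_000):
    # After n failure-free demands the posterior is Beta(a, b + n);
    # estimate P(p < p0 | evidence) by sampling from it.
    return sum(random.betavariate(a, b + n) < p0 for _ in range(m)) / m

print("testing alone      :", confidence(1.0, 1.0, n_tests, p_target))
print("with prior product :", confidence(a, b, n_tests, p_target))
```

Under these numbers the prior evidence raises the confidence only slightly above what testing alone provides, which is consistent in spirit with the modest improvement the abstract reports.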
1979 International Workshop on Managing Requirements Knowledge (MARK), 1979
Boehm, Brown, and Lipow [1] have characterized the multidimensional nature of software quality in terms of a hierarchy of attributes. One of the high-level attributes is reliability, which they define qualitatively as the satisfactory performance of intended functions. This definition may be refined to the quantitative statement "probability of failure-free operation in a specified environment for a specified time." A "failure" is an unacceptable departure of program operation from program requirements, where, as in the case of hardware, "unacceptable" must ultimately be defined by the user. The term "fault" will be used to indicate the program defect that causes the failure.
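The quantitative definition above is commonly instantiated with a constant failure rate, giving R(t) = exp(-lam * t); the rate and mission times below are illustrative, not from the cited work.

```python
import math

# Probability of failure-free operation for time t, assuming a constant
# failure rate lam (exponential time to failure).
lam = 2.0e-4            # failures per hour (illustrative)
for t in (100.0, 1000.0, 10000.0):
    print(f"R({t:>7.0f} h) = {math.exp(-lam * t):.4f}")
```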
Digest of Papers. Twenty-Eighth Annual International Symposium on Fault-Tolerant Computing (Cat. No.98CB36224)
We address problems in modelling the reliability of multiple-version software, and present models intended to improve the understanding of the various ways failure dependence between versions can arise. The previous models, by Eckhardt and Lee and by Littlewood and Miller, described what behaviour could be expected "on average" from a randomly chosen pair of "independently generated" versions. Instead, we address the problem of predicting the reliability of a specific pair of versions. The concept of "variation of difficulty" between situations to which software may be subject is central to the previous models cited. We show that it has even more far-reaching implications than previously found. In particular, we consider the practical implications of two phenomena: varying probabilities of failure over input sub-domains or operating regimes; and positive correlation between successive executions of control software. Our analysis provides some practical advice for regulators, and useful insight into non-intuitive aspects of the failure process of diverse software.
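The core "variation of difficulty" effect can be demonstrated numerically: even if two versions fail conditionally independently on each input, variation of the per-input difficulty theta(x) makes their failures positively correlated on average, since E[theta^2] >= (E[theta])^2. The difficulty profile below is an illustrative assumption, not data from the models cited.

```python
import random

random.seed(3)
# Per-input failure probabilities: 10% of inputs are "hard" for any version.
thetas = [0.001] * 90 + [0.2] * 10

n = 200_000
fail_a = fail_b = both = 0
for _ in range(n):
    th = random.choice(thetas)      # draw an input, i.e. its difficulty
    a = random.random() < th        # version A fails on it?
    b = random.random() < th        # version B fails, conditionally independent
    fail_a += a
    fail_b += b
    both += a and b

p_a, p_b, p_ab = fail_a / n, fail_b / n, both / n
print(f"P(A)={p_a:.4f}  P(B)={p_b:.4f}  P(A and B)={p_ab:.4f}  "
      f"independence would predict {p_a * p_b:.6f}")
```

The joint failure probability comes out roughly an order of magnitude above the product of the marginals, which is the non-intuitive dependence between "independently generated" versions that these models explain.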
Software reliability analysis is performed at various stages during the process of engineering software as an attempt to evaluate if the software reliability requirements have been (or might be) met. In this report, I present a summary of some fundamental black-box and white-box software reliability models. I also present some general shortcomings of these models and suggest avenues for further research.
Software Engineering, IEEE …, 1978
This paper examines the most widely used reliability models. The models discussed fall into two categories: the data domain and the time domain. Besides tracing the historical development of the various models, their advantages and disadvantages are analyzed. This includes models based on discrete as well as continuous probability distributions. How well a given model performs its purpose in a specific economic environment will determine its usefulness. Each of the models is examined with actual data as to the applicability of the error-finding process.
IEEE Transactions on Software Engineering, 1990
In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Worse, we are not even in a position to be able to decide a priori which of the many models is most suitable in a particular context. Our own recent work has tried to resolve this problem by developing techniques whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, which we call the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. In this paper we show how this can be used to improve reliability predictions in a very general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used. Indeed, although this work arose from the need to address the poor performance of software reliability models, it is likely to have applicability in other areas such as reliability growth modeling for hardware.
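The u-plot idea can be sketched as follows: each one-step-ahead prediction supplies a predictive CDF F_i, and evaluating it at the observed interfailure time t_i gives u_i = F_i(t_i). If the predictions were perfect, the u_i would look like a uniform sample on (0, 1); systematic deviation of their empirical CDF from the diagonal is what recalibration corrects. The exponential predictive CDFs and all data below are invented for illustration.

```python
import math

predicted_rates = [0.10, 0.08, 0.07, 0.05, 0.04, 0.03]  # predicted hazards per hour
observed_times  = [5.0, 20.0, 9.0, 31.0, 18.0, 60.0]    # observed interfailure times

# u_i = F_i(t_i) for exponential predictive distributions
us = sorted(1.0 - math.exp(-r * t)
            for r, t in zip(predicted_rates, observed_times))

# Kolmogorov-style distance between the empirical CDF of the u_i and the
# uniform CDF (the diagonal of the u-plot)
n = len(us)
dist = max(max(abs((i + 1) / n - u), abs(u - i / n))
           for i, u in enumerate(us))
print("u-values:", [round(u, 3) for u in us])
print("max deviation from uniform:", round(dist, 3))
```

A large deviation signals miscalibrated predictions; recalibration, roughly speaking, composes future predictive CDFs with the estimated transform implied by this empirical distribution.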
1994
We consider the dependability of programs of an iterative nature. The dependability of software structures is usually analysed using models that are strongly limited in their realism by the assumptions made to obtain mathematically tractable models and by the lack of experimental data. Among the assumptions made, the independence between the outcomes of successive executions, which is often false, may lead to significant deviations of the result obtained from the real behaviour of the program under analysis. Experiments and theoretical justifications show the existence of contiguous failure regions in the program input space and that, for many applications, the inputs often follow a trajectory of contiguous points in the input space. In this work we present a model in which dependencies among input values of successive iterations are taken into account in studying the dependability of iterative software. We consider also the possibility that repeated, non-fatal failures may together cause mission failure. We evaluate the effects of these different hypotheses on 1) the probability of completing a fixed-duration mission, and 2) a performability measure.
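The dependence effect described above can be illustrated with a toy simulation: successive inputs follow a trajectory, so once it enters a contiguous failure region it tends to stay there for a while. A two-state Markov chain over "safe"/"failure region" iterations models this, with mission failure after k non-fatal failures; all transition probabilities and thresholds are illustrative assumptions, not the paper's model.

```python
import random

random.seed(11)
p_enter, p_stay = 0.005, 0.8     # enter a failure region; remain in it next step
n_iter, k_fatal = 400, 3         # iterations per mission; failures tolerated
m = 5_000                        # simulated missions per hypothesis

def mission_ok(correlated: bool) -> bool:
    failures, in_region = 0, False
    for _ in range(n_iter):
        if correlated:
            # Markov-dependent iterations: failures cluster into bursts
            in_region = random.random() < (p_stay if in_region else p_enter)
        else:
            # independence assumption with the same stationary failure rate
            in_region = random.random() < p_enter / (1 - p_stay + p_enter)
        failures += in_region
        if failures >= k_fatal:
            return False          # repeated non-fatal failures end the mission
    return True

results = {}
for label, corr in (("correlated", True), ("independent", False)):
    results[label] = sum(mission_ok(corr) for _ in range(m)) / m
    print(f"{label:>11}: P(mission success) ~ {results[label]:.3f}")
```

Both hypotheses have the same per-iteration failure rate, yet the mission-completion probabilities differ sharply, showing how the independence assumption can badly misestimate the measures the abstract studies.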
Software Engineering, IEEE Transactions on, 1985
Handbook of Reliability Engineering
IEEE Int'l Symp. Software Reliability Eng, 1991
Information Processing and Management of Uncertainty in Knowledge-Based Systems, 2016
Software Quality Assurance, 2016
Problems of Information Technology, 2017
International journal of engineering research and technology, 2018
IEEE Transactions on Software Engineering, 2000