2001, International Journal of Systems Science
The objective of the paper is to lay a foundation for imprecise system reliability assessments based on coherent imprecise probabilities, a particular case of coherent imprecise previsions. Previous attempts at wide implementation of other theories of imprecise probabilities in reliability analysis have not succeeded. The recent theory of coherent imprecise previsions appears to be a promising tool for reliability and risk assessment, devoid of the drawbacks of its predecessors. This paper describes how the coherent imprecise reliabilities of series and parallel systems can be calculated. A set of theorems is proved that allows calculating the imprecise reliability of a system of arbitrary structure, in particular one described by fault trees. An approach to calculating imprecise reliability based on purely comparative judgements is also described.
The application of imprecise reliability models is often hindered by the rapid growth in imprecision that occurs when many components constitute a system, and by the fact that time to failure is bounded from above. The latter makes it necessary to explicitly introduce an upper bound on time to failure, which in reality is a rather arbitrary value, calling the practical meaning of such models into question. We suggest an approach that avoids imposing an upper bound on time to failure and makes the calculated lower and upper reliability measures more precise. The main assumption is that the failure rate is bounded. The Lagrange multiplier method is used to solve the resulting non-linear program. Finally, an example is provided.
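The effect of the bounded-failure-rate assumption on a single reliability interval can be illustrated with a minimal sketch. This is not the paper's method (which involves a non-linear program), only the elementary observation it builds on: for a constant rate known to lie in an interval, the exponential reliability function is monotone in the rate, so the bounds transfer directly. All numerical values here are made up for illustration.

```python
import math

def reliability_bounds(lam_lo, lam_hi, t):
    # R(t) = exp(-lam * t) is decreasing in lam, so the lower reliability
    # bound comes from the upper rate bound and vice versa
    return math.exp(-lam_hi * t), math.exp(-lam_lo * t)

# illustrative rate bounds (per hour) and mission time, not from the paper
lo, hi = reliability_bounds(1e-4, 5e-4, 1000.0)
```

Here the interval [exp(-0.5), exp(-0.1)] stays finite without any upper bound on the time to failure itself.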
In this paper we explore the judgement of choosing a function that is believed to dominate the true probability distribution of a continuous random variable. This kind of judgement can significantly increase the precision of the constructed imprecise previsions of interest, which is of great importance for applications. New formulae for computing system reliability are derived on the basis of the technique developed.
Journal of Loss Prevention in the Process Industries, 2012
Risk in production systems is directly related to the unreliability of those systems. Under such circumstances, the approach of maximizing reliability should be replaced with a risk-based reliability assessment approach. Calculating the absolute reliability of systems and complex processes when no failure data are available is extremely complex and difficult. Until now, studies of reliability assessment have been based on probability theory, in which the failure time is predicted after determining the type of distribution. In this paper, however, the researchers develop an approach that applies possibility theory instead of probability theory. Instead of using purely qualitative methods, this new approach applies Dempster-Shafer theory. Clearly, when data are insufficient, an index is needed to support decision making. A novel method is therefore proposed and applied to a real case study to determine the reliability of production systems based on risk when the available data are not sufficient. After calculating the failure probability and analyzing the assessment matrix and risk criteria, we may conclude that the failure risk of equipment is reduced while the system reliability is increased.
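The Dempster-Shafer combination step mentioned above can be sketched in a few lines. The frame of discernment, the two mass functions, and their numerical values below are illustrative assumptions, not data from the case study; only Dempster's rule itself is standard.

```python
def dempster_combine(m1, m2):
    # Dempster's rule: multiply masses over pairs of focal elements,
    # accumulate mass on the intersections, and renormalize away the
    # mass assigned to conflicting (empty) intersections
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

W, F = frozenset({"works"}), frozenset({"fails"})
Theta = W | F  # frame of discernment for a binary component state

m1 = {W: 0.6, Theta: 0.4}            # hypothetical evidence from source 1
m2 = {W: 0.7, F: 0.1, Theta: 0.2}    # hypothetical evidence from source 2
m = dempster_combine(m1, m2)
```

The combined mass on `Theta` is what keeps the residual ignorance explicit, which is precisely what an index for decision making under insufficient data needs to expose.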
International Journal of Industrial and Systems Engineering, 2011
Existing reliability evaluation methods are based on the availability of knowledge about component states. However, component states are often uncertain or unknown, especially during the early stages of development of new systems. In such cases it is important to understand how uncertainties will affect system reliability assessment. Another shortcoming of existing methods is that they only consider systems whose components have discrete states; for components with continuous states, these methods may not be applicable. Using Monte-Carlo simulation, this paper proposes a method to assess the reliability of systems with continuously distributed component states. The method is also useful when knowledge of component states and the related probabilities is insufficient. Comparison of two examples shows that component uncertainty has a significant influence on the assessment of system reliability.
Reliability Engineering & System Safety, 2021
In this work, the reliability of complex systems under consideration of imprecision is addressed. By joining two methods coming from different fields, namely, structural reliability and system reliability, a novel methodology is derived. The concepts of survival signature, fuzzy probability theory and the two versions of non-intrusive stochastic simulation (NISS) methods are adapted and merged, providing an efficient approach to quantify the reliability of complex systems taking into account the whole uncertainty spectrum. The new approach combines both of the advantageous characteristics of its two original components: 1. a significant reduction of the computational effort due to the separation property of the survival signature, i.e., once the system structure has been computed, any possible characterization of the probabilistic part can be tested with no need to recompute the structure and 2. a dramatically reduced sample size due to the adapted NISS methods, for which only a single stochastic simulation is required, avoiding the double loop simulations traditionally employed. Beyond the merging of the theoretical aspects, the approach is employed to analyze a functional model of an axial compressor and an arbitrary complex system, providing accurate results and demonstrating efficiency and broad applicability.
IEEE Transactions on Reliability, 1970
A computer program, which provides bounds for system reliability, is described. The algorithms are based on the concepts of success paths and cut sets. A listing of the elements in the system, their predecessors, and the probability of successful operation of each element are the inputs. The outputs are the success paths, the cut sets, and a series of upper and lower reliability bounds; these bounds converge to the reliability which would be calculated if all the terms in the model were evaluated. The algorithm for determining the cuts from the success paths is based on Boolean logic and is relatively simple to understand. Two examples are described, one of which is very simple and the computation can be done by hand, and a second for which there are 55 success paths and 10 cuts and thus machine computation is desirable. Reader Aids: Purpose: Helpful hints. Special math needed for explanations: Probability. Special math needed for results: None.
Shooman [3] provides further mathematical background material and an entire chapter on combinatorial reliability. In addition, he gives many references to previous work in this area of reliability computation. II. REVIEW OF USEFUL RESULTS. The success probability of a system, typically called the system reliability, is defined as the probability of successful function of all of the elements in at least one tie set, or as the probability that all cut sets are good. A tie set (success path) is a directed path from input to output as indicated in the simple system in Fig. 1(b). The tie sets are (2, 5), (1, 3, 5), (1, 4, 5). A cut set is a set of elements which literally cuts all
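The tie sets listed for Fig. 1(b) can be turned into path/cut reliability bounds of the kind the program converges on. The sketch below uses the classical Esary-Proschan-style products over tie sets and minimal cut sets; the cut sets are derived here from the listed tie sets, and the element reliabilities of 0.9 are made-up values:

```python
import math

def path_cut_bounds(tie_sets, cut_sets, p):
    # For independent elements with success probabilities p[i]:
    # upper bound from tie sets (system fails only if every path fails),
    # lower bound from cut sets (system works only if no cut is all-failed)
    upper = 1.0 - math.prod(
        1.0 - math.prod(p[i] for i in tie) for tie in tie_sets)
    lower = math.prod(
        1.0 - math.prod(1.0 - p[i] for i in cut) for cut in cut_sets)
    return lower, upper

ties = [(2, 5), (1, 3, 5), (1, 4, 5)]   # tie sets of Fig. 1(b) as listed
cuts = [(5,), (1, 2), (2, 3, 4)]        # minimal cut sets implied by those ties
p = {i: 0.9 for i in range(1, 6)}       # illustrative element reliabilities
lo, hi = path_cut_bounds(ties, cuts, p)
```

With equal element reliabilities of 0.9 the exact system reliability (0.9 times the probability that element 2, or element 1 together with 3 or 4, works) lies inside [lo, hi], as the bounds guarantee.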
2008
Analyzing the reliability of a system at the design stage requires experts' estimations and statistical data with various degrees of epistemic uncertainty, aggregated in a coherent framework. Dempster-Shafer (DS) theory is a potentially valuable tool for combining evidence obtained from multiple different sources. One approach to fuzzy reliability assessment uses Vague set (VS) theory, which has many similarities with DS theory. Uncertain raw data about the component reliability of a system can be combined using different DS combination methods and represented in the form of triangular vague numbers. Using the proper methods and equations, the fuzzy reliability of the system can then be computed from the triangular vague numbers of the component reliabilities. Combining these two theories closes the gap between the representation of combined evidence and the way component reliability is represented in VS theory for reliability assessment, and our proposed method does so in a very convenient form. Because of the close relevance of the two theories, the output of DS combination can be represented as a vague triangular number in VS theory, eliminating the loss of meaningful information in this conversion.
IEEE Access, 2018
Critical technological systems exhibit complex dynamic characteristics such as time-dependent behavior, functional dependencies among events, sequencing and priority of causes that may alter the effects of failure. Dynamic fault trees (DFTs) have been used in the past to model the failure logic of such systems, but the quantitative analysis of DFTs has assumed the existence of precise failure data and statistical independence among events, which are unrealistic assumptions. In this paper, we propose an improved approach to reliability analysis of dynamic systems, allowing for uncertain failure data and statistical and stochastic dependencies among events. In the proposed framework, DFTs are used for dynamic failure modeling. Quantitative evaluation of DFTs is performed by converting them into generalized stochastic Petri nets. When failure data are unavailable, expert judgment and fuzzy set theory are used to obtain reasonable estimates. The approach is demonstrated on a simplified model of a cardiac assist system.
2001
In reliability analysis of computer systems, models such as fault trees, Markov chains, and stochastic Petri nets (SPN) are built to evaluate or predict the reliability of the system. In general, the parameters in these models are obtained from field data, from data on systems with similar functionality, or even by guessing. In this paper, we address this parameter uncertainty problem. First, we review and classify three ways to describe parameter uncertainty in a model: reliability bounds, confidence intervals, and probability distributions. Second, using a second-order approximation and the normal approximation, we propose an analytic method to derive a confidence interval for the system reliability from the confidence intervals of the parameters in the transient solution of Markov models. We then study the Monte Carlo simulation method for deriving the uncertainty in the system reliability, and use it to validate our proposed analytic method. This effort makes reliability prediction more realistic compared with results obtained without uncertainty analysis.
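The Monte Carlo route described above can be sketched for the simplest transient Markov model, a two-state (up/down) system whose transient solution is R(t) = exp(-lambda*t). The normal uncertainty distribution for the rate and all numbers are illustrative assumptions:

```python
import math
import random

def reliability_ci(lam_mean, lam_sd, t, trials=10000, seed=1):
    # Sample the uncertain failure rate, push each sample through the
    # transient solution R(t) = exp(-lam * t), and report an empirical
    # 95% interval for the system reliability at time t.
    rng = random.Random(seed)
    samples = sorted(
        math.exp(-max(rng.gauss(lam_mean, lam_sd), 0.0) * t)
        for _ in range(trials))
    return samples[int(0.025 * trials)], samples[int(0.975 * trials)]

# illustrative rate uncertainty (per hour) and mission time
lo, hi = reliability_ci(1e-3, 2e-4, 100.0)
```

An analytic second-order/normal approximation, as proposed in the paper, would be validated by checking that its interval agrees with an empirical one like this.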
Annals of Operations Research, 2002
The main contribution of this paper is to provide different ways to evaluate importance measures for components in a given reliability system or in an electronic circuit. The main tool used is a certain type of semivalues and probabilistic values. One of the results given here extends the indices given by Birnbaum [3] and Barlow and Proschan [2], which respectively
2009
This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
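The Bayesian side of the comparison can be sketched for go/no-go data alone. Each component gets a conjugate Beta posterior under a uniform prior, posterior draws are multiplied across a series system, and a credible interval is read off; the test counts below are made up, and the zero-failure component is included because the abstract flags its handling as the sensitive case:

```python
import random

def bayes_series_interval(test_data, trials=20000, seed=2):
    # test_data: (successes, failures) per component of a series system.
    # Draw from each Beta(1 + s, 1 + f) posterior, multiply the draws to
    # get a posterior sample of system reliability, report a 90% interval.
    rng = random.Random(seed)
    draws = []
    for _ in range(trials):
        r = 1.0
        for successes, failures in test_data:
            r *= rng.betavariate(1 + successes, 1 + failures)
        draws.append(r)
    draws.sort()
    return draws[int(0.05 * trials)], draws[int(0.95 * trials)]

# hypothetical go/no-go counts, including a zero-failure component
data = [(20, 1), (15, 0), (30, 2)]
lo, hi = bayes_series_interval(data)
```

Changing the prior on the zero-failure component visibly moves the interval, which is the sensitivity the abstract refers to.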
Information Processing and Management of Uncertainty in Knowledge-Based Systems, 2016
In reliability analysis, comparing system reliability is an essential task when designing safe systems. When the failure probabilities of the system components (assumed to be independent) are precisely known, this task is relatively simple to achieve, as system reliabilities are precise numbers. When failure probabilities are ill-known (known to lie in an interval) and we want to have guaranteed comparisons (i.e., declare a system more reliable than another when it is for any possible probability value), there are different ways to compare system reliabilities. We explore the computational problems posed by such extensions, providing first insights about their pros and cons.
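For series systems the interval computation and the guaranteed comparison reduce to a few lines. The interval-dominance rule sketched here (worst case of A beats best case of B) is one of the comparison notions such work considers; the component intervals are made-up values:

```python
import math

def series_interval(bounds):
    # Interval reliability of a series system of independent components
    # whose success probabilities are only known to lie in [lo, hi]:
    # the product is monotone in each argument, so endpoints multiply.
    return (math.prod(lo for lo, _ in bounds),
            math.prod(hi for _, hi in bounds))

def certainly_more_reliable(a, b):
    # Guaranteed comparison: A is declared more reliable than B only if
    # A's lower bound exceeds B's upper bound (true for every admissible
    # choice of the imprecise probabilities).
    return series_interval(a)[0] > series_interval(b)[1]

A = [(0.95, 0.99), (0.97, 0.99)]  # illustrative component intervals
B = [(0.80, 0.90), (0.85, 0.92)]
```

When the intervals overlap, neither system dominates and the comparison is indeterminate, which is where the computational questions studied in the paper begin.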
Microelectronics Reliability, 1985
It has been established that the symbolic reliability evaluation problem is NP-complete and as such is computationally infeasible for large networks. This has led to increased efforts in the search for faster exact techniques as well as better approximate methods. This paper proposes an algorithm for determining improved (tighter) upper and lower bounds on the system reliability of general (i.e. non-series-parallel) systems. The proposed algorithm, like most existing methods, requires knowledge of all simple paths and minimal cutsets of the system. The system success (failure) function is the union of all simple paths (minimal cutsets). The system success and failure functions are then modified to multinomial form and these expressions are interpreted as proper probability expressions using some approximations. The proposed algorithm gives better bounds than the min-max method, the method of successive bounds, the Esary-Proschan bounds and the Shogan bounds, and is illustrated by an example.
This paper provides an approach for assessing the uncertainty associated with the estimate of the availability of a two-state repairable system. During the design stage it is often necessary to allocate scarce testing resources among various components in an efficient manner. Although there is a variety of importance and uncertainty measures for the reliability of a system, there are limited measures for system availability. This study attempts to fill the gap in availability importance measures and provides insights into techniques for efficiently reducing the variance of a system-level availability estimate. The variance importance measure is constructed so that it quantifies the improvement in the variance of the system-level availability estimate achieved by reducing the variance of the various component availability estimates. In addition, a cost model is developed that trades off cost and uncertainty. The measure is illustrated for five common system structures. Monte Carlo simulation is used to illustrate the use of the assessment tools on a specific problem. The results are observed to be consistent with reliability importance measures.
2017
This paper presents fuzzy lower and upper probabilities for the reliability of series systems. Attention is restricted to series systems with exchangeable components. We consider the evaluation of system reliability based on the nonparametric predictive inference (NPI) approach, in which defining the parameters of the reliability function as crisp values is not possible; the parameters are instead described by triangular fuzzy numbers. The formula for a fuzzy reliability function and its α-cut set are presented, the fuzzy reliability of structures is defined in terms of fuzzy numbers, and the fuzzy reliability functions of series systems are discussed. Finally, numerical examples illustrate how to calculate the fuzzy reliability function and its α-cut set. In short, the aim of this paper is to present a new method, fuzzy nonparametric predictive inference, for the reliability of series systems.
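The α-cut mechanics behind fuzzy reliability functions can be sketched for an exponential model with a triangular fuzzy rate. This is a generic illustration of α-cuts, not the paper's NPI construction, and the triangular parameters and mission time are made-up values:

```python
import math

def alpha_cut(tri, alpha):
    # Alpha-cut of a triangular fuzzy number (a, m, b): the interval of
    # values whose membership degree is at least alpha.
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

def fuzzy_reliability_cut(lam_tri, t, alpha):
    # R(t) = exp(-lam * t) is decreasing in lam, so the endpoints of the
    # rate's alpha-cut map to the reliability cut in swapped order.
    lam_lo, lam_hi = alpha_cut(lam_tri, alpha)
    return math.exp(-lam_hi * t), math.exp(-lam_lo * t)

# illustrative triangular fuzzy failure rate and mission time
cut = fuzzy_reliability_cut((0.001, 0.002, 0.004), 100.0, 0.5)
```

At α = 1 the cut collapses to the crisp value exp(-0.2), and lowering α widens the reliability interval, which is the behaviour the paper's α-cut formulas exhibit for series systems.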
Reliability Engineering & System Safety, 1988
In this paper we discuss some main topics within reliability theory and its applications. These include, for example, the modelling of systems of dependent components, identification of critical components, modelling of repairable systems, and the use of multistate models. The starting point for the discussion is Bergman's review paper on reliability theory and its application (Scand.
2016
Research in traditional reliability theory is based mainly on probist reliability, which uses a binary state assumption and classical reliability distributions. In the present paper the binary state assumption has been replaced by a fuzzy state assumption, thereby leading to profust reliability estimates of a repairable system, which is modeled as a four unit gracefully degradable system using Markov process. The effect of variations of system coverage factor and repair rates on the fuzzy availability is also studied.
A generalised probabilistic framework is proposed for reliability assessment and uncertainty quantification under a lack of data. The developed computational tool allows the effect of epistemic uncertainty to be quantified and has been applied to assess the reliability of an electronic circuit and a power transmission network. The strengths and weaknesses of the proposed approach are illustrated by comparison to traditional probabilistic approaches. In the presence of both aleatory and epistemic uncertainty, classic probabilistic approaches may lead to misleading conclusions and a false sense of confidence that does not fully represent the quality of the available information. In contrast, generalised probabilistic approaches are versatile and powerful when linked to a computational tool that permits their application to realistic engineering problems.
A simple method is described for deriving system failure probability distributions from component failure rate uncertainties which obey either gamma or log normal distributions. The formalism is applicable to series-parallel systems and includes the effects of coupling between the failure rates of individual components. Coupling means that the failure rates of identical components under the same conditions scatter significantly less than the failure rates of just any two components of the same type. Examples are used to illustrate that there are various interpretations for the coupling phenomenon and alternative ways to use observed failure data in reliability and uncertainty analysis. Both the mean value and the variance of the distribution of the failure probability of a redundant system tend to increase with increasing coupling.
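The coupling effect described above can be reproduced with a small simulation. For a redundant pair, drawing one shared gamma-distributed rate (fully coupled) versus independent rates (uncoupled) changes the mean system failure probability, since E[q^2] >= (E[q])^2. The gamma parameters, mission time, and unit count are illustrative assumptions:

```python
import math
import random

def redundant_failure_prob(shape, scale, t, n_units, coupled,
                           trials=20000, seed=3):
    # Failure rates of n identical redundant units are uncertain with a
    # gamma distribution. 'coupled' draws one shared rate for all units;
    # otherwise each unit gets an independent draw. Returns the mean
    # failure probability of the 1-out-of-n parallel system at time t.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if coupled:
            lam = rng.gammavariate(shape, scale)
            rates = [lam] * n_units
        else:
            rates = [rng.gammavariate(shape, scale) for _ in range(n_units)]
        # the parallel system fails only if every unit has failed by t
        total += math.prod(1.0 - math.exp(-r * t) for r in rates)
    return total / trials

c = redundant_failure_prob(2, 5e-4, 100.0, 2, coupled=True)
u = redundant_failure_prob(2, 5e-4, 100.0, 2, coupled=False)
```

As the abstract states, the coupled case yields the larger mean failure probability for the redundant system; partial coupling would interpolate between the two.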
IEEE Transactions on Reliability, 1991
Special math needed for explanations: Boolean algebra, Probability.