2019, HAL (Le Centre pour la Communication Scientifique Directe)
When $\mu$ is estimated by the empirical mean $\frac{1}{N}\sum_{n=1}^{N}\psi(Y^{(n)})$, where $Y^{(1)},\dots,Y^{(N)}$ are $N$ independent copies of $Y$, it can happen that we never, or almost never, sample in $A$, leading to a very poor estimation of $\mu$, another form of the rareness problem.
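A minimal sketch of the failure mode described above, under invented assumptions (a standard-normal Y, the rare set A = {y > 4}, and ψ taken to be the indicator of A; none of this comes from the paper): with a moderate N, the crude estimator typically sees no samples in A at all.

```python
import random

def crude_monte_carlo(n_samples, threshold=4.0, seed=0):
    """Crude Monte Carlo estimate of mu = P(Y in A) for Y ~ N(0, 1),
    with A = {y > threshold} and psi the indicator of A."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples) if rng.gauss(0.0, 1.0) > threshold)
    return hits / n_samples

# P(Y > 4) is about 3.2e-5, so with N = 10_000 the expected number of
# hits is ~0.3: most runs return exactly 0.0, a useless estimate.
print(crude_monte_carlo(10_000))
```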
Stochastic Modelling and Applied Probability, 2013
Technometrics, 2000
Statistical and probabilistic models in reliability / D.C. Ionescu, N. Limnios, editors. Includes bibliographical references and index.
2011
Systems in critical applications employ various hardware and software fault-tolerance techniques to ensure high dependability. Stochastic models are often used to analyze the dependability of these systems and assess the effectiveness of the fault-tolerance techniques employed. Measures like performance and performability are computed from these models; examples include the expected time to failure (or successful attack) of a network routing session, the expected number of jobs in a queueing system with breakdown and repair of servers, and the call handoff probability of a cellular wireless communication cell.
Reliability. The term reliability in engineering refers to the probability that a system will operate and perform its function under a set of conditions over an established period of time. Reliability is quantified under test conditions by recording a set of time-to-failure data: if a large number $N_0$ of identical elements is placed on test and $N_s(t)$ of them are still operating at time $t$, the reliability $R(t)$ is given by $R(t) = N_s(t)/N_0$.
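A small numerical illustration of the equation above, using made-up time-to-failure data (the data set and function name are hypothetical, chosen only for the example):

```python
def empirical_reliability(failure_times, t):
    """R(t) = N_s(t) / N_0: the fraction of the original N_0 elements
    still operating at time t, i.e. those whose failure time exceeds t."""
    n0 = len(failure_times)
    ns = sum(1 for ft in failure_times if ft > t)
    return ns / n0

# Hypothetical time-to-failure data (hours) for ten elements.
data = [120, 340, 95, 410, 280, 150, 510, 60, 330, 220]
print(empirical_reliability(data, 200))   # -> 0.6 (6 of 10 survive past t=200)
```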
IEEE Software, 2000
Handbook of Reliability Engineering
... Siddhartha R. Dalal ... f(t)/[1 − F(t)]. These models are Markovian but not strongly Markovian, except when F is exponential; minor variations of this case were studied by Jelinski and Moranda [15], Shooman [16], Schneidewind [17], Musa [18], Moranda [19], and Goel and Okumoto ...
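Reading f(t)/[1 − F(t)] as the usual hazard rate (an inference from the snippet, which is cut off), a one-line check of why the exponential case is special:

```latex
% For F(t) = 1 - e^{-\lambda t}, so f(t) = \lambda e^{-\lambda t}:
\frac{f(t)}{1 - F(t)} = \frac{\lambda e^{-\lambda t}}{e^{-\lambda t}} = \lambda
% The hazard rate is constant: the memoryless case that makes such
% models (strongly) Markovian.
```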
2011 IEEE 22nd International Symposium on Software Reliability Engineering, 2011
Stochastic models are often employed to study the dependability of critical systems and to assess various hardware and software fault-tolerance techniques. These models take into account the randomness in the events of interest (aleatory uncertainty) and are generally solved at fixed parameter values. However, the parameter values themselves are determined from a finite number of observations and hence have uncertainty associated with them (epistemic uncertainty). This paper discusses methods for computing the uncertainty in the output metrics of dependability models due to epistemic uncertainties in the model input parameters. Methods for epistemic uncertainty propagation through dependability models of varying complexity are presented with illustrative examples. The distribution, variance and expectation of the model output due to epistemic uncertainty in the model input parameters are derived and analyzed to understand their limiting behavior.
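A hedged sketch of the sampling-based propagation idea (the two-state availability model and the Gamma parameter distributions are invented for illustration, not taken from the paper): sample the uncertain input parameters from their epistemic distributions, evaluate the aleatory model at each sample, and study the resulting distribution of the output metric.

```python
import random
import statistics

def availability(mttf, mttr):
    """Steady-state availability of a simple two-state model (the aleatory model)."""
    return mttf / (mttf + mttr)

def propagate(n_samples=10_000, seed=1):
    """Monte Carlo epistemic propagation: sample parameters, evaluate the model."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        # Hypothetical epistemic distributions for the input parameters.
        mttf = rng.gammavariate(50, 20.0)   # hours; mean 1000
        mttr = rng.gammavariate(4, 2.0)     # hours; mean 8
        outputs.append(availability(mttf, mttr))
    return statistics.mean(outputs), statistics.pstdev(outputs)

mean_a, sd_a = propagate()
print(f"E[A] ~ {mean_a:.5f}, sd ~ {sd_a:.5f}")
```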
1992
For many researchers, the literature of reliability coefficients seems bewildering, although the methodological problem in which they are embedded is reasonably clear: since we can never know what it is that we claim to see independent of our seeing it, or, translated into the language of science, since we cannot test hypotheses about reality without first generating the observations or data to talk about, the accuracy with which primary data "represent" an unobserved nature remains unascertainable in principle (Krippendorff, 1991). Yet, to assure that the data that go into scientific inquiries are not accidental, it is important to demonstrate that the data-generating procedures are reproducible under varying circumstances and by several observers. All reliability measures are intended to express the degree to which several observers, several measuring instruments, or several interrogations of the same units of analysis yield the same descriptive accounts, category assignments...
2007
Increasing interest is being paid to quantitative, measurement-based evaluation of dependability attributes and metrics of computer systems and infrastructures. Although measurands are generally identified sensibly, differing approaches make it difficult to compare results. Moreover, measurement tools are seldom recognized for what they are: measuring instruments. In this paper, many measurement tools from the literature are critically evaluated in the light of metrology concepts and rules. With no claim of being exhaustive, the paper i) investigates if and how deeply such tools have been validated in accordance with measurement theory, and ii) tries to evaluate (where possible) their measurement properties. The intention is to take advantage of knowledge available in a recognized discipline such as metrology and to propose criteria and indicators taken from that discipline to improve the quality of measurements performed in the evaluation of dependability attributes.
Journal of Statistical Planning and Inference, 1996
The 5th International …, 2008
Abstract. Many stochastic models have been used in solving reliability problems, motivated by the high degree of variability or randomness of the studied phenomena. Therefore, different types of stochastic laws derived from the basic distributions are proposed for modelling a ...
HAL (Le Centre pour la Communication Scientifique Directe), 2018
International Journal of Systems Science, 2001
The objective of the paper is to lay down a foundation for imprecise system reliability assessments based on coherent imprecise probabilities, a particular case of coherent imprecise previsions. Previous attempts at wide implementation of other theories of imprecise probabilities in reliability analysis have not succeeded. The recent theory of coherent imprecise previsions appears to be a promising tool for reliability and risk assessment, free of the drawbacks of its predecessors. This paper describes how the coherent imprecise reliabilities of series and parallel systems can be calculated. A set of theorems is proved that allows calculating the imprecise reliability of a system of arbitrary structure, in particular one described by fault trees. An approach to calculating imprecise reliability based on purely comparative judgements is also described.
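For the series and parallel cases under independence, the bounds reduce to monotone interval arithmetic on the component reliabilities; a minimal sketch of that special case (much simpler than the coherent-prevision machinery the paper develops, and with made-up component intervals):

```python
def series_bounds(components):
    """Interval reliability of a series system: the product is monotone
    increasing in each component reliability r_i."""
    lo = hi = 1.0
    for r_lo, r_hi in components:
        lo *= r_lo
        hi *= r_hi
    return lo, hi

def parallel_bounds(components):
    """Interval reliability of a parallel system: 1 - prod(1 - r_i),
    also monotone increasing in each r_i."""
    lo = hi = 1.0
    for r_lo, r_hi in components:
        lo *= (1.0 - r_hi)   # best case: all components at upper bound
        hi *= (1.0 - r_lo)   # worst case: all components at lower bound
    return 1.0 - hi, 1.0 - lo

# Hypothetical component reliability intervals.
comps = [(0.90, 0.95), (0.85, 0.99)]
print(series_bounds(comps))    # (0.765, 0.9405)
print(parallel_bounds(comps))  # (0.985, 0.9995)
```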
Communications in Statistics - Theory and Methods, 2005
The reliability polynomial R_C(p) of a collection C of subsets of a finite set X has been extensively studied in the context of network theory. There, X is the edge set of a graph (V, X) and C the collection of the edge sets of certain subgraphs. For example, we may take C to be the collection of edge sets of spanning trees. In that case, R_C(p) is the probability that, when each edge is included with probability p, the resulting subgraph is connected. The second author defined R_C(p) in an entirely different way, enabling one to glean additional information about the collection C from R_C(p). Illustrating the extended information available in the reliability polynomial is the main focus of this article, while demonstrating the equivalence of these two definitions is the main theoretical result.
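A brute-force illustration of the network-theoretic definition (the triangle graph and probability value are made up, and enumeration is exponential in the number of edges, so this is only viable for tiny graphs): sum p^|S| (1-p)^(m-|S|) over the edge subsets S that leave the graph connected.

```python
from itertools import combinations

def connected(nodes, edges):
    """Union-find check that `edges` connect every node in `nodes`."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in nodes}) == 1

def reliability(nodes, edges, p):
    """All-terminal reliability R(p): probability that the random subgraph,
    keeping each edge independently with probability p, is connected."""
    m = len(edges)
    total = 0.0
    for k in range(m + 1):
        for subset in combinations(edges, k):
            if connected(nodes, subset):
                total += p**k * (1 - p)**(m - k)
    return total

# Triangle graph: R(p) = 3p^2(1-p) + p^3 = 3p^2 - 2p^3.
print(reliability([0, 1, 2], [(0, 1), (1, 2), (0, 2)], 0.9))  # -> 0.972
```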
2004
In this paper, we describe an experience of dependability assessment of a typical industrial Programmable Logic Controller (PLC). The PLC is based on a two-out-of-three voting policy and is intended to be used for safety functions. Safety assessment of computer-based systems performing safety functions is regulated by standards and guidelines, and all of them agree that no single method can be considered sufficient to achieve and assess safety. The paper addresses the PLC assessment by probabilistic methods to determine its dependability attributes related to the Safety Integrity Levels defined by the IEC 61508 standard. The assessment was carried out by independent teams starting from the same basic assumptions and data. Diverse combinatorial and state-space probabilistic modelling techniques, implemented with public tools, were used. Even though the isolation of the teams was not formally guaranteed, the experience revealed several topics worth describing. First of all, the use of different modelling techniques led to diverse models. Moreover, the models focus on different system details, partly due to the teams' diverse skills. Slight differences in the understanding of the PLC assumptions also occurred. In spite of all this, the numerical results of the diverse models are comparable. The experience also allowed a comparison of the different modelling techniques as implemented by the considered public tools.
Information Processing and Management of Uncertainty in Knowledge-Based Systems, 2016
In reliability analysis, comparing system reliability is an essential task when designing safe systems. When the failure probabilities of the system components (assumed to be independent) are precisely known, this task is relatively simple to achieve, as system reliabilities are precise numbers. When failure probabilities are ill-known (known to lie in an interval) and we want to have guaranteed comparisons (i.e., declare a system more reliable than another when it is for any possible probability value), there are different ways to compare system reliabilities. We explore the computational problems posed by such extensions, providing first insights about their pros and cons.
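A minimal sketch of one natural "guaranteed" comparison, interval dominance, assuming the two systems share no components (with shared components the joint dependence would have to be handled, which is part of what makes the general problem harder; the structures and intervals below are invented):

```python
def guaranteed_compare(bounds1, bounds2):
    """Compare two systems given interval reliabilities (lo, hi); 'undecided'
    when the intervals overlap, since neither system dominates for every
    admissible value of the component failure probabilities."""
    lo1, hi1 = bounds1
    lo2, hi2 = bounds2
    if lo1 > hi2:
        return "system 1 guaranteedly more reliable"
    if lo2 > hi1:
        return "system 2 guaranteedly more reliable"
    return "undecided"

# Hypothetical interval reliabilities of two systems (e.g., obtained by
# interval arithmetic on series/parallel structures).
print(guaranteed_compare((0.90, 0.93), (0.80, 0.88)))  # system 1 wins
print(guaranteed_compare((0.85, 0.95), (0.88, 0.92)))  # undecided
```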
According to Knezevic [1], the purpose of the existence of any functional system is to do work; the work is done when the expected measurable function is performed through time. However, experience teaches us that expected work is frequently beset by failures, some of which have hazardous consequences for users, the natural environment, the general population and businesses. During the last sixty years, Reliability Theory has been used to create failure predictions and to identify where reductions in failures could be made throughout the life-cycle phases of maintainable systems. However, mathematically and scientifically speaking, the accuracy of these predictions was, at best, only ever valid up to the time of occurrence of the first failure, which is far from satisfactory with respect to a system's expected life. Consequently, the main objective of this paper is to raise the question of how reliable the reliability predictions of maintainable systems based on the Reliability Function are.
1997
Kaâniche is chargé de recherche at CNRS. He joined the LAAS-CNRS Fault Tolerance and Dependable Computing group in 1988. His current research interests include dependability modeling and computing systems evaluation, with a focus on software reliability growth evaluation and operational systems' security assessment. He also works on the definition of process models for the development of dependable systems. Kaâniche received a CE from the French National School of Civil Aviation and a PhD in computer science and automation from the University of Toulouse. He is a member of AFCET.