2004
In this paper, we describe an experience of dependability assessment of a typical industrial Programmable Logic Controller (PLC). The PLC is based on a two-out-of-three voting policy and is intended to be used for safety functions. Safety assessment of computer-based systems performing safety functions is regulated by standards and guidelines, all of which agree that no single method can be considered sufficient to achieve and assess safety. The paper addresses the PLC assessment by probabilistic methods to determine its dependability attributes related to the Safety Integrity Levels defined by the IEC 61508 standard. The assessment has been carried out by independent teams, starting from the same basic assumptions and data. Diverse combinatorial and state-space probabilistic modelling techniques, implemented by public tools, have been used. Even though the isolation of the teams was not formally guaranteed, the experience has highlighted several topics worth describing. First of all, the use of different modelling techniques has led to diverse models. Moreover, the models focus on different system details, partly due to the diverse skills of the teams. Slight differences in the understanding of the PLC assumptions have also occurred. In spite of all this, the numerical results of the diverse models are comparable. The experience has also allowed a comparison of the different modelling techniques as implemented by the considered public tools.
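As a rough illustration of the kind of quantity such an assessment targets (not the models used in the paper), the sketch below computes the average probability of failure on demand of an idealized 2-out-of-3 channel group from an assumed per-channel dangerous failure rate and proof-test interval, and maps it onto the IEC 61508 low-demand SIL bands. The failure rate and test interval are hypothetical.

```python
# Minimal sketch (not the paper's models): PFDavg of an idealized 2oo3 channel
# group with independent channels, constant dangerous failure rate and periodic
# proof testing; standard IEC 61508 low-demand SIL bands are used.

def pfd_avg_2oo3(lambda_d: float, proof_test_interval_h: float) -> float:
    """Simplified approximation for a 2oo3 group: PFDavg ~= (lambda_d * T)**2."""
    return (lambda_d * proof_test_interval_h) ** 2

def sil_band_low_demand(pfd_avg: float) -> str:
    """Map an average PFD to the IEC 61508 low-demand SIL bands."""
    if 1e-5 <= pfd_avg < 1e-4:
        return "SIL 4 band"
    if 1e-4 <= pfd_avg < 1e-3:
        return "SIL 3 band"
    if 1e-3 <= pfd_avg < 1e-2:
        return "SIL 2 band"
    if 1e-2 <= pfd_avg < 1e-1:
        return "SIL 1 band"
    return "outside the SIL 1-4 bands"

if __name__ == "__main__":
    lambda_d = 1e-6          # hypothetical dangerous failure rate per hour
    test_interval = 8760.0   # hypothetical one-year proof-test interval, hours
    pfd = pfd_avg_2oo3(lambda_d, test_interval)
    print(f"PFDavg ~ {pfd:.2e} -> {sil_band_low_demand(pfd)}")
```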
Proc. 12th European …, 2001
Computer-based systems devoted to the control of critical functions may incur safety and dependability problems. In the safety area a new standard is currently emerging, IEC 61508, which is intended to provide a unified framework serving as a guideline for the analysis of safety-related systems. The present paper deals with the safety and dependability analysis of a Programmable Logic Controller (PLC) according to the requirements of IEC 61508. In order to gain insight into the system characteristics and the methodologies used, different probabilistic techniques of increasing modelling power (Fault Trees (FT), Bayesian Networks (BN), Generalized Stochastic and Stochastic Well-formed Petri Nets (GSPN and SWN)) have been compared.
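To make the difference in modelling power concrete, the sketch below evaluates the top event of a toy fault tree for a triplicated channel with a 2-out-of-3 voter under the independence assumption; the structure and basic-event probabilities are hypothetical and are not taken from the compared models. State-space formalisms such as GSPN/SWN become necessary precisely when this independence assumption no longer holds.

```python
from math import comb

# Minimal sketch (hypothetical structure and probabilities): top-event
# probability of a toy fault tree "at least 2 of 3 channels failed OR voter
# failed", assuming statistically independent basic events, as a combinatorial
# fault tree analysis does.

def prob_k_out_of_n_failed(p: float, k: int, n: int) -> float:
    """Probability that at least k of n independent channels have failed."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def top_event_probability(p_channel: float, p_voter: float) -> float:
    """OR gate over the 2oo3 channel failure and the voter failure."""
    p_2oo3 = prob_k_out_of_n_failed(p_channel, k=2, n=3)
    return 1.0 - (1.0 - p_2oo3) * (1.0 - p_voter)

if __name__ == "__main__":
    print(f"P(top event) = {top_event_probability(p_channel=1e-3, p_voter=1e-5):.3e}")
```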
4th International Workshop on Soft Computing Applications, 2010
The paper evaluates the dependability of PLC systems based on assumptions about the failure modes of the system components. A model is introduced of the service provided by a system to a single user as a sequence of value-time pairs that have to be recognized by the user. The service model is then extended to the case of a single service provided to multiple users. The assertions on failure modes are ordered by means of an implication graph, where each path represents a more relaxed ordered set of assertions about the system behaviour. The probability that an assumed failure mode proves true in real system operation is formalized by the concept of assumed failure mode coverage. Its effects on system dependability are illustrated by a case study. It is demonstrated that more relaxed assertions on component failure modes do not necessarily lead to an increase in PLC system dependability.
Quality and Reliability Engineering International, 2019
The transition from analog to digital safety-critical instrumentation and control (I&C) systems has introduced new challenges for software experts seeking to deliver increased software reliability. Since the 1970s, researchers have continued to propose software reliability models for estimating the reliability of software. However, these approaches rely on failure history for the assessment of reliability, and due to insufficient failure data they fail to predict the reliability of safety-critical systems. This paper utilizes the Bayesian update methodology and proposes a framework for the reliability assessment of safety-critical systems (SCSs). The proposed methodology is validated using experiments performed on real data from 12 safety-critical control systems of nuclear power plants.
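As a plain illustration of the Bayesian update idea invoked here (not the paper's actual framework or data), the sketch below updates a Beta prior on the per-demand failure probability of a software component after observing a number of failure-free demands; the prior parameters and demand counts are hypothetical.

```python
# Minimal sketch (hypothetical prior and observations, not the paper's
# framework): conjugate Beta-Binomial update of a per-demand failure
# probability when additional demands are observed.

def beta_update(alpha: float, beta: float, failures: int, demands: int):
    """Return posterior Beta parameters after observing `failures` in `demands`."""
    return alpha + failures, beta + (demands - failures)

def beta_mean(alpha: float, beta: float) -> float:
    return alpha / (alpha + beta)

if __name__ == "__main__":
    # Hypothetical weakly informative prior on the failure probability.
    alpha, beta = 0.5, 500.0
    print(f"prior mean P(failure per demand)     = {beta_mean(alpha, beta):.2e}")

    # Hypothetical evidence: 10,000 demands observed with no failure.
    alpha, beta = beta_update(alpha, beta, failures=0, demands=10_000)
    print(f"posterior mean P(failure per demand) = {beta_mean(alpha, beta):.2e}")
```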
Petri Nets and …, 2001
The case study presented in this paper is aimed at assessing the dependability of a Programmable Logic Controller (PLC) devoted to safety functions. The case study has been brought to our attention by a national environmental agency and has been partially abstracted and anonymized to protect proprietary information. The PLC consists of a triplicated channel with a (2:3) majority voting logic and is modelled by means of a recently proposed extension of the classical Fault Tree (FT) formalism called Parametric Fault Tree (PFT). In a PFT, replicated units are folded and parameterized so that only one representative of the various similar replicas is explicitly included in the model. The quantitative analysis of the PFT assumes s-independence among components and is based on combinatorial formulas. In order to include dependencies in both the failure and the repair process, the PFT is directly converted into a particular class of High-Level Petri Nets called SWN. The paper illustrates the PFT formalism and the automatic conversion algorithm from a PFT into an SWN. Moreover, it is shown how various kinds of dependencies can be accommodated in the obtained SWN model.
2007
Dependability evaluation is an important, often indispensable, step in (critical) systems design and analysis processes. The introduction of control and/or computing systems to automate processes increases the overall system complexity and therefore has an impact in terms of dependability. When a system grows, dynamic effects, not present or not manifested before, may arise or become significant in terms of reliability/availability: the system may be affected by common cause failures, its components may interfere, and effects due to load sharing arise and therefore should be considered. Moreover, it is of interest to evaluate redundancy and maintenance policies. In these cases it is not possible to resort to notations such as reliability block diagrams (RBD), fault trees (FT) or reliability graphs (RG) to represent the system, since the statistical independence assumption is not satisfied. Even more expressive formalisms such as dynamic FT (DFT) may prove inadequate for the goal.
In Proceedings of the 15th Conference on …, 1999
Bayesian Networks (BN) provide robust probabilistic methods of reasoning under uncertainty and are now widely used in a variety of real-world tasks. Although their formal grounds are strictly based on the notion of conditional dependence, not much attention has been paid so far to the use of BN for so-called dependability analysis, i.e. the analysis of various reliability parameters in physical systems design and maintenance. The aim of this paper is to propose BN as a suitable tool for dependability analysis, by challenging the formalism with basic issues arising in dependability tasks. We discuss how both modeling and analysis issues can be naturally dealt with by BN. Moreover, we show how some limitations intrinsic to classical combinatorial dependability methods such as Fault Trees (FT) can be overcome using BN, opening new opportunities of modeling and analysis for complex systems. This is pursued through the study of a real-world example concerning the reliability analysis of a redundant digital Programmable Logic Controller (PLC) with 2:3 majority voting.
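To give a feel for how such a network can be queried, the sketch below encodes a toy Bayesian network for a triplicated PLC with a 2:3 voter and evaluates it by exhaustive enumeration; the topology and the conditional probabilities are hypothetical and are not those of the paper's example. The shared power-supply node illustrates the kind of common-cause dependency a combinatorial fault tree with independent basic events cannot express.

```python
from itertools import product

# Minimal sketch (hypothetical structure and probabilities, not the paper's
# example): a tiny Bayesian network for a 2:3-voted PLC, evaluated by brute
# force. Nodes: shared power supply PS, channels C1..C3 (each conditioned on
# PS), and system failure defined as "at least 2 of 3 channels failed".

P_PS_FAIL = 1e-4                      # hypothetical power-supply failure prob.
P_CH_FAIL = {True: 0.9, False: 1e-3}  # P(channel failed | PS failed / PS ok)

def p_system_failed() -> float:
    total = 0.0
    for ps, c1, c2, c3 in product([True, False], repeat=4):
        p = P_PS_FAIL if ps else 1.0 - P_PS_FAIL
        for c in (c1, c2, c3):
            p *= P_CH_FAIL[ps] if c else 1.0 - P_CH_FAIL[ps]
        if c1 + c2 + c3 >= 2:         # voter outvoted: at least 2 channels down
            total += p
    return total

if __name__ == "__main__":
    print(f"P(system failed) = {p_system_failed():.3e}")
```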
Reliability Engineering & System Safety, 2009
A large number of safety-critical control systems are based on N-modular redundant architectures, using majority voters on the outputs of independent computation units. In order to assess the compliance of these architectures with international safety standards, the frequency of hazardous failures must be analyzed by developing and solving proper formal models. Furthermore, the impact of maintenance faults has to be considered, since imperfect maintenance may degrade the safety integrity level of the system. In this paper, we present both a failure model for voting architectures based on Bayesian networks and a maintenance model based on continuous-time Markov chains, and we propose to combine them according to a compositional multiformalism modeling approach in order to analyze the impact of imperfect maintenance on system safety. We also show how the proposed approach promotes the reuse and interchange of models as well as the interchange of solving tools.
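As an illustration of the continuous-time Markov chain side of such a combination (the states and rates are hypothetical, not the paper's model), the sketch below solves for the steady-state availability of a single repairable unit where a repair may be imperfect and leave the unit in a degraded state.

```python
import numpy as np

# Minimal sketch (hypothetical rates, not the paper's model): steady-state
# distribution of a 3-state CTMC for a repairable unit.
# States: 0 = OK, 1 = failed, 2 = degraded (after an imperfect repair).

LAMBDA = 1e-4      # failure rate from OK, per hour
MU = 1e-1          # repair rate, per hour
P_IMPERFECT = 0.1  # probability a repair leaves the unit degraded
LAMBDA_DEG = 1e-3  # failure rate from the degraded state, per hour

# Infinitesimal generator Q: each row sums to zero.
Q = np.array([
    [-LAMBDA,                  LAMBDA,      0.0],
    [MU * (1 - P_IMPERFECT),  -MU,          MU * P_IMPERFECT],
    [0.0,                      LAMBDA_DEG, -LAMBDA_DEG],
])

# Solve pi Q = 0 with sum(pi) = 1 via least squares on the stacked system.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"steady-state availability (OK or degraded) = {pi[0] + pi[2]:.6f}")
```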
IEEE Transactions on Fuzzy Systems, 2000
The process industry has always been faced with the difficult task of determining the required integrity of safeguarding systems such as Safety Instrumented Systems (SIS). The ANSI/ISA S84.01-1996 and IEC 61508 safety standards provide guidelines for the design, installation, operation, maintenance, and testing of SIS. However, in the field there is a considerable lack of understanding of how to apply these standards to both determine and achieve the required Safety Integrity Level (SIL) for SIS. Moreover, in certain situations the SIL evaluation is further complicated by uncertainty in the reliability parameters of SIS components. This paper proposes a new approach to evaluate the "confidence" of the SIL determination when there is uncertainty about the failure rates of SIS components. The approach is based on the use of failure rates and fuzzy probabilities to evaluate the probability of failure on demand of the SIS and the SIL of the SIS. Furthermore, we provide guidance on reducing the SIL uncertainty based on fuzzy probabilistic importance measures.
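To illustrate how uncertainty in a failure rate can propagate to the PFD and hence to the SIL band (a generic alpha-cut interval computation, not the paper's specific method), the sketch below treats a dangerous failure rate as a triangular fuzzy number and evaluates the resulting fuzzy PFDavg of a single periodically tested channel at a few alpha levels; all numbers are hypothetical.

```python
# Minimal sketch (generic alpha-cut propagation with hypothetical numbers, not
# the paper's method): a triangular fuzzy failure rate propagated to the PFDavg
# of a single periodically tested channel, PFDavg ~= lambda_d * T / 2.

T_PROOF = 8760.0  # hypothetical proof-test interval, hours

def triangular_alpha_cut(low: float, mode: float, high: float, alpha: float):
    """Interval (alpha-cut) of a triangular fuzzy number at level alpha."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def pfd_interval(lambda_interval):
    lo, hi = lambda_interval
    # PFDavg is monotone in lambda_d, so endpoint arithmetic is exact here.
    return (lo * T_PROOF / 2.0, hi * T_PROOF / 2.0)

if __name__ == "__main__":
    # Hypothetical triangular fuzzy dangerous failure rate (per hour).
    low, mode, high = 5e-7, 1e-6, 5e-6
    for alpha in (0.0, 0.5, 1.0):
        lo, hi = pfd_interval(triangular_alpha_cut(low, mode, high, alpha))
        print(f"alpha={alpha:.1f}: PFDavg in [{lo:.2e}, {hi:.2e}]")
```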
IFAC Proceedings Volumes, 1991
The paper provides guidelines for the definition of reliability requirements for computerized safety shutdown systems in the process industries. The main question discussed is how to derive safety system requirements which ensure that the reliability of field devices and control logic modules is balanced from a safety point of view. Reliability figures of example safety systems are presented for various configurations of sensors, input/output cards, Central Processing Units (CPUs), and actuating elements. The figures are based on quantitative reliability analyses using a model and methodology for probabilistic safety assessment developed by SINTEF. The main new feature of this model compared to other models is that the effect of all types of failures occurring during field operation is considered in an integrated manner. Thus, failures due to excessive environmental stresses and human-made mistakes during engineering and operation are included in addition to failures due to natural aging of components (inherent failures). The effect of self-testing is also included in the model.
Providing a high availability level for the Instrumentation and Control (I&C) systems in Nuclear Power Plants (NPP) is highly important. The availability of critical NPP I&C systems depends on the reliability behaviour of their hardware and software. The high availability of I&C systems is ensured by the following measures: structural redundancy with a choice of I&C system configuration (two comparable subsystems in the I&C system, majority voting "2oo3", "2oo4", etc.); maintenance of the I&C system, which implies the repair (replacement) of non-operational modules; use of N-version programming; software updates; and automatic software restart after temporary interrupts caused by hardware faults. This paper addresses the following case: the configuration of a fault-tolerant I&C system with known hardware reliability indexes (failure rate and temporary failure rate) is chosen, and the hardware maintenance strategy (mean time to repair, number of repairs) is specified. In these circumstances it is important to determine quantitative requirements for software reliability: the number of software updates during I&C system operation; the acceptable duration of new software version development; the acceptable duration of the automatic software restart; and the acceptable failure rate for each software version. The values of the operational software parameters are determined for the specified availability level of the I&C system. The planned number of software updates determines the duration of testing needed to identify and correct design faults. The duration of software testing is limited to the moment when the prediction model shows a specified number of hidden (undetected) design faults. To address this issue, the availability model of the fault-tolerant I&C system was developed in the form of a discrete-continuous stochastic system. We have estimated the influence of the I&C system on the operational software parameters. Two configurations of I&C systems are presented in this paper: two comparable subsystems in the I&C system, and an I&C system with majority voting "2oo3".
Lecture Notes in Computer Science, 2005
Analytical and simulative modeling for dependability and performance evaluation has proven to be a useful and versatile approach in all phases of the system life cycle. Indeed, a widely used approach in performability modeling is to describe the system by state-space models (such as Markov models). However, for large systems the state space of the system model may become extremely large, making its solution difficult both with an analytical and with a simulation approach. Taking advantage of the characteristics of a particular class of systems, this paper develops a methodology to construct an efficient, scalable and easily maintainable architectural model for such a class, especially tailored to dependability analysis and evaluation. Although limited in its applicability, the proposed methodology proves very attractive because of its ability to master complexity, both in the design phase of the model and, then, in its solution. To better illustrate the proposed methodology, we also offer a case study, selected from the class of systems our work is directed to.
2009 35th Annual Conference of IEEE Industrial Electronics, 2009
The programmable logic controller is becoming the most important device adopted for controlling production systems classified as safety-related. The main reason for this is associated with advances in technology that improve the reliability of the hardware and software components of such a controller. While in the hardware context the increase in reliability is attained by using electronic components with redundancy, diversity, and a low probability of failure on demand, for safety-related software these advances mostly depend on the use of techniques and procedures to reduce or eliminate design errors in control programs. In this direction IEC 61508 was established, a worldwide recognized reference in functional safety that significantly contributes to the aforementioned advances. This work benefits from the recommendations of the standard and proposes an extension of our previous approach, in which PLC control programs written in LD are modeled as extended finite state machines that are afterwards formally verified. From this verification process it is possible to identify functional errors in these machines and, consequently, the related errors in the control programs.
2012
This paper introduces a new approach to the reliability estimation of control system software. First, we provide the basic starting points and definitions required for understanding the approach. Next, we define the main parameters and variables necessary for reliability estimation. The last section deals with the model for assessing the reliability of control system software.
2015
The reliability of software is one of the most important attributes of software quality, and estimating it accurately is a hard problem. Nevertheless, in order to manage the quality of the software and of the standard practices in an organization, it is important to achieve an estimation of reliability that is as accurate as possible. The present work describes the principles and techniques which underlie the estimation of software reliability, starting from the definition of the concepts which express the attributes of software quality. The issue of estimating a part of the software is taken into account. The presumed objective of the reliability estimation is the analysis of the risk and of the reliability of software-based systems. Supposedly, a documented expert opinion exists regarding the reliability of the software and an update of the defined estimation of the reliab...
Nuclear Engineering and Technology
To assess the risk of nuclear power plant operation and to determine the risk impact of digital systems, there is a need to quantitatively assess the reliability of the digital systems in a justifiable manner. Probabilistic Risk Analysis (PRA) is a tool which can reveal shortcomings of the NPP design in general, but PRA analysts have not had sufficient guiding principles for modelling the malfunctions of particular digital components. Currently, digital I&C systems are mostly analyzed simply and conventionally in PRA, based on failure mode and effects analysis and fault tree modelling. More dynamic approaches are still at the trial stage and can be difficult to apply in full-scale PRA models. CPU failures, application software failures and common cause failures (CCF) between identical components are modelled as basic events. The primary goal is to model dependencies. However, it is not clear for which failure modes or system parts CCFs should be postulated. A clear distinction can be made...
Quality and Reliability Engineering International, 1993
Various models which may be used for the quantitative assessment of hardware, software and human reliability are compared in this paper. Important comparison criteria are the system life cycle phase in which the model is intended to be used, the failure category and reliability means considered in the model, the model purpose, and model characteristics such as the model construction approach, model output and model input. The main objective is to present limitations in the use of current models for the reliability assessment of computer-based safety shutdown systems in the process industry and to provide recommendations for further model development. Attention is mainly given to presenting the overall concept of the various models from a user's point of view rather than the technical details of specific models. A new failure classification scheme is proposed which shows how hardware and software failures may be modelled in a common framework.
Nuclear Engineering and Design, 2008
In recent times, computer-based systems are frequently used for the protection and control of Nuclear Power Plants (NPPs). In conventional Probabilistic Safety Assessment (PSA), the contribution from software in these computer-based systems was not given the necessary attention. However, operating experience has shown that failures in such systems can also result in initiating events that have the potential of leading to core damage if the respective Engineered Safety Features are unavailable. The impact of a typical computer-based system on the PSA of an Indian Nuclear Power Plant is demonstrated.
Brazilian Journal of Radiation Sciences, 2021
Safety analysis uses combinatorial probability models such as fault trees and/or event trees. Such methods have static basic events and do not consider complex scenarios of dynamic reliability, leading to conservative results. Reliability, availability, and maintainability (RAM) analysis using reliability block diagrams (RBD) experiences the same limitations. Continuous-time Markov chains model dynamic reliability scenarios but suffer from other limitations, such as state explosion and the restriction to exponential life distributions. Markov Regenerative Stochastic Petri Nets require a complex mathematical formalism and are still subject to state explosion for large systems. In the design of complex systems, distinct teams perform the safety and RAM analyses, each adopting the tools that best fit their own needs. The use of different tools by different teams obscures the detection of problems and makes their correction even harder. This work aims to improve design quality, reduce design conservatism, and ensure consistency by proposing a single and powerful tool to perform any probabilistic analysis. The suggested tool is the Stochastic Colored class of Petri Nets, which supplies hierarchical organization, a set of options for life distributions, dynamic reliability scenarios, and simple and easy construction of large systems. This work also proposes additional quality rules to assure model consistency. Such a method for probabilistic analysis may have the effect of shifting systems design from a "redundancy, segregation and independency" approach to a "maintainability, maintenance and contingency procedures" approach. By modeling complex human and automated interventional scenarios, this method reduces capital costs and keeps systems safe and available.
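To make the contrast with static fault trees concrete (this is a generic Monte Carlo sketch, not the stochastic colored Petri net tool proposed in the paper), the example below simulates a two-unit standby pair with Weibull lifetimes and a finite repair time, a scenario that neither a static FT/RBD nor an exponential-only Markov chain captures directly; all parameters are hypothetical.

```python
import random

# Minimal sketch (hypothetical parameters, not the paper's Petri-net tool):
# Monte Carlo unavailability of a standby pair with Weibull lifetimes and a
# deterministic repair time, over a fixed mission.

MISSION_H = 8760.0           # mission time, hours
SHAPE, SCALE = 1.5, 20000.0  # Weibull life distribution of each unit
REPAIR_H = 72.0              # deterministic repair duration, hours

def weibull_life() -> float:
    return random.weibullvariate(SCALE, SHAPE)

def downtime_one_history() -> float:
    """Hours during which both units are unavailable in one simulated history."""
    t, down = 0.0, 0.0
    while t < MISSION_H:
        t += weibull_life()                    # primary runs until it fails
        if t >= MISSION_H:
            break
        repair_end = min(t + REPAIR_H, MISSION_H)
        standby_fail = t + weibull_life()      # standby covers the repair window
        if standby_fail < repair_end:          # both units down until repair ends
            down += repair_end - standby_fail
        t = repair_end                         # primary back, cycle restarts
    return down

if __name__ == "__main__":
    runs = 20_000
    mean_down = sum(downtime_one_history() for _ in range(runs)) / runs
    print(f"estimated mean unavailability ~ {mean_down / MISSION_H:.2e}")
```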
2012
Safety Instrumented Systems (SIS) are designed to prevent and/or mitigate accidents, avoiding undesirable high-potential-risk scenarios, assuring protection of people's health, protecting the environment and saving costs of industrial equipment. Standards such as ANSI/ISA S84.01, IEC 61508 and IEC 61511, among others, guide the different activities related to the Safety Life Cycle (SLC) design of SIS: mathematical methods are strongly recommended for achieving the desired safety integrity level (SIL). In this context, this paper considers control algorithm development and validation and proposes a mathematical method for modeling SIS, including diagnostic and treatment of critical faults, based on Bayesian networks (BN) and Petri nets (PN). This approach considers diagnostic and treatment for each safety instrumented function (SIF), including hazard and operability (HAZOP) studies of the equipment or system under control. It also uses a Bayesian network (BN) and a Behavioral Petri net (BPN) for diagnosis and decision-making, and an interpreted Petri net (PN) for the synthesis, modeling and control to be implemented by a safety PLC as a layer of risk reduction separated from the Basic Process Control System (BPCS). Finally, a case study of a natural gas compression station considering diagnostic and treatment of critical faults is presented.