2010, Proceedings of the IEEE/ …
Integrating model verification and self-adaptation presents a new paradigm for enhancing the reliability of autonomous systems. This paper discusses a hybrid neural-symbolic system framework that utilizes model checking alongside adaptive learning to improve system models according to specified properties. Through a case study involving a pump system, the framework demonstrates the capability to adapt to various scenarios and optimize verification processes, addressing the challenges posed by noise and incorrect knowledge in real-world applications.
Application and Theory of Petri Nets and Concurrency, 2018
Model checking is becoming a popular verification method that still suffers from combinatorial explosion when used on large industrial systems. Currently, experts can, in some cases, overcome this complexity by selecting appropriate modeling and verification techniques, as well as an adapted representation of the system. Unfortunately, this cannot yet be done automatically, thus hindering the use of model checking in industry. The objective of this paper is to sketch a way to tackle this problem by introducing self-adaptive model checking. This is a long term goal that could lead the community to elaborate a new generation of model checkers able to successfully push forwards the scale of the systems they can deal with.
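The combinatorial explosion mentioned above can be illustrated with a minimal explicit-state reachability search. The toy system below (independent boolean flags, each step flips one flag) is a hypothetical example, not taken from the paper: the reachable state space grows as 2^n in the number of flags n, which is exactly what overwhelms exhaustive exploration on large systems.

```python
from collections import deque

def reachable_states(initial, successors):
    """Breadth-first exploration of a transition system's state space."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Toy system: n independent boolean flags; each step may flip any one flag.
def make_successors(n):
    def successors(state):
        return [state[:i] + (not state[i],) + state[i + 1:] for i in range(n)]
    return successors

for n in (4, 8, 12):
    count = len(reachable_states((False,) * n, make_successors(n)))
    print(n, count)  # the count doubles with every added flag
```

A self-adaptive model checker, as sketched in the paper, would choose representations and reduction techniques automatically rather than exploring such spaces naively.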
Communications of the ACM, 2010
The increasing popularity of model-based development and the growing power of model checkers are making it practical to use formal verification for important classes of software designs. A barrier to doing this in an industrial setting has been the need to translate the commercial modeling languages developers use into the input languages of the verification tools. This paper describes a translator framework that enables the use of model checking and theorem proving on complex avionics systems and describes its use in three industrial case studies.
Lecture Notes in Computer Science, 2004
In the classic approach to logic model checking, software verification requires a manually constructed artifact (the model) to be written in the language that is accepted by the model checker. The construction of such a model typically requires good knowledge of both the application being verified and of the capabilities of the model checker that is used for the verification. Inadequate knowledge of the model checker can limit the scope of verification that can be performed; inadequate knowledge of the application can undermine the validity of the verification experiment itself. In this paper we explore a different approach to software verification. With this approach, a software application can be included, without substantial change, into a verification test-harness and then verified directly, while preserving the ability to apply data abstraction techniques. Only the test-harness is written in the language of the model checker. The test-harness is used to drive the application through all its relevant states, while logical properties on its execution are checked by the model checker. To allow the model checker to track state, and avoid duplicate work, the test-harness includes definitions of all data objects in the application that contain state information. The main objective of this paper is to introduce a powerful extension of the SPIN model checker that allows the user to directly define data abstractions in the logic verification of application-level programs.
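The core idea of user-defined data abstraction, deduplicating exploration on an abstraction of the state rather than the full concrete state, can be sketched in a few lines. This is an illustrative reconstruction of the principle, not the SPIN extension's actual API; the counter example is hypothetical:

```python
from collections import deque

def abstract_reachable(initial, successors, abstraction):
    """Explore concrete states, but deduplicate on their abstraction,
    so states the property cannot distinguish are visited only once."""
    seen = {abstraction(initial)}
    frontier = deque([initial])
    explored = 0
    while frontier:
        state = frontier.popleft()
        explored += 1
        for nxt in successors(state):
            a = abstraction(nxt)
            if a not in seen:
                seen.add(a)
                frontier.append(nxt)
    return explored

# Toy example: a counter cycling through 0..999. If the property only
# cares about parity, abstracting the value to (value % 2) collapses
# the 1000 concrete states to 2 abstract ones.
succ = lambda s: [(s + 1) % 1000]
full = abstract_reachable(0, succ, abstraction=lambda s: s)        # 1000
parity = abstract_reachable(0, succ, abstraction=lambda s: s % 2)  # 2
print(full, parity)
```

The soundness of such an abstraction depends, of course, on the property being insensitive to the details the abstraction discards.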
Software Testing, Verification and Reliability, 2001
To formally verify a large software application, the standard method is to invest a considerable amount of time and expertise into the manual construction of an abstract model, which is then analyzed for its properties by either a mechanized or a human prover. There are two main problems with this approach. The first problem is that this verification method can be no more reliable than the humans that perform the manual steps. If the rate of error in human work is a function of problem size, this holds not only for the construction of the original application, but also for the construction of the model. This means that the verification process tends to become unreliable for larger applications. The second problem is one of timing and relevance. Software applications built by teams of programmers can change rapidly, often daily. Manually constructing a faithful abstraction of any one version of the application, though, can take weeks or months. The results of a verification, then, can quickly become irrelevant to an ongoing design effort. In this paper we sketch a verification method that aims to avoid these problems. This method, based on automated model extraction, was first applied in the verification of the call processing software for a new Lucent Technologies' system called PathStar. (Invited paper to the FORTE/PSTV Conference, October 1999, Beijing, China.)
IEEE Transactions on Software Engineering, 2002
Software verification methods are used only sparingly in industrial software development today. The most successful methods are based on the use of model checking. There are, however, many hurdles to overcome before the use of model checking tools can truly become mainstream. To use a model checker, the user must first define a formal model of the application, and to do so requires specialized knowledge of both the application and of model checking techniques. For larger applications, the effort to manually construct a formal model can take a considerable investment of time and expertise, which can rarely be afforded. Worse, it is hard to ensure that a manually constructed model can keep pace with the typical software application, as it evolves from the concept stage to the product stage. In this paper, we describe a verification method that requires far less specialized knowledge in model construction. It allows us to extract models mechanically from source code. The model construction process now becomes easily repeatable, as the application itself continues to evolve. Once the model is constructed, existing model checking techniques allow us to perform all checks in a mechanical fashion, achieving nearly complete automation. The level of thoroughness that can be achieved with this new type of software testing is significantly greater than for conventional techniques. We report on the application of this method in the verification of the call processing software for a new telephone switch that was recently developed at Lucent Technologies.
Acknowledgments During my dissertation research, many people have offered their generous help. Without their help, I could never have finished this dissertation. I am sincerely grateful to all of them. My advisor, Prof. James C. Browne, has been the most important person to my dissertation research. He has offered me all the guidance and help that I
CIB-W078 Conference in Cairo, 2010
This paper presents an overview of concepts for model checking. This has resulted in a description of four different concepts based on the intention for checking. The four concepts are: a) Validating systems, b) Guiding systems, c) Adaptive systems and d) Content-based checking. Using an ontological approach, we propose a four-level taxonomy of model checking: 1) Intention, 2) Result, 3) Rule set and 4) Type of products. Model checking should be regarded as a knowledge system for support of the design process.
2006
Model checking has proven to be an effective technology for verification and debugging in hardware and more recently in software domains. With the proliferation of multicore architectures and a greater emphasis on distributed computing, model checking is an increasingly important software quality assurance technique that can complement existing testing and inspection methods.
IEEE Transactions on Software Engineering, 1998
In this paper, we present our experiences in using symbolic model checking to analyze a specification of a software system for aircraft collision avoidance. Symbolic model checking has been highly successful when applied to hardware systems. We are interested in whether model checking can be effectively applied to large software specifications. To investigate this, we translated a portion of the state-based system requirements specification of Traffic Alert and Collision Avoidance System II (TCAS II) into input to a symbolic model checker (SMV). We successfully used the symbolic model checker to analyze a number of properties of the system. We report on our experiences, describing our approach to translating the specification to the SMV language, explaining our methods for achieving acceptable performance, and giving a summary of the properties analyzed. Based on our experiences, we discuss the possibility of using model checking to aid specification development by iteratively applying the technique early in the development cycle. We consider the paper to be a data point for optimism about the potential for more widespread application of model checking to software systems.
Logic Journal of IGPL, 2006
We consider the case where inconsistencies are present between a system and its corresponding model, used for automatic verification. Such inconsistencies can be the result of modeling errors or recent modifications of the system. Despite such discrepancies we can still attempt to perform automatic verification. In fact, as we show, we can sometimes exploit the verification results to assist in automatically learning the required updates to the model. In a related previous work, we have suggested the idea of black box checking, where verification starts without any model, and the model is obtained while repeated verification attempts are performed. Under the current assumptions, an existing inaccurate (but not completely obsolete) model is used to expedite the updates. We use techniques from black box testing and machine learning. We present an implementation of the proposed methodology called AMC (for Adaptive Model Checking). We discuss some experimental results, comparing various tactics of updating a model while trying to perform model checking.
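The adaptive idea of exploiting discrepancies between observed system behavior and an inaccurate model to repair that model can be sketched minimally. This is a simplified illustration under our own assumptions (a deterministic model as a nested dict, fully labeled observation traces), not the AMC tool's actual learning algorithm, which uses black box testing and automata-learning techniques:

```python
def adapt_model(model, observed_traces):
    """Update a model (dict: state -> {action: next_state}) so it agrees
    with every observed system trace; return the number of transitions
    that had to be learned or repaired."""
    updates = 0
    for trace in observed_traces:
        state = "s0"
        for action, next_state in trace:
            known = model.setdefault(state, {})
            if known.get(action) != next_state:
                # Discrepancy: the real system did something the model
                # does not predict. Learn the transition instead of
                # declaring the verification attempt a failure.
                known[action] = next_state
                updates += 1
            state = next_state
    return updates

# Inaccurate (but not obsolete) model: it believes 'stop' from s1
# returns to s0, while the observed system actually moves to s2.
model = {"s0": {"go": "s1"}, "s1": {"stop": "s0"}}
trace = [("go", "s1"), ("stop", "s2")]
print(adapt_model(model, [trace]))  # one repaired transition
```

In the paper's setting the learned updates feed back into repeated model checking attempts, so verification and model repair interleave rather than run once.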
2005
Model checking has proven to be an effective technology for verification and debugging in hardware and more recently in software domains. We believe that recent trends in both the requirements for software systems and the processes by which systems are developed suggest that domain-specific model checking engines may be more effective than general purpose model checking tools. To overcome limitations of existing tools which tend to be monolithic and non-extensible, we have developed an extensible and customizable model checking framework called Bogor. In this tool paper, we summarize (a) Bogor’s direct support for modeling object-oriented designs and implementations, (b) its facilities for extending and customizing its modeling language and algorithms to create domain-specific model checking engines, and (c) pedagogical materials that we have developed to describe the construction of model checking tools built on top of the Bogor infrastructure.
2010
Model checking techniques have been applied widely for verifying hardware designs and protocols, since they can check whether the system operates as desired without actually running the system. Recently, the use of model checking for software verification has also been increasingly considered. One notable advantage of the model checking approach is its ability to produce a counter-example when an undesired problem is detected. However, model checking also suffers from two prominent disadvantages: (i) the state explosion problem with non-trivial input spaces and (ii) an over-specific, model-based representation of verification results. In this paper, we propose a framework known as MAFSE (Model-bAsed Framework for Software vErification) which makes full use of model checking capability for verifying software programs while overcoming those typical drawbacks by applying appropriate methods. Our framework has been tested on some lab-scale data and is promising for application in industrial software engineering.
Automated Software Engineering, 2008
Model checkers were originally developed to support the formal verification of high-level design models of distributed system designs. Over the years, they have become unmatched in precision and performance in this domain. Research in model checking has meanwhile moved towards methods that allow us to reason also about implementation level artifacts (e.g., software code) directly, instead of hand-crafted representations of those artifacts. This does not mean that there is no longer a place for the use of high-level models, but it does mean that such models are used in a different way today. In the approach that we describe here, high-level models are used to represent the environment for which the code is to be verified, but not the application itself. The code of the application is now executed as is by the model checker, while using powerful forms of abstraction on-the-fly to build the abstract state space that guides the verification process. This model-driven code checking method allows us to verify implementation level code efficiently for high-level safety and liveness properties. In this paper, we give an overview of the methodology that supports this new paradigm of code verification.
Electronic Notes in Theoretical Computer Science, 2005
We propose a new development scheme for quality-aware applications, quality-driven development (QDD), based on the Model-Driven Architecture (MDA) of OMG. We argue that software development in areas, such as real-time systems, should not only rely on code verification, but also on design verification, and show that a slightly extended MDA process offers the opportunity to integrate system development together with design verification.