2003, Computer Communications
Reliable protocols require early-stage validation and testing. Because of the state explosion problem in validation methods such as model checking [IEEE Trans. Software Engng 19 (1993) 24], it is sometimes impossible to test all system states. We apply our algorithm for computing the most critical states and branches to be tested, and we use these priorities to guide the validation of the protocol. We implemented this technology in a tool that visualizes protocol specifications together with their testing priorities. The tool can also be used to locate faults in the protocol when tests fail, indicating where in the protocol bugs are most likely to be found. It offers several benefits, including (1) early detection and recovery of protocol faults, (2) visualization and simulation of the protocol specifications, (3) quantification of confidence in protocol reliability, (4) making code generation directly from protocol specifications more practical, and (5) reduction of the number of introduced faults. This paper considers the case in which the protocol specification is given in the Specification and Description Language (an International Telecommunication Union standard). Our technique is based on both the control flow and the data flow of the specification. It first generates a control flow diagram from the specification and then automatically analyses the coverage features of the diagram. Data collected during simulation are mapped onto the control flow diagram, and coverage information for the original specification is then derived from the coverage information of the diagram.
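A minimal sketch of the kind of coverage prioritization this abstract describes, assuming a hypothetical control-flow graph extracted from an SDL specification; the ranking heuristic used here (how many nodes become reachable once a branch is taken) is illustrative only, not the paper's actual algorithm:

```python
# Sketch (not the paper's algorithm): rank the branches of a control-flow
# graph by how much of the graph each branch "unlocks", so the highest-ranked
# branches are exercised first during validation.
from collections import deque

def reachable_from(graph, start):
    """Return the set of nodes reachable from `start` in `graph`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for succ in graph.get(node, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

def rank_branches(graph):
    """Give each edge a priority equal to the number of nodes reachable
    once that edge has been taken; higher means more critical to cover."""
    scores = {}
    for src, succs in graph.items():
        for dst in succs:
            scores[(src, dst)] = len(reachable_from(graph, dst))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical control-flow graph for a small connection-handling protocol.
cfg = {
    "idle": ["wait_conn"],
    "wait_conn": ["connected", "idle"],
    "connected": ["sending", "idle"],
    "sending": ["connected"],
}
for (src, dst), score in rank_branches(cfg):
    print(f"{src} -> {dst}: priority {score}")
```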
1990
This paper studies the four basic types of algorithm that, over the last ten years, have been developed for the automated validation of the logical consistency of data communication protocols. The algorithms are compared on memory usage, CPU time requirements, and the quality, or coverage, of the search for errors.
2010
Fault-injection (FI) based techniques for the dependability assessment of distributed protocols face limitations in state-space coverage and incur high operational cost. This is mainly due to the lack of complete knowledge of the fault distribution at the protocol level, which in turn limits the use of statistical approaches for deriving and estimating the number of test cases to inject. In practice, formal techniques have been used effectively to prove the correctness of dependable distributed protocols, but these techniques have traditionally not been directly associated with experimental validation techniques such as FI-based testing. A gap therefore exists between these two well-established approaches, namely formal verification and FI-based validation. If an approach can exploit the rich information about protocol operation generated during formal verification to guide FI-based validation, the overall effectiveness of such validation can be greatly improved. With this viewpoint, we propose a methodology that uses theorem proving as the underlying formal engine and comprises two novel structured and graphical representation schemes (interactive user interfaces) for (a) capturing and visualizing information generated during the formal verification process, (b) facilitating interactive analysis through the chosen formal engine (any theorem-proving tool) and a database, and (c) user-guided identification of influential parameters, which are then used to generate test cases for FI-based testing. A case study of an on-line diagnosis protocol illustrates and establishes the viability of the proposed methodology.
IEEE/ACM Transactions on Networking, 1994
A theoretical analysis of the fault coverage of conformance test sequences for communication protocols specified as finite state machines (FSMs) is presented. Faults of different types are considered and their effect on testing is analyzed. The interaction between faults of different categories and its impact on conformance testing is investigated. Fault coverage is defined both for testing of incompletely specified machines and for testing of completely specified machines. An algorithm is presented to generate test sequences with maximal fault coverage for incompletely specified machines; it is then augmented for completely specified machines, and finally a technique is presented for generating test sequences with guaranteed maximal fault coverage for conformance testing of communication protocols.
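To make the fault model concrete, here is a hedged sketch of mutation-style fault-coverage evaluation for a Mealy machine: single output faults and single transfer faults are injected into a hypothetical two-state specification, and a candidate test sequence is scored by how many mutants it distinguishes. This illustrates the fault categories discussed, not the paper's generation algorithm:

```python
# Sketch: score a test sequence by the fraction of single-fault mutants
# (output faults and transfer faults) of a specification FSM it detects.
from copy import deepcopy

# spec[(state, input)] = (next_state, output); a hypothetical two-state machine.
SPEC = {("s0", "a"): ("s1", "1"), ("s0", "b"): ("s0", "0"),
        ("s1", "a"): ("s0", "0"), ("s1", "b"): ("s1", "1")}
STATES, OUTPUTS = {"s0", "s1"}, {"0", "1"}

def run(machine, seq, start="s0"):
    """Apply an input sequence and return the output sequence produced."""
    state, outs = start, []
    for x in seq:
        state, out = machine[(state, x)]
        outs.append(out)
    return outs

def mutants(spec):
    """Yield every machine differing from spec by one output or one transfer fault."""
    for key, (nxt, out) in spec.items():
        for new_out in OUTPUTS - {out}:          # output fault
            m = deepcopy(spec); m[key] = (nxt, new_out); yield m
        for new_nxt in STATES - {nxt}:           # transfer fault
            m = deepcopy(spec); m[key] = (new_nxt, out); yield m

def coverage(test_seq):
    ms = list(mutants(SPEC))
    expected = run(SPEC, test_seq)
    killed = sum(run(m, test_seq) != expected for m in ms)
    return killed, len(ms)

killed, total = coverage(["a", "b", "a", "a", "b"])
print(f"fault coverage: {killed}/{total} mutants detected")
```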
Computer Aided Verification, 1996
Computer Networks, Architecture and Applications, 1995
Testing is an integral part of the protocol development cycle. In this paper, we briefly discuss the protocol conformance testing methodologies and framework proposed by the International Standards Organization. Many efficient test sequence generation methods have been proposed to check the conformance of a protocol implementation to its standard; we discuss these methods briefly. Finally, we compare the different test methodologies based on their fault coverage and the length of the test sequences they produce.
The development of communications systems demands testing. This paper presents a framework for testing on-the-fly, which relies on the definition of three types of tests and on their sequential execution. The ioco conformance relation was considered in order to assign verdicts.
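A minimal sketch of the ioco idea behind such verdicts, using hypothetical toy transition systems: after a trace, every output (or quiescence) the implementation can produce must also be allowed by the specification. Full ioco is defined over suspension traces; this is a simplified illustration, not the paper's framework:

```python
# Sketch of an ioco-style verdict: compare the outputs an implementation can
# produce after a trace with the outputs the specification allows.
def outputs_after(lts, trace, outputs):
    """Outputs (or quiescence 'delta') the transition system allows after `trace`."""
    states = {lts["init"]}
    for action in trace:
        states = {t for s in states for (a, t) in lts["trans"].get(s, []) if a == action}
    outs = {a for s in states for (a, _) in lts["trans"].get(s, []) if a in outputs}
    if any(not any(a in outputs for (a, _) in lts["trans"].get(s, [])) for s in states):
        outs.add("delta")  # some reachable state is quiescent (offers no output)
    return outs

def ioco_verdict(spec, impl, trace, outputs):
    """'pass' if the implementation's outputs after `trace` are included in the spec's."""
    return "pass" if outputs_after(impl, trace, outputs) <= outputs_after(spec, trace, outputs) else "fail"

OUT = {"ok!", "err!"}
SPEC = {"init": 0, "trans": {0: [("req?", 1)], 1: [("ok!", 0)]}}
IMPL = {"init": 0, "trans": {0: [("req?", 1)], 1: [("err!", 0)]}}
print(ioco_verdict(SPEC, IMPL, ["req?"], OUT))  # -> fail: err! is not allowed by the spec
```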
1993
As part of my studies at the Ecole Polytechnique d'Eindhoven, a final-year internship is required to put my engineering training into practice. I spent the first month (December 1992) at Philips Research Laboratories in Eindhoven, and the other eight months (January 1993 until
ACM SIGCOMM Computer Communication Review, 1988
This paper presents results on the application of four protocol test sequence generation techniques (the T-, U-, D-, and W-methods) to the NBS Class 4 transport protocol (TP4). The ability of a test sequence to decide whether a protocol implementation conforms to its specification depends on the range of faults it can capture. The study shows that a test sequence produced by the T-method has poor fault detection capability, whereas test sequences produced by the U-, D-, and W-methods have comparable fault coverage (superior to that of the T-method) on several classes of randomly generated machines. The lengths of the test sequences produced by the four methods differ: the T-method yields the shortest sequences and the W-method the longest, while U-method sequences are shorter than D-method sequences, with both falling between the T- and W-method lengths.
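As a rough illustration of the T-method named above, the sketch below builds a transition tour over a hypothetical FSM: an input sequence that covers every transition at least once. It also hints at why T-method sequences are short but weak against transfer faults, since only outputs, not end states, are checked. This is a greedy toy construction, not the paper's procedure:

```python
# Sketch of the T-method idea: construct an input sequence (transition tour)
# that exercises every transition of the specification FSM at least once.
from collections import deque

FSM = {("s0", "a"): ("s1", "x"), ("s0", "b"): ("s0", "y"),
       ("s1", "a"): ("s0", "y"), ("s1", "b"): ("s1", "x")}

def path_to_uncovered(state, covered):
    """BFS over states: input sequence leading to the nearest uncovered transition."""
    queue, seen = deque([(state, [])]), {state}
    while queue:
        s, path = queue.popleft()
        for (st, inp), (nxt, _) in FSM.items():
            if st != s:
                continue
            if (st, inp) not in covered:
                return path + [inp]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [inp]))
    return None  # remaining uncovered transitions are unreachable

def transition_tour(start="s0"):
    state, covered, tour = start, set(), []
    while len(covered) < len(FSM):
        path = path_to_uncovered(state, covered)
        for inp in path:
            covered.add((state, inp))
            state = FSM[(state, inp)][0]
        tour.extend(path)
    return tour

print(transition_tour())  # e.g. ['a', 'a', 'b', 'a', 'b'], covering all four transitions
```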
IEEE Transactions on Software Engineering, 2000
SPANNER is a software package for the specification, analysis, and evaluation of protocols. It is based on a mathematical model of coordinating processes called the selection/resolution model. SPANNER presently comprises three modules. The parser module checks a formal specification (in the SPANNER specification language) for syntactic correctness. The reachable graph module generates a database that consists of reachable states, transitions, and other information useful for analysis. The analysis module, with a user-friendly interface, allows a user to query the database interactively and evaluate the behavior of the protocol. This paper discusses the selection/resolution model, describes the specification language, and shows how SPANNER can be used for the development and analysis of protocols.
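A minimal sketch of what a reachable-graph module does, assuming a hypothetical two-process sender/receiver system rather than SPANNER's own specification language: breadth-first exploration from the initial global state, recording every reachable state and transition for later querying:

```python
# Sketch: build a database of reachable global states and transitions by
# breadth-first exploration from the initial state.
from collections import deque

def successors(state):
    """Global state = (sender, receiver, channel). Returns (label, next_state) pairs."""
    sender, receiver, chan = state
    moves = []
    if sender == "ready" and chan is None:
        moves.append(("send_msg", ("waiting", receiver, "msg")))
    if receiver == "idle" and chan == "msg":
        moves.append(("recv_msg", (sender, "got", None)))
    if sender == "waiting" and receiver == "got":
        moves.append(("ack", ("ready", "idle", None)))
    return moves

def reachability_graph(init):
    states, transitions, queue = {init}, [], deque([init])
    while queue:
        s = queue.popleft()
        for label, t in successors(s):
            transitions.append((s, label, t))
            if t not in states:
                states.add(t)
                queue.append(t)
    return states, transitions

states, transitions = reachability_graph(("ready", "idle", None))
print(f"{len(states)} reachable states, {len(transitions)} transitions")
```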
2012
In the world of designing network protocols, verification is a crucial step to eliminate weaknesses and inaccuracies of effective network protocols. There are many models and tools to verify network protocols, including, Finite State Machines (FSM), Colored Petri Nets (CP-Nets), Temporal Logic, Predicate Logic, Estelle Specification, Path based Approach etc. This paper presents a survey of various techniques for verifying correctness properties of communications protocol
2008
This paper describes two major steps in model-based system design and implementation: 1) the process involved in converting a text-based system specification into a UML-compliant, graphical statechart, and 2) the use of automatic code generation tools to convert the statechart into a C or C++ implementation. We also describe how to use the graphical, interactive "test harness" to test the behavior of the statechart's generated code, a very useful tool for system (protocol) design refinement. Finally, we describe how to automatically generate a Promela version of the statechart model that can be verified using the SPIN model checker. Throughout the paper, we focus on how these tools can be used to make the communications protocol development process more streamlined and reliable.
2006
It is not likely that many traveling salesmen can be discouraged from their job by a lecture on its complexity. Not surprisingly, writers of automated protocol analyzers are much the same. The problem of determining whether an arbitrary message passing system contains deadlocks is PSPACE-complete at best (for bounded queue lengths). Yet for any given formal analysis model it is easy to derive state space exploration routines that can find such errors with certainty, given a sufficient amount of time and space. In practice, therefore, one of the main problems is to optimize the speed and memory usage of an automated validator. To phrase it differently: it is not hard to validate protocols, it is hard to do it (sufficiently) fast. In reachability analyses, the limits of what can be analyzed in practice can be moved substantially if the traditional finite state machine model is abandoned. To illustrate this, we introduce a simple symbolic execution method based on vector addition. It is extended into a full protocol validator, carefully avoiding known performance bottlenecks. Compared with previous methods, this validator is roughly two orders of magnitude faster and allows validation of protocol systems of up to 10^6 states in only minutes of CPU time on a medium-size computer.
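The following is a hedged toy illustration, not the validator described in the paper: a state space explored over a vector-addition model, where the global state is a vector of counters, each transition adds a delta vector when its guard is satisfied, and states with no enabled transition are reported as deadlocks:

```python
# Sketch: explicit-state exploration of a toy vector-addition model with
# deadlock detection (states that enable no transition).
from collections import deque

# Each transition: (name, guard, delta) over the counter vector
# (sender_ready, sender_waiting, channel_full, receiver_idle, receiver_busy).
TRANSITIONS = [
    ("send", (1, 0, 0, 0, 0), (-1, 1, 1, 0, 0)),
    ("recv", (0, 0, 1, 1, 0), (0, 0, -1, -1, 1)),
    ("ack",  (0, 1, 0, 0, 1), (1, -1, 0, 1, -1)),
]

def enabled(state, guard):
    return all(s >= g for s, g in zip(state, guard))

def explore(init):
    seen, queue, deadlocks = {init}, deque([init]), []
    while queue:
        state = queue.popleft()
        succs = [tuple(s + d for s, d in zip(state, delta))
                 for _, guard, delta in TRANSITIONS if enabled(state, guard)]
        if not succs:
            deadlocks.append(state)
        for nxt in succs:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen, deadlocks

states, deadlocks = explore((1, 0, 0, 1, 0))
print(f"{len(states)} states explored, deadlocks: {deadlocks or 'none'}")
```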
Computer Networks, 1999
In this paper we give an introduction to methods and tools for testing communication protocols and distributed systems. In this context, we try to answer the following questions: Why are we testing? What are we testing? Against what are we testing?... We present the different approaches to test automation and explain the industrial point of view (automatic test execution) and the research point of view (automatic test generation). The complete automation of the testing process requires the use of formal methods for providing a model of the required system behavior. We show the importance of modelling the aspects to be tested (the right model for the right problem!) and point out the different aspects of interest (control, data, time and communication). We present the problem of testing based on models, in the form of finite state machines (FSMs), extended FSMs, timed FSMs and communicating FSMs, and give an overview of the proposed solutions and their limitations. Finally, we present our own experience in automatic test generation based on SDL specifications, and discuss some related work and existing tools. The Open Systems Interconnection (OSI) Reference Model has been useful in placing existing protocols in an overall communication architecture and in the development of new protocol standards. The term open systems means that if a system conforms to a standard, it is open to all other systems conforming to the same standard for communication.
… symposium on Software testing and …, 1994
Abstract. Communication protocols are the rules that govern the communication between the different components within a distributed computer system. Since protocols are implemented in software and/or hardware, the question arises whether the existing hardware and ...
Proceedings of the 10th International Conference on Quality Software (QSIC '10), IEEE Computer Society, Los Alamitos, CA, USA, pp. 62-71, 2010
Model-based testing helps test engineers automate their testing tasks so that they can be more cost-effective. When the model is changed due to the evolution of the specification, it is important to maintain the test suites up to date for regression testing. A complete regeneration of the whole test suite from the new model, although inefficient, is still frequently used in practice. To handle specification evolution effectively, we propose a test case reusability analysis technique to identify reusable test cases of the original test suite based on graph analysis, so that we can generate new test cases to cover only the change-related parts of the new model. Our experiment on four large protocol document testing projects shows that the technique can significantly reduce regression testing time when compared with complete regeneration of the test suites.
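A minimal sketch of the reusability idea, under the simplifying assumption (hypothetical, for illustration) that a model is a set of edges and a test case is a path through them: a test stays reusable if none of its edges were removed or changed, and only new or changed edges need fresh tests:

```python
# Sketch: classify existing test cases as reusable or obsolete after a model
# change, and list the new-model edges that still need fresh test cases.
OLD_MODEL = {("connect", "login"), ("login", "query"), ("query", "logout")}
NEW_MODEL = {("connect", "login"), ("login", "query_v2"), ("query_v2", "logout")}

# Hypothetical test suite: each test case is a path (list of edges) in the old model.
test_suite = {
    "t1": [("connect", "login")],
    "t2": [("connect", "login"), ("login", "query"), ("query", "logout")],
}

changed = OLD_MODEL ^ NEW_MODEL  # edges removed or added by the specification change
reusable = {name for name, path in test_suite.items()
            if all(edge in NEW_MODEL for edge in path)}
obsolete = set(test_suite) - reusable
uncovered = NEW_MODEL - {edge for name in reusable for edge in test_suite[name]}

print("changed edges:", changed)
print("reusable tests:", reusable)            # t1 only touches unchanged edges
print("obsolete tests:", obsolete)            # t2 traverses a changed part of the model
print("edges needing new tests:", uncovered)  # generate tests only for these
```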
IEEE Design and Test of Computers, 2004
paginas.fe.up.pt
The development of communications systems demands testing. This paper presents a framework for testing on-the-fly, which relies on the identification of three types of tests and on their sequential execution. The ioco conformance relation was adopted in order to assign verdicts.
Software Testing, Verification and Reliability, 2011
Microsoft is producing interoperability documentation for Windows client-server and server-server protocols. The Protocol Engineering Team in the Windows organization is responsible for verifying the documentation to ensure that it is of the highest quality. Various test-driven methods are being applied including, when appropriate, a model-based approach. This paper describes core aspects of the quality assurance process and tools that were put in place, and specifically focuses on model-based testing (MBT). Experience so far confirms that MBT works and that it scales, provided it is accompanied by sound tool support and clear methodological guidance.
In this paper we describe how verification tools based on model checking were used in a real-life communication protocol design project. Parallel composition, abstraction, reduction and visualisation tools were used to examine the behaviour of the protocol. We performed all verification and debugging visually with the figures that the tools produced. A figure represents the behaviour of the system from a certain point of view, which is selected by choosing a set of the system's actions to be externally observable. Visualisation is a user-friendly approach to verifying and validating systems, and it does not compromise the completeness of verification. We present how the protocol was modelled and how both safety and liveness failures in the model were found.
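A minimal sketch of the abstraction step behind such figures, on a hypothetical toy transition system: actions outside the chosen observable set are treated as internal steps, and only observable transitions (taken from the internal-step closure of each state) are kept for visualisation:

```python
# Sketch: project a labelled transition system onto a chosen set of
# observable actions, hiding everything else as internal steps.
def internal_closure(trans, state, observable):
    """States reachable from `state` via non-observable (internal) actions only."""
    closure, stack = {state}, [state]
    while stack:
        s = stack.pop()
        for action, t in trans.get(s, []):
            if action not in observable and t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

def observable_view(trans, observable):
    """Edges (source, action, target) keeping only observable actions."""
    view = set()
    for s in trans:
        for src in internal_closure(trans, s, observable):
            for action, t in trans.get(src, []):
                if action in observable:
                    view.add((s, action, t))
    return view

# Hypothetical protocol fragment: a timer expiry is internal, send/receive are observable.
TRANS = {0: [("internal_timer", 1)], 1: [("send_req", 2)], 2: [("recv_ack", 0)]}
print(observable_view(TRANS, {"send_req", "recv_ack"}))
```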