2015, Science of Computer Programming
A hierarchical approach for modelling the adaptability features of complex systems is introduced. It is based on a structural level S, describing the adaptation dynamics of the system, and a behavioural level B accounting for the description of the admissible dynamics of the system. Moreover, a unified system, called S[B], is defined by coupling S and B. The adaptation semantics is such that the S level imposes structural constraints on the B level, which has to adapt whenever it can no longer satisfy them. In this context, we introduce weak and strong adaptability, i.e. the ability of a system to adapt for some evolution paths or for all possible evolutions, respectively. We provide a relational characterisation for these two notions and we show that adaptability checking, i.e. deciding whether a system is weakly or strongly adaptable, can be reduced to a CTL model checking problem. We apply the model and the theoretical results to the case study of a motion controller of autonomous transport vehicles.
A hierarchical model for multi-level adaptive systems is built on two basic levels: a lower behavioural level B accounting for the actual behaviour of the system and an upper structural level S describing the adaptation dynamics of the system. The behavioural level is modelled as a state machine and the structural level as a higher-order system whose states have associated logical formulas (constraints) over observables of the behavioural level. S is used to capture the global and stable features of B by defining a set of allowed behaviours. The adaptation semantics is such that the upper S level imposes constraints on the lower B level, which has to adapt whenever it can no longer satisfy them. In this context, we introduce weak and strong adaptability, i.e. the ability of a system to adapt for some evolution paths or for all possible evolutions, respectively. We provide a relational characterisation for these two notions and we show that adaptability checking, i.e. deciding whether a system is weakly or strongly adaptable, can be reduced to a CTL model checking problem. We apply the model and the theoretical results to the case study of motion control of autonomous transport vehicles.
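The weak/strong distinction above maps naturally onto the CTL operators EF and AF: weak adaptability asks whether *some* evolution path reaches a state satisfying the new constraints, strong adaptability whether *all* paths do. As a rough illustration (the states, transitions and target set below are invented, not taken from the paper), both checks can be computed as fixpoints on a finite transition system:

```python
# Hypothetical sketch: weak vs. strong adaptability as CTL-style reachability.
# The toy system and its "adapted" target set are invented for illustration;
# the paper reduces adaptability checking to CTL model checking (EF vs. AF).

def ef(states, trans, target):
    """EF target: states from which SOME path reaches a target state."""
    reach = set(target)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in reach and any(t in reach for t in trans.get(s, [])):
                reach.add(s)
                changed = True
    return reach

def af(states, trans, target):
    """AF target: states from which ALL paths eventually reach a target.
    Least fixpoint: a state joins if it is a target, or it has successors
    and every successor has already joined."""
    good = set(target)
    changed = True
    while changed:
        changed = False
        for s in states:
            succ = trans.get(s, [])
            if s not in good and succ and all(t in good for t in succ):
                good.add(s)
                changed = True
    return good

# Toy S[B]-style system: 'b2' loops forever without adapting; adaptation
# succeeds once a state in `adapted` is reached.
states = {'b0', 'b1', 'b2', 'a0'}
trans = {'b0': ['b1', 'b2'], 'b1': ['a0'], 'b2': ['b2'], 'a0': ['a0']}
adapted = {'a0'}

weak = 'b0' in ef(states, trans, adapted)    # some path adapts
strong = 'b0' in af(states, trans, adapted)  # the b2 loop never adapts
```

Here the system is weakly but not strongly adaptable from `b0`, since one evolution path gets stuck in `b2`.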
Proceedings of the Fifth International C* Conference on Computer Science and Software Engineering - C3S2E '12, 2012
Adaptive systems are critical for future space and other unmanned and intelligent systems. Verification of these systems is also critical for their use in systems with potential harm to human life or with large financial investments. Due to their nondeterministic nature and extremely large state space, current methods for verification of software systems are not adequate to provide a high level of assurance. The combination of stabilization science, high performance computing simulations, compositional verification and traditional verification techniques, plus operational monitors, provides a complete approach to verification and deployment of adaptive systems that has not been used before. This paper gives an overview of this approach.
We present adaptable transition systems (atss), an essential model of adaptive systems inspired by white-box approaches to adaptation (like control data) and based on foundational models of component-based systems (like I/O automata and interface automata). The key feature of atss is control propositions, the formal counterpart of control data. The choice of control propositions is arbitrary, but it imposes a clear separation between ordinary, functional behaviors and adaptive ones. We discuss how control propositions can be exploited in the specification and analysis of adaptive systems, focusing on various notions proposed in the literature, like adaptability, feedback loops, and control synthesis.
2006
The next generation of networked mechatronic systems will be characterized by complex coordination and structural adaptation at run-time. Crucial safety properties have to be guaranteed for all potential structural configurations. Testing cannot provide safety guarantees, while current model checking and theorem proving techniques do not scale for such systems. We present a verification technique for arbitrarily large multi-agent systems from the mechatronic domain, featuring complex coordination and structural adaptation. We overcome the limitations of existing techniques by exploiting the local character of structural safety properties. The system state is modeled as a graph, system transitions are modeled as rule applications in a graph transformation system, and safety properties of the system are encoded as inductive invariants (permitting the verification of infinite state systems). We developed a symbolic verification procedure that allows us to perform the computation on an efficient BDD-based graph manipulation engine, and we report performance results for several examples.
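As a hedged sketch of the rule-based setting described above (the shuttle/track rule, edge labels and invariant below are invented for illustration, not taken from the paper), a system state can be represented as a graph of labelled edges, a system transition as a rule application, and a structural safety property as a local invariant checked on every successor graph:

```python
# Hypothetical sketch of rule-based graph transformation (names invented):
# a rule matches a subgraph and rewrites it, and a local safety invariant
# is checked on every graph the rule can produce.

def apply_move_rule(edges):
    """Move a shuttle to the next track segment, if the target is free."""
    results = []
    occupied = {dst for (src, lbl, dst) in edges if lbl == 'on'}
    for (s, lbl, t) in edges:
        if lbl == 'on':
            for (a, l2, b) in edges:
                if l2 == 'next' and a == t and b not in occupied:
                    new = set(edges)
                    new.remove((s, 'on', t))
                    new.add((s, 'on', b))
                    results.append(frozenset(new))
    return results

def invariant(edges):
    """Local safety property: at most one shuttle per track segment."""
    on = [dst for (src, lbl, dst) in edges if lbl == 'on']
    return len(on) == len(set(on))

g0 = frozenset({('s1', 'on', 't1'), ('s2', 'on', 't2'),
                ('t1', 'next', 't2'), ('t2', 'next', 't3')})
successors = apply_move_rule(g0)
# Only s2 can move (t2 blocks s1), and the move preserves the invariant.
```

Because the invariant is local to a small subgraph, it can be checked inductively against each rule rather than against the full (possibly infinite) state space.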
Open Computer Science
Self-adaptive systems (SASs) have the capability to evaluate and change their behavior according to changes occurring in the environment. Research in this field has been conducted since the mid-1960s, and over the last decade the importance of self-adaptivity has increased. In the proposed research, the colored Petri net (CPN) formal language is used to model a self-adaptive multi-agent system. CPNs are increasingly used to model self-adaptive complex concurrent systems due to their flexible formal specification and formal verification behavior. Being visually more expressive than simple Petri nets, CPNs enable diverse modeling approaches and provide a richer framework for such a complex formalism. The main goal of this research is to apply the self-adaptive multi-agent concurrent system (SMACS) to complex architectures. In our previous research, the SMACS framework was proposed and verified through a traffic monitoring system. All agents of SMACS are also known as intelligent agents due to their s...
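The CPN mechanics underlying such models can be sketched in a few lines (the net, places and token colours below are invented, not taken from the SMACS work): tokens carry data values ("colours"), a marking is a multiset of coloured tokens, and a transition fires by consuming and producing coloured tokens:

```python
# Hypothetical sketch of a coloured-Petri-net firing step (net invented):
# a marking maps (place, colour) pairs to token counts; a transition is a
# (consume, produce) pair of such multisets.
from collections import Counter

def fire(marking, transition):
    """Fire `transition` if the marking holds enough tokens, else None."""
    consume, produce = transition
    if not all(marking[p] >= n for p, n in consume.items()):
        return None                      # transition not enabled
    new = Counter(marking)
    new.subtract(consume)                # remove consumed tokens
    new.update(produce)                  # add produced tokens
    return +new                          # drop zero-count entries

# Toy traffic-monitoring step: a 'car' token moves from queue to crossing.
m0 = Counter({('queue', 'car'): 2})
t = ({('queue', 'car'): 1}, {('crossing', 'car'): 1})
m1 = fire(m0, t)
```

The colour component is what distinguishes a CPN from a plain Petri net: the same transition can be guarded on, or parameterised by, the data carried in the tokens.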
2009
Modern software systems should increasingly be designed with adaptation and run-time evolution in mind. But even with good reactions to changes, the triggered adaptation should be performed while preserving certain properties that we call invariants. This position paper presents a step towards the definition of a theoretical assume-guarantee framework that allows one to efficiently define under which conditions adaptation can be performed while still preserving the desired invariants.
ACM SIGBED Review, 2009
Model-based verification of adaptive embedded systems is a promising approach to deal with the increased complexity that adaptation imposes on system design. Properties of embedded systems typically depend on the environment in which they are deployed. Thus, the environment has to be considered for verification. In this paper, we propose a technique to verify properties of design-level models of adaptive embedded systems under environment constraints. We transfer ideas originating from assume-guarantee reasoning for Kripke structures to design-level models of adaptive embedded systems in order to reduce conditional validity checking to standard model checking.
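A common way to realise this reduction (the toy system, environment and property below are invented for illustration, not taken from the paper) is to compose the system with an automaton encoding the environment assumption and run a standard check on the product; the environment then rules out behaviours the system need not survive:

```python
# Hypothetical sketch (invented toy models): checking a system M under an
# environment constraint E reduces to a standard invariant check on the
# product E x M, synchronised on shared action labels.

def product(sys_trans, env_trans):
    """Label-synchronised product: the environment constrains which
    system transitions are enabled."""
    prod = {}
    for (s, a, s2) in sys_trans:
        for (e, b, e2) in env_trans:
            if a == b:
                prod.setdefault((s, e), set()).add((s2, e2))
    return prod

def always(prod, init, prop):
    """Standard check: prop holds in every reachable product state."""
    seen, stack = set(), [init]
    while stack:
        st = stack.pop()
        if st in seen:
            continue
        seen.add(st)
        if not prop(st):
            return False
        stack.extend(prod.get(st, []))
    return True

# System crashes only on a double fault; the environment assumption
# guarantees every fault is followed by a repair.
sys_trans = [('normal', 'ok', 'normal'), ('normal', 'fault', 'degraded'),
             ('degraded', 'repair', 'normal'), ('degraded', 'fault', 'crash')]
env_trans = [('e0', 'ok', 'e0'), ('e0', 'fault', 'e1'), ('e1', 'repair', 'e0')]

p = product(sys_trans, env_trans)
safe = always(p, ('normal', 'e0'), lambda st: st[0] != 'crash')
```

Without the environment assumption the `crash` state is reachable; under it, the double-fault path is pruned from the product, so the standard check suffices.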
Electronic Proceedings in Theoretical Computer Science, 2012
This work introduces a general multi-level model for self-adaptive systems. A self-adaptive system is seen as composed of two levels: the lower level describing the actual behaviour of the system and the upper level accounting for the dynamically changing environmental constraints on the system. In order to keep our description as general as possible, the lower level is modelled as a state machine and the upper level as a second-order state machine whose states have associated formulas over observable variables of the lower level. Thus, each state of the second-order machine identifies the set of lower-level states satisfying the constraints. Adaptation is triggered when a second-order transition is performed; this means that the current system can no longer satisfy the current high-level constraints and, thus, it has to adapt its behaviour by reaching a state that meets the new constraints. The semantics of the multi-level system is given by a flattened transition system that can be statically checked in order to prove the correctness of the adaptation model. To this aim we formalize two concepts of weak and strong adaptability, providing both a relational and a logical characterization. This work gives a formal computational characterization of multi-level self-adaptive systems, evidencing the important role that (theoretical) computer science could play in the emerging science of complex systems.
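The flattening construction can be illustrated concretely (the speed values, limits and upper-level states below are invented for illustration): each flattened state pairs an upper-level state with a lower-level state satisfying its constraint, steady transitions follow the lower level within one upper state, and adaptation transitions follow a second-order transition into a state meeting the new constraint:

```python
# Hypothetical sketch (toy speeds and constraints invented): flattening a
# two-level system into a single transition system over (S, B) pairs.

def flatten(b_trans, s_trans, constraint):
    flat = []
    for (b1, b2) in b_trans:                  # steady behaviour within one S state
        for s, ok in constraint.items():
            if ok(b1) and ok(b2):
                flat.append(((s, b1), (s, b2)))
    for (s1, s2) in s_trans:                  # adaptation: second-order transition
        for (b1, b2) in b_trans:
            if constraint[s1](b1) and constraint[s2](b2):
                flat.append(((s1, b1), (s2, b2)))
    return flat

# Lower level: speeds 0..3, changing by 1. Upper level: S2 imposes a
# stricter speed limit than S1, forcing the system to slow down to adapt.
b_trans = [(0, 1), (1, 2), (2, 3), (3, 2), (2, 1), (1, 0)]
constraint = {'S1': lambda v: v <= 3, 'S2': lambda v: v <= 1}
s_trans = [('S1', 'S2')]

flat = flatten(b_trans, s_trans, constraint)
```

The flattened system is an ordinary transition system, so weak and strong adaptability become standard (CTL-checkable) properties of it.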
2015
Adaptive systems are critical for future space and other unmanned and intelligent systems. Verification of these systems is also critical for their use in systems with potential harm to human life or with large financial investments. Due to their nondeterministic nature and extremely large state space, current methods for verification of software systems are not adequate to provide a high level of assurance for them. The combination of stabilization science, high performance computing simulations, compositional verification and traditional verification techniques, plus operational monitors, provides a complete approach to verification and deployment of adaptive systems that has not been used before. This paper gives an overview of this approach. Nomenclature: a = formal specification; α = terminal state-reachable set of states; A = software system; Act = set of actions; AP = atomic propositions.
Self-adaptive embedded systems autonomously adapt to changing environment conditions to improve their functionality and to increase their dependability by downgrading functionality in case of failures. However, the adaptation behaviour of embedded systems significantly complicates system design and poses new challenges for guaranteeing system correctness, which is particularly vital in the automotive domain. Formal verification as applied in safety-critical applications must therefore be able to address not only temporal and functional properties, but also dynamic adaptation according to external and internal stimuli. In this paper, we introduce a formal semantic-based framework to model, specify and verify the functional and the adaptation behaviour of synchronous adaptive systems. The modelling separates functional and adaptive behaviour to reduce the design complexity and to enable modular reasoning about both aspects independently as well as in combination. Using an example, we show how this framework can be used to verify properties of synchronous adaptive systems. Modular reasoning in combination with abstraction mechanisms makes automatic model checking efficiently applicable.
IET Conference on Control and Automation 2013: Uniting Problems and Solutions, 2013
Closed-loop systems are traditionally analysed using simulation. We show how a formal approach, namely model checking, can be used to enhance and inform this analysis. Specifically, model checking can be used to verify properties for any execution of a system, not just a single experimental path. We describe how model checking has been used to investigate a system consisting of a robot navigating around an environment, avoiding obstacles using sequence learning. We illustrate the power of this approach by showing how a previous assumption about the system, gained through rigorous simulation, was demonstrated to be incorrect using the formal approach.
We propose to see adaptive systems as systems with highly dynamic features. We model as features both the reconfigurations of the system and the changes of the environment, such as failure modes. The resilience of the system can then be defined as its ability to select an adequate reconfiguration for each possible change of the environment. We must take into account that reconfiguration is often a major undertaking for the system: it has a high cost and it might make functions of the system unavailable for some time. These constraints are domain-specific. In this paper, we therefore provide a modelling language to describe these aspects, and a property language to describe the requirements on the adaptive system. We design algorithms that determine how the system must reconfigure itself to satisfy its intended requirements.
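The resilience notion above can be sketched as a selection problem (the modes, configurations and costs below are invented for illustration, not the paper's modelling language): for each environment change, pick an adequate reconfiguration, preferring the cheapest; the system is resilient only if every mode admits at least one adequate configuration:

```python
# Hypothetical sketch (modes, configurations, adequacy and costs invented):
# resilience = every environment mode has an adequate configuration, and
# the planner picks the cheapest one per mode.

def plan(modes, configs, adequate, cost):
    """Map each environment mode to its cheapest adequate configuration,
    or return None if some mode has no adequate configuration at all."""
    choice = {}
    for m in modes:
        candidates = [c for c in configs if adequate(c, m)]
        if not candidates:
            return None                  # not resilient for mode m
        choice[m] = min(candidates, key=lambda c: cost[c])
    return choice

modes = ['nominal', 'sensor_failure']
configs = ['full', 'degraded']
cost = {'full': 1, 'degraded': 2}        # degrading is the costlier move
# The full configuration needs the sensor, so it is inadequate after failure.
adequate = lambda c, m: not (c == 'full' and m == 'sensor_failure')

choice = plan(modes, configs, adequate, cost)
```

A real instance would also account for the transient unavailability of functions during reconfiguration, which is what makes the cost model domain-specific.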
2012
Abstract Although adaptivity is a central feature of agents and multi-agent systems (MAS), there is no precise definition of it in the literature. What does it mean for an agent or for a MAS to be adaptive? How can we reason about and measure the ability of agents and MAS to adapt? In this paper, we provide a formal definition of adaptivity of agents and MAS aimed at addressing these issues. The definition is independent of any particular mechanism for ensuring adaptivity.
Proceedings of the 31st Annual ACM Symposium on Applied Computing, 2016
We propose a formal framework to model an automated adaptation protocol based on Quantitative Partial Model Checking (QPMC). An agent seeks the collaboration of a second agent to satisfy some (fixed) condition on the actions to be executed. The provided protocol allows the two agents to automatically agree by iteratively applying QPMC.
Lecture Notes in Computer Science, 2007
Adaptation is important in dependable embedded systems to cope with changing environmental conditions. However, adaptation significantly complicates system design and poses new challenges to system correctness. We propose an integrated model-based development approach facilitating intuitive modelling as well as formal verification of dynamic adaptation behaviour. Our modelling concepts ease the specification of adaptation behaviour and improve the design of adaptive embedded systems by hiding the increased complexity from the developer. Based on a formal framework for representing adaptation behaviour, our approach allows us to employ theorem proving, model checking as well as specialised verification techniques to prove properties characteristic of adaptive systems, such as stability.
Infotech@Aerospace 2012, 2012
Adaptive systems are critical for future space and other unmanned and intelligent systems. Verification of these systems is also critical for their use in systems with potential harm to human life or with large financial investments. Due to their nondeterministic nature and extremely large state space, current methods for verification of software systems are not adequate to provide a high level of assurance for them. The combination of stabilization science, high performance computing simulations, compositional verification and traditional verification techniques, plus operational monitors, provides a complete approach to verification and deployment of adaptive systems that has not been used before. This paper gives an overview of this approach.
IFAC Proceedings Volumes, 1998
A complex evolving system requires: (1) a suite of formal languages for the representation of applications; (2) a transformation engine, or evolving system manager, which accepts formal specifications and goals and applies transformations to determine evolution; (3) a formal representation for the justification, denoted by a derivation history. A derivation history/trajectory represents a particular sequence of transformations. Upon applying a delta (change) to a specification, understanding derivation histories helps clarify the selection of rules/procedures/methods, since there may be an analog between the existing and new trajectories; (4) integration of deltas into the design history. For example, to determine the success of evolution from a maintenance delta, the design history must be revised to reflect where and which delta has been applied. In this paper, a suite of formal languages is presented to describe complex evolving systems.
Proceedings of the IEEE/ …, 2010