2012 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2012
Modeling and analysis of timing information are essential to the design of real-time systems. In this domain, research on probabilistic analysis is motivated by the desire to refine results obtained using worst-case analysis for systems in which the worst-case scenario is not the only relevant one, such as soft real-time systems. This paper presents an overview of the existing solutions for probabilistic timing analysis, focusing on the challenges they face. In particular, we discuss two new trends, Probabilistic Real-Time Calculus and Typical-Case Analysis, which address some of these challenges.
ArXiv, 2018
Modeling and analysis of non-functional properties, such as timing constraints, is crucial in automotive real-time embedded systems. EAST-ADL is a domain-specific architectural language dedicated to safety-critical automotive embedded system design. We have previously specified EAST-ADL timing constraints in the Clock Constraint Specification Language (CCSL) and proved the correctness of the specification by mapping the semantics of the constraints into Uppaal models amenable to model checking. In most cases, a bounded number of violations of timing constraints in automotive systems does not lead to system failure when the effects of the violations are negligible; such constraints are called Weakly-Hard (WH). Previous work is extended in this paper with support for probabilistic analysis of timing constraints in the WH context: a probabilistic extension of CCSL, called PrCCSL, is defined, and the EAST-ADL timing constraints with stochastic properties are specified in PrCCSL. The semantics of the extend...
Dependable Software Engineering. Theories, Tools, and Applications, 2017
Probabilistic approaches to timing analysis derive probability distributions to upper-bound task execution times. The main purpose of probability distributions, instead of deterministic bounds, is to obtain more flexible and less pessimistic worst-case models. However, in order to guarantee safe probabilistic worst-case models, every possible execution condition needs to be taken into account. In this work, we propose probabilistic representations able to model every task and system execution condition, including the worst cases. Combining probabilities and multiple conditions offers a flexible and accurate representation that can be applied to mixed-criticality task models and to fault-effect characterizations of task executions. A case study with single- and multi-core real-time systems is provided to illustrate the completeness and versatility of the representation framework we provide.
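As a rough illustration of such a multi-condition representation (our own Python sketch, not the paper's construction), one can build a single pessimistic distribution that upper-bounds several per-condition execution-time distributions by taking, at every threshold, the largest exceedance probability across conditions:

```python
def envelope(dists):
    """dists: list of {execution_time: probability}. Returns a pessimistic
    distribution whose exceedance P(C > t) >= that of every input."""
    support = sorted({t for d in dists for t in d})
    exc = [max(sum(p for v, p in d.items() if v > t) for d in dists)
           for t in support]
    # Turn the pointwise-max exceedance curve back into a mass function.
    pmf, prev = {}, 1.0
    for t, e in zip(support, exc):
        pmf[t] = prev - e
        prev = e
    return pmf

nominal = {10: 0.9, 12: 0.1}          # condition A (assumed values)
degraded = {10: 0.5, 14: 0.5}         # condition B (assumed values)
print(envelope([nominal, degraded]))  # {10: 0.5, 12: 0.0, 14: 0.5}
```

By construction the result stochastically dominates each per-condition distribution, so any schedulability result derived from it holds under all modeled conditions.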
Proceedings of the 20th International Conference on Real-Time and Network Systems - RTNS '12, 2012
Guaranteeing timing constraints is the main purpose of analyses for real-time systems. The satisfaction of these constraints may be verified with probabilistic methods (relying on statistical estimations of certain task parameters) offering both hard and soft guarantees. In this paper, we address the problem of sampling applied to distributions of worst-case execution times. The pessimism of the presented sampling techniques is then evaluated at the level of response times.
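One common way to make such sampling safe (a sketch under our own assumptions, not necessarily the paper's rule) is to reduce a discrete WCET distribution to a few support points while only ever moving probability mass onto larger values, so the reduced distribution stays pessimistic:

```python
def downsample(values_probs, k):
    """values_probs: list of (value, prob); keep at most k support points.
    Merged mass always lands on the larger value, preserving pessimism."""
    pts = sorted(values_probs)
    while len(pts) > k:
        # Merge the adjacent pair with the closest values (least added pessimism).
        i = min(range(len(pts) - 1), key=lambda j: pts[j + 1][0] - pts[j][0])
        (_, p1), (v2, p2) = pts[i], pts[i + 1]
        pts[i:i + 2] = [(v2, p1 + p2)]   # mass goes to the larger value
    return pts

dist = [(8, 0.4), (9, 0.3), (11, 0.2), (15, 0.1)]
print(downsample(dist, 2))  # [(11, 0.9), (15, 0.1)]
```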
2018
This paper explores the probability of deadline misses for a set of constrained-deadline sporadic soft real-time tasks on uniprocessor platforms. We explore two directions to evaluate the probability that a job of the task under analysis finishes its execution at (or before) a testing time point t. One approach is based on analytical upper bounds that can be efficiently computed in polynomial time at the price of precision loss for each testing point, derived from the well-known Hoeffding and Bernstein inequalities. Another approach convolutes the probability efficiently over multinomial distributions, exploiting a series of state-space reduction techniques, i.e., pruning without any loss of precision and approximation via unification of equivalent classes with a bounded loss of precision. We demonstrate the effectiveness of our approaches in a series of evaluations. Distinct from the convolution-based methods in the lite...
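For concreteness, here is what a Hoeffding-style bound of this kind looks like (an illustrative sketch; the paper's exact formulation may differ). For independent execution times C_i bounded in [a_i, b_i], Hoeffding's inequality gives P(sum C_i > t) <= exp(-2(t - E[sum C_i])^2 / sum (b_i - a_i)^2) whenever t exceeds the mean:

```python
import math

def hoeffding_exceedance(jobs, t):
    """jobs: list of (a_i, b_i, mean_i) with a_i <= C_i <= b_i.
    Returns an upper bound on P(sum of C_i > t)."""
    mean = sum(m for _, _, m in jobs)
    span2 = sum((b - a) ** 2 for a, b, _ in jobs)
    eps = t - mean
    if eps <= 0:
        return 1.0              # the bound is vacuous at or below the mean
    return math.exp(-2.0 * eps * eps / span2)

jobs = [(1.0, 4.0, 2.0)] * 10   # hypothetical job parameters (a, b, mean)
print(hoeffding_exceedance(jobs, 30.0))  # P(total demand > 30) <= ~0.108
```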
2016 IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT), 2016
In probabilistic real-time modeling, diverse task execution conditions can be characterized with probability distributions in which multiple execution-time thresholds are represented, each with an exceedance probability. Compared to traditional deterministic real-time modeling, probabilistic approaches provide more flexibility in describing system behavior, which may result in more precise schedulability analysis. With this work, we combine sensitivity analysis and probabilistic models of fault effects on task execution behavior. The goal is to develop a probabilistic schedulability analysis that is applicable to both faulty and non-faulty execution conditions. While the probabilities accurately characterize faults and fault effects on worst-case execution times, the probabilistic schedulability analysis both qualifies and quantifies the impact of faults on system schedulability.
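A minimal sketch of folding fault effects into such a model (an assumed form, not the paper's): with a per-job fault probability p, the execution-time distribution becomes a mixture of the fault-free branch and a branch carrying a recovery overhead:

```python
def with_faults(dist, p_fault, overhead):
    """dist: {time: prob}. Returns the fault-aware mixture distribution."""
    out = {}
    for t, p in dist.items():
        out[t] = out.get(t, 0.0) + p * (1.0 - p_fault)           # fault-free run
        out[t + overhead] = out.get(t + overhead, 0.0) + p * p_fault
    return out

# Hypothetical numbers: 0.1% fault probability, 5 time units of recovery.
print(with_faults({10: 0.9, 12: 0.1}, p_fault=1e-3, overhead=5))
```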
2009
Probabilistic models are useful for analyzing systems which operate in the presence of uncertainty. In this paper, we present a technique for verifying safety and liveness properties of probabilistic timed automata. The proposed technique is an extension of a technique used to verify stochastic hybrid automata via an approximation with Markov Decision Processes. A case study of the CSMA/CD protocol is used to showcase the methodology of our technique.
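Once such an approximating MDP is in hand, probabilistic reachability can be computed by standard value iteration. The sketch below is our own generic illustration, not the paper's algorithm; it computes the maximal probability of reaching a goal set:

```python
def max_reach_prob(states, actions, P, goal, iters=1000):
    """P[s][a] = list of (next_state, prob). Returns {s: max P(reach goal)}."""
    v = {s: (1.0 if s in goal else 0.0) for s in states}
    for _ in range(iters):
        nv = {}
        for s in states:
            if s in goal:
                nv[s] = 1.0
            else:
                nv[s] = max((sum(p * v[t] for t, p in P[s][a])
                             for a in actions if a in P[s]), default=0.0)
        v = nv
    return v

# Tiny example: from s0, action 'a' reaches the goal s2 with probability 0.5.
P = {'s0': {'a': [('s1', 0.5), ('s2', 0.5)]},
     's1': {'a': [('s1', 1.0)]},
     's2': {}}
print(max_reach_prob(['s0', 's1', 's2'], ['a'], P, goal={'s2'}))
```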
The complexity of modern architectures has increased the timing variability of programs (or tasks). In this context, new approaches based on probabilistic methods have been proposed to decrease the pessimism by associating probabilities with the worst-case execution times of the programs (tasks). In this paper we extend the original work of Chetto et al. on tasks with precedence constraints to the case of tasks whose worst-case execution times are described by probability distributions. The precedence constraints between tasks are defined by directed acyclic graphs, and these constraints are transformed into appropriate release times and deadlines. The new release times and deadlines are built using new maximum and minimum relations between pairs of probability distributions. We provide a probabilistic schedulability condition based on these new relations.
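To make the idea concrete, here is the most natural such "maximum" for discrete distributions (a sketch under our own independence assumption; the paper's relations may be defined differently):

```python
def dist_max(d1, d2):
    """d1, d2: {value: prob} of independent random variables.
    Returns the distribution of max(X1, X2)."""
    out = {}
    for v1, p1 in d1.items():
        for v2, p2 in d2.items():
            v = max(v1, v2)
            out[v] = out.get(v, 0.0) + p1 * p2
    return out

a = {3: 0.7, 5: 0.3}
b = {4: 0.5, 6: 0.5}
print(dist_max(a, b))   # {4: 0.35, 6: 0.5, 5: 0.15}
```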
2004
The exact stochastic analysis of most real-time systems is becoming unaffordable in current practice. On one side, the exact calculation of the response-time distribution of the tasks is not possible except for simple periodic and independent task sets. On the other side, in practice, tasks introduce complexities like release jitter, blocking on shared resources, stochastic dependencies, etc., which cannot be handled by the periodic and independent task-set model. This paper introduces the concept of pessimism in the stochastic analysis of real-time systems in the following sense: the exact probability of missing any deadline is always lower than that derived from the pessimistic analysis. Therefore, if real-time constraints are expressed as probabilities of missing deadlines, the pessimistic stochastic analysis provides safe results. Some applications of the pessimism concept are presented. Firstly, the practical problems that arise in the stochastic analysis of periodic and independent task sets are addressed. Secondly, we extend to the stochastic case some well-known techniques of the deterministic analysis, such as blocking on shared resources and task priority assignment.
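In its common formalization (an assumption on our part, since the abstract does not spell it out), this safety notion says the pessimistic distribution must assign at least as much probability to every event "response time > t" as the exact one. A small check:

```python
def dominates(pess, exact):
    """pess, exact: {response_time: prob}. True if pess is safely pessimistic,
    i.e., its exceedance probability is never below the exact one."""
    support = sorted(set(pess) | set(exact))
    for t in support:
        exc_p = sum(p for v, p in pess.items() if v > t)
        exc_e = sum(p for v, p in exact.items() if v > t)
        if exc_p < exc_e - 1e-12:       # tolerance for float rounding
            return False
    return True

print(dominates({10: 0.8, 20: 0.2}, {10: 0.9, 20: 0.1}))  # True
```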
Nowadays real-time systems are omnipresent and embedded systems thrive in a variety of application fields. When they are integrated into safety-critical systems, the verification of their properties becomes a crucial part. Dependability is a primary design goal in environments that use hard real-time systems, whereas general-use microprocessors were designed with high performance as the goal. The average-throughput-maximization design choice is intrinsically opposed to design goals such as dependability, which benefit mostly from highly deterministic architectures without local optimizations. Besides the growth in complexity of embedded systems, platforms are getting more and more heterogeneous. With regard to the respect of timing constraints, real-time systems are classified in two categories: hard real-time systems (where missing a deadline can lead to catastrophic consequences) and soft real-time systems (where missing a deadline can cause performance degradation and material loss). We analyze hard real-time systems that need precise and safe determination of worst-case execution time bounds in order to be certified. The validation of their non-functional properties is a complex and resource-consuming task. One of the main reasons is that currently available solutions focus on delivering precise estimations through tools that are highly dependent on the underlying platform (in order to provide precise and safe results, the architecture of the system must be taken into account). In this thesis we address the above issues by introducing a timing analysis method that maintains a good level of precision while being applicable to a variety of platforms. This adaptability is achieved by separating as much as possible the worst-case execution time (WCET) estimation from the model of the hardware. Our approach consists in the introduction of a new formal modeling language that captures the complex behaviour of modern hardware and is guided by the timing analysis in order to achieve the needed precision-to-scalability tradeoff. The analysis drives a joint symbolic execution of the program's binary and the processor model using a dynamic prediction module that decides which states to merge in order to limit state-space explosion. Several state-merging algorithms are introduced and applied that can also give an estimation of the introduced precision loss.
2016 Euromicro Conference on Digital System Design (DSD), 2016
The use of increasingly complex hardware and software platforms in response to the ever-rising performance demands of modern real-time systems complicates the verification and validation of their timing behaviour, which form a time- and effort-intensive step of system qualification or certification. In this paper we relate the current state of practice in measurement-based timing analysis, the predominant choice for industrial developers, to the proceedings of the PROXIMA project in that very field. We recall the difficulties that the shift towards more complex computing platforms causes in that regard. Then we discuss the probabilistic approach proposed by PROXIMA to overcome some of those limitations. We present the main principles behind the PROXIMA approach as well as the changes it requires at the hardware or software level underneath the application. We also present the current status of the project against its overall goals, and highlight some of the principal confidence-building results achieved so far.
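The measurement-based probabilistic timing analysis (MBPTA) pursued in PROXIMA rests on extreme-value theory: fit a tail model to measured execution times and read off a probabilistic WCET at a target exceedance probability. The following sketch illustrates the peaks-over-threshold variant under heavy simplifications (synthetic data, i.i.d. assumption; the actual method additionally relies on randomized platforms and statistical applicability tests):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
measurements = rng.gamma(shape=9.0, scale=10.0, size=10_000)  # stand-in data

threshold = np.quantile(measurements, 0.95)        # peaks over threshold
excesses = measurements[measurements > threshold] - threshold
c, loc, scale = genpareto.fit(excesses, floc=0.0)  # fit the GPD tail

p_exceed = 1e-9                                    # target exceedance per run
# P(C > threshold) is ~0.05 by construction; condition the tail on it.
pwcet = threshold + genpareto.isf(p_exceed / 0.05, c, loc=loc, scale=scale)
print(f"pWCET at {p_exceed:.0e}/run: {pwcet:.1f}")
```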
2019
Early validation of software running on multi-core platforms is fundamental to guaranteeing functional correctness and that real-time constraints are fully met. In the domain of timing analysis of multi-core systems, existing simulation-based approaches and formal mathematical methods are hampered by scalability problems. In this context, probabilistic simulation techniques represent promising solutions to improve the scalability of analysis approaches. However, the creation of probabilistic SystemC models remains a difficult task and is not well supported for multi-core systems. In this technical report, we present a feasibility study of probabilistic simulation techniques considering different levels of platform complexity. The evaluated probabilistic simulation techniques demonstrate good potential to deliver fast yet accurate estimations for multi-core systems.
2013 25th Euromicro Conference on Real-Time Systems, 2013
Real-time computing and communication systems are often required to operate with prespecified levels of reliability in harsh environments, which may expose the system to random errors and random bursts of errors. The classical fault-tolerant schedulability analysis in such cases assumes a pseudo-periodic arrival of errors, and does not effectively capture any underlying randomness or burst characteristics. More modern approaches employ much richer stochastic error models to capture these behaviors, but at the expense of greatly increased complexity. In this paper, we develop a quantile-based approach to probabilistic schedulability analysis in a bid to improve efficiency whilst still retaining a rich stochastic error model capturing random errors and random bursts of errors. Our principal contribution is the derivation of a simple closed-form expression that tightly bounds the number of errors that a system must be able to tolerate at any time subsequent to its critical instant in order to achieve a specified level of reliability. We apply this technique to develop an efficient 'one-shot' schedulability analysis for a simple fault-tolerant EDF scheduler. The paper concludes that the proposed method is capable of giving efficient probabilistic scheduling guarantees, and may easily be coupled with more representative higher-level job failure models, giving rise to efficient analysis procedures for safety-critical fault-tolerant real-time systems.
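To see the flavor of such a quantile bound, consider a plain Poisson error model (a simplification we assume for illustration; the paper's model also captures bursts). The number of errors to tolerate over an interval of length t at reliability 1 - alpha is the smallest k with P(N(t) > k) <= alpha:

```python
import math

def errors_to_tolerate(rate, t, alpha):
    """Smallest k with P(N(t) > k) <= alpha, where N(t) ~ Poisson(rate * t)."""
    lam = rate * t
    term = math.exp(-lam)   # P(N = 0)
    cdf, k = term, 0
    while 1.0 - cdf > alpha:
        k += 1
        term *= lam / k     # P(N = k) from P(N = k - 1)
        cdf += term
    return k

# Hypothetical numbers: 1e-4 errors/s over one hour, reliability 1 - 1e-9.
print(errors_to_tolerate(rate=1e-4, t=3600.0, alpha=1e-9))
```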
Design for Validation (DeVa) TR, 1997
Hard real-time systems are usually required to provide an absolute guarantee that all tasks will execute by their deadlines. In this paper we address fault-tolerant hard real-time systems, and introduce the notion of a probabilistic guarantee. Schedulability analysis is used together with sensitivity analysis to establish the maximum fault frequency that a system can tolerate. The fault model is then used to derive a probability (likelihood) that, during the lifetime of the system, faults will not arrive faster than this maximum rate. The ...
Lecture Notes in Computer Science, 2010
Hard real-time systems have to satisfy strict timing constraints. To prove that these constraints are met, timing analyses aim to derive safe upper bounds on tasks' execution times. Processor components such as caches, out-of-order pipelines, and speculation cause a large variation of the execution time of instructions, which may induce a large variability of a task's execution time. The architectural platform also determines the precision and the complexity of timing analysis.
2014
This position paper outlines the innovative probabilistic approach being taken by the EU Integrated Project PROXIMA to the analysis of the timing behaviour of mixed-criticality real-time systems. PROXIMA supports the timing analysis of multi-core and mixed-criticality systems through probabilistic techniques and hardware/software architectures that reduce the dependencies which affect timing. The approach is being applied to DO-178B/C and ISO 26262.
ETFA2011, 2011
A challenging research issue in analyzing a real-time system is to model the tasks composing the system and the resources provided to the system. In this paper, we propose a probabilistic component-based model which abstracts in the interfaces both the functional and non-functional requirements of such systems. This approach allows designers to unify in the same framework probabilistic scheduling techniques and compositional guarantees that range from soft to hard real-time. We provide sufficient schedulability tests for task systems using this framework when the scheduler is either preemptive Fixed-Priority or Earliest Deadline First.
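A simplified sufficient test in this spirit (our own sketch, not the paper's interface algebra): convolve the discrete execution-time distributions of the jobs that can run before a task's deadline and bound the deadline-miss probability by P(total demand > D):

```python
def convolve(d1, d2):
    """Distribution of the sum of two independent discrete random variables."""
    out = {}
    for v1, p1 in d1.items():
        for v2, p2 in d2.items():
            out[v1 + v2] = out.get(v1 + v2, 0.0) + p1 * p2
    return out

def miss_probability(job_dists, deadline):
    """Upper-bounds the miss probability by P(sum of all demands > deadline)."""
    total = {0: 1.0}
    for d in job_dists:
        total = convolve(total, d)
    return sum(p for v, p in total.items() if v > deadline)

jobs = [{2: 0.9, 4: 0.1}, {3: 0.8, 6: 0.2}, {1: 1.0}]   # hypothetical jobs
print(miss_probability(jobs, deadline=9))                # 0.02
```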
Dependable Software Engineering. Theories, Tools, and Applications, 2019
In this paper we approach the problem of Mixed Criticality (MC) for probabilistic real-time systems, where task execution times are described by probability distributions. In our analysis, a task enters high-criticality mode if its response time exceeds a certain threshold, which is a slight deviation from the more classical approach to MC. We do this to obtain an application-oriented MC system in which criticality mode changes depend on the actual scheduled execution. This is in contrast to classical approaches, which use task execution time to make criticality-mode decisions, because execution time is not affected by scheduling while response time is. We use a graph-based approach to search for an optimal MC schedule by exploring every possible MC schedule the task set can have. The schedule we obtain minimizes the probability of the system entering high-criticality mode. In turn, this aims at maximizing resource efficiency by means of scheduling, without compromising the execution of the high-criticality tasks and while minimizing the loss of lower-criticality functionality. The proposed approach is applied to test cases for validation purposes.
Formal Methods in System Design, 2006
Probabilistic timed automata, a variant of timed automata extended with discrete probability distributions, is a modelling formalism suitable for describing formally both nondeterministic and probabilistic aspects of real-time systems, and is amenable to model checking against probabilistic timed temporal logic properties. However, the previously developed verification algorithms either suffer from high complexity, give only approximate results, or are restricted to a limited class of properties. In the case of classical (non-probabilistic) timed automata it has been shown that for a large class of real-time verification problems correctness can be established using an integral model of time (digital clocks) as opposed to a dense model of time. Based on these results we address the question of under what conditions digital clocks are sufficient for the performance analysis of probabilistic timed automata and show that this reduction is possible for an important class of systems and properties including probabilistic reachability and expected reachability. We demonstrate the utility of this approach by applying the method to the performance analysis of three probabilistic real-time protocols: the dynamic configuration protocol for IPv4 link-local addresses, the IEEE 802.11 wireless local area network protocol and the IEEE 1394 FireWire root contention protocol.
2013 IEEE 34th Real-Time Systems Symposium, 2013
Caches are key resources in high-end processor architectures to increase performance. In fact, most high-performance processors come equipped with a multi-level cache hierarchy. In terms of guaranteed performance, however, cache hierarchies severely challenge the computation of tight worst-case execution time (WCET) estimates. On the one hand, the analysis of the timing behaviour of a single level of cache is already challenging, particularly for data accesses. On the other hand, unifying data and instructions in each level makes the problem of cache analysis significantly more complex, requiring data and instruction accesses to cache to be tracked simultaneously. In this paper we prove that multi-level cache hierarchies can be used in the context of Probabilistic Timing Analysis and that tight WCET estimates can be obtained. Our detailed analysis (1) covers unified data and instruction caches, (2) covers different cache-write policies (write-through and write-back), write-allocation policies (write-allocate and non-write-allocate) and several inclusion mechanisms (inclusive, non-inclusive and exclusive caches), and (3) scales to an arbitrary number of cache levels. Our results show that the probabilistic WCET (pWCET) estimates provided by our analysis technique effectively benefit from having multi-level caches. For a two-level cache configuration and for the EEMBC benchmarks, pWCET reductions are 55% on average (and up to 90%) with respect to a processor with a single level of cache.
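The tractability of such analyses typically rests on time-randomized caches. A standard building block (a general result for random replacement, not this paper's full multi-level machinery) is that a cached line survives k evictions in its set of an N-way random-replacement cache with probability ((N - 1) / N)^k, giving every access an analyzable hit probability:

```python
def hit_probability(ways, intervening_evictions):
    """P(line still cached) after k random evictions in its N-way set."""
    return ((ways - 1) / ways) ** intervening_evictions

for k in (0, 4, 16, 64):
    print(f"8-way cache, {k:2d} evictions since last use: "
          f"P(hit) = {hit_probability(8, k):.3f}")
```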