2012, Proceedings of the 20th International Conference on Real-Time and Network Systems - RTNS '12
Guaranteeing timing constraints is the main purpose of analyses for real-time systems. The satisfaction of these constraints may be verified with probabilistic methods (relying on statistical estimates of certain task parameters) offering both hard and soft guarantees. In this paper, we address the problem of sampling applied to the distributions of worst-case execution times. The pessimism of the presented sampling techniques is then evaluated at the level of response times.
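The abstract does not spell out the sampling rule itself; as a hedged illustration of one conservative option (the function name and example values are ours, not the paper's), the sketch below reduces a discrete execution-time distribution by moving the probability mass of every dropped value onto the next larger kept value, so the reduced distribution can only over-approximate the original.

```python
# Hypothetical sketch: pessimistic down-sampling of a discrete execution-time
# distribution.  Probability mass of every dropped value is moved to the next
# *larger* kept value, so the reduced distribution upper-bounds the original.
from bisect import bisect_left

def downsample_pessimistic(dist, kept_values):
    """dist: {execution_time: probability}; kept_values: sorted subset of times
    that must include max(dist).  Returns a reduced, pessimistic distribution."""
    reduced = {v: 0.0 for v in kept_values}
    for value, prob in dist.items():
        # smallest kept value >= value (always exists, since the max is kept)
        idx = bisect_left(kept_values, value)
        reduced[kept_values[idx]] += prob
    return reduced

# Example: a 6-value distribution reduced to 3 values.
dist = {10: 0.4, 12: 0.3, 15: 0.15, 20: 0.1, 25: 0.04, 30: 0.01}
print(downsample_pessimistic(dist, [12, 20, 30]))
# {12: 0.7, 20: 0.25, 30: 0.05}
```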
Cornell University - arXiv, 2022
Real-time systems are a set of programs, a scheduling policy and a system architecture constrained by timing requirements. Many everyday embedded devices are real-time systems, e.g., airplanes, cars, trains, and space probes. The time required by a program for its end-to-end execution is called its response time. Usually, upper bounds on response times are computed in order to provide safe deadline miss probabilities. In this paper, we propose a re-parametrization of the inverse Gaussian mixture distribution suited to the response times of real-time systems and to the estimation of deadline miss probabilities. The parameters and their associated deadline miss probabilities are estimated with an adapted Expectation-Maximization algorithm.
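A minimal sketch of the general idea, not the paper's specific re-parametrization or its adapted EM algorithm: fit a two-component inverse Gaussian mixture to measured response times with a textbook EM loop (mapping IG(mean m, shape λ) to scipy's invgauss(mu=m/λ, scale=λ)), then read the deadline miss probability off the mixture's survival function at the deadline. The toy data, component count, and initialization are our assumptions.

```python
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(0)
# toy response-time sample (e.g. milliseconds)
x = np.concatenate([rng.wald(5.0, 50.0, 800), rng.wald(9.0, 30.0, 200)])

K = 2
pi = np.full(K, 1.0 / K)                      # mixture weights
mean = np.quantile(x, [0.4, 0.9]).copy()      # component means
lam = np.full(K, 10.0)                        # component shapes

def comp_pdf(x, m, l):
    # IG(mean=m, shape=l)  <->  scipy invgauss(mu=m/l, scale=l)
    return invgauss.pdf(x, m / l, scale=l)

for _ in range(200):                          # basic EM iterations
    # E-step: responsibilities
    dens = np.array([pi[k] * comp_pdf(x, mean[k], lam[k]) for k in range(K)])
    resp = dens / dens.sum(axis=0)
    # M-step: weighted maximum-likelihood updates per component
    nk = resp.sum(axis=1)
    pi = nk / len(x)
    mean = (resp @ x) / nk
    lam = nk / (resp @ (1.0 / x) - nk / mean)

deadline = 12.0
dmp = sum(pi[k] * invgauss.sf(deadline, mean[k] / lam[k], scale=lam[k])
          for k in range(K))
print(f"estimated P(response time > {deadline}) = {dmp:.2e}")
```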
2012 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2012
Modeling and analysis of timing information are essential to the design of real-time systems. In this domain, research related to probabilistic analysis is motivated by the desire to refine results obtained using worst-case analysis for systems in which the worst-case scenario is not the only relevant one, such as soft real-time systems. This paper presents an overview of the existing solutions for probabilistic timing analysis, focusing on the challenges they have to face. We discuss in particular two new trends, Probabilistic Real-Time Calculus and Typical-Case Analysis, which rise to meet some of these challenges.
2014
Nowadays real-time systems are omnipresent and embedded systems thrive in a variety of application fields. When they are integrated into safety-critical systems, the verification of their properties becomes crucial. Dependability is a primary design goal in environments that use hard real-time systems, whereas general-purpose microprocessors were designed with high performance as the goal. The average-throughput maximization design choice is intrinsically opposed to design goals such as dependability, which benefit mostly from highly deterministic architectures without local optimizations. Besides the growth in complexity of embedded systems, platforms are becoming more and more heterogeneous. With regard to timing constraints, real-time systems are classified into two categories: hard real-time systems (where missing a deadline can lead to catastrophic consequences) and soft real-time systems (where missing a deadline can cause performance degradation and material lo...
2018
This paper explores the probability of deadline misses for a set of constrained-deadline sporadic soft real-time tasks on uniprocessor platforms. We explore two directions to evaluate the probability that a job of the task under analysis can finish its execution at (or before) a testing time point t. One approach is based on analytical upper bounds that can be efficiently computed in polynomial time at the price of precision loss for each testing point, derived from the well-known Hoeffding and Bernstein inequalities. Another approach convolutes the probability efficiently over multinomial distributions, exploiting a series of state space reduction techniques, i.e., pruning without any loss of precision, and approximations via unifying equivalent classes with a bounded loss of precision. We demonstrate the effectiveness of our approaches in a series of evaluations. Distinct from the convolution-based methods in the lite...
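As an illustration of the first, bound-based direction (our own simplified framing, not the paper's exact formulation), the sketch below upper-bounds the probability that the accumulated workload of independent jobs exceeds a testing point t using the classical Hoeffding and Bernstein inequalities; the job parameters are made up.

```python
import math

def hoeffding_bound(means, lows, highs, t):
    """P(S >= t) <= exp(-2 (t - E[S])^2 / sum (b_j - a_j)^2), for t >= E[S]."""
    mu = sum(means)
    if t <= mu:
        return 1.0
    span2 = sum((b - a) ** 2 for a, b in zip(lows, highs))
    return math.exp(-2.0 * (t - mu) ** 2 / span2)

def bernstein_bound(means, variances, highs, t):
    """P(S >= t) <= exp(-s^2 / (2 (sum var_j + M s / 3))), with s = t - E[S]."""
    mu = sum(means)
    if t <= mu:
        return 1.0
    s = t - mu
    M = max(b - m for b, m in zip(highs, means))   # bound on C_j - E[C_j]
    return math.exp(-s ** 2 / (2.0 * (sum(variances) + M * s / 3.0)))

# three jobs interfering before the testing point t = 30
means, lows, highs, variances = [8, 6, 9], [5, 4, 6], [12, 9, 14], [2.0, 1.5, 3.0]
print(hoeffding_bound(means, lows, highs, 30.0))
print(bernstein_bound(means, variances, highs, 30.0))
```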
2008
Many component-based real-time systems have recently been proposed as a solution for modular and easily maintainable distributed real-time systems. This paper proposes a methodology for estimating probability distributions of execution times in the context of such component-based distributed real-time systems, where no access to component internal code is assumed. In order to evaluate the proposed methodology, experiments have been conducted with components implemented over CIAO, and the related probability distributions estimated. The collected experimental data show that the proposed approach is indeed a good approximation for component execution time probability distributions.
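A minimal black-box measurement sketch under our own assumptions (the helper names and the stand-in workload are hypothetical, not the paper's methodology): time repeated invocations of a component operation through its public interface only, and summarize the samples as an empirical execution-time distribution.

```python
import time
import numpy as np

def measure(component_call, n=1000):
    """Return n measured execution times (seconds) of a callable component op."""
    samples = np.empty(n)
    for i in range(n):
        t0 = time.perf_counter()
        component_call()
        samples[i] = time.perf_counter() - t0
    return samples

def empirical_cdf(samples):
    xs = np.sort(samples)
    ps = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ps                      # P(X <= xs[i]) ~= ps[i]

if __name__ == "__main__":
    # stand-in for an opaque component operation
    samples = measure(lambda: sum(i * i for i in range(20_000)), n=200)
    xs, ps = empirical_cdf(samples)
    print("p95 execution time:", np.quantile(samples, 0.95))
```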
Proceedings of the IEEE, 2003
Real-time systems are an important class of process control systems that need to respond to events under time constraints, or deadlines. Such systems may also be required to deliver service in spite of hardware or software faults in their components. This fault-tolerant characteristic is especially critical in systems whose failure can cause economic disaster and/or loss of lives. This paper reports recent research in the area of analytical modeling of the three major characteristics of real-time systems: timeliness, dependability, and external environmental dependencies. The paper starts with a brief introduction to analytical modeling frameworks such as Markov models and stochastic Petri nets. This is followed by an examination of advances in modeling response-time distributions, reliability, distributed messaging services, and software fault-tolerance in real-time systems.
2017
Our work aims at facilitating the schedulability analysis of non-critical systems, in particular those that have soft real-time constraints, where WCETs can be replaced by less stringent probabilistic bounds, which we call Maximal Execution Times (METs). In our approach, we can obtain adequate probabilistic execution time models by separating the non-random input data dependency from a modeling error that is purely random. To achieve this, we propose to take advantage of the rich set of available statistical model-fitting techniques, in particular linear regression. Although the proposed technique certainly cannot directly achieve the extreme probability levels that are usually expected for WCETs, it is an attractive alternative for MET analysis, since it can arguably guarantee safe probabilistic bounds. We demonstrate our method on a JPEG decoder running on an industrial SPARC V8 processor.
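A sketch of the general idea under our assumptions, not the paper's exact model or tooling: regress measured execution times on an input-dependent feature (here, hypothetically, the pixel count of a JPEG frame), then bound the purely random residual at a chosen probability level to obtain a MET estimate. The toy data generator and the 99.9% level are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels = rng.integers(10_000, 500_000, size=400)               # input feature
exec_time = 0.8 + 2.1e-5 * n_pixels + rng.gamma(2.0, 0.05, 400)  # toy data (ms)

# least-squares fit: exec_time ~ b0 + b1 * n_pixels
X = np.column_stack([np.ones_like(n_pixels, dtype=float), n_pixels])
beta, *_ = np.linalg.lstsq(X, exec_time, rcond=None)
residuals = exec_time - X @ beta

# probabilistic margin: empirical 99.9% quantile of the purely random residual
margin = np.quantile(residuals, 0.999)

def met_estimate(pixels):
    """MET at the chosen probability level for a frame with `pixels` pixels."""
    return beta[0] + beta[1] * pixels + margin

print(f"MET(300000 px) ~= {met_estimate(300_000):.3f} ms")
```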
A Practitioner's Handbook for Real-Time Analysis, by Mark H. Klein et al.; Kluwer Academic Publishers; Boston, MA, USA; 1993; xiii + 694 pp.; $95; ISBN 0-7923-9361-9. iRAT - The introspect Real-Time Analysis Tool,
In this paper, we present a conjecture for exact best-case response times of periodically released, independent real-time tasks with arbitrary deadlines that are scheduled by means of fixed-priority pre-emptive scheduling (FPPS). We illustrate the analysis by means of an example. Apart from having a value on its own whenever timing constraints include lower bounds on response times of a system to events, the novel analysis allows for an improvement of existing end-to-end response time analysis in distributed systems, i.e. where the finalization of one task on a processor activates another task on another processor.
Sigplan Notices, 2001
Embedded systems often have real-time constraints. Traditional timing analysis statically determines the maximum execution time of a task or a program in a real-time system. These systems typically depend on the worst-case execution time of tasks in order to make static scheduling decisions so that tasks can meet their deadlines. Static determination of worst-case execution times imposes numerous restrictions on real-time programs, which include that the maximum number of iterations of each loop must be known statically. These restrictions can significantly limit the class of programs that would be suitable for a real-time embedded system. This paper describes work-in-progress that uses static timing analysis to aid in making dynamic scheduling decisions. For instance, different algorithms with varying levels of accuracy may be selected based on the algorithm's predicted worst-case execution time and the time allotted for the task. We represent the worst-case execution time of a function or a loop as a formula, where the unknown values affecting the execution time are parameterized. This parametric timing analysis produces formulas that can then be quickly evaluated at run-time so dynamic scheduling decisions can be made with little overhead. Benefits of this work include expanding the class of applications that can be used in a real-time system, improving the accuracy of dynamic scheduling decisions, and more effective utilization of system resources.
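A hypothetical illustration of how such formulas might be used at run time (variant names, coefficients, and the budget are made up, not taken from the paper): each algorithm variant carries a parametric WCET formula in the loop bound n, and the dispatcher evaluates the formulas against the allotted time to pick the most accurate variant that still fits.

```python
WCET_FORMULAS = {
    # variant name -> (accuracy rank, parametric WCET in cycles as a function of n)
    "precise":  (3, lambda n: 1200 + 85 * n),
    "standard": (2, lambda n: 900 + 40 * n),
    "coarse":   (1, lambda n: 500 + 12 * n),
}

def choose_variant(n, allotted_cycles):
    """Pick the most accurate variant whose predicted WCET fits the budget."""
    feasible = [(rank, name) for name, (rank, f) in WCET_FORMULAS.items()
                if f(n) <= allotted_cycles]
    if not feasible:
        raise RuntimeError("no variant fits the allotted time")
    return max(feasible)[1]

print(choose_variant(n=50, allotted_cycles=4000))   # -> "standard"
```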
In this paper, we present a simple recursive equation to determine the best-case response times of periodic tasks under fixed-priority preemptive scheduling and arbitrary phasings. The approach is of a similar nature to the one used to determine worst-case response times, in the sense that, where a critical instant is considered to determine the latter, we base our analysis on an optimal instant, in which all higher priority tasks have a simultaneous release that coincides with the best-case completion of the task under consideration. The resulting recursive equation closely resembles the one for worst-case response times, apart from a "-1" term, and the fact that the best-case response times are approached from above. The resulting iterative procedure is illustrated by means of a small example. Finally, we discuss the effect of the best-case response times on completion jitter, as well as the effect of release jitter on the best-case response times.
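Under our reading of the abstract (a "-1" term relative to the worst-case recursion, with the fixed point approached from above starting from an upper bound such as the worst-case response time), the iteration can be sketched as follows; variable names and the tiny example are ours, not the paper's.

```python
import math

def best_case_response_time(bc_i, hp, start):
    """Sketch of BR_i = BC_i + sum_{j in hp(i)} (ceil(BR_i / T_j) - 1) * BC_j.
    bc_i: best-case execution time of the task under analysis.
    hp:   list of (period T_j, best-case execution time BC_j) of higher-priority tasks.
    start: any upper bound on BR_i, e.g. the worst-case response time."""
    br = start
    while True:
        nxt = bc_i + sum((math.ceil(br / T) - 1) * C for T, C in hp)
        if nxt == br:
            return br
        br = nxt

# example: task with BC=2, one higher-priority task (T=5, BC=2), WCRT=4
print(best_case_response_time(2, [(5, 2)], start=4))   # -> 2
```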
2004
Modern real-time systems, with a more flexible and adaptive nature, demand approaches for timeliness evaluation based on probabilistic measures of meeting deadlines. In this context, simulation can emerge as an adequate solution to understand and analyze the timing behaviour of actual systems. However, care must be taken with the obtained outputs, at the risk of producing results that lack credibility. It is particularly important to consider that we are more interested in values from the tail of a probability distribution (near worst-case probabilities) than in deriving confidence on mean values. We approach this subject by considering the random nature of simulation output data. We start by discussing well-known approaches for estimating distributions out of simulation output, and the confidence which can be attached to their mean values. This is the basis for a discussion on the applicability of such approaches to derive confidence on the tail of distributions, where the worst case is expected to be.
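One way to attach confidence to a tail estimate from simulation output, chosen by us for illustration rather than taken from the paper: estimate p = P(response time > d) as the fraction of simulated responses exceeding d, together with an exact (Clopper-Pearson) binomial confidence interval.

```python
import numpy as np
from scipy.stats import beta

def tail_estimate(samples, d, alpha=0.05):
    """Point estimate and (1 - alpha) Clopper-Pearson interval for P(X > d)."""
    n = len(samples)
    k = int(np.sum(np.asarray(samples) > d))       # number of exceedances of d
    p_hat = k / n
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return p_hat, (lo, hi)

rng = np.random.default_rng(2)
responses = rng.gamma(4.0, 1.5, size=100_000)      # toy simulation output
print(tail_estimate(responses, d=20.0))
```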
Chapman & Hall/CRC Computer & Information Science Series, 2007
This chapter deals with the problem of how to estimate and analyze the execution time of embedded real-time software, in particular the worst-case execution time.
2009
In recent years, many component-based real-time systems have been proposed as a solution to modular and easily maintainable distributed real-time systems. This paper proposes a methodology for estimating probability distributions of execution times in the context of such systems, where no access to component internal code is assumed. In order to evaluate the proposed methodology, experiments were conducted with components, and related compositions, implemented over CIAO and ARCOS. CIAO is a known real-time component-based middleware and ARCOS is a software framework devoted to the construction of real-time control and supervision applications, also developed over CIAO. The collected experimental data show that the proposed approach is indeed a good approximation for component execution time probability distributions.
2007
This paper describes an algorithm to determine the performance of real-time systems with tasks using stochastic processing times. Such an algorithm can be used for guaranteeing Quality of Service of periodic tasks with soft real-time constraints. We use a discrete distribution model of processing times instead of worst-case times as in hard real-time systems. Such a model gives a more realistic view of the actual requirements of the system. The presented algorithm works for all deterministic scheduling systems, which makes it more general than existing algorithms and allows us to compare performance between these systems. To demonstrate our method, we make a comparison between the performance of the well-known scheduling algorithms Earliest Deadline First and Rate Monotonic. We show that the complexity of our method can compete with other algorithms that work for a wide range of schedulers.
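An illustrative building block under our assumptions, not the paper's full algorithm: convolve the discrete execution-time distributions of the jobs that execute before the task of interest completes, then read a deadline miss probability from the tail of the result. The example jobs and deadline are made up.

```python
from collections import defaultdict

def convolve(d1, d2):
    """Convolution of two discrete distributions {value: probability}."""
    out = defaultdict(float)
    for v1, p1 in d1.items():
        for v2, p2 in d2.items():
            out[v1 + v2] += p1 * p2
    return dict(out)

def miss_probability(dists, deadline):
    """P(total accumulated demand > deadline) over independent discrete jobs."""
    total = {0: 1.0}
    for d in dists:
        total = convolve(total, d)
    return sum(p for v, p in total.items() if v > deadline)

# two interfering jobs plus the job under analysis, deadline 10
jobs = [{2: 0.7, 4: 0.3}, {3: 0.9, 5: 0.1}, {2: 0.5, 6: 0.5}]
print(miss_probability(jobs, deadline=10))
```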
2016 Euromicro Conference on Digital System Design (DSD), 2016
The use of increasingly complex hardware and software platforms in response to the ever rising performance demands of modern real-time systems complicates the verification and validation of their timing behaviour, which form a time-and-effort-intensive step of system qualification or certification. In this paper we relate the current state of practice in measurement-based timing analysis, the predominant choice for industrial developers, to the proceedings of the PROXIMA project in that very field. We recall the difficulties that the shift towards more complex computing platforms causes in that regard. Then we discuss the probabilistic approach proposed by PROXIMA to overcome some of those limitations. We present the main principles behind the PROXIMA approach as well as the changes it requires at hardware or software level underneath the application. We also present the current status of the project against its overall goals, and highlight some of the principal confidence-building results achieved so far.
2009 Design, Automation & Test in Europe Conference & Exhibition, 2009
In the design and development of embedded real-time systems, the aspect of timing behavior plays a central role. In particular, the evaluation of different scheduling approaches, algorithms and configurations is one of the elementary preconditions for creating not only reliable but also efficient systems, a key for success in industrial mass production. This is becoming even more important as multi-core systems increasingly penetrate the world of embedded systems, together with the large (and growing) variety of scheduling policies available for such systems. In this work, simple mathematical concepts are used to define performance indicators that quantify the benefit of different solutions to the scheduling challenge for a given application. As a sample application, some aspects of analyzing the dynamic behavior of a combustion engine management system for the automotive domain are shown. However, the described approach is flexible enough to support the specific optimization needs arising from the timing requirements defined by the application domain and can be used with simulation data as well as target system measurements.
Lecture Notes in Computer Science, 2002
The main contribution of this paper is an accurate analysis of real-time performance for dynamic real-time applications. A wrong system performance analysis can lead to a catastrophe in a dynamic real-time system. In addition, experiments show that real-time performance guarantees can be combined with efficient resource utilization, whereas previous worst-case approaches focused primarily on performance guarantees but typically resulted in poor utilization. A further contribution is a schedulability analysis for a feasible allocation of resource management on the Solaris operating system. This is accomplished with a mathematical model and with accurate response-time prediction for a periodic, dynamic distributed real-time application.
Dependable Software Engineering. Theories, Tools, and Applications, 2017
Probabilistic approaches to timing analysis derive probability distributions to upper-bound task execution times. The main purpose of using probability distributions instead of deterministic bounds is to obtain more flexible and less pessimistic worst-case models. However, in order to guarantee safe probabilistic worst-case models, every possible execution condition needs to be taken into account. In this work, we propose probabilistic representations able to model every task and system execution condition, including the worst cases. Combining probabilities and multiple conditions offers a flexible and accurate representation that can be applied to mixed-criticality task models and to characterizations of fault effects on task executions. A case study with single- and multi-core real-time systems is provided to illustrate the completeness and versatility of the representation framework we provide.