8th Intl. Workshop on Worst-Case Execution Time (WCET) Analysis, 2008
Following the successful WCET Tool Challenge in 2006, the second event in this series was organized in 2008, again with support from the ARTIST2 Network of Excellence. The WCET Tool Challenge 2008 (WCC'08) provides benchmark programs and poses a number of "analysis problems" about the dynamic, run-time properties of these programs. The participants are challenged to solve these problems with their program-analysis tools. Two kinds of problems are defined: WCET problems, which ask for bounds on the execution time of chosen parts (subprograms) of the benchmarks, under given constraints on input data; and flow-analysis problems, which ask for bounds on the number of times certain parts of the benchmark can be executed, again under some constraints. We describe the organization of WCC'08, the benchmark programs, the participating tools, and the general results, successes, and failures. Most participants found WCC'08 to be a useful test of their tools. Unlike the 2006 Challenge, the WCC'08 participants include several tools for the same target (ARM7, LPC2138), and tools that combine measurements and static analysis, as well as pure static-analysis tools.
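To make the two kinds of analysis problems concrete, here is a minimal C sketch; the function, the loop bound, and the input constraint are illustrative assumptions, not taken from the actual WCC'08 benchmark suite. A WCET problem would ask for a bound on the execution time of filter_sample under the stated constraint on n, while a flow-analysis problem would ask how often the loop body can execute per call.

/* Hypothetical benchmark fragment, in the spirit of a WCC'08 analysis problem.
 * WCET problem: bound the execution time of filter_sample() for 1 <= n <= 32.
 * Flow problem: bound the number of executions of the loop body under the
 *               same constraint on n (here at most 32 per call).
 */
int filter_sample(const int *buf, int n)   /* assumed constraint: 1 <= n <= 32 */
{
    int acc = 0;
    for (int i = 0; i < n; i++) {          /* loop body: the "part" whose
                                              execution count is asked for */
        acc += buf[i] >> 1;
    }
    return acc;
}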
International Journal on Software Tools for Technology Transfer, 2009
The first international Worst Case Execution Time (WCET) Tool Challenge in 2006 used benchmark programs to evaluate academic and commercial WCET tools. It aimed to study the state of the art in WCET analysis. The WCET Tool Challenge comprised two parallel evaluation approaches: an internal evaluation by the respective tool developers and an external test by a neutral person from an independent institute. The latter was conducted by the author of this paper. Focusing on the external test, we describe the rules, benchmarks, and participants, and discuss the results obtained.
11th IEEE Symposium on Object Oriented …, 2008
Worst-Case Execution Time (WCET) analysis means to compute a safe upper bound on the execution time of a piece of code. Parametric WCET analysis yields symbolic upper bounds: expressions that may contain parameters. These parameters may represent, for instance, values of input parameters to the program, or maximal iteration counts for loops. We describe a technique for fully automatic parametric WCET analysis, which is based on known mathematical methods: an abstract interpretation to calculate parametric constraints on program flow, a symbolic method to count integer points in polyhedra, and a symbolic ILP technique to solve the subsequent IPET calculation of the WCET bound. The technique is capable of handling unstructured code, and it can find upper bounds on loop iteration counts automatically.
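As a rough illustration of the final step, a parametric IPET calculation for a single loop bounded by a parameter n might look as follows; the block costs c_i, execution counts x_i, and structural constraints are assumed for the sketch and are not the paper's actual formulation.

% Sketch of a parametric IPET formulation for a single loop bounded by a
% parameter n (block costs c_i and structural constraints are assumed).
\begin{align*}
\mathrm{WCET}(n) \;=\; \max\; & c_1 x_1 + c_2 x_2 + c_3 x_3 \\
\text{s.t.}\quad & x_1 = x_3 = 1      && \text{(entry and exit blocks execute once)}\\
                 & x_2 \le n\, x_1    && \text{(parametric loop bound)}\\
                 & x_i \ge 0 .
\end{align*}

Solving this symbolically, rather than for a fixed n, yields an expression such as c_1 + c_3 + n c_2, which is the kind of symbolic upper bound the abstract refers to.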
The Set of Software Quality Static Analyzers (SSQSA) is a set of software tools for static analysis, incorporated in a framework developed to target a common aim: consistent software quality analysis. The main characteristic of all integrated tools is independence from the input computer language. Language independence is achieved through an enriched Concrete Syntax Tree (eCST) that is used as an intermediate representation of the source code. This characteristic gives the tools more generality compared to other, similar static analyzers. The aim of this paper is to describe an early idea for introducing support for static timing analysis and Worst Case Execution Time (WCET) calculation at code level in the SSQSA framework.
2010
We have developed a new programming paradigm which, for conforming programs, allows the average-case execution time (ACET) to be obtained automatically by a static analysis. This is achieved by tracking the data structures, and their distributions, that will exist during all possible executions of a program. This new programming paradigm is called MOQA, and the tool which performs the static analysis is called Distritrack. In this paper we give an overview of both MOQA and Distritrack.
We construct a fully automatic static WCET analysis suitable for real-time embedded-systems applications by augmenting a high-level static analysis technique (originally aimed at heap space) with a machine-level worst-case execution time tool. We evaluate this approach by studying two typical and realistic real-time control applications, using a readily available commercial microcontroller.
IFIP International Federation for Information Processing, 2004
… on Worst-Case Execution Time (WCET) Analysis, 2003
IEEE Transactions on Industrial Informatics, 2000
For hard real-time systems, static code analysis is needed to derive a safe bound on the worst-case execution time (WCET). Virtually all prior work has focused on the accuracy of WCET analysis without regard to the speed of analysis. The resulting algorithms are often too slow to be integrated into the development cycle, requiring WCET analysis to be postponed until a final verification phase.
Software - Practice and Experience, 2010
In this paper, we propose a solution for a worst-case execution time (WCET) analyzable Java system: a combination of a time-predictable Java processor and a tool that performs WCET analysis at Java bytecode level. We present a Java processor, called JOP, designed for time-predictable execution of real-time tasks. The execution times of bytecodes, the instructions of the Java virtual machine, are known cycle-accurately for JOP. Therefore, JOP simplifies the low-level WCET analysis. A method cache, which fills whole Java methods into the cache, simplifies the cache analysis.
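The following C sketch illustrates why cycle-accurate bytecode timings make the low-level analysis simple: the cost of a basic block is just the sum of the known per-instruction cycle counts. The opcode set and cycle counts below are hypothetical placeholders, not JOP's real timing table.

/* Illustrative sketch: with per-bytecode cycle counts known exactly (as on a
 * time-predictable processor such as JOP), the cost of a basic block is the
 * sum of its instructions' cycle counts. The table below is hypothetical. */
#include <stddef.h>

enum bytecode { BC_ILOAD, BC_IADD, BC_ISTORE, BC_GOTO };

static const unsigned cycles[] = {
    [BC_ILOAD]  = 2,   /* assumed cycle counts, not JOP's actual numbers */
    [BC_IADD]   = 1,
    [BC_ISTORE] = 2,
    [BC_GOTO]   = 4,
};

unsigned block_cycles(const enum bytecode *block, size_t len)
{
    unsigned total = 0;
    for (size_t i = 0; i < len; i++)
        total += cycles[block[i]];   /* exact, known contribution per bytecode */
    return total;
}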
2006 27th IEEE International Real-Time Systems Symposium (RTSS'06), 2006
Static Worst-Case Execution Time (WCET) analysis is a technique to derive upper bounds for the execution times of programs. Such bounds are crucial when designing and verifying real-time systems. A key component for statically deriving safe and tight WCET bounds is information on the possible program flow through the program. Such flow information can be provided manually by user annotations, or automatically by a flow analysis. To make WCET analysis as simple and safe as possible, it should preferably be automatically derived, with no or very limited user interaction.
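As an illustration of what such flow information looks like in practice, consider the C fragment below. The comment-style annotation is purely hypothetical (real tools each have their own annotation formats), and the precondition on len is an assumption made for the example; a flow analysis would aim to derive the same bound automatically.

/* Illustrative only: the annotation style is hypothetical; actual WCET tools
 * use their own annotation formats. */
void scale(int *v, int len)           /* assumed precondition: 0 <= len <= 100 */
{
    for (int i = 0; i < len; i++) {   /* manual annotation: loop bound 100, or
                                         a flow analysis derives i < len <= 100
                                         automatically from the precondition  */
        v[i] *= 2;
    }
}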
2011 International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation, 2011
Obtaining tight worst-case execution-time (WCET) estimations of real-time tasks is crucial, since overly pessimistic estimations are deemed impractical. One way of making WCET estimations tighter is to incorporate more program-flow information, e.g., context-sensitive loop bounds, infeasible-path and same-path information, etc. In this paper we present and evaluate a completely automatic analysis that dynamically derives program-flow information to use in WCET analysis. Flow information is derived by a combination of test-data generation and parsing of program-execution traces to obtain flow-fact hypotheses, which are then fed to a model checker to establish their correctness. Experimental evaluation shows that our method helps achieve considerable tightness in WCET estimations at a manageable cost.
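A small, hypothetical C example of the kind of infeasible-path flow fact such an analysis could establish: test runs never show both branch bodies executing in the same call, the trace parser proposes the hypothesis that blocks A and B are mutually exclusive, and a model checker confirms it, tightening the WCET estimate. The code and helper names are illustrative only.

/* Illustrative example of an infeasible-path flow fact. */
extern int slow_path_a(int);   /* hypothetical helpers, assumed defined elsewhere */
extern int slow_path_b(int);

int classify(int x)
{
    int r = 0;
    if (x > 10)        /* block A */
        r += slow_path_a(x);
    if (x < 5)         /* block B: can never execute together with A, since
                          x > 10 and x < 5 are contradictory                  */
        r += slow_path_b(x);
    return r;
}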
Proc. 4th Euromicro International …, 2004
HAL (Le Centre pour la Communication Scientifique Directe), 2016
Reliable Software Technologies - Ada-Europe 2009, 2009
Knowledge about the Worst-Case Execution Time (WCET) is of paramount importance in the validation of real-time systems. A WCET estimation must be safe and tight. Tightness in WCET estimation is highly desirable for an efficient utilisation of resources. In order to obtain accurate WCET values, more program execution history must be accounted for. In this thesis we propose the use of Predicated WCET Analysis, which uses constraint-logic programming to model context-sensitive execution times of program segments. We prove that our predicated analysis is safe and very tight compared to contemporary analysis techniques.
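A minimal sketch of what context-sensitive segment timing means, with illustrative names and numbers not taken from the thesis: a conventional analysis would use the single worst-case time of process() at every call site, whereas a predicated analysis can keep one execution-time constraint per context predicate and apply whichever one a given call context implies.

/* Hypothetical example of context-sensitive segment timing. A predicated
 * analysis can keep two constraints, roughly
 *   (m == FAST)  ->  time(process) <= T_fast
 *   (m == FULL)  ->  time(process) <= T_full
 * and pick the one implied by each call context, instead of always assuming
 * the larger T_full. Names and structure are illustrative. */
enum mode { FAST, FULL };

int process(const int *buf, enum mode m)
{
    int n = (m == FAST) ? 8 : 256;   /* iteration count depends on the context */
    int acc = 0;
    for (int i = 0; i < n; i++)
        acc += buf[i];
    return acc;
}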
ACM Transactions on Embedded Computing Systems, 2008
The determination of upper bounds on execution times, commonly called Worst-Case Execution Times (WCETs), is a necessary step in the development and validation process for hard real-time systems. This problem is hard if the underlying processor architecture has components such as caches, pipelines, branch prediction, and other speculative components. This article describes different approaches to this problem and surveys several commercially available tools and research prototypes.
Proc. 5th International Workshop on Worst-Case …, 2005
2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), 2008
Most software engineering methods require some form of model populated with appropriate information. Real-time systems are no exception. A significant issue is that the information needed is not always freely available, and deriving it using manual methods is costly in terms of time and money. Previous work showed how machine learning, applied to information collected during software testing, can be used to derive loop bounds as part of the Worst-Case Execution Time analysis problem. In this paper we build on this work by investigating the issue of branch prediction.