2006, Proceedings of the …
Failure of a safety-critical application on an embedded processor can lead to severe damage or even loss of life. Here we are concerned with two kinds of failure: stack overflow, which usually leads to runtime errors that are difficult to diagnose, and failure to ...
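As a hedged illustration (my own sketch, not code from the paper), the fragment below shows the pattern that makes static stack-usage analysis hard: recursion whose depth depends on run-time input, so the worst-case stack consumption cannot be bounded without an externally supplied limit on the argument.

```c
/* Illustrative only: recursion whose depth depends on run-time input is
 * exactly the pattern a static stack-usage analyzer must bound or reject. */
#include <stdio.h>
#include <stdlib.h>

/* Each activation adds a stack frame; without a proven bound on n,
 * worst-case stack usage is unbounded. */
static unsigned long sum_to(unsigned long n)
{
    if (n == 0)
        return 0;
    return n + sum_to(n - 1);   /* recursive call grows the stack */
}

int main(int argc, char **argv)
{
    unsigned long n = (argc > 1) ? strtoul(argv[1], NULL, 10) : 10;
    printf("%lu\n", sum_to(n));
    return 0;
}
```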
Proceedings of the 14th international symposium on Systems synthesis - ISSS '01, 2001
Retargetable Static Timing Analysis for Embedded Software. Kaiyu Chen, Sharad Malik, David I. August. Department of Electrical Engineering, Princeton University.
ACM SIGPLAN Notices, 2008
With continued CMOS scaling, future shipped hardware will be increasingly vulnerable to in-the-field faults. To be broadly deployable, the hardware reliability solution must incur low overheads, precluding use of expensive redundancy. We explore a cooperative hardware-software solution that watches for anomalous software behavior to indicate the presence of hardware faults. Fundamental to such a solution is a characterization of how hardware faults in different microarchitectural structures of a modern processor propagate through the application and OS.
Lecture Notes in Computer Science, 2006
Methods for Worst-Case Execution Time (WCET) analysis have been known for some time, and recently commercial tools have emerged. However, the technique has so far not been much used to analyse real production codes. Here, we present a case study where static WCET analysis was used to find upper time bounds for time-critical regions in a commercial real-time operating system. The purpose was not primarily to test the accuracy of the estimates, but rather to investigate the practical difficulties that arise when applying current WCET analysis methods to this particular kind of code. In particular, we were interested in how labor-intensive the analysis becomes, measured by the number of annotations that are necessary to explicitly constrain the program flow in order to perform the analysis. We also make some qualitative observations regarding what a WCET analysis method would need in order to perform an analysis of typical operating systems code that is both convenient and tight. In a second set of experiments, we analyzed some standard WCET benchmark codes compiled with different levels of optimization. The purpose of this study was to see how the different compiler optimizations affected the precision of the analysis, and again whether they affected the level of user intervention necessary to obtain an accurate WCET estimate.
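A minimal sketch of why such flow annotations are needed; the code and the "flow fact" comment are illustrative assumptions and do not follow the annotation language of any particular WCET tool or the code analyzed in the case study.

```c
/* Illustrative only: the annotation in the comment is hypothetical. */
#include <stddef.h>

/* The loop bound depends on the run-time value of n, so a static WCET
 * analysis needs an explicit constraint (e.g. "n <= 64 at this call
 * site") before it can produce a finite, tight execution-time bound. */
int checksum(const unsigned char *buf, size_t n)
{
    int s = 0;
    for (size_t i = 0; i < n; i++)   /* flow fact (manual annotation): at most 64 iterations */
        s += buf[i];
    return s;
}
```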
We construct a fully automatic static WCET analysis suitable for real-time embedded systems applications by augmenting a high-level static analysis technique (originally aimed at heap space) with a machine-level worst-case execution time tool. We evaluate this approach by studying two typical and realistic real-time control applications, using a readily available commercial microcontroller. Keywords: WCET; analysis; static; type system.
2002
We report on a successful preliminary experience in the design and implementation of a special-purpose Abstract Interpretation based static program analyzer for the verification of safety-critical embedded real-time software. The analyzer is both precise (zero false alarms in the considered experiment) and efficient (less than one minute of analysis for 10,000 lines of code). Even though it is based on a simple interval analysis, many features have been added to obtain the desired precision: expansion of small arrays, widening with several thresholds, loop unrolling, trace partitioning, and relations between loop counters and other variables. The efficiency of the tool mainly comes from a clever representation of abstract environments based on balanced binary search trees. Dedicated to Neil Jones, for his 60th birthday.
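The following toy sketch (my own illustration, not the analyzer described above) shows two of the domain operations the abstract names, interval join and widening with thresholds; the threshold set and the tiny abstract loop are arbitrary assumptions.

```c
/* Toy interval domain: join and threshold widening.
 * LONG_MIN / LONG_MAX stand for -infinity / +infinity. */
#include <limits.h>
#include <stdio.h>

typedef struct { long lo, hi; } interval;

/* Least upper bound of two intervals. */
static interval join(interval a, interval b)
{
    interval r = { a.lo < b.lo ? a.lo : b.lo,
                   a.hi > b.hi ? a.hi : b.hi };
    return r;
}

/* Widening with thresholds: an unstable bound jumps to the next threshold
 * (a hypothetical fixed set here) instead of straight to infinity, which is
 * what recovers precision on loop counters. */
static const long thresholds[] = { 0, 10, 100, 1000, LONG_MAX };

static interval widen(interval old, interval nxt)
{
    interval r = old;
    if (nxt.lo < old.lo)
        r.lo = LONG_MIN;                          /* unstable lower bound */
    if (nxt.hi > old.hi) {                        /* unstable upper bound */
        for (size_t i = 0; i < sizeof thresholds / sizeof *thresholds; i++)
            if (nxt.hi <= thresholds[i]) { r.hi = thresholds[i]; break; }
    }
    return r;
}

int main(void)
{
    interval x = { 0, 0 };                        /* abstract value of a loop counter */
    interval step = { 1, 1 };
    for (int k = 0; k < 3; k++) {                 /* a few abstract loop iterations */
        interval nxt = { x.lo + step.lo, x.hi + step.hi };
        x = widen(x, join(x, nxt));
        printf("[%ld, %ld]\n", x.lo, x.hi);
    }
    return 0;
}
```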
Debugging denotes the process of detecting the root causes of unexpected observable behaviors in programs, such as a program crash, an unexpected output value, or an assertion violation. Debugging program errors is a difficult task and often takes a significant amount of time in the software development life cycle. In the context of embedded software, the probability of bugs is quite high. Due to requirements of low code size and low resource consumption, embedded software typically does away with many sanity checks during development. This leads to a high chance of errors being uncovered in the production code at run time. In this paper we propose a methodology for debugging errors in BusyBox, a de-facto standard for Linux in embedded systems. Our methodology works on top of Valgrind, a popular memory error detector, and Daikon, an invariant analyzer. We have experimented with two published errors in BusyBox and report our findings in this paper.
ACM SIGPLAN Notices, 1998
Programs which manipulate pointers are hard to debug. Pointer analysis algorithms (originally aimed at optimizing compilers) may provide some remedy: by statically analyzing the behavior of programs on all their input data, they can identify potential errors such as dereferencing NULL pointers.
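For instance (an assumed example, not taken from the paper), a pointer analysis that tracks the possibly-NULL result of malloc across all executions would flag the dereference below, which only fails on the runs where allocation fails.

```c
/* Illustrative bug of the class such analyses target. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *p = malloc(16);        /* malloc may return NULL              */
    strcpy(p, "hello");          /* potential NULL-pointer dereference  */
    puts(p);
    free(p);
    return 0;
}
```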
… Wien, Institut für …, 2009
The analysis of the worst-case execution time (WCET) requires detailed knowledge of the program behavior. On modern processors the instruction timing heavily depends on the processor state, so WCET analysis has to model the processor behavior in detail. This analysis is challenging in the presence of so-called timing anomalies, which violate the proportionality and continuity properties of the timing behavior.
Lecture Notes in Computer Science, 2005
Many tasks in safety-critical embedded systems have hard real-time characteristics. AbsInt's worst-case execution time analyzer aiT can estimate precise and safe upper bounds for the WCETs of program tasks, thus providing the basic input for verifying the real-time behavior of embedded applications.
2008
Following the successful WCET Tool Challenge in 2006, the second event in this series was organized in 2008, again with support from the ARTIST2 Network of Excellence. The WCET Tool Challenge 2008 (WCC'08) provides benchmark programs and poses a number of "analysis problems" about the dynamic, runtime properties of these programs. The participants are challenged to solve these problems with their program-analysis tools. Two kinds of problems are defined: WCET problems, which ask for bounds on the execution time of chosen parts (subprograms) of the benchmarks, under given constraints on input data; and flow-analysis problems, which ask for bounds on the number of times certain parts of the benchmark can be executed, again under some constraints. We describe the organization of WCC'08, the benchmark programs, the participating tools, and the general results, successes, and failures. Most participants found WCC'08 to be a useful test of their tools. Unlike the 2006 Challenge, the WCC'08 participants include several tools for the same target (ARM7, LPC2138), and tools that combine measurements and static analysis, as well as pure static-analysis tools.
Proceedings 2022 Network and Distributed System Security Symposium
Despite vast research on defenses to protect stack objects from the exploitation of memory errors, much stack data remains at risk. Historically, stack defenses have focused on the protection of code pointers, such as return addresses, but emerging techniques to exploit memory errors motivate the need for practical solutions that protect stack data objects as well. However, recent approaches provide an incomplete view of security by not accounting for memory errors comprehensively and by unnecessarily limiting the set of objects that can be protected. In this paper, we present the DATAGUARD system, which statically identifies which stack objects are safe from spatial, type, and temporal memory errors in order to protect those objects efficiently. DATAGUARD improves security through a more comprehensive and accurate safety analysis that proves a larger number of stack objects are safe from memory errors, while ensuring that no unsafe stack objects are mistakenly classified as safe. DATAGUARD's analysis of server programs and the SPEC CPU2006 benchmark suite shows that DATAGUARD improves security by: (1) ensuring that no memory safety violations are possible for any stack objects classified as safe, removing 6.3% of the stack objects previously classified safe by the Safe Stack method, and (2) blocking exploitation of all 118 stack vulnerabilities in the CGC Binaries. DATAGUARD extends the scope of stack protection by validating as safe over 70% of the stack objects classified as unsafe by the Safe Stack method, leading to an average of 91.45% of all stack objects that can only be referenced safely. By identifying more functions with only safe stack objects, DATAGUARD reduces the overhead of using Clang's Safe Stack defense for protection of the SPEC CPU2006 benchmarks from 11.3% to 4.3%. Thus, DATAGUARD shows that a comprehensive and accurate analysis can both increase the scope of stack data protection and reduce overheads.
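A small, hypothetical example of the distinction at stake (my own sketch, not DATAGUARD's code): a conservative analysis such as Safe Stack treats any object whose address escapes as unsafe, while a more precise spatial analysis can prove the accesses bounded and reclassify the object as safe.

```c
#include <stdio.h>
#include <string.h>

void handle(const char *msg, size_t len)
{
    int count = 0;              /* safe: a scalar whose address is never taken         */
    char buf[32];               /* its address escapes to memcpy/printf, so a
                                   conservative analysis classifies it unsafe; yet
                                   every access is bounded by sizeof buf, which a more
                                   precise spatial analysis can prove                  */
    if (len >= sizeof buf)
        len = sizeof buf - 1;
    memcpy(buf, msg, len);
    buf[len] = '\0';
    count++;
    printf("%d: %s\n", count, buf);
}

int main(void)
{
    handle("hello", 5);
    return 0;
}
```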
1996
Many important classes of bugs result from invalid assumptions about the results of functions and the values of parameters and global variables. Using traditional methods, these bugs cannot be detected efficiently at compile time, since detailed cross-procedural analyses would be required to determine the relevant assumptions. In this work, we introduce annotations to make certain assumptions explicit at interface points.
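A short sketch of the idea using annotations in the style of LCLint/Splint; the specific annotation names and the function are illustrative assumptions rather than quotations from the paper.

```c
/* Interface annotations in the style of LCLint/Splint (illustrative). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* The annotation records the assumption that the result may be NULL, so
 * a checker can warn about any caller that omits the NULL test. */
/*@null@*/ static char *dup_string(/*@notnull@*/ const char *s)
{
    char *copy = malloc(strlen(s) + 1);
    if (copy != NULL)
        strcpy(copy, s);
    return copy;                    /* may be NULL: callers must check */
}

int main(void)
{
    char *d = dup_string("hello");
    if (d != NULL) {                /* the check the annotation obliges callers to make */
        puts(d);
        free(d);
    }
    return 0;
}
```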
Runtime Verification, 2017
State-of-the-art memory debuggers have become efficient at detecting spatial memory errors, i.e., dereferences of pointers to unallocated memory. These tools, however, cannot always detect errors arising from the use of stale pointers to valid memory (temporal memory errors). This paper presents an approach to reliable detection of temporal memory errors during a run of a program. The technique tracks allocated memory, tagging allocated objects and pointers with tokens that allow reasoning about their temporal properties. It further checks pointer dereferences and detects temporal (and spatial) memory errors before they occur. The present approach has been implemented in E-ACSL, a runtime verification tool for C programs. Experimentation with E-ACSL using the TempLIST benchmark, comprising small C programs seeded with temporal errors, shows that the suggested technique detects temporal memory errors missed by state-of-the-art memory debuggers. Further experiments with computationally intensive runs of programs from SPEC CPU indicate that the overheads of the proposed approach are within an acceptable range for use during testing or debugging.
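A minimal example of the kind of bug meant here (my own deliberately erroneous illustration, not part of the TempLIST benchmark): after the free, q is stale, yet if the allocator reuses the address it points once again to valid memory, so a purely spatial checker has nothing to flag.

```c
/* Deliberately buggy program demonstrating a temporal memory error. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *p = malloc(8);
    char *q = p;              /* q aliases the first allocation              */
    free(p);                  /* q is now a stale (dangling) pointer         */
    char *r = malloc(8);      /* the allocator may reuse the same address    */
    strcpy(r, "fresh");
    if (q == r)               /* q then points to *valid* memory again ...   */
        printf("%c\n", q[0]); /* ... so a purely spatial checker may miss
                                 this use of a stale pointer                 */
    free(r);
    return 0;
}
```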
IFIP International Federation for Information Processing, 2004
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2000
Embedded hard real-time systems need reliable guarantees for the satisfaction of their timing constraints. Experience with the use of static timing-analysis methods and the tools based on them in the automotive and the aeronautics industries is positive. However, both the precision of the results and the efficiency of the analysis methods are highly dependent on the predictability of the execution platform. In fact, the architecture determines whether a static timing analysis is practically feasible at all and whether the most precise obtainable results are precise enough. Results in this paper also show that the measurement-based methods still used in industry are not useful for the complex processors now in common use. This dependence on architectural development is of growing concern to the developers of timing-analysis tools and their customers, the developers in industry. The problem reaches a new level of severity with the advent of multicore architectures in the embedded domain. This paper describes the architectural influence on static timing analysis and gives recommendations as to profitable and unacceptable architectural features.
Out-of-memory errors are a serious source of unreliability in most embedded systems. Applications run out of main memory because of the frequent difficulty of estimating the memory requirement before deployment, either because it depends on input data, or because certain language features prevent estimation. The typical lack of disks and virtual memory in embedded systems has two serious consequences when an out-of-memory error occurs. First, there is no swap space for the application to grow into, and the system crashes. Second, since protection from virtual memory is usually absent, the fact that a segment has exceeded its bounds is not even detected, and hence no pre-crash remedial action is possible. This work improves system reliability in two ways. First, it proposes a low-overhead system of run-time checks by which out-of-memory errors are detected just before they happen, using carefully optimized compiler-inserted run-time check code. Such error detection enables the designer to incorporate system-specific remedial action, such as transfer to manual control, shutting down of non-critical tasks, or other actions. Second, this work proposes five related techniques that can grow the stack or heap segment after it is out of memory, into previously un-utilized space such as dead variables and space freed by compressing live variables. These techniques can avoid the out-of-memory error if the extra space recovered is enough to complete execution.
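The following hand-written sketch approximates the kind of check the paper says is compiler-inserted; the stack budget, the frame-size constant, and the assumption of a downward-growing stack are all illustrative assumptions of this sketch, not details taken from the paper.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define STACK_BUDGET (64 * 1024)          /* hypothetical stack budget */

static uintptr_t stack_limit;             /* lowest address the stack may legally reach */

static void out_of_memory_handler(void)
{
    /* system-specific remedial action would go here (shut down non-critical tasks, ...) */
    fputs("stack about to overflow -- taking remedial action\n", stderr);
    exit(1);
}

/* Approximation of a check inserted at function entry: trap the overflow
 * just before it happens instead of silently corrupting adjacent memory.
 * Assumes a downward-growing stack. */
#define STACK_CHECK(frame_bytes)                                   \
    do {                                                           \
        char probe;                                                \
        if ((uintptr_t)&probe < stack_limit + (frame_bytes))       \
            out_of_memory_handler();                               \
    } while (0)

static unsigned work(unsigned depth)
{
    STACK_CHECK(512);                     /* 512 bytes: assumed worst-case frame size */
    volatile char frame[256];
    frame[0] = (char)depth;
    return work(depth + 1) + frame[0];    /* unbounded recursion forces the check to fire */
}

int main(void)
{
    char base;
    stack_limit = (uintptr_t)&base - STACK_BUDGET;   /* simulate the segment bound */
    (void)work(0);
    return 0;
}
```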
Reliable Software Technologies Ada-Europe 2006, 2006
This article describes a practical tool aimed at the C code of the Linux kernel; the tool was first described as a prototype (in this forum) in 2004. Its continuing maturation means that it is now capable of treating millions of lines of code in a few hours on very modest platforms. It detects about two real and uncorrected deadlock situations per thousand C source files or million lines of source code in the Linux kernel, and three uncorrected accesses to freed memory. In contrast to model-checking techniques, the tool uses a configurable "3-phase" compositional programming logic to perform its analysis. The result is a user-customizable abstract interpretation of C code that executes several different analyses simultaneously. The article contrasts theory and practice.
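For context, an AB-BA deadlock of the general shape such an analysis reports looks like the following (an illustrative user-space pthreads program, not an excerpt from the Linux kernel; depending on scheduling, a run may hang, which is exactly the bug being demonstrated).

```c
/* Two threads acquire the same two locks in opposite orders: a classic
 * potential deadlock.  Build with: cc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg)
{
    pthread_mutex_lock(&A);
    pthread_mutex_lock(&B);   /* t1 holds A, wants B */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return arg;
}

static void *t2(void *arg)
{
    pthread_mutex_lock(&B);
    pthread_mutex_lock(&A);   /* t2 holds B, wants A: opposite order => possible deadlock */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return arg;
}

int main(void)
{
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);
    pthread_join(y, NULL);
    puts("finished (this run happened not to interleave badly)");
    return 0;
}
```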
2012
Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute ...
Lecture Notes in Computer Science, 2011
SLAyer is a program analysis tool designed to automatically prove memory safety of industrial systems code. In this paper we describe SLAyer's implementation, and its application to Windows device drivers. This paper accompanies the first release of SLAyer.