2016, Proceedings of the 5th ACM SIGPLAN Conference on Certified Programs and Proofs - CPP 2016
…
Static code analysis is increasingly used to guarantee the absence of undesirable behaviors in industrial programs. Designing sound analyses is a continuing trade-off between precision and complexity. Notably, dataflow analyses often perform overly wide approximations when two control-flow paths meet, by merging the states from each path. This paper presents a generic abstract-interpretation-based framework that enhances the precision of such analyses at join points. It relies on predicated domains, which preserve and reuse information that is valid only within some branches of the code. Our predicates are derived from conditional statements and postpone the loss of information. The work has been integrated into Frama-C, a C source code analysis platform. Experiments on real generated code show that our approach scales and significantly improves the precision of Frama-C's existing analyses.
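To illustrate the kind of precision loss at join points that predicated domains address, consider a small C fragment (a hypothetical illustration, not taken from the paper):

```c
#include <stddef.h>

/* A plain value analysis merges the states of both branches at the join
 * after the first `if`, losing the link between `ok` and `p`.  A
 * predicated domain can keep the fact "ok != 0 ==> p != NULL" derived
 * from the conditional, and reuse it in the second branch to prove the
 * dereference safe. */
int read_first(int *buf, int use_buf) {
    int *p = NULL;
    int ok = 0;
    if (use_buf && buf != NULL) {
        p = buf;
        ok = 1;
    }
    /* join point: without predicates, the analysis only knows
     * p in {NULL, buf} and ok in {0, 1}, independently of each other */
    if (ok) {
        return *p;   /* provably safe only if the implication was kept */
    }
    return -1;
}
```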
Lecture Notes in Computer Science, 1994
Effect systems and abstract interpretation are two methods to perform static analysis of programs. We present a new technique that builds upon the type and effect information of module signatures to extend abstract interpretation in the context of separate compilation. We use control-flow analysis as an application of this idea to support our claim. Control-flow analysis strives to determine at compile time which functions, in a given call environment, may be called by a particular application expression. This static control-flow analysis can be expressed using either a type and effect system or abstract interpretation. The type and effect approach supports separate compilation but, being structural, collapses all call environments together, thus limiting the precision of control-flow information. By contrast, the abstract interpretation approach fails to support separate compilation but, because of its more operational nature, can distinguish between call environments, thus performing a more precise analysis. We present a new static control-flow analysis that combines both techniques in a single framework. This separate abstract interpretation is as effective as the abstract interpretation approach on closed expressions, but is also able to tackle expressions with free variables, using their types to approximate their abstract values. We prove that this separate abstract interpretation analysis is a conservative extension of abstract interpretation.
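The paper is set in a typed functional language; a rough C analogue of the precision question (hypothetical, using function pointers in place of application expressions) is which functions may be invoked at an indirect call site:

```c
/* Control-flow analysis asks which functions the indirect call `f(x)` may
 * invoke.  A purely structural (type-based) analysis collapses all callers
 * of `apply`, concluding {inc, dec} for the call site; an analysis that
 * distinguishes call environments can report {inc} for the first call and
 * {dec} for the second. */
static int inc(int x) { return x + 1; }
static int dec(int x) { return x - 1; }

static int apply(int (*f)(int), int x) {
    return f(x);              /* indirect call site analysed by CFA */
}

int example(void) {
    int a = apply(inc, 1);    /* call environment binds f = inc */
    int b = apply(dec, 2);    /* call environment binds f = dec */
    return a + b;
}
```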
Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, 2014
Low-level code is challenging: it lacks structure, it uses jumps and symbolic addresses, its control flow is often highly optimized, and registers and memory locations may be reused in ways that make typing extremely difficult. Information-flow properties create additional complications: they are hyperproperties relating multiple executions, and the possibility of interrupts and concurrency, together with the use of devices and features such as memory-mapped I/O, requires a departure from the usual initial-state/final-state account of noninterference. In this work we propose a novel approach to relational verification for machine code. Verification goals are expressed as equivalence of traces decorated with observation points. Relational verification conditions are propagated between observation points using symbolic execution and discharged using first-order reasoning. We have implemented an automated tool that integrates with SMT solvers to automate the verification task. The tool transforms ARMv7 binaries into an intermediate, architecture-independent format using the BAP toolset by means of a verified translator. We demonstrate the capabilities of the tool on a separation-kernel system call handler, which mixes handwritten assembly with gcc-optimized output, a UART device driver, and the modular exponentiation routine of a crypto service.
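A minimal sketch of the verification goal, shown in C rather than the paper's ARMv7 setting (the `observe` marker is a hypothetical stand-in for an observation point): the two variants must produce the same sequence of observed values for all inputs.

```c
/* Relational verification asks that the reference routine and the
 * "optimized" variant emit the same trace at the observation points. */
extern void observe(unsigned v);      /* hypothetical observation point */

void ref_sum(const unsigned *a, unsigned n) {
    unsigned s = 0;
    for (unsigned i = 0; i < n; i++) {
        s += a[i];
        observe(s);                   /* one observation per element */
    }
}

void opt_sum(const unsigned *a, unsigned n) {
    unsigned s = 0, i = 0;
    for (; i + 1 < n; i += 2) {       /* manually unrolled variant */
        observe(s += a[i]);
        observe(s += a[i + 1]);
    }
    if (i < n) observe(s += a[i]);    /* leftover element */
}
```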
Lecture Notes in Computer Science, 2002
In recent years, static analysis has increasingly been applied to the problem of program verification. Systems for program verification typically use precise and expensive interprocedural dataflow algorithms that are difficult to scale to large programs. An attractive way to scale these analyses is to use a preprocessing step to reduce the number of dataflow facts propagated by the analysis and/or the number of statements to be processed, before the dataflow analysis is run. This paper describes an approach that achieves this effect. We first run a scalable, control-flow-insensitive pointer analysis to produce a conservative representation of value flow in the program. We query the value flow representation at the program points where a dataflow solution is required, in order to obtain a conservative over-approximation of the dataflow facts and the statements that must be processed by the analysis. We then run the dataflow analysis on this "slice" of the program. We present experimental evidence in support of our approach by considering two client dataflow analyses for program verification: typestate analysis, and software model checking. We show that in both cases, our approach leads to dramatic speedups.
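As a hypothetical illustration of the kind of client property involved, consider a typestate rule such as "a file must be successfully opened before it is read or closed". A value-flow pre-pass can determine that only the statements connected to the tracked handle matter, so the expensive analysis runs on that slice:

```c
#include <stdio.h>

/* Only the statements touching `f` (and its aliases) are relevant to the
 * typestate fact; the slice excludes the unrelated arithmetic on `total`. */
long count_bytes(const char *path) {
    long total = 0;                   /* irrelevant to the typestate fact */
    FILE *f = fopen(path, "rb");      /* relevant: creates the tracked value */
    if (f == NULL)
        return -1;
    char buf[256];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)   /* relevant: use */
        total += (long)n;             /* irrelevant */
    fclose(f);                        /* relevant: final typestate transition */
    return total;
}
```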
Proceedings of the 2004 ACM SIGSOFT international symposium on Software testing and analysis, 2004
In this paper, we present a new algorithm for tracking the flow of values through a program. Our algorithm represents a substantial improvement over the state of the art. Previously described value flow analyses that are control-flow sensitive do not scale well, nor do they eliminate value flow information from infeasible execution paths (i.e., they are path-insensitive). Our algorithm scales to large programs, and it is path-sensitive. The efficiency of our algorithm arises from three insights: The value flow problem can be "bit-vectorized" by tracking the flow of one value at a time; dataflow facts from different execution paths with the same value flow information can be merged; and information about complex aliasing that affects value flow can be plugged in from a different analysis. We have incorporated our analysis in ESP, a software validation tool. We have used ESP to validate the Windows operating system kernel (a million lines of code) against an important security property. This experience suggests that our algorithm scales to large programs, and is accurate enough to trace the flow of values in real code.
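A small fragment in the spirit of the properties such a path-sensitive analysis targets (illustrative only; the Windows kernel property itself is not described in this abstract):

```c
extern int  open_file(const char *path);   /* hypothetical helpers */
extern void close_file(int fd);
extern void do_unrelated_work(void);

/* Tracking the single value `flag`, a path-sensitive analysis can prove
 * that `close_file(f)` is reached only on paths where the file was opened,
 * because the two branches correlate on `flag`.  Paths that agree on the
 * tracked fact (e.g. both leave the file closed) can be merged without
 * losing precision, which keeps the analysis scalable. */
void process(const char *path, int flag) {
    int f = -1;
    if (flag)
        f = open_file(path);
    do_unrelated_work();
    if (flag)
        close_file(f);     /* safe: reached only when f was opened */
}
```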
2007
This work presents a framework for fusing flow analysis and theorem proving called logic-flow analysis (LFA). The framework itself is the reduced product of two abstract interpretations: (1) an abstract state machine and (2) a set of propositions in a restricted first-order logic. The motivating application for LFA is the safe removal of implicit array-bounds checks without type information, user interaction or program annotation.
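A hypothetical example of the target optimization: the implicit bounds check on `a[i]` becomes redundant once the combined flow/logic analysis derives the proposition `0 <= i && i < len` at the access, from the loop condition alone.

```c
/* In a safe language the access a[i] carries an implicit bounds check.
 * If the analysis proves 0 <= i < len on every path reaching the access
 * (here: i starts at 0, only increases by 1, and the loop guard bounds it
 * by len), the check can be removed without types or annotations. */
int sum(const int *a, int len) {
    int s = 0;
    for (int i = 0; i < len; i++) {
        s += a[i];    /* bounds check provably redundant here */
    }
    return s;
}
```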
10th IEEE International Workshop on Object-Oriented Real-Time Dependable Systems, 2005
Reliable program Worst-Case Execution Time (WCET) estimates are a key component when designing and verifying real-time systems. One way to derive such estimates is by static WCET analysis methods, which rely on mathematical models of the software and hardware involved. This paper describes an approach to static flow analysis for deriving information on the possible execution paths of C programs. This includes upper bounds for loops, execution dependencies between different code parts, and safe determination of possible pointer values. The method builds upon abstract interpretation, a classical program analysis technique, which is adapted to calculate flow information and to handle the specific properties of the C programming language.
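A small C example of the kinds of flow facts such an analysis derives (illustrative, not taken from the paper): an upper loop bound and an execution dependency between two conditionals.

```c
/* Flow facts a static WCET analysis would derive here:
 *  - the loop body executes at most 8 times (bound on `n` established
 *    by the preceding clamp),
 *  - the two `if` bodies are mutually exclusive (execution dependency),
 *    so only one of them contributes to the worst-case path per iteration. */
int filter(const int *buf, int n, int mode) {
    int acc = 0;
    if (n > 8)
        n = 8;
    for (int i = 0; i < n; i++) {    /* loop bound: at most 8 iterations */
        if (mode == 0)
            acc += buf[i];
        if (mode != 0)
            acc -= buf[i];
    }
    return acc;
}
```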
Proceedings International Conference on Software Maintenance ICSM-94, 1994
Although data flow analysis was first developed for use in compilers, its usefulness is now recognized in many software tools. Because of its compiler origins, the computation of data flow for software tools is based on the traditional exhaustive data flow framework. However, although this framework is useful for computing data flow for compilers, it is not the most appropriate for software tools, particularly those used in the maintenance stage. In maintenance, testing and debugging are typically performed in response to program changes. As such, the data flow required is demand driven from the changed program points. Rather than compute the data flow exhaustively using the traditional data flow framework, we present a framework for partial analysis. The framework includes a specification language enabling the specification of the demand-driven data flow desired by a user. From the specification, a partial analysis algorithm is automatically generated using an L-attributed definition for the grammar of the specification language. A specification of a demand-driven data flow problem expresses characteristics that define the kind of traversal needed in the partial analysis and the type of dependencies to be captured. The partial analysis algorithms are efficient in that only as much of the program is analyzed as is actually needed, thus reducing the time and space requirements compared with exhaustively computing the data flow information. The algorithms are shown to be useful when debugging and testing programs during maintenance.
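For instance (a hypothetical maintenance scenario, not an example from the paper), after a change to a single assignment, a demand-driven reaching-definitions query only needs to traverse the statements between the changed definition and its uses, rather than recomputing the exhaustive solution:

```c
/* Only the definition of `scale` on the changed line and the statements
 * its value can reach need to be revisited; a demand-driven query walks
 * from that point instead of re-running the analysis on the whole program. */
int rescale(int *data, int n) {
    int scale = 2;          /* <-- changed during maintenance */
    int checksum = 0;
    for (int i = 0; i < n; i++) {
        data[i] *= scale;   /* use reached by the changed definition */
        checksum += data[i];
    }
    return checksum;        /* affected only indirectly, through data[i] */
}
```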
This paper reports on the correctness proof of compiler optimizations based on data-flow analysis. We formulate the optimizations and analyses as instances of a general framework for data-flow analyses and transformations, and prove that the optimizations preserve the behavior of the compiled programs. This development is part of a larger effort to certify an optimizing compiler by proving semantic equivalence between source and compiled code.
This paper reports on the design and soundness proof, using the Coq proof assistant, of Verasco, a static analyzer based on abstract interpretation for most of the ISO C 1999 language (excluding recursion and dynamic allocation). Verasco establishes the absence of run-time errors in the analyzed programs. It enjoys a modular architecture that supports the extensible combination of multiple abstract domains, both relational and non-relational. Verasco integrates with the formally verified CompCert C compiler, so that not only are the analysis results guaranteed sound with mathematical certitude, but these guarantees also carry over to the compiled code.
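An illustrative C fragment (not taken from the paper) of the run-time errors such an analyzer must rule out: out-of-bounds accesses and invalid arithmetic such as division by zero.

```c
/* To accept this function, the analyzer must establish that every index i
 * stays within the bounds of `tab` and that n != 0 on every path reaching
 * the division; otherwise it reports a potential run-time error. */
int average(const int tab[], int n) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += tab[i];
    return (int)(sum / n);
}
```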