2011, Lecture Notes in Computer Science
SLAyer is a program analysis tool designed to automatically prove memory safety of industrial systems code. In this paper we describe SLAyer's implementation, and its application to Windows device drivers. This paper accompanies the first release of SLAyer.
Memory corruption bugs continue to plague low-level systems software, which is generally written in unsafe programming languages. Many pre- and post-deployment techniques exist to detect and protect against the resulting vulnerabilities. In this position paper, we propose and motivate a hybrid approach to protecting against memory safety vulnerabilities, combining techniques that can identify the presence (and absence) of vulnerabilities pre-deployment with those that can detect and mitigate such vulnerabilities post-deployment. Our hybrid approach involves three layers: hardware runtime protection provided by capability hardware, software runtime protection provided by compiler instrumentation, and static analysis provided by bounded model checking and symbolic execution. The key aspect of the proposed hybrid approach is that the protection offered is greater than the sum of its parts: the expense of post-deployment runtime checks is reduced via information obtained during pre-...
2011
Memory access violations are a leading source of unreliability in C programs. As evidence of this problem, a variety of methods exist that retrofit C with software checks to detect memory errors at runtime. However, these methods generally suffer from one or more drawbacks, including the inability to detect all errors, the use of incompatible metadata, the need for manual code modifications, and high runtime overheads.
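As a rough illustration of the kind of retrofitted software check such methods insert (the bounds structure and helper name below are invented for illustration, not any particular tool's metadata format):

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical bounds metadata a retrofitting tool might attach to an array;
       real tools track this per pointer or per object (names here are illustrative). */
    struct bounds { char *base; size_t size; };

    static void checked_store(struct bounds b, size_t idx, char value)
    {
        if (idx >= b.size) {                       /* inserted runtime check */
            fprintf(stderr, "out-of-bounds write at index %zu\n", idx);
            abort();                               /* fail stop instead of corrupting memory */
        }
        b.base[idx] = value;
    }

    int main(void)
    {
        char buf[8];
        struct bounds b = { buf, sizeof buf };
        checked_store(b, 3, 'a');   /* in bounds: allowed */
        checked_store(b, 8, 'b');   /* off-by-one: the check aborts before corruption */
        return 0;
    }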
Lecture Notes in Computer Science, 2004
Parameters of a program's runtime environment, such as the machine architecture and operating system, largely determine whether a vulnerability can be exploited. For example, the machine word size is an important factor in an integer overflow attack, as is the memory layout of a process in a buffer or heap overflow attack. In this paper, we present an analysis of the effects of a runtime environment on a language's data types. Based on this analysis, we have developed Archerr, an automated one-pass source-to-source transformer that derives appropriate architecture-dependent runtime safety checks and inserts them into C source programs. Our approach achieves comprehensive vulnerability coverage against a wide array of program-level exploits, including integer overflows/underflows. We demonstrate the efficacy of our technique on versions of C programs with known vulnerabilities, such as Sendmail. We have benchmarked our technique and the results show that it is in general less expensive than other well-known runtime techniques, and at the same time requires no extensions to the C programming language. Additional benefits include the ability to gracefully handle arbitrary pointer usage, aliasing, and typecasting.
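A minimal sketch of the sort of architecture-dependent check a one-pass source-to-source transformer could insert before a signed addition; the helper name and fail-stop policy are assumptions, not Archerr's actual output:

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* INT_MAX/INT_MIN depend on the machine word size, which is exactly the
       runtime-environment parameter the inserted check must account for. */
    static int add_checked(int a, int b)
    {
        if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b)) {
            fprintf(stderr, "integer overflow would occur\n");
            abort();
        }
        return a + b;
    }

    int main(void)
    {
        int n = add_checked(2, 3);          /* fine */
        printf("%d\n", n);
        n = add_checked(INT_MAX, 1);        /* inserted check fires here */
        printf("%d\n", n);
        return 0;
    }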
Formal Aspects of Computing
Runtime Assertion Checking (RAC) for expressive specification languages is a non-trivial verification task that becomes even more complex for memory-related properties of imperative languages with dynamic memory allocation. It is important to ensure the soundness of RAC verdicts, in particular when RAC reports the absence of failures for execution traces. This paper presents a formalization of a program transformation technique for RAC of memory properties for a representative language with pointers and memory operations, including dynamic allocation and deallocation. The generated program instrumentation relies on an axiomatized observation memory model, which is essential for recording and monitoring memory-related properties. We prove the soundness of RAC verdicts with respect to the semantics of this language.
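A toy rendering of the idea of an observation memory model: instrumentation records allocations and deallocations in a side table that the monitor consults to evaluate a validity property. The table layout and function names are invented and far simpler than the axiomatized model in the paper:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_BLOCKS 64
    /* Observation memory: one entry per currently known block. */
    static struct { void *base; size_t size; bool live; } blocks[MAX_BLOCKS];

    static void observe_alloc(void *p, size_t n)
    {
        for (int i = 0; i < MAX_BLOCKS; i++)
            if (!blocks[i].live) { blocks[i].base = p; blocks[i].size = n; blocks[i].live = true; return; }
    }

    static void observe_free(void *p)
    {
        for (int i = 0; i < MAX_BLOCKS; i++)
            if (blocks[i].live && blocks[i].base == p) { blocks[i].live = false; return; }
    }

    static bool observed_valid(void *p)   /* monitor for a \valid-style property */
    {
        for (int i = 0; i < MAX_BLOCKS; i++)
            if (blocks[i].live &&
                (char *)p >= (char *)blocks[i].base &&
                (char *)p <  (char *)blocks[i].base + blocks[i].size)
                return true;
        return false;
    }

    int main(void)
    {
        int *p = malloc(sizeof *p);
        observe_alloc(p, sizeof *p);          /* instrumentation inserted after malloc */
        printf("valid before free: %d\n", observed_valid(p));
        free(p);
        observe_free(p);                      /* instrumentation inserted after free */
        printf("valid after free: %d\n", observed_valid(p));   /* verdict: not valid */
        return 0;
    }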
2016 IEEE International Conference on Software Quality, Reliability and Security (QRS), 2016
Memory errors such as buffer overruns are notorious security vulnerabilities. There has been considerable interest in having the compiler ensure the safety of compiled code, either through static verification or through instrumented runtime checks. While certifying compilation has shown much promise, it has not been practical, leaving code instrumentation as the next best strategy for compilation. We term such compilers Memory Error Sanitization Compilers (MESCs). MESCs are available as part of the GCC, LLVM and MSVC suites. Due to practical limitations, MESCs typically apply instrumentation indiscriminately to every memory access, and are consequently prohibitively expensive and practical only for small code bases. This work proposes a methodology that applies state-of-the-art static analysis techniques to eliminate unnecessary runtime checks, resulting in more efficient and scalable defenses. The methodology was implemented on LLVM's Safecode, Integer Overflow, and Address Sanitize...
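The optimization opportunity can be seen in a few lines of C: an indiscriminate sanitizer guards every access, while a static analysis can discharge checks it proves redundant (the example and comments are illustrative, not output of any of the named compilers):

    #include <stdio.h>

    static int sum(const int *a, int n)
    {
        int s = 0;
        for (int i = 0; i < n; i++) {
            /* An indiscriminate MESC inserts a bound check on every iteration:
                   assert(0 <= i && i < n);
               Here the loop condition already guarantees 0 <= i < n, so a static
               analysis can prove the check redundant and drop it. */
            s += a[i];
        }
        return s;
    }

    int main(void)
    {
        int v[4] = {1, 2, 3, 4};
        printf("%d\n", sum(v, 4));
        return 0;
    }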
Journal of Automated Reasoning, 2016
While automated verification of imperative programs has been studied intensively, proving termination of programs with explicit pointer arithmetic fully automatically was still an open problem. To close this gap, we introduce a novel abstract domain that can track allocated memory in detail. We use it to automatically construct a symbolic execution graph that over-approximates all possible runs of a program and that can be used to prove memory safety. This graph is then transformed into an integer transition system, whose termination can be proved by standard techniques. We implemented this approach in the automated termination prover AProVE and demonstrate its capability of analyzing C programs with pointer arithmetic that existing tools cannot handle.
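A classic shape of program such termination provers target: the loop below terminates only because the buffer is known to hold a 0 terminator inside its allocation, which is exactly the kind of memory fact the abstract domain has to track (example chosen for illustration, not taken from the paper's benchmarks):

    #include <stdio.h>

    /* Termination depends on memory contents: p walks forward through allocated
       memory until it reads the terminating 0 byte. */
    static int my_strlen(const char *s)
    {
        const char *p = s;
        while (*p != '\0')   /* explicit pointer arithmetic in the loop */
            p++;
        return (int)(p - s);
    }

    int main(void)
    {
        printf("%d\n", my_strlen("termination"));
        return 0;
    }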
2008 IEEE Symposium on Security and Privacy (sp 2008), 2008
Attacks often exploit memory errors to gain control over the execution of vulnerable programs. These attacks remain a serious problem despite previous research on techniques to prevent them. We present Write Integrity Testing (WIT), a new technique that provides practical protection from these attacks. WIT uses points-to analysis at compile time to compute the control-flow graph and the set of objects that can be written by each instruction in the program. Then it generates code instrumented to prevent instructions from modifying objects that are not in the set computed by the static analysis, and to ensure that indirect control transfers are allowed by the control-flow graph. To improve coverage where the analysis is not precise enough, WIT inserts small guards between the original program objects. We describe an efficient implementation with optimizations to reduce space and time overhead. This implementation can be used in practice because it compiles C and C++ programs without modifications, it has high coverage with no false positives, and it has low overhead. WIT's average runtime overhead is only 7% across a set of CPU intensive benchmarks and it is negligible when IO is the bottleneck.
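A heavily simplified sketch of the write-integrity idea: every object gets a color, and each instrumented store checks that the color recorded for the target address matches the color the points-to analysis assigned to that instruction. The byte-granular shadow table below is an illustration only; the real scheme uses a compact shadow encoding and guard objects:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define HEAP_SIZE 64
    static unsigned char heap[HEAP_SIZE];
    static unsigned char color[HEAP_SIZE];     /* shadow table: one color per byte */

    static void checked_write(size_t addr, unsigned char expected_color, unsigned char value)
    {
        if (addr >= HEAP_SIZE || color[addr] != expected_color) {
            fprintf(stderr, "write integrity violation at offset %zu\n", addr);
            abort();
        }
        heap[addr] = value;
    }

    int main(void)
    {
        memset(color, 0, sizeof color);                /* color 0: guard / unassigned   */
        for (size_t i = 0; i < 8; i++)  color[i] = 1;  /* object A: bytes 0..7          */
        for (size_t i = 9; i < 17; i++) color[i] = 2;  /* object B, guard byte at 8     */

        checked_write(3, 1, 42);    /* store whose static color is A: allowed */
        checked_write(8, 1, 42);    /* overflow of A into the guard: trapped  */
        return 0;
    }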
2018 International Conference on ReConFigurable Computing and FPGAs (ReConFig), 2018
Control-Flow Integrity (CFI) is a software protection mechanism that detects a class of code reuse attacks by identifying anomalous control-flows within an executing program. Hardware-based CFI has the promise of the security benefits of CFI without the performance overhead and complexity of software-based CFI: generally speaking, hardware-based monitors are more difficult to bypass, offer lower performance overheads than software-based monitors, and, furthermore, hardware-based CFI can be performed without the necessity of altering application binaries or instrumenting language compilers. Although hardware-based CFI is an active area of research and there is a growing literature describing CFI strategies at a high level, there is, to the authors' best knowledge, no work on languages specially tailored to the specification and implementation of CFI monitors. This article presents a proof-of-concept domain-specific language with built-in abstractions for expressing control-flow constraints, along with a compiler that targets the functional hardware description language ReWire. While the case study is small, it indicates, we argue, an approach to rapid prototyping of hardware-based monitors enforcing CFI that is quick, flexible, and extensible, as well as being amenable to formal verification.
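For readers unfamiliar with CFI itself, the generic software check below (plain C, not the paper's DSL or its hardware monitor) shows the kind of control-flow constraint such a monitor enforces on an indirect call:

    #include <stdio.h>
    #include <stdlib.h>

    typedef void (*handler_t)(void);

    static void ok_target(void)  { puts("valid indirect-call target"); }
    static void bad_target(void) { puts("should never be reachable"); }

    /* The policy: this call site may only transfer control to targets in its allowed set. */
    static const handler_t allowed[] = { ok_target };

    static void checked_indirect_call(handler_t f)
    {
        for (size_t i = 0; i < sizeof allowed / sizeof allowed[0]; i++)
            if (f == allowed[i]) { f(); return; }
        fprintf(stderr, "control-flow integrity violation\n");
        abort();
    }

    int main(void)
    {
        checked_indirect_call(ok_target);    /* permitted edge in the CFG        */
        checked_indirect_call(bad_target);   /* anomalous edge: the monitor traps */
        return 0;
    }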
Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 2021
Fast, byte-addressable, persistent main memories (PM) make it possible to build complex data structures that can survive system failures. Programming for PM is challenging, not least because it combines well-known programming challenges like locking, memory management, and pointer safety with novel PM-specific bug types. It also requires logging updates to PM to facilitate recovery after a crash. A misstep in any of these areas can corrupt data, leak resources, or prevent successful recovery after a crash. Existing PM libraries in a variety of languages (C, C++, Java, Go) simplify some of these problems, but they still require the programmer to learn (and flawlessly apply) complex rules to ensure correctness. Opportunities for data-destroying bugs abound. This paper presents Corundum, a Rust-based library that provides an idiomatic PM programming interface and leverages Rust's type system to statically avoid most common PM programming bugs. Corundum lets programmers develop persistent data structures using familiar Rust constructs and have confidence that they will be free of those bugs. We have implemented Corundum and found its performance to be as good as or better than Intel's widely-used PMDK library, HP's Atlas, Mnemosyne, and go-pmem.
2022 IEEE/ACM 44th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)
Memory errors may lead to program crashes and security vulnerabilities. In this paper, we present Movec, a dynamic analysis tool that can automatically detect memory errors at runtime. To address three major challenges faced by existing tools in detecting memory errors (low effectiveness, optimization sensitivity, and platform dependence), Movec leverages a smart-status-based monitoring algorithm and performs its instrumentation at the source level. Our extensive evaluation shows that Movec is capable of finding a wide range of memory errors with moderate and competitive overheads.
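A sketch of what status-based monitoring for a temporal error could look like after source-level instrumentation; the status flag and helper are invented for illustration and are not Movec's actual scheme:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* The monitor pairs each tracked pointer with a liveness status. */
    struct tracked { int *ptr; bool live; };

    static int read_checked(struct tracked *t)
    {
        if (!t->live) {
            fprintf(stderr, "monitor: use after free detected\n");
            exit(EXIT_FAILURE);
        }
        return *t->ptr;
    }

    int main(void)
    {
        struct tracked t = { malloc(sizeof(int)), true };
        if (!t.ptr) return 1;
        *t.ptr = 7;
        printf("%d\n", read_checked(&t));   /* allowed while the block is live       */
        free(t.ptr);
        t.live = false;                     /* instrumentation inserted next to free() */
        printf("%d\n", read_checked(&t));   /* temporal error: the monitor reports it  */
        return 0;
    }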
ACM Transactions in Embedded Computing Systems, 2005
Traditional approaches to enforcing memory safety of programs rely heavily on run-time checks of memory accesses and on garbage collection, both of which are unattractive for embedded applications. The goal of our work is to develop advanced compiler techniques for enforcing memory safety with minimal run-time overheads. In this paper, we describe a set of compiler techniques that, together with minor semantic restrictions on C programs and no new syntax, ensure memory safety and provide most of the error-detection capabilities of type-safe languages, without using garbage collection, and with no run-time software checks (on systems with standard hardware support for memory management). The language permits arbitrary pointer-based data structures, explicit deallocation of dynamically allocated memory, and restricted array operations. One of the key results of this paper is a compiler technique that ensures that dereferencing dangling pointers to freed memory does not violate memory safety, without annotations, run-time checks, or garbage collection, and works for arbitrary type-safe C programs. Furthermore, we present a new interprocedural analysis for static array bounds checking under certain assumptions. For a diverse set of embedded C programs, we show that we are able to ensure memory safety of pointer and dynamic memory usage in all these programs with no run-time software checks (on systems with standard hardware memory protection), requiring only minor restructuring to conform to simple type restrictions. Static array bounds checking fails for roughly half the programs we study due to complex array references, and these are the only cases where explicit run-time software checks would be needed under our language and system assumptions.
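One way to picture the dangling-pointer result: if freed memory is only ever reused for objects of the same type, a stale pointer still reads a value of the expected type instead of corrupting unrelated data. The tiny fixed-size free list below is an illustration of that reuse discipline, not the paper's compiler-managed pools:

    #include <stdio.h>
    #include <stddef.h>

    /* A pool that only ever hands out ints: freed slots go back on a free list
       and can only be reused as ints again. */
    union slot { union slot *next; int payload; };

    static union slot pool[16];
    static union slot *free_list;

    static void pool_init(void)
    {
        free_list = NULL;
        for (size_t i = 0; i < 16; i++) { pool[i].next = free_list; free_list = &pool[i]; }
    }

    static int *pool_alloc(void)
    {
        union slot *s = free_list;
        if (!s) return NULL;
        free_list = s->next;
        s->payload = 0;
        return &s->payload;
    }

    static void pool_free(int *p)
    {
        union slot *s = (union slot *)((char *)p - offsetof(union slot, payload));
        s->next = free_list;
        free_list = s;
    }

    int main(void)
    {
        pool_init();
        int *a = pool_alloc();
        *a = 41;
        pool_free(a);          /* a is now dangling                              */
        int *b = pool_alloc(); /* the slot is reused, but only for another int   */
        *b = 42;
        printf("%d\n", *a);    /* dangling read sees the int now in the slot,
                                  a stale value rather than arbitrary corruption */
        return 0;
    }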
Science of Computer Programming, 2005
We present a framework for statically reasoning about temporal heap safety properties. We focus on local temporal heap safety properties, in which the verification process may be performed for a program object independently of other program objects. We apply our framework to produce new conservative static algorithms for compile-time memory management, which prove for certain program points that a memory object or a heap reference will not be needed further. These algorithms can be used for reducing space consumption of Java programs. We have implemented a prototype of our framework, and used it to verify compile-time memory management properties for several small, but interesting example programs, including JavaCard programs.
ICT Systems Security and Privacy Protection
Unsafe memory accesses in programs written using popular programming languages like C and C++ have been among the leading causes of software vulnerability. Memory safety checkers, such as Softbound, enforce memory spatial safety by checking if accesses to array elements are within the corresponding array bounds. However, such checks often result in high execution time overhead due to the cost of executing the instructions associated with the bound checks. To mitigate this problem, techniques to eliminate redundant bound checks are needed. In this paper, we propose a novel framework, SIMBER, to eliminate redundant memory bound checks via statistical inference. In contrast to existing techniques that primarily rely on static code analysis, our solution leverages a simple, model-based inference to identify redundant bound checks based on runtime statistics from past program executions. We construct a knowledge base containing sufficient conditions using variables inside functions, which are then applied adaptively to avoid future redundant checks at a function-level granularity. Our experimental results on real-world applications show that SIMBER achieves zero false positives. Also, our approach reduces the performance overhead by up to 86.94% over Softbound, and incurs a modest 1.7% code size increase on average to circumvent the redundant bound checks inserted by Softbound.
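A Softbound-style spatial check, followed by the situation SIMBER targets: when the surrounding code (or, in SIMBER's case, runtime statistics from past executions) already guarantees the bound, repeating the check on every access adds cost without adding protection. The helper is illustrative, not Softbound's runtime API:

    #include <stdio.h>
    #include <stdlib.h>

    /* Spatial safety check against per-pointer base/bound metadata. */
    static void bound_check(const char *base, const char *bound, const char *access, size_t width)
    {
        if (access < base || access + width > bound) {
            fprintf(stderr, "spatial safety violation\n");
            abort();
        }
    }

    int main(void)
    {
        size_t n = 32;
        char *buf = malloc(n);
        if (!buf) return 1;
        char *base = buf, *bound = buf + n;

        for (size_t i = 0; i < n; i++) {
            bound_check(base, bound, &buf[i], 1);   /* candidate for elimination:      */
            buf[i] = 0;                             /* i < n already implies the check */
        }
        free(buf);
        return 0;
    }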
2006
Modern concurrent programming languages like Java and C# have a programming-language-level memory model; it captures the set of all allowed behaviors of programs on any implementation platform, uni- or multi-processor. Such a memory model is typically weaker than Sequential Consistency and allows re-ordering of operations within a program thread. Therefore, programs verified correct by assuming Sequential Consistency (that is, each thread proceeds in program order) may not behave correctly on certain platforms! The solution to this problem is to develop program checkers which are memory model sensitive. In this paper, we develop such a reachability analysis tool for the programming language C#. Our checker identifies program states which are reached only because the C# memory model is more relaxed than Sequential Consistency. Furthermore, our checker identifies (a) operation re-orderings which cause such undesirable states to be reached, and (b) simple program modifications, by inserting memory barrier operations, which prevent such undesirable re-orderings. (Certain other definitions of data race also consider possible races between two write operations to the same shared variable v. However, if there is no read operation to observe the difference in order among the two writes of v, it does not make a difference in observable program behavior.)
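The paper targets C#, but the same phenomenon can be sketched in C11 (used here only as an analogue, not the paper's checker): in the store-buffering pattern both threads can observe 0 under a relaxed memory model, an outcome impossible under Sequential Consistency, and inserting fences is the kind of barrier-based fix such a checker suggests:

    #include <stdatomic.h>
    #include <stdio.h>
    #include <threads.h>

    static atomic_int x, y;
    static int r1, r2;

    static int t1(void *arg)
    {
        (void)arg;
        atomic_store_explicit(&x, 1, memory_order_relaxed);
        atomic_thread_fence(memory_order_seq_cst);   /* the "memory barrier" fix */
        r1 = atomic_load_explicit(&y, memory_order_relaxed);
        return 0;
    }

    static int t2(void *arg)
    {
        (void)arg;
        atomic_store_explicit(&y, 1, memory_order_relaxed);
        atomic_thread_fence(memory_order_seq_cst);
        r2 = atomic_load_explicit(&x, memory_order_relaxed);
        return 0;
    }

    int main(void)
    {
        thrd_t a, b;
        thrd_create(&a, t1, NULL);
        thrd_create(&b, t2, NULL);
        thrd_join(a, NULL);
        thrd_join(b, NULL);
        /* With the fences, the non-SC outcome r1 == 0 && r2 == 0 is ruled out. */
        printf("r1=%d r2=%d\n", r1, r2);
        return 0;
    }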
International Symposium on Code Generation and Optimization (CGO 2011), 2011
Memory corruption, reading uninitialized memory, using freed memory, and other memory-related errors are among the most difficult programming bugs to identify and fix due to the delay and non-determinism linking the error to an observable symptom. Dedicated memory checking tools are invaluable for finding these errors. However, such tools are difficult to build, and because they must monitor all memory accesses by the application, they incur significant overhead. Accuracy is another challenge: memory errors are not always straightforward to identify, and numerous false positive error reports can make a tool unusable. A third obstacle to creating such a tool is that it depends on low-level operating system and architectural details, making it difficult to port to other platforms and difficult to target proprietary systems like Windows.
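Two of the error classes mentioned above, in a form where the program may appear to run normally even though a dedicated memory checker would flag both statements (the delayed, non-deterministic symptom is exactly what makes them hard to debug by hand):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *p = malloc(4 * sizeof *p);
        if (!p) return 1;

        int sum = p[0] + p[1];        /* read of uninitialized heap memory */
        printf("sum=%d\n", sum);

        free(p);
        printf("%d\n", p[2]);         /* use of freed memory */
        return 0;
    }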
2014 17th Euromicro Conference on Digital System Design, 2014
This paper presents AHEMS (Asynchronous Hardware-Enforced Memory Safety), an architectural support for enforcing spatial and temporal memory safety to protect against memory corruption attacks. We integrated AHEMS with the Leon3 open-source processor and prototyped it on an FPGA. In an evaluation of the detection coverage using 677 security test cases (including spatial and temporal memory errors) selected from the Juliet Test Suite, AHEMS detected all but one memory safety violation. The missed test case involves overflow of a sub-object in a data structure, whose detection is not supported by the current prototype. Performance assessment using the Olden benchmarks shows an average 10.6% overhead, and negligible impact on the processor's critical path (0.06% overhead) and power consumption (0.5% overhead).
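The missed Juliet case is easy to reproduce in miniature: a write that stays inside the enclosing allocation but spills from one field into the next, so whole-object bounds tracking never sees it leave the object (deliberately buggy code, shown only as a test case):

    #include <stdio.h>
    #include <string.h>

    struct record {
        char name[8];
        int  id;
    };

    int main(void)
    {
        struct record r = { "", 1234 };
        /* 12 bytes copied into an 8-byte field: the write stays inside the
           struct but overflows the sub-object name into id. */
        strcpy(r.name, "overflowing");
        printf("id is now %d\n", r.id);
        return 0;
    }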
2008 IEEE Symposium on Security and Privacy (sp 2008), 2008
Operating systems divide virtual memory addresses into kernel space and user space. The interface of a modern operating system consists of a set of system call procedures that may take pointer arguments called user pointers. It is safe to dereference a user pointer if and only if it points into user space. If the operating system dereferences a user pointer that does not point into user space, then a malicious user application could gain control of the operating system, reveal sensitive data from kernel space, or crash the machine. Because the operating system cannot trust user processes, it must check that a user pointer points to user space before dereferencing it. In this paper, we present a scalable and precise static analysis capable of verifying the absence of unchecked user pointer dereferences. We evaluate an implementation of our analysis on the entire Linux operating system, comprising over 6.2 million lines of code, with false alarms reported on only 0.05% of dereference sites.
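A simplified user-space model of the property being verified, not actual kernel code: the address-space boundary and helper names are invented, and the memcpy stands in for a kernel copy-from-user routine:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define USER_SPACE_TOP 0x00007fffffffffffULL   /* assumed user/kernel boundary */

    static bool points_into_user_space(const void *p, size_t n)
    {
        uintptr_t a = (uintptr_t)p;
        return a + n <= USER_SPACE_TOP && a + n >= a;   /* also reject wraparound */
    }

    /* The checked pattern the static analysis wants to see on every path. */
    static int sys_write_checked(const void *user_buf, size_t n, char *kernel_buf, size_t cap)
    {
        if (n > cap || !points_into_user_space(user_buf, n))
            return -1;                    /* reject instead of dereferencing blindly  */
        memcpy(kernel_buf, user_buf, n);  /* stands in for a copy-from-user routine   */
        return 0;
    }

    int main(void)
    {
        char kbuf[16];
        char ubuf[4] = "abc";
        printf("checked copy: %d\n", sys_write_checked(ubuf, sizeof ubuf, kbuf, sizeof kbuf));
        return 0;
    }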
ACM SIGAda Ada Letters
An important concern addressed by runtime verification tools for C code is detecting memory errors. This requires monitoring properties of memory locations (e.g., their validity and initialization) along the whole program execution. Static-analysis-based optimizations have been shown to significantly improve the performance of such tools by reducing the monitoring of irrelevant locations. However, the soundness of the verdict of the whole tool strongly depends on the soundness of the underlying static analysis technique. This paper tackles this issue for the dataflow analysis used to optimize the E-ACSL runtime assertion checking tool. We formally define the core dataflow analysis used by E-ACSL and prove its soundness.
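A small example of the kind of annotated C that E-ACSL turns into runtime checks; the dataflow analysis in question decides which variables need the heavyweight memory monitoring (here p, which occurs in \valid) and which do not (here n). The snippet is illustrative and not drawn from E-ACSL's test suite:

    /* ACSL-style assertions of the kind E-ACSL compiles into runtime checks. */
    int get(int *p, int n)
    {
        /*@ assert \valid(p); */      /* requires observing p's block at runtime          */
        /*@ assert n >= 0; */         /* plain arithmetic check, no memory model needed   */
        return *p + n;
    }

    int main(void)
    {
        int x = 41;
        return get(&x, 1) == 42 ? 0 : 1;
    }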
2008
In this paper, we present a novel approach that establishes a synergy between static and dynamic analyses for detecting memory errors in C code. We extend the standard C type system with effect, region, and host annotations that are relevant to memory management. We define static memory checks to detect memory errors using these annotations. The statically undecidable checks are delegated to dynamic code instrumentation to secure program executions. The static analysis guides its dynamic counterpart by locating instrumentation points and their execution paths. Our dynamic analysis instruments programs with in-lined monitors that observe program executions and ensure safe-fail when encountering memory errors. We prototype our approach by extending the GCC compiler with our type system, a dynamic monitoring library, and code instrumentation capabilities.
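A sketch of the division of labor the abstract describes, under the assumption of a simple array-bounds property: one access is decided entirely at compile time, the other depends on runtime input and is delegated to an in-lined monitor (the monitor here is hand-written, not the paper's GCC instrumentation):

    #include <stdio.h>
    #include <stdlib.h>

    static int safe_get(const int *a, size_t len, size_t i)
    {
        if (i >= len) {                       /* in-lined monitor: safe-fail path */
            fprintf(stderr, "unsafe access blocked\n");
            exit(EXIT_FAILURE);
        }
        return a[i];
    }

    int main(int argc, char **argv)
    {
        int a[4] = {10, 20, 30, 40};

        int x = a[2];                 /* index is a constant: static checks suffice   */

        size_t i = (argc > 1) ? strtoul(argv[1], NULL, 10) : 0;
        int y = safe_get(a, 4, i);    /* index unknown statically: checked at runtime */

        printf("%d %d\n", x, y);
        return 0;
    }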