1999
This paper describes a probabilistic method for verifying the equivalence of two multiple-valued functions. Each function is hashed to an integer code by transforming it to an integer-valued polynomial, and the equivalence of the two polynomials is checked probabilistically. The hash codes for two equivalent functions are always the same. Thus, the equivalence of two functions can be verified with a known probability of error, arising from collisions between inequivalent functions. Such probabilistic verification can be an attractive alternative for verifying functions that are too large to be handled by deterministic verification methods.
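As a concrete illustration of the hashing idea (not the paper's exact transform), the sketch below hashes a function given as a lookup table by evaluating its interpolating polynomial at a random point modulo a large prime. Equal functions always get equal codes; inequivalent ones collide with probability at most (m-1)/p by the usual polynomial-identity argument. The univariate restriction and the choice of modulus are assumptions made for brevity.

```python
import random

P = (1 << 61) - 1  # large prime modulus (2^61 - 1 is a Mersenne prime)

def poly_hash(table, r, p=P):
    """Evaluate, at the point r mod p, the unique degree < m polynomial
    interpolating the function {i -> table[i]} (Lagrange form)."""
    m = len(table)
    acc = 0
    for i, y in enumerate(table):
        num, den = 1, 1
        for j in range(m):
            if j != i:
                num = num * ((r - j) % p) % p
                den = den * ((i - j) % p) % p
        acc = (acc + y * num * pow(den, p - 2, p)) % p  # Fermat inverse
    return acc

# The 4-valued function i -> 3*i mod 4 and its explicit truth table
# hash identically; inequivalent tables collide with prob. <= (m-1)/p.
r = random.randrange(P)
f = [0, 3, 2, 1]
g = [(3 * i) % 4 for i in range(4)]
assert poly_hash(f, r) == poly_hash(g, r)
```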
Formal Methods in System Design, 1992
We present a novel method for verifying the equivalence of two Boolean functions. Each function is hashed to an integer code by assigning random integer values to the input variables and evaluating an integer-valued transformation of the original function. The hash codes for two equivalent functions are always equal. Thus the equivalence of two functions can be verified with a very low probability of error, which arises from unlikely "collisions" between inequivalent functions. An upper bound, ε, on the probability of error is known a priori. The bound can be decreased exponentially by making multiple runs. Results indicate significant time and space advantages for this method over techniques that represent each function as a single OBDD. Some functions known to require space (and time) exponential in the number of input variables for these techniques require only polynomial resources using our method. Experimental results indicate that probabilistic verification can provide an attractive alternative for verifying functions too large to be handled using these OBDD-based techniques.
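A minimal sketch of the arithmetic-transform idea the abstract describes, assuming the standard transform rules (NOT a -> 1 - a, a AND b -> a*b, a OR b -> a + b - a*b); the modulus and the formula encoding are illustrative choices, not the authors' implementation.

```python
import random

P = (1 << 31) - 1  # prime modulus; error probability scales with 1/P per run

# Arithmetic transform of Boolean operators: on {0,1} inputs each formula
# agrees with its Boolean original, but it extends to arbitrary integers.
AND = lambda a, b: a * b
NOT = lambda a: 1 - a
OR  = lambda a, b: a + b - a * b

def hash_code(f, assignment, p=P):
    """Evaluate the transformed formula at a random integer assignment mod p."""
    return f(*assignment) % p

# f1 = NOT(a AND b) and f2 = (NOT a) OR (NOT b) transform to the same
# polynomial 1 - a*b (De Morgan), so their codes always agree.
f1 = lambda a, b: NOT(AND(a, b))
f2 = lambda a, b: OR(NOT(a), NOT(b))
xs = [random.randrange(P) for _ in range(2)]
assert hash_code(f1, xs) == hash_code(f2, xs)
```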
Journal of Computer and System Sciences, 1981
In this paper we exhibit several new classes of hash functions with certain desirable properties, and introduce two novel applications for hashing which make use of these functions. One class contains a small number of functions, yet is almost universal₂. If the functions hash n-bit long names into m-bit indices, then specifying a member of the class requires only O((m + log₂ log₂(n)) · log₂(n)) bits, as compared to O(n) bits for earlier techniques. For long names, this is about a factor of m larger than the lower bound of m + log₂ n − log₂ m bits. An application of this class is a provably secure authentication technique for sending messages over insecure lines. A second class of functions satisfies a much stronger property than universal₂. We present the application of testing sets for equality. The authentication technique allows the receiver to be certain that a message is genuine. An "enemy", even one with infinite computer resources, cannot forge or modify a message without detection. The set equality technique allows operations including "add member to set," "delete member from set" and "test two sets for equality" to be performed in expected constant time and with less than a specified probability of error.
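One textbook way to realize the pieces this abstract names: a 2-universal family h(x) = ((ax + b) mod p) mod 2^m, plus a commutative set signature that supports add, delete, and comparison in O(1) expected time. The additive signature below is an illustrative choice, not necessarily the paper's exact construction.

```python
import random

p = (1 << 61) - 1  # prime larger than the name space

class UniversalHash:
    """h(x) = ((a*x + b) mod p) mod 2^m: a classic 2-universal family."""
    def __init__(self, m):
        self.a = random.randrange(1, p)
        self.b = random.randrange(p)
        self.mask = (1 << m) - 1
    def __call__(self, x):
        return ((self.a * x + self.b) % p) & self.mask

class SetSignature:
    """Commutative signature: equal multisets always get equal signatures;
    unequal ones collide with small, bounded probability."""
    def __init__(self, h):
        self.h, self.sig = h, 0
    def add(self, x):    self.sig = (self.sig + self.h(x)) % p
    def delete(self, x): self.sig = (self.sig - self.h(x)) % p

h = UniversalHash(60)          # both parties must share the same h
s, t = SetSignature(h), SetSignature(h)
for x in (3, 1, 4, 1, 5): s.add(x)
for x in (1, 5, 3, 4, 1): t.add(x)
assert s.sig == t.sig          # same multiset, same signature
```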
Computer Aided Verification, 2011
Verifying code equivalence is useful in many situations, such as checking: yesterday's code against today's, different implementations of the same (standardized) interface, or an optimized routine against a reference implementation. We present a tool designed to easily check the equivalence of two arbitrary C functions. The tool provides guarantees far beyond those possible with testing, yet it often requires less work than writing even a single test case. It automatically synthesizes inputs to the routines and uses bit-accurate, sound symbolic execution to verify that they produce equivalent outputs on a finite number of paths, even for rich, nested data structures. We show that the approach works well, even on heavily-tested code, where it finds interesting errors and gets high statement coverage, often exhausting all feasible paths for a given input size. We also show how the simple trick of checking equivalence of identical code turns the verification tool chain against itself, finding errors in the underlying compiler and verification tool.
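The tool itself relies on bit-accurate symbolic execution; as a rough analogue of its workflow (synthesize inputs, run both routines, compare outputs), here is a random-differential harness. All names are hypothetical, and random testing gives none of the per-path guarantees the paper's approach provides.

```python
import random

def check_equivalence(f, g, gen_input, trials=10_000):
    """Differential harness: run both routines on the same synthesized
    inputs and report any output mismatch. (The paper's tool replaces
    random generation with symbolic execution, covering a whole class
    of inputs per explored path rather than single points.)"""
    for _ in range(trials):
        x = gen_input()
        if f(x) != g(x):
            return x          # counterexample found
    return None               # no mismatch observed

# Reference vs. branchless absolute value (hypothetical example pair).
ref = lambda x: -x if x < 0 else x
opt = lambda x: (x ^ (x >> 31)) - (x >> 31)   # relies on arithmetic shift
cex = check_equivalence(ref, opt, lambda: random.randrange(-2**31, 2**31))
print("counterexample:", cex)                 # expected: None on this range
```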
Proceedings of the Design Automation & Test in Europe Conference, 2006
This paper addresses the problem of equivalence verification of RTL descriptions that implement arithmetic computations (add, mult, shift) over bit-vectors that have differing bit-widths. Such designs are found in many DSP applications where the widths of input and output bit-vectors are dictated by the desired precision. A bit-vector of size n can represent integer values from 0 to 2^n − 1, i.e., integers reduced modulo 2^n. Therefore, to verify bit-vector arithmetic over multiple word-length operands, we model the RTL datapath as a polynomial function from Z_{2^{n_1}} × Z_{2^{n_2}} × ··· × Z_{2^{n_d}} to Z_{2^m}. Subsequently, RTL equivalence f ≡ g is solved by proving whether (f − g) ≡ 0 over such mappings. Exploiting concepts from number theory and commutative algebra, a systematic, complete algorithmic procedure is derived for this purpose. Experimentally, we demonstrate how this approach can be applied within a practical CAD setting. Using our approach, we verify a set of arithmetic datapaths at RTL where contemporary approaches prove to be infeasible.
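To make the f ≡ g question concrete, the sketch below decides equivalence of two small fixed-point datapaths over Z_{2^m} by exhaustion; the example identity holds because the difference is a vanishing polynomial over Z_8. The paper replaces this brute force with an algebraic decision procedure that scales to real designs.

```python
def equivalent_mod_2m(f, g, n_bits, m_bits):
    """Decide f ≡ g as functions Z_{2^n} -> Z_{2^m} by exhaustion
    (a baseline only; the paper derives an algebraic test instead)."""
    M = 1 << m_bits
    return all(f(x) % M == g(x) % M for x in range(1 << n_bits))

# Two syntactically different datapaths, equal modulo 2^3:
# f - g = -4x^2 + 4x = -4x(x - 1), and x(x - 1) is always even,
# so the difference vanishes identically over Z_8.
f = lambda x: x * x + 6 * x
g = lambda x: 5 * x * x + 2 * x
assert equivalent_mod_2m(f, g, n_bits=3, m_bits=3)
```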
Data & Knowledge Engineering, 1993
Turau, V. and H. Duchêne, Equality testing for complex objects based on hashing, Data & Knowledge Engineering 10 (1993) 101-111. An important characteristic of many new data models is the capability of constructing complex data objects. These complex data objects usually include set-valued attributes. The efficiency of the implementation of sets depends heavily on the efficiency of the equality operator. In this paper we present algorithms for testing equality of complex objects based on hashing. The first algorithm is based on hash functions and the second on a linear ordering; to evaluate the performance of the two proposed algorithms, we ran simulations varying the different parameters involved. Equality testing based on hashing is considerably better, especially for large objects. Furthermore, equality testing based on a linear ordering requires preprocessing for maintaining the linear order, whereas in the other case the preprocessing consists solely of calculating the hash values.
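A minimal sketch of hash-based equality for complex objects, assuming the key idea the abstract describes: set-valued attributes are hashed commutatively, so no linear ordering on elements needs to be maintained. The mixing constants are illustrative, not the paper's.

```python
def obj_hash(obj):
    """Structural hash: tuples/lists hash positionally, sets hash
    commutatively (sum mod 2^64), so element order is irrelevant."""
    MASK = (1 << 64) - 1
    if isinstance(obj, (set, frozenset)):
        return sum(obj_hash(e) for e in obj) & MASK
    if isinstance(obj, (tuple, list)):
        h = 0
        for e in obj:
            h = (h * 1099511628211 + obj_hash(e)) & MASK  # FNV-style mix
        return h
    return hash(obj) & MASK   # atomic value (stable within one process)

def probably_equal(a, b):
    """Cheap pre-test: unequal hashes prove inequality; equal hashes
    are confirmed by the full (expensive) structural comparison."""
    return obj_hash(a) == obj_hash(b) and a == b

dept1 = ("R&D", frozenset({("alice", 1), ("bob", 2)}))
dept2 = ("R&D", frozenset({("bob", 2), ("alice", 1)}))
assert probably_equal(dept1, dept2)
```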
Journal of University of Shanghai for Science and Technology, 2021
The ideal Secure Multiparty Computation (SMC) model deploys a Trusted Third Party (TTP) that assists in secure function evaluation: the participating parties give their inputs to the TTP, which returns the results to them. The equality-check problem for multiple parties can be solved with a simple architecture and a simple algorithm. Our proposed protocol, Equality Hash Check in the ideal model, uses a secure hash function: every party interested in checking the equality of its data supplies a hash of that data to the TTP, which then compares all hash values for equality and declares the result to the parties.
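A sketch of the protocol's core, assuming SHA-256 as the secure hash (the paper does not mandate a specific one): parties reveal only digests, and the TTP compares those.

```python
import hashlib

def party_digest(data: bytes) -> bytes:
    """Each party sends only a hash of its private input to the TTP."""
    return hashlib.sha256(data).digest()

def ttp_all_equal(digests) -> bool:
    """The TTP compares the received digests; it never sees raw data."""
    return len(set(digests)) == 1

parties = [b"salary-table-v7", b"salary-table-v7", b"salary-table-v7"]
print(ttp_all_equal([party_digest(d) for d in parties]))  # True
```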
This paper describes and analyzes a probabilistic technique to reduce the memory requirement of the table of reached states maintained in verification by explicit state enumeration. The memory savings of the new scheme come at the price of a certain probability that the search becomes incomplete. However, this probability can be made negligibly small by using typically 40 bits of memory per state.
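A minimal sketch of the hash-compaction idea, assuming the 40-bit figure from the abstract: only a short code per state is stored in the visited table, so a new state that collides with an old code is (wrongly, but rarely) treated as already explored.

```python
import hashlib

BITS = 40  # ~40 bits of memory per state, as in the scheme above

def compact(state: bytes) -> int:
    """Map a full state descriptor to a 40-bit code; only this code
    is stored, trading a tiny omission probability for memory."""
    h = int.from_bytes(hashlib.sha256(state).digest()[:8], "big")
    return h >> (64 - BITS)

visited = set()

def explore(initial: bytes, successors):
    """Explicit state enumeration over the compacted visited table."""
    stack = [initial]
    while stack:
        s = stack.pop()
        code = compact(s)
        if code in visited:   # possibly a collision: search may miss states
            continue
        visited.add(code)
        stack.extend(successors(s))
```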
The paper reviews existing approaches to constructing verifiable computing schemes. Such schemes allow a weak client to delegate the computation of a function F(X) to a powerful untrusted worker (server). The worker returns the result y = F(X) to the client together with a proof that F was computed correctly. We compare different verifiable computing schemes with each other and discuss their flaws and vulnerabilities. Most attention is focused on approaches using cryptographic primitives, in particular those that exploit fully homomorphic encryption. Finally, we outline open problems and identify the most promising directions.
Lecture Notes in Computer Science, 1997
Message authentication codes (MACs) using polynomial evaluation have the advantage of requiring a very short key even for very large messages. We describe a low-complexity software polynomial evaluation procedure that, for large message sizes, gives a MAC with about the same low software complexity as bucket hashing but with only small keys and better security characteristics.
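A sketch of the generic polynomial-evaluation MAC pattern (Wegman-Carter style), not the paper's specific procedure: the message is hashed as a polynomial in a short key via Horner's rule, then masked with a per-message pad. Deriving the pad from SHA-256 is an illustrative stand-in for the stream cipher or PRF a real scheme would use.

```python
import hashlib, random

p = (1 << 127) - 1  # Mersenne prime modulus: cheap reduction in software

def poly_mac(key, msg_blocks, nonce):
    """tag = (m_1*k^L + ... + m_L*k + pad) mod p, one multiply per block.
    msg_blocks are integers < p; key stays short regardless of length."""
    k, pad_key = key
    acc = 0
    for b in msg_blocks:                    # Horner evaluation
        acc = (acc * k + b) % p
    pad = int.from_bytes(hashlib.sha256(pad_key + nonce).digest()[:16], "big")
    return (acc + pad) % p

key = (random.randrange(p), b"secret-pad-key")
tag = poly_mac(key, [42, 7, 99], nonce=b"msg-001")
```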
2002
This report gives a survey on cryptographic hash functions. It gives an overview of different types of hash functions and reviews design principles. It also focuses on keyed hash functions and suggests some applications and constructions of keyed hash functions. We have used a keyed hash function for authenticating messages encrypted with the Rijndael [1] block cipher. Moreover, a parallel message digest has been implemented in VHDL.
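A minimal example of the keyed-hash pattern the report applies, using the standard-library HMAC (a common keyed construction; the report's own construction may differ) to authenticate a ciphertext placeholder. The encryption step itself is elided.

```python
import hmac, hashlib

key = b"shared-secret"
ciphertext = b"...Rijndael-encrypted payload..."  # placeholder bytes

# Sender attaches a keyed digest of the ciphertext.
tag = hmac.new(key, ciphertext, hashlib.sha256).hexdigest()

# Receiver recomputes and compares in constant time.
ok = hmac.compare_digest(
    tag, hmac.new(key, ciphertext, hashlib.sha256).hexdigest())
assert ok
```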
Verifiable computation (VC) enables thin clients to efficiently verify the computational results produced by a powerful server. Although VC was initially considered to be mainly of theoretical interest, over the last two years impressive progress has been made on implementing VC. Specifically, we now have open-source implementations of VC systems that handle all classes of computations expressed either as circuits or in the RAM model. Despite this very encouraging progress, new enhancements in the design and implementation of VC protocols are required to achieve truly practical VC for real-world applications. In this work, we show that for functions that can be expressed efficiently in terms of set operations (e.g., a subset of SQL queries) VC can be enhanced to become drastically more practical: We present the design and prototype implementation of a novel VC scheme that achieves orders of magnitude speed-up in comparison with the state of the art. Specifically, we build and evaluate TRUESET, a system that can verifiably compute any polynomial-time function expressed as a circuit consisting of "set gates" such as union, intersection, difference and set cardinality. Moreover, TRUESET supports hybrid circuits consisting of both set gates and traditional arithmetic gates. Therefore, it does not lose any of the expressiveness of previous schemes-this also allows the user to choose the most efficient way to represent different parts of a computation. By expressing set computations as polynomial operations and introducing a novel Quadratic Polynomial Program technique, our experiments show that TRUESET achieves prover performance speed-up ranging from 30x to 150x and up to 97% evaluation key size reduction compared to the state-of-the-art.
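The "set computations as polynomial operations" idea can be illustrated directly (this is the encoding such schemes build on, not TRUESET's protocol itself): a set becomes the polynomial with one root per element, and the union of disjoint sets then corresponds to polynomial multiplication.

```python
# P_S(x) = prod_{s in S} (x - s) over a prime field; for disjoint A, B,
# P_{A ∪ B} = P_A * P_B, and gcd(P_A, P_B) encodes the intersection.
# Coefficient lists are lowest degree first.
p = (1 << 31) - 1

def mul_linear(poly, s):
    """Multiply poly by the linear factor (x - s) mod p."""
    out = [0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        out[i + 1] = (out[i + 1] + c) % p   # c * x^(i+1)
        out[i] = (out[i] - s * c) % p       # -s*c * x^i
    return out

def set_to_poly(S):
    poly = [1]
    for s in S:
        poly = mul_linear(poly, s)
    return poly

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

A, B = {2, 5}, {3, 7}
assert set_to_poly(A | B) == poly_mul(set_to_poly(A), set_to_poly(B))
```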
IEEE Transactions on Information Theory, 2002
This paper considers iterated hash functions. It proposes new constructions of fast and secure compression functions with nm-bit outputs for integers n > 1, based on error-correcting codes and secure compression functions with m-bit outputs. This leads to simple and practical hash function constructions based on block ciphers such as the Data Encryption Standard (DES), where the key size is slightly smaller than the block size; IDEA, where the key size is twice the block size; the Advanced Encryption Standard (AES), with a variable key size; and on MD4-like hash functions. Under reasonable assumptions about the underlying compression function and/or block cipher, it is proved that the new hash functions are collision resistant. More precisely, a lower bound is shown on the number of operations needed to find a collision as a function of the strength of the underlying compression function. Moreover, some new attacks are presented that essentially match the presented lower bounds. The constructions allow for a large degree of internal parallelism. The limits of this approach are studied in relation to bounds derived in coding theory.
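For context, here is the classic single-length block-cipher compression (Davies-Meyer) that such constructions start from; the paper's schemes run several compressions in parallel and combine them through an error-correcting code for wider outputs. The 16-byte "cipher" below is a stand-in for illustration only, not a secure primitive and not the paper's construction.

```python
import hashlib

def toy_cipher(key: bytes, block: bytes) -> bytes:
    """Stand-in 16-byte 'block cipher' (NOT secure; a real construction
    would use DES, IDEA, or AES as discussed above)."""
    return hashlib.sha256(key + block).digest()[:16]

def davies_meyer(msg_blocks, iv=b"\x00" * 16):
    """Single-length compression chain: h_i = E_{m_i}(h_{i-1}) XOR h_{i-1}."""
    h = iv
    for m in msg_blocks:
        e = toy_cipher(m, h)
        h = bytes(a ^ b for a, b in zip(e, h))
    return h

digest = davies_meyer([b"block-one-16byte", b"block-two-16byte"])
```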
Lecture Notes in Computer Science, 2013
Message authentication codes (MACs) are an essential primitive in cryptography. They are used to ensure the integrity and authenticity of a message, and can also be used as a building block for larger schemes, such as chosen-ciphertext secure encryption, or identity-based encryption. MACs are often built in two steps: first, the 'front end' of the MAC produces a short digest of the long message, then the 'back end' provides a mixing step to make the output of the MAC unpredictable for an attacker. Our verification method follows this structure. We develop a Hoare logic for proving that the front end of the MAC is an almost-universal hash function. The programming language used to specify these functions is fairly expressive and can be used to describe many block-cipher and compression function-based MACs. We implemented this method into a prototype that can automatically prove the security of almost-universal hash functions. This prototype can prove the security of the front-end of many CBC-based MACs (DMAC, ECBC, FCBC and XCBC to name only a few), PMAC and HMAC. We then provide a list of options for the back end of the MAC, each consisting of only two or three instructions, each of which can be composed with an almost-universal hash function to obtain a secure MAC.
Ieee Transactions on Computers, 2007
This paper presents a technique for representing multiple-output binary and word-level functions in GF(N) (where N = p^m, p is a prime number, and m is a positive integer) based on decision diagrams (DDs). The presented DD is canonical and can be made minimal with respect to a given variable order. The DD has been tested on benchmarks, including integer multiplier circuits, and the results show that it can produce better node compression (more than an order of magnitude in some cases) compared to shared binary DDs (BDDs). The benchmark results also reflect the effect of varying the input and output field sizes on the number of nodes. Methods of graph-based representation of characteristic and encoded characteristic functions in GF(N) are also presented. Performance of the proposed representations has been studied in terms of average path lengths and the actual evaluation times with 50,000 randomly generated patterns on many benchmark circuits. All of these results reflect that the proposed technique can outperform existing techniques.
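A bare-bones sketch of a multiple-valued decision diagram over GF(N), assumed here only to show what "evaluation along a path" means (the paper's canonical, reduced structure and its word-level encodings are more elaborate).

```python
class MDDNode:
    """Internal node of a multiple-valued DD over GF(N): one outgoing
    edge per field element of the tested variable. Terminals are ints."""
    def __init__(self, var, children):
        self.var, self.children = var, children

def evaluate(node, assignment):
    """Follow a single root-to-terminal path; its length is exactly the
    per-pattern evaluation cost that path-length experiments measure."""
    while isinstance(node, MDDNode):
        node = node.children[assignment[node.var]]
    return node

# f(x0, x1) = x0 * x1 over GF(3), as an (unreduced) two-level diagram.
times = lambda c: MDDNode("x1", [0, c % 3, (2 * c) % 3])
root = MDDNode("x0", [0, times(1), times(2)])
assert evaluate(root, {"x0": 2, "x1": 2}) == (2 * 2) % 3
```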
2006
We introduce VSH, very smooth hash, a new S-bit hash function that is provably collision-resistant assuming the hardness of finding nontrivial modular square roots of very smooth numbers modulo an S-bit composite. By very smooth, we mean that the smoothness bound is some fixed polynomial function of S. We argue that finding collisions for VSH has the same asymptotic complexity as factoring using the Number Field Sieve factoring algorithm, i.e., subexponential in S. VSH is theoretically pleasing because it requires just a single multiplication modulo the S-bit composite per Ω(S) message-bits (as opposed to O(log S) message-bits for previous provably secure hashes). It is relatively practical: a preliminary implementation on a 1 GHz Pentium III processor that achieves collision resistance at least equivalent to the difficulty of factoring a 1024-bit RSA modulus runs at 1.1 MB/s, with a moderate slowdown to 0.7 MB/s for 2048-bit RSA security. VSH can be used to build a fast, provably secure randomised trapdoor hash function, which can be applied to speed up provably secure signature schemes (such as Cramer-Shoup) and designated-verifier signatures.
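A toy sketch of the core VSH iteration, x_{j+1} = x_j^2 · Π p_i^{b_i} mod n, where the b_i are the next k message bits and p_i is the i-th small prime. The modulus below is a product of two known primes chosen purely for illustration (a real instance uses an S-bit composite of secret factorization), and the final message-length block of the full scheme is omitted for brevity.

```python
def small_primes(k):
    """First k primes by trial division (k is tiny here)."""
    ps, c = [], 2
    while len(ps) < k:
        if all(c % q for q in ps if q * q <= c):
            ps.append(c)
        c += 1
    return ps

def vsh_compress(n, msg_bits, k=8):
    """Core VSH chain: square, multiply in the primes selected by the
    next k message bits, reduce mod n."""
    ps = small_primes(k)
    bits = list(msg_bits)
    bits += [0] * (-len(bits) % k)        # pad to a multiple of k
    x = 1
    for j in range(0, len(bits), k):
        block = 1
        for b, p in zip(bits[j:j + k], ps):
            if b:
                block *= p
        x = (x * x * block) % n
    return x

n = 2147483647 * 2305843009213693951     # toy composite, NOT secure
digest = vsh_compress(n, [1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
```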
IEEE Transactions on Computers, 1997
A new Boolean function representation scheme, the Indexed Binary Decision Diagram (IBDD), is proposed to provide a compact representation for functions whose Ordered Binary Decision Diagram (OBDD) representation is intractably large. We explain properties of IBDDs and present algorithms for constructing IBDDs from a given circuit. Practical and effective algorithms for satisfiability testing and equivalence checking of IBDDs, as well as their implementation results, are also presented. The results show that many functions, such as multipliers and the hidden-weighted-bit function, whose analysis is intractable using OBDDs, can be efficiently accomplished using IBDDs. We report efficient verification of Booth multipliers, as well as a practical strategy for polynomial time verification of some classes of unsigned array multipliers.
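For orientation, here is the OBDD baseline that IBDDs generalize (IBDDs allow variables to be tested again across layered indices; that extension is not shown): hash-consed nodes make the diagram canonical, so equivalence checking reduces to pointer comparison.

```python
# Shared unique table makes nodes canonical (hash-consing): two Boolean
# functions are equivalent iff they build to the identical node object.
table = {}

def node(var, lo, hi):
    if lo == hi:                      # redundant test: skip the node
        return lo
    return table.setdefault((var, lo, hi), (var, lo, hi))

def build(f, n, i=0, env=()):
    """Build the reduced OBDD of an n-input 0/1 function by Shannon
    expansion in a fixed variable order (exponential; a sketch only)."""
    if i == n:
        return f(*env)                # terminal: 0 or 1
    return node(i, build(f, n, i + 1, env + (0,)),
                   build(f, n, i + 1, env + (1,)))

f1 = lambda a, b: 1 - a * b           # NOT(a AND b)
f2 = lambda a, b: max(1 - a, 1 - b)   # (NOT a) OR (NOT b)
assert build(f1, 2) is build(f2, 2)   # canonicity gives equivalence
```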
European Design and Test Conference, 1994
The verification of sequential circuits with complex datapaths and non-trivial timing behavior is a difficult task. A multifunctional pipeline is described as an example of such a circuit, automatically verified by a verification procedure. The paper aims at presenting a possible target for hardware verification methods, both for hardware designers interested in applying such methods and for researchers.
2009
We propose a methodology to construct verifiable random functions from a class of identity-based key encapsulation mechanisms (IB-KEM) that we call VRF suitable. Informally, an IB-KEM is VRF suitable if it provides what we call unique decryption (i.e., given a ciphertext C produced with respect to an identity ID, all the secret keys corresponding to an identity ID′ decrypt to the same value, even if ID′ ≠ ID) and it satisfies an additional property that we call pseudorandom decapsulation. In a nutshell, pseudorandom decapsulation means that if one decrypts a ciphertext C, produced with respect to an identity ID, using the decryption key corresponding to any other identity ID′, the resulting value looks random to a polynomially bounded observer. Interestingly, we show that most known IB-KEMs already achieve pseudorandom decapsulation. Our construction is of interest both from a theoretical and a practical perspective. Indeed, apart from establishing a connection between two seemingly unrelated primitives, our methodology is direct in the sense that, in contrast to most previous constructions, it avoids the inefficient Goldreich-Levin hardcore bit transformation.
Lecture Notes in Computer Science, 2007
Yao's classical millionaires' problem is about securely determining whether x > y, given two input values x, y, which are held as private inputs by two parties, respectively. The output x > y becomes known to both parties. In this paper, we consider a variant of Yao's problem in which the inputs x, y as well as the output bit x > y are encrypted. Referring to the framework of secure n-party computation based on threshold homomorphic cryptosystems as put forth by Cramer, Damgård, and Nielsen at Eurocrypt 2001, we develop solutions for integer comparison, which take as input two lists of encrypted bits representing x and y, respectively, and produce an encrypted bit indicating whether x > y as output. Secure integer comparison is an important building block for applications such as secure auctioning. In this extended abstract, our focus is on the two-party case, although most of our results extend to the multi-party case. We propose new logarithmic- and constant-round protocols for this setting, which achieve simultaneously very low communication and computational complexities. We analyze the protocols in detail and show that our solutions compare favorably to other known solutions.
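The bitwise comparison logic underlying such protocols can be written using only additions and multiplications on {0,1} values, i.e., the operations a homomorphic cryptosystem can evaluate on encrypted bits. The sketch below runs that arithmetic on plaintext bits for illustration; it is not one of the paper's protocols.

```python
def greater_than(xbits, ybits):
    """[x > y] from msb-first bits: x > y iff at some position i,
    x_i = 1, y_i = 0, and all higher bits agree. Only + and * on
    0/1 values are used (the homomorphically evaluable operations)."""
    same_so_far = 1
    result = 0
    for x, y in zip(xbits, ybits):
        result += same_so_far * x * (1 - y)
        same_so_far *= 1 - (x + y - 2 * x * y)   # 1 - XOR: bits agree
    return result

def to_bits(v, n):
    """msb-first n-bit encoding."""
    return [(v >> (n - 1 - i)) & 1 for i in range(n)]

assert greater_than(to_bits(13, 4), to_bits(9, 4)) == 1
assert greater_than(to_bits(9, 4), to_bits(13, 4)) == 0
assert greater_than(to_bits(7, 4), to_bits(7, 4)) == 0
```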