2006
Lattice-based signature schemes following the Goldreich–Goldwasser–Halevi (GGH) design have the unusual property that each signature leaks information on the signer's secret key, but this does not necessarily imply that such schemes are insecure. At Eurocrypt '03, Szydlo proposed a potential attack by showing that the leakage reduces the key-recovery problem to that of distinguishing integral quadratic forms. He proposed a heuristic method to solve the latter problem, but it was unclear whether his method could attack real-life parameters of GGH and NTRUSign. Here, we propose an alternative method to attack signature schemes à la GGH, by studying the following learning problem: given many random points uniformly distributed over an unknown n-dimensional parallelepiped, recover the parallelepiped or an approximation thereof. We transform this problem into a multivariate optimization problem that can be solved by a gradient descent. Our approach is very effective in practice: we present the first successful key-recovery experiments on NTRUSign-251 without perturbation, as proposed in half of the parameter choices in NTRU standards under consideration by IEEE P1363.1. Experimentally, 90,000 signatures are sufficient to recover the NTRUSign-251 secret key. We are also able to recover the secret key in the signature analogue of all the GGH encryption challenges, using a number of signatures which is roughly quadratic in the lattice dimension.
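The learning problem described above can be illustrated with a small numerical sketch (an illustrative toy with a hypothetical 2×2 basis and demo-chosen step sizes, not the authors' actual attack code): points are sampled uniformly from the parallelepiped spanned by the rows of a secret basis B, whitened via the second moment, and a gradient descent on the fourth moment over the unit sphere then converges to a row of B up to sign.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy instance: a secret 2x2 basis B (rows span the parallelepiped).
n = 2
B = np.array([[3.0, 1.0], [1.0, 2.0]])

# "Signatures": points v = u B with u uniform in [-1, 1]^n, i.e. points
# uniformly distributed over the hidden parallelepiped.
N = 100_000
U = rng.uniform(-1.0, 1.0, size=(N, n))
V = U @ B

# Whitening: E[v^T v] = B^T B / 3, so S = 3 * (sample second moment) ~ B^T B.
S = 3.0 * (V.T @ V) / N
evals, Q = np.linalg.eigh(S)
S_inv_half = Q @ np.diag(evals ** -0.5) @ Q.T   # S^{-1/2}
S_half = Q @ np.diag(evals ** 0.5) @ Q.T        # S^{1/2}
Vp = V @ S_inv_half   # now ~uniform over a rotated hypercube

# Gradient descent on mom4(w) = E[(v' . w)^4] over the unit sphere:
# its local minima sit at +/- the hypercube's axis directions.
w = rng.normal(size=n)
w /= np.linalg.norm(w)
for _ in range(300):
    proj = Vp @ w
    grad = 4.0 * (proj ** 3) @ Vp / N
    w = w - 0.5 * grad
    w /= np.linalg.norm(w)

# Map the recovered direction back to a candidate secret row.
cand = w @ S_half
err = min(np.linalg.norm(cand - sign * row) / np.linalg.norm(row)
          for row in B for sign in (1.0, -1.0))
print(f"relative error vs. closest +/- row of B: {err:.3f}")
```

Real attacks run in dimension several hundred and need tens of thousands of signatures, but the structure — second-moment whitening followed by a fourth-moment descent — is the same.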
Advances in Cryptology – EUROCRYPT 2020, 2020
In this paper, we initiate the study of side-channel leakage in hash-and-sign lattice-based signatures, with particular emphasis on the two efficient implementations of the original GPV lattice-trapdoor paradigm for signatures, namely NIST second-round candidate Falcon and its simpler predecessor DLP. Both of these schemes implement the GPV signature scheme over NTRU lattices, achieving great speed-ups over the general lattice case. Our results are mainly threefold. First, we identify a specific source of side-channel leakage in most implementations of those schemes, namely, the one-dimensional Gaussian sampling steps within lattice Gaussian sampling. It turns out that the implementations of these steps often leak the Gram-Schmidt norms of the secret lattice basis. Second, we elucidate the link between this leakage and the secret key, by showing that the entire secret key can be efficiently reconstructed solely from those Gram-Schmidt norms. The result makes heavy use of the algebraic structure of the corresponding schemes, which work over a power-of-two cyclotomic field. Third, we concretely demonstrate the side-channel attack against DLP (but not Falcon due to the different structures of the two schemes). The challenge is that timing information only provides an approximation of the Gram-Schmidt norms, so our algebraic recovery technique needs to be combined with pruned tree search in order to apply it to approximate values. Experimentally, we show that around 2^35 DLP traces are enough to reconstruct the entire key with good probability.
Advances in Cryptology – ASIACRYPT 2012, 2012
NTRUSign is the most practical lattice signature scheme. Its basic version was broken by Nguyen and Regev in 2006: one can efficiently recover the secret key from about 400 signatures. However, countermeasures have been proposed to repair the scheme, such as the perturbation used in NTRUSign standardization proposals, and the deformation proposed by Hu et al. at IEEE Trans. Inform. Theory in 2008. These two countermeasures were claimed to prevent the Nguyen–Regev (NR) attack. Surprisingly, we show that these two claims are incorrect by revisiting the NR gradient-descent attack: the attack is more powerful than previously expected, and actually breaks both countermeasures in practice, e.g. 8,000 signatures suffice to break NTRUSign-251 with one perturbation as submitted to IEEE P1363 in 2003. More precisely, we explain why the Nguyen–Regev algorithm for learning a parallelepiped is heuristically able to learn more complex objects, such as zonotopes and deformed parallelepipeds.
Sains Malaysiana
Due to Nguyen's attack, the Goldreich-Goldwasser-Halevi (GGH) encryption scheme, simply referred to as the GGH cryptosystem, is considered broken. The GGH cryptosystem was initially regarded as the first practical lattice-based cryptosystem. Its inventors conjectured that the cryptosystem is intractable once implemented in a lattice dimension of 300 and above. This conjecture was based on thorough security analyses of the cryptosystem against some powerful attacks, and it became more concrete when all initial efforts to decrypt the published GGH Internet Challenges failed. However, Nguyen's attack introduced a novel strategy for simplifying the underlying Closest Vector Problem (CVP) instance arising from the cryptosystem; it successfully decrypted almost all the challenges and eventually led to the cryptosystem being considered broken. Nguyen's attack is therefore considered a fatal attack on the GGH cryptosystem. In this paper, we propose a countermeasure to combat Nguyen's attack. By implementing the proposed countermeasure, we prove that the simplification of the underlying CVP instance can be prevented. We also prove that the upgraded GGH cryptosystem remains practical, in that decryption can be done without error. We are optimistic that the upgraded GGH cryptosystem can make a remarkable return into the mainstream discussion of lattice-based cryptography.
International Journal of Applied Cryptography, 2008
In Crypto 1997, Goldreich, Goldwasser and Halevi (GGH) proposed a lattice analogue of the McEliece public-key cryptosystem, in which security is related to the hardness of approximating the Closest Vector Problem in a lattice. Furthermore, they also described how to use the same principle of their encryption scheme to provide a signature scheme. Practically, this cryptosystem uses the Euclidean (ℓ2) norm, which has been used in many algorithms based on lattice theory. Nonetheless, many drawbacks have been studied and these could lead to cryptanalysis of the scheme. In this article, we present a novel method of reducing a vector under the ℓ∞-norm and propose a digital signature scheme based on it. Our scheme takes advantage of the ℓ∞-norm to increase the resistance of the GGH scheme and to decrease the signature length. Furthermore, after some other improvements, we obtain a very efficient signature scheme that trades off security level, speed and space.
Lecture Notes in Computer Science, 2015
At CT-RSA 2014 Bai and Galbraith proposed a lattice-based signature scheme optimized for short signatures and with a security reduction to hard standard lattice problems. In this work we first refine the security analysis of the original work and propose a new 128-bit secure parameter set chosen for software efficiency. Moreover, we increase the acceptance probability of the signing algorithm through an improved rejection condition on the secret keys. Our software implementation targeting Intel CPUs with AVX/AVX2 and ARM CPUs with NEON vector instructions shows that even though we do not rely on ideal lattices, we are able to achieve high performance. For this we optimize the matrix-vector operations and several other aspects of the scheme and finally compare our work with the state of the art.
Lecture Notes in Computer Science, 2015
In this paper, we study the Learning With Errors (LWE) problem and its binary variant, where secrets and errors are binary or taken in a small interval. We introduce a new variant of the Blum, Kalai and Wasserman algorithm, relying on a quantization step that generalizes and fine-tunes modulus switching. In general this new technique yields a significant gain in the constant in front of the exponent in the overall complexity. We illustrate this by solving within half a day an LWE instance with dimension n = 128, modulus q = n^2, Gaussian noise α = 1/(√(n/π) log^2 n) and binary secret, using 2^28 samples, while the previous best result based on BKW claims a time complexity of 2^74 with 2^60 samples for the same parameters. We then introduce variants of BDD, GapSVP and UniqueSVP, where the target point is required to lie in the fundamental parallelepiped, and show how the previous algorithm is able to solve these variants in subexponential time. Moreover, we also show how the previous algorithm can be used to solve the BinaryLWE problem with n samples in subexponential time 2^((ln 2/2 + o(1)) n / log log n). This analysis does not require any heuristic assumption, contrary to other algebraic approaches; instead, it uses a variant of an idea by Lyubashevsky to generate many samples from a small number of samples. This makes it possible to asymptotically and heuristically break the NTRU cryptosystem in subexponential time (without contradicting its security assumption). We are also able to solve subset sum problems in subexponential time for density o(1), which is of independent interest: for such density, the previous best algorithm requires exponential time. As a direct application, we can solve in subexponential time the parameters of a cryptosystem based on this problem proposed at TCC 2010. (The BKW approach also solves LPN in time 2^(O(n / log n)).) During the first stage of the algorithm, the dimension of a is reduced, at the cost of a (controlled) decrease of the bias of b.
During the second stage, the algorithm distinguishes between LWE and uniform by evaluating the bias. Since the introduction of LWE, some variants of the problem have been proposed in order to build more efficient cryptosystems. Some of the most interesting variants are Ring-LWE by Lyubashevsky, Peikert and Regev in [38], which aims to reduce the space of the public key using cyclic samples; and the cryptosystem by Döttling and Müller-Quade [18], which uses short secret and error. In 2013, Micciancio and Peikert [40] as well as Brakerski et al. [13] proposed a binary version of the LWE problem and obtained a hardness result. Related Work. Albrecht et al. have presented an analysis of the BKW algorithm as applied to LWE in [4,5]. It has recently been revisited by Duc et al., who use a multi-dimensional FFT in the second stage of the algorithm [19]. However, the main bottleneck is the first BKW step, and since the proposed algorithms do not improve this stage, the overall asymptotic complexity is unchanged. In the case of the BinaryLWE variant, where the error and secret are binary (or sufficiently small), Micciancio and Peikert show that solving this problem using m = n(1 + Ω(1/log(n))) samples is at least as hard as approximating lattice problems in the worst case in dimension Θ(n/log(n)) with approximation factor Õ(√n · q). We show in Section B that existing lattice-reduction techniques require exponential time. Arora and Ge describe a 2^(Õ((αq)^2))-time algorithm when q > n to solve the LWE problem [9]. This leads to a subexponential-time algorithm when the error magnitude αq is less than √n. The idea is to transform this system into a noise-free polynomial system and then use root-finding algorithms for multivariate polynomials to solve it, using either relinearization in [9] or Gröbner bases in [3]. In this last work, Albrecht et al. present an algorithm whose time complexity is 2^((ω + o(1)) n log log log n / (8 log log n)).
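The first-stage BKW reduction referred to above can be sketched on a noise-free toy instance (illustrative parameters, not the paper's): samples that collide on their last k coordinates are subtracted pairwise, zeroing those coordinates while preserving the linear relation with the secret; the cost, in a real instance, is that the noise grows with each such step.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
q, n, k = 17, 6, 2            # hypothetical toy parameters
s = rng.integers(0, q, n)     # secret

# Noise-free toy "LWE" samples (a, b = <a, s> mod q), so invariants are exact.
N = 500
A = rng.integers(0, q, (N, n))
b = A @ s % q

# One BKW reduction step: bucket by the last k coordinates of a; whenever two
# samples share a bucket, their difference has zeros in those coordinates.
buckets = defaultdict(list)
reduced = []
for a_i, b_i in zip(A, b):
    key = tuple(a_i[-k:])
    if buckets[key]:
        a_j, b_j = buckets[key].pop()
        reduced.append(((a_i - a_j) % q, (b_i - b_j) % q))
    else:
        buckets[key].append((a_i, b_i))

A2 = np.array([a for a, _ in reduced])
b2 = np.array([c for _, c in reduced])
print(f"{len(reduced)} reduced samples; last {k} coords all zero:",
      bool((A2[:, -k:] == 0).all()))
```

With real noise, each subtraction doubles the error variance, which is why the number of reduction steps — and how the quantization/modulus-switching step trades it off — dominates the exponent in BKW-type algorithms.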
Advances in Cryptology – EUROCRYPT 2018, 2018
Recently, numerous physical attacks have been demonstrated against lattice-based schemes, often exploiting their unique properties such as the reliance on Gaussian distributions, rejection sampling and FFT-based polynomial multiplication. As the call for concrete implementations and deployment of post-quantum cryptography becomes more pressing, protecting against those attacks is an important problem. However, few countermeasures have been proposed so far. In particular, masking has been applied to the decryption procedure of some lattice-based encryption schemes, but the much more difficult case of signatures (which are highly non-linear and typically involve randomness) has not been considered until now. In this paper, we describe the first masked implementation of a lattice-based signature scheme. Since masking Gaussian sampling and other procedures involving contrived probability distributions would be prohibitively inefficient, we focus on the GLP scheme of Güneysu, Lyubashevsky and Pöppelmann (CHES 2012). We show how to provably mask it in the Ishai-Sahai-Wagner model (CRYPTO 2003) at any order in a relatively efficient manner, using extensions of the techniques of Coron et al. for converting between arithmetic and Boolean masking. Our proof relies on a mild generalization of probing security that supports the notion of public outputs. We also provide a proof-of-concept implementation to assess the efficiency of the proposed countermeasure.
1999
Recent results of Ajtai on the hardness of lattice problems have inspired several cryptographic protocols. At Crypto ’97, Goldreich, Goldwasser and Halevi proposed a public-key cryptosystem based on the closest vector problem in a lattice, which is known to be NP-hard. We show that there is a major flaw in the design of the scheme which has two implications: any ciphertext leaks information on the plaintext, and the problem of decrypting ciphertexts can be reduced to a special closest vector problem which is much easier than the general problem. As an application, we solved four out of the five numerical challenges proposed on the Internet by the authors of the cryptosystem. At least two of those four challenges were conjectured to be intractable. We discuss ways to prevent the flaw, but conclude that, even modified, the scheme cannot provide sufficient security without being impractical.
Proceedings of the 2019 ACM Asia Conference on Computer and Communications Security
In this paper, we analyze the implementation-level fault vulnerabilities of deterministic lattice-based signature schemes. In particular, we extend the practicality of skip-addition fault attacks by exploiting determinism in certain variants of Dilithium (the deterministic variant) and the qTESLA signature scheme (the originally submitted deterministic version), two leading candidates for the NIST standardization of post-quantum cryptography. We show that single targeted faults injected in the signing procedure allow the attacker to recover an important portion of the secret key. Though faults injected in the signing procedure do not recover all the secret-key elements, we propose a novel forgery algorithm that allows the attacker to sign any given message with only the extracted portion of the secret key. We perform experimental validation of our attack using electromagnetic fault injection on reference implementations taken from the pqm4 library, a benchmarking and testing framework for post-quantum cryptographic implementations on the ARM Cortex-M4 microcontroller. We also show that our attacks break two well-known countermeasures known to protect against skip-addition fault attacks. We further propose an efficient mitigation strategy against our attack that exponentially increases the attacker's complexity at almost zero computational overhead. CCS CONCEPTS • Security and privacy → Digital signatures; Hardware attacks and countermeasures; Side-channel analysis and countermeasures; Embedded systems security.
IET Information Security, 2020
Recently, lattice signatures based on the Fiat-Shamir framework have seen many improvements that are efficient in practice. The security of these signature schemes depends mainly on the hardness of solving the short integer solution (SIS) and/or learning with errors problems in the random oracle model. The authors propose an alternative lattice-based signature scheme on the Fiat-Shamir framework over the ring ℤ[x]/(x^n + 1). The key generation in the signature scheme is based on a combination of NTRU-like and Ring-SIS-like key generation. Both signing and verification are done efficiently via polynomial convolutions in the ring ℤ[x]/(x^n + 1). The proposed signature scheme is provably secure based on the hardness of the Ring-SIS problem in the random oracle model. The scheme is also efficient up to constant factors relative to highly practical schemes that have provably secure instantiations.
Advances in Cryptology – CRYPTO 2020, 2020
We observe that all previously known lattice-based blind signature schemes contain subtle flaws in their security proofs (e.g., Rückert, ASIACRYPT '08) or can be attacked (e.g., BLAZE by Alkadri et al., FC '20). Motivated by this, we revisit the problem of constructing blind signatures from standard lattice assumptions. We propose a new three-round lattice-based blind signature scheme whose security can be proved, in the random oracle model, from the standard SIS assumption. Our starting point is a modified version of the (insecure) BLAZE scheme, which itself is based on Lyubashevsky's three-round identification scheme combined with a new aborting technique to reduce the correctness error. Our proof builds upon and extends the recent modular framework for blind signatures of Hauck, Kiltz, and Loss (EUROCRYPT '19). It also introduces several new techniques to overcome the additional challenges posed by the correctness error which is inherent to all lattice-based constructions. While our construction is mostly of theoretical interest, we believe it to be an important stepping stone for future works in this area.
Lecture Notes in Computer Science, 2017
In 2016, Albrecht, Bai and Ducas and, independently, Cheon, Jeong and Lee presented very similar attacks to break the NTRU cryptosystem with larger modulus than in the NTRUEncrypt standard. These attacks allow recovering the secret key given the public key of Fully Homomorphic Encryption schemes based on NTRU ideas. Fortunately, they do not endanger the security of NTRUEncrypt, but they shed new light on the hardness of the NTRU problem. The idea consists in decreasing the dimension of the NTRU lattice by using the multiplication matrix by the norm (resp. trace) of the public key in some subfield instead of the public key itself. Since the dimension of the subfield is smaller, so is the dimension of the lattice, and lattice-reduction algorithms perform better. In this paper, we first propose a new variant of the subfield attack that outperforms both of these attacks in practice. It allows breaking several concrete instances of YASHE, an NTRU-based FHE scheme, but it is not as efficient as the hybrid method on smaller concrete parameters of NTRUEncrypt. Instead of using the norm and trace, multiplication by the public key in a subring allows breaking smaller parameters, and we show that in Q(ζ_{2^n}) the time complexity is polynomial for q = 2^{Ω(√(n log log n))}. Then, we revisit the lattice-reduction part of the hybrid attack of Howgrave-Graham and analyze the success probability of this attack using a new technical tool proposed by Pataki and Tural. We show that, under some heuristics, this attack is more efficient than the subfield attack and works in any ring for large q, such as the NTRU Prime ring. We insist that the improvement in the analysis applies even for relatively small moduli, although if the secret is sparse, it may not be the fastest attack. We also derive a tight estimation of security for (Ring-)LWE and NTRU assumptions and perform many practical experiments.
ArXiv, 2019
LWE-based cryptosystems are an attractive alternative to traditional ones in the post-quantum era. To minimize the storage cost of part of its public key - a $256 \times 640$ integer matrix, $\textbf{T}$ - a binary version of $\textbf{T}$ has been proposed. One component of its ciphertext, $\textbf{c}_{1}$ is computed as $\textbf{c}_{1} = \textbf{Tu}$ where $\textbf{u}$ is an ephemeral secret. Knowing $\textbf{u}$, the plaintext can be deduced. Given $\textbf{c}_{1}$ and $\textbf{T}$, Galbraith's challenge is to compute $\textbf{u}$ with existing computing resources in 1 year. Our hybrid approach guesses and removes some bits of the solution vector and maps the problem of solving the resulting sub-instance to the Closest Vector Problem in Lattice Theory. The lattice-based approach reduces the number of bits to be guessed while the initial guess based on LP relaxation reduces the number of subsequent guesses to polynomial rather than exponential in the number of guessed bits. Fur...
Lecture Notes in Computer Science, 2020
We propose a new lattice-based digital signature scheme MLWRSign by modifying Dilithium, which is one of the second-round candidates of NIST's call for post-quantum cryptographic standards. To the best of our knowledge, our scheme MLWRSign is the first signature scheme whose security is based on the (module) learning with rounding (LWR) problem. Due to the simplicity of the LWR, the secret key size is reduced by approximately 30% in our scheme compared to Dilithium, while achieving the same level of security. Moreover, we implemented MLWRSign and observed that the running time of our scheme is comparable to that of Dilithium.
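The key-size saving that LWR brings over LWE can be seen in a minimal sketch with hypothetical toy moduli (not MLWRSign's actual parameters): the sampled error of LWE is replaced by deterministic rounding from Z_q down to Z_p, whose implicit error is bounded by q/p and never needs to be sampled or stored.

```python
import numpy as np

rng = np.random.default_rng(0)
q, p, n = 8192, 1024, 16      # hypothetical toy moduli with p dividing q
s = rng.integers(0, q, n)

# An LWR sample: instead of b = <a, s> + e mod q with sampled noise e (LWE),
# the inner product is deterministically rounded down from Z_q to Z_p.
a = rng.integers(0, q, n)
inner = int(a @ s) % q
b = (p * inner) // q          # floor((p/q) * <a, s>)

# The implicit "error" is the rounding loss, deterministically < q/p.
loss = inner - (q * b) // p
print(f"rounded sample b = {b}, rounding loss = {loss} (< q/p = {q // p})")
```

Because the rounding is deterministic, no error term has to be carried in the key material — the source of the roughly 30% secret-key reduction claimed above.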
IACR Cryptol. ePrint Arch., 2018
This paper proposes a simple single-bit-flip fault attack applicable to several LWE (Learning With Errors) based lattice schemes such as KYBER, NEWHOPE, DILITHIUM and FRODO, which were submitted as proposals for the NIST call for standardization of post-quantum cryptography. We have identified a vulnerability in the usage of the nonce during generation of the secret and error components in the key-generation procedure. Our fault attack, based on a practical bit-flip model (a single bit flip to very few bit flips for the proposed parameter instantiations), enables us to retrieve the secret key from the public key in a trivial manner. We fault the nonce so that the same nonce is maliciously used to generate both the secret and error components, which turns the LWE instance into an exactly defined set of linear equations from which the secret can be trivially solved using Gaussian elimination.
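The core of the nonce-reuse fault can be sketched on a simplified, unstructured LWE toy (hypothetical dimensions and plain matrices instead of the schemes' actual polynomial rings): if the fault forces the error to equal the secret, the public key becomes the noise-free system (A + I)s = b mod q, which Gaussian elimination solves exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
q, n = 3329, 8                  # hypothetical toy parameters, q prime
s = rng.integers(0, 5, n)       # small secret, as in LWE-based schemes

# Faulted key generation: reusing the nonce makes error == secret, so the
# public key satisfies b = A s + s = (A + I) s (mod q) -- no unknown noise.
A = rng.integers(0, q, (n, n))
b = (A @ s + s) % q

def solve_mod(M, v, q):
    """Gaussian elimination over Z_q for prime q."""
    M, v = M % q, v % q
    m = len(v)
    for col in range(m):
        piv = next(r for r in range(col, m) if M[r, col] % q != 0)
        M[[col, piv]], v[[col, piv]] = M[[piv, col]], v[[piv, col]]
        inv = pow(int(M[col, col]), -1, q)   # modular inverse of the pivot
        M[col], v[col] = M[col] * inv % q, v[col] * inv % q
        for r in range(m):
            if r != col and M[r, col] % q != 0:
                f = M[r, col]
                M[r] = (M[r] - f * M[col]) % q
                v[r] = (v[r] - f * v[col]) % q
    return v

recovered = solve_mod(A + np.eye(n, dtype=np.int64), b, q)
print("secret recovered:", bool(np.array_equal(recovered, s % q)))
```

Without the fault, the unknown error makes this a hard LWE instance; with it, key recovery is linear algebra.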
Digital signatures are an important primitive for building secure systems and are used in most real-world security protocols. However, almost all popular signature schemes are either based on the factoring assumption (RSA) or the hardness of the discrete logarithm problem (DSA/ECDSA). In the case of classical cryptanalytic advances or progress on the development of quantum computers, the hardness of these closely related problems might be seriously weakened. A potential alternative approach is the construction of signature schemes based on the hardness of certain lattice problems which are assumed to be intractable by quantum computers. Due to significant research advancements in recent years, lattice-based schemes have now become practical and appear to be a very viable alternative to number-theoretic cryptography. In this paper we focus on recent developments and the current state-of-the-art in lattice-based digital signatures and provide a comprehensive survey discussing signature schemes with respect to practicality. Additionally, we discuss future research areas that are essential for the continued development of lattice-based cryptography.
IACR Cryptol. ePrint Arch., 2016
We present TESLA♯ (pronounced “Tesla Sharp”), a digital signature scheme based on the R-LWE assumption that continues a recent line of proposals of lattice-based digital signature schemes originating in work by Lyubashevsky as well as by Bai and Galbraith. It improves upon all of its predecessors in that it attains much faster key pair generation, signing, and verification, outperforming most (conventional or lattice-based) signature schemes on modern processors. We propose a selection of concrete parameter sets, including a high-security instance that aims at achieving post-quantum security. Based on these parameters, we present a full-fledged software implementation protected against timing and cache attacks that supports two scheme variants: one providing 128 bits of classical security and another providing 128 bits of post-quantum security.
Advances in Cryptology – EUROCRYPT 2012, 2012
In this paper we present three digital signature schemes with tight security reductions. Our first signature scheme is a particularly efficient version of the short exponent discrete log based scheme of Girault et al. (J. of Cryptology 2006). Our scheme has a tight reduction to the decisional Short Discrete Logarithm problem, while still maintaining the non-tight reduction to the computational version of the problem upon which the original scheme of Girault et al. is based. The second signature scheme we construct is a modification of the scheme of Lyubashevsky (Asiacrypt 2009) that is based on the worst-case hardness of the shortest vector problem in ideal lattices. And the third scheme is a very simple signature scheme that is based directly on the hardness of the Subset Sum problem. We also present a general transformation that converts, what we term lossy identification schemes, into signature schemes with tight security reductions. We believe that this greatly simplifies the task of constructing and proving the security of such signature schemes.
2011
In this paper we construct a lattice-based public-key cryptosystem using a non-commutative quaternion algebra; since its lattice does not fully fit within the class of Circular and Convolutional Modular Lattices (CCML), we argue it is more secure than existing lattice-based cryptosystems such as NTRU. As in NTRU, the proposed public-key cryptosystem relies for its inherent security on the intractability of finding the shortest vector in a certain non-convolutional modular lattice, yet it is efficient and cost-effective, contrary to cryptosystems such as RSA or ECC. The detailed specification of the proposed cryptosystem, including the underlying algebraic structure, key generation, the encryption and decryption processes, and also the issues regarding key security, message security, and probability of successful decryption, is explained. We further show, based on the existing results for lattice-reduction algorithms, that the proposed cryptosystem with a dimension of 41 will hav...
Lecture Notes in Computer Science, 2015
We introduce a lattice-based group signature scheme that provides several noticeable improvements over the contemporary ones: simpler construction, weaker hardness assumptions, and shorter sizes of keys and signatures. Moreover, our scheme can be transformed into the ring setting, resulting in a scheme based on ideal lattices, in which the public key and signature both have bit-size O(n · log N), for security parameter n and for a group of N users. Towards our goal, we construct a new lattice-based cryptographic tool: a statistical zero-knowledge argument of knowledge of a valid message-signature pair for Boyen's signature scheme (Boyen, PKC'10), which potentially can be used as the building block to design various privacy-enhancing cryptographic constructions.