Cryptography

Article by:
Wang, Xunhua Department of Computer Science, James Madison University, Harrisonburg, Virginia.
Coppersmith, Don IBM Corporation, Kingston, New York.
Matyas, Stephen M. IBM Systems Communication Division, Kingston, New York.
Meyer, Carl H. IBM Systems, Communications Division, Kingston, New York.
Publication year: 2014
DOI: http://dx.doi.org/10.1036/1097-8542.170600
Content
• Basics
• Unbreakable algorithms
• Strong algorithms
• Computational complexity
• Problem classes P and NP
• Examples
• Block ciphers
• Data encryption standard
• Advanced Encryption Standard (AES)
• Modes of operation
• Stream ciphers
• RSA public key algorithm
• Message authentication codes
• Digital signatures
• Bibliography
• Additional Readings

Cryptography comprises the various mechanisms for data confidentiality, data authentication, and nonrepudiation
(that is, the property of preventing the denial of previous commitments or actions). With a long history as an art
of secret writing, cryptography has evolved into a science. As society is increasingly dependent upon computers, the vast
amounts of data communicated, processed, and stored within computer systems and networks often have to be
protected, and cryptography is a means of achieving this. It provides confidentiality services for transmitted
data as well as stored data. Cryptographic techniques can also be used for message authentication, entity
authentication, and digital signature verification for electronic funds transfer and credit card transactions. See
also: Computer security; Data communications; Database management system; Digital
computer; Electrical communications

Basics
Data confidentiality is the property that only those authorized can access the data. Cryptography achieves data
confidentiality by transforming meaningful messages, called plaintext, into unintelligible forms, called
ciphertext or cryptogram. The process of transforming plaintext into ciphertext is called encryption or
encipherment, and the opposite process is called decryption or decipherment. (An encryption scheme is also
referred to as a cipher.) As shown in Fig. 1, the encryption algorithm is controlled by a parameter k′, and the
decryption algorithm is controlled by a secret k, known as a key. For confidentiality purposes, k should be kept
secret. If k′ of a cipher can be used to derive k, then k′ should also be kept secret, and such a cipher is called a
symmetric key encryption. If it is computationally hard (see below) to derive k from k′, then k′ can be made
public and the corresponding cipher is called an asymmetric encryption or a public key encryption. In most
symmetric key encryption schemes, k′ is the same as k and they are called symmetric keys. In public key
cryptography, k′ is called a public key and k is called a private key.
Fig. 1 The confidentiality model.
In a symmetric key encryption, if each encryption operation processes just one character or one bit, the cipher is
called a stream cipher. A block cipher, on the other hand, encrypts a fixed-size block, usually multiple bytes,
at a time.

In addition to data confidentiality, cryptography provides both data-origin authentication and entity
authentication services. Data-origin authentication allows a message receiver to verify the message origin and
its integrity. Entity authentication allows one to check the identity of another entity that possesses a certain
secret. Given a message m, cryptography achieves data-origin authentication by appending to the message
some additional bits (such as “s” in Fig. 2), called authentication tags, which are computed from m and a
secret k. The receiver of the message and the authentication tags can verify the message origin and integrity by
applying a verification parameter k′ to the received data, including the authentication tags and the message. In
most symmetric key cryptosystems, k and k′ are the same and they should be kept secret. In public key
cryptography, k′ can be made public and k is kept secret; it is computationally hard (see below) to
derive k from k′. Authentication tags generated by symmetric-key cryptography are called message
authentication codes (MAC), which can be generated by both the sender and the receiver as both of them
know k. In contrast, authentication tags produced by public-key cryptography, called digital signatures, are stronger:
since only the sender knows k and can produce a valid authentication tag under k, the sender cannot later deny having
sent the message. Thus authentication tags produced by public-key cryptography are the digital equivalent
of a handwritten signature and can provide the nonrepudiation service.

Fig. 2 The authentication model.
Data confidentiality and data authentication are orthogonal in that one can exist without the other. On one
hand, the authentication model given in Fig. 2 does not provide data confidentiality since message m is sent in
the clear. On the other hand, encryption alone is insufficient to ensure that information is not altered during
transmission. This fact is most evident when encryption with a public key algorithm is used. With a public key
system, anyone can encipher data by using the public enciphering key. Thus, any system user or node can
masquerade as any other system user or node.

The counterpart of cryptography is cryptanalysis, which studies techniques to defeat cryptographic methods.
Cryptanalysts are those who practice cryptanalysis. The security of an encryption/authentication algorithm
should depend solely on the secrecy of keys, not on the secrecy of algorithms. It is often assumed that the
enemy cryptanalysts know the details of the algorithm employed.

In summary, symmetric block ciphers, symmetric stream ciphers, and public key encryption ciphers are used
for data confidentiality, while MAC and digital signature algorithms are used for data authentication. We will
first review the fundamental theory of these cryptographic algorithms and then give the details of some
practical cryptographic algorithms.

Unbreakable algorithms
Unbreakable ciphers are possible, but the key must be randomly selected and used only once, and its length
must be equal to or greater than that of the plaintext to be enciphered. Therefore such long keys, called one-
time tapes, are not practical in data-processing applications.
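A minimal Python sketch of this one-time principle (names and values are ours, chosen only for illustration): the plaintext is XORed with a truly random key of equal length, and decryption is the same XOR with the same key.

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """XOR the data with a key of at least equal length (one-time use)."""
    assert len(key) >= len(data), "the key must be at least as long as the data"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))   # random key, used only once
ciphertext = otp_xor(message, key)
recovered = otp_xor(ciphertext, key)      # decryption is the same operation
assert recovered == message
```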
Similarly, unconditionally secure authentication schemes (that is, schemes whose security does not depend on
the computational power of the adversary) are possible. But the key must be randomly selected and used only
once, which makes such keys impractical.

To work well, a key must be of fixed length, relatively short, and capable of being repeatedly used without
compromising security. In theory, any algorithm that uses such a finite key can be analyzed; in practice, the
effort and resources necessary to break the algorithm would be unjustified.

Strong algorithms
Fortunately, to achieve effective data security, construction of an unbreakable algorithm is not necessary.
However, the work factor (a measure, under a given set of assumptions, of the requirements necessary for a
specific analysis or attack against a cryptographic algorithm) required to break the algorithm must be
sufficiently great. Included in the set of assumptions is the type of information expected to be available for
cryptanalysis. For encryption schemes, this could be ciphertext only, plaintext (not chosen/known) and
corresponding ciphertext, chosen plaintext and corresponding ciphertext, or chosen ciphertext and
corresponding recovered plaintext.

A strong cryptographic algorithm must satisfy the following conditions: (1) The algorithm's mathematical
complexity prevents, for all practical purposes, solution through analytical methods. (2) The cost or time
necessary to unravel the message or key is too great when mathematically less complicated methods are used,
either because too many computational steps are involved or because too much storage space is required.

To be strong, an encryption algorithm must satisfy the above conditions even when the analyst has the
following advantages: (1) Relatively large amounts of plaintext (specified by the analyst, if so desired) and
corresponding ciphertext are available. (2) Relatively large amounts of ciphertext (specified by the analyst, if
so desired) and corresponding recovered plaintext are available. (3) Large high-speed computers are available
for cryptanalysis.

Similarly, to be strong, an authentication algorithm must satisfy the above conditions even when the analyst
has been given large amounts of messages (specified by the analyst, if so desired) and their corresponding
authentication codes.

Computational complexity
The strength of a cryptographic scheme can be measured by the computational complexity of the task of
cryptanalysis. The complexity of a task is the least possible number of elementary operations used by any
program to accomplish this task. However, nontrivial lower bounds on the complexity of a problem are very
hard to obtain. This is a fundamental problem in the design of cryptographic systems: it is very difficult to
ensure that a system is sufficiently hard to crack. Without a good lower bound, the possibility that someone
will find a fast algorithm for cryptanalyzing a given scheme must always be anticipated.

Problem classes P and NP


An important direction of theoretical work concerns the consideration (from computer science) of P versus NP.
The class P consists of those problems which can be solved in polynomial time. That is, there are
constants c and k such that, if the input to the problem can be specified in N bits, the problem can be solved on
a sequential machine in time c × N^k. Roughly speaking, these are the tractable problems. They include
multiplication of two large numbers, exponentiation modulo a large prime, running the Data Encryption
Standard (discussed below), and roughly any straightforward problem that does not involve searching.
The class NP (nondeterministic polynomial time) consists of problems which can be solved by searching.
Roughly speaking, a possible solution to a problem in NP is to guess in turn each of the 2^N possible values of
some N-bit quantity, do some polynomial-time work related to each guess, and if some guess turns out to be
correct, report the answer.

An example of a problem in NP is the knapsack problem: Given a set of integers {A1, A2, …, An} and a target
integer B, can a subset of the Ai be selected without repetition (say {A1, A3, A8}) such that their sum
(A1 + A3 + A8) is the target B? One algorithm for solution is to try all possible subsets and just see whether any
has the desired property. This algorithm requires exponential time, so called because the size of the input (n)
occurs in the exponent of the formula expressing the running time (in this case, roughly 2^n). In fact, all known
algorithms for this problem require exponential time. But there may be an unknown algorithm that runs in
polynomial time; no proof prohibiting this is currently known.
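A brute-force solver makes the exponential search concrete. The following Python sketch (the function name and sample values are ours, for illustration only) simply tries every subset.

```python
from itertools import combinations

def knapsack_subset(values, target):
    """Exhaustive search over all subsets; running time grows like 2^n."""
    for r in range(len(values) + 1):
        for subset in combinations(values, r):
            if sum(subset) == target:
                return subset
    return None

print(knapsack_subset([3, 7, 12, 19, 21], 31))   # finds (12, 19)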

Certainly, any problem in P is also in NP. A major outstanding question in computer science is whether P
equals NP or whether NP is strictly larger.

There is a particular collection of problems in NP, including the knapsack problem, which are termed NP-
complete. These can be thought of as the hardest problems in NP. More precisely, an important mathematical
result states: If there are any problems in NP but not in P, then each NP-complete problem is also in NP but not
in P. This class has particular significance for cryptography.

In a good cryptographic system, certain operations (like decryption) are easy for those in possession of the key
and difficult for those without the key. The cryptanalyst who could first guess and verify the correct key would
be able to decrypt easily (in polynomial time). This could be done by searching over the space of possible keys
and attempting to verify each one in polynomial time. Since the problem can be solved by searching,
decryption is in NP.

If P = NP, then decryption would also be in P, and a good cryptographic system would most likely be difficult
to design. If P ≠ NP, then the NP-complete problems might form a good starting point for cryptographic
system design. Unfortunately, mathematicians and cryptographers have not yet learned how to transform an
NP-complete problem into a secure cryptographic system.

Even if an NP-complete problem is eventually transformed successfully into a cryptographic system, a proof of
the difficulty of cryptanalysis of that system (or any other) can be expected to be taxing. Such a proof would
probably also prove that P ≠ NP, and this conjecture has eluded computer scientists so far. Thus, for now, the
designers of cryptographic systems must rely on experience rather than rigorous proof of the difficulty of
cryptanalysis.

Examples
Many practical and strong cryptographic algorithms have been developed. The Data Encryption Standard
(DES), a block cipher, would require massive attacks to break it. The only known attacks are key exhaustion,
which involves trying all 2^56 ≃ 10^17 possible keys, or attacks involving prohibitively large amounts (10^14 bytes)
of known plaintext and corresponding ciphertext. There is no guarantee against a more efficient, analytic
attack, although none is known at present. A 56-bit key is not considered secure these days. A more advanced
block cipher, Advanced Encryption Standard (AES), uses longer keys and larger data blocks and is assumed to
be more secure and more efficient.

In the public key encryption vein, the RSA encryption algorithm, also discussed below, is based on the
difficulty of factoring large numbers. However, a family of algorithms has been developed for factoring such
numbers. If the modulus involved has, say, 2048 bits, then RSA should be secure in the foreseeable future.
Other popular public key encryption algorithms include ElGamal public key encryption and Elliptic-curve
(EC) public key encryption, both of which are based on the difficulty of the discrete logarithm problem.

Efficient MAC algorithms include those based on the cipher block chaining (CBC) mode of block ciphers,
called CBC-MAC, and the MAC algorithms based on cryptographic hash algorithms, called HMAC.

Among the efficient digital signature algorithms are the RSA digital signature algorithm, the Digital Signature
Standard (DSS, discussed below), and the Elliptic-curve Digital Signature Algorithm (ECDSA).

Block ciphers
A block cipher (Fig. 3) transforms a string of input bits of fixed length (called an input block) into a string of
output bits of fixed length (called an output block). In a strong block cipher, the enciphering and deciphering
functions are such that every bit in the output block jointly depends on every bit in the input block and on
every bit in the key. This property is called intersymbol dependence.

Fig. 3 Block cipher. (a) Enciphering. (b) Deciphering.

Data encryption standard


Regardless of the application, a cryptographic system must be based on a cryptographic algorithm of validated
strength if it is to be acceptable. The Data Encryption Standard (DES) was such a validated conventional
algorithm available in the public domain. It was adopted as a U.S. federal standard in 1977 by the National
Bureau of Standards (the predecessor of NIST) and was officially withdrawn in 2005.

The DES enciphers a 64-bit block of plaintext into a 64-bit block of ciphertext under the control of a 56-bit
key. Conceptually, it can be thought of as a huge key-controlled substitution box (S-box) with a 64-bit input
and output. With such an S-box, (2^64)! different transformations or functions from plaintext to ciphertext are
possible. The 56-bit key used with DES thus limits the number of usable functions to 2^56.

A single, huge S-box is impossible to construct. Therefore DES is implemented by using several smaller S-
boxes (with a 6-bit input and a 4-bit output) and permuting their concatenated outputs. By repeating the
substitution and permutation process several times, cryptographic strength “builds up.” The DES encryption
process (Fig. 4) consists of 16 iterations, called rounds. At each round a cipher function (f) is used with a 48-
bit key. The function is composed of the substitution and permutation. The 48-bit key, which is different for
each round, is a subset of the bits in the externally supplied key.
Fig. 4 Enciphering computation in the Data Encryption Standard.
The externally supplied key consists of 64 bits (56 bits are used by the algorithm, and up to 8 bits may be used
for parity checking). By shifting the original 56-bit key, a different subset of 48 key bits is selected for use in
each round. These key bits are labeled K1, K2, …, K16. To decipher, the keys are used in reverse order (K16 is
used in round one, K15 in round two, and so on).

At each round (either encipherment or decipherment), the input is split into a left half (designated L) and a
right half (designated R) [Fig. 4]. R is transformed with f, and the result is combined, using modulo 2 addition
(also called the EXCLUSIVE OR operation; 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, 0 ⊕ 0 = 0, 1 ⊕ 1 = 0), with L. To consider
one round of encipherment (Fig. 5), the input block (X) may be denoted X = (L0, R0), where L0 and R0 are the
left and right halves of X, respectively. Function f transforms R0 into fK1(R0) under control of cipher key K1.
L0 is then added (modulo 2) to fK1(R0) to obtain R1, as in Eq. (1).

R1 = L0 ⊕ fK1(R0)   (1)

The round is then completed by setting L1 equal to R0.


Fig. 5 Transformation of input block (L0, R0).
The above steps are reversible, without introducing any new parameters or requiring that f be a one-to-one
function. The ciphertext contains L1, which equals R0, and therefore half of the original plaintext is
immediately recovered (Fig. 6). The remaining half, L0, is recovered by recreating fK1(R0) from R0 = L1 and
adding it (modulo 2) to R1, as in Eq. (2).

L0 = R1 ⊕ fK1(R0) = R1 ⊕ fK1(L1)   (2)

Fig. 6 Recovery of L0.
However, to use the above procedure for encipherment as well as decipherment, the left and right halves of the
output must be interchanged; that is, the ciphertext (Y) is defined by Eq. (3).

Y = (R1, L1)   (3)

This modified procedure easily extends to n rounds, where the keys used for deciphering are Kn, Kn−1,
…, K1.
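To make the reversibility argument concrete, here is a toy Feistel sketch in Python. It is not DES: the round function f is stood in by a truncated SHA-256 hash, the half-block and key sizes are arbitrary, and all names are ours. The point is only that decryption is the same procedure with the round keys applied in reverse order, exactly as described above.

```python
import hashlib

def f(right: bytes, round_key: bytes) -> bytes:
    # Any deterministic function works here; a truncated hash stands in for DES's f.
    return hashlib.sha256(round_key + right).digest()[:len(right)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(left: bytes, right: bytes, round_keys):
    for k in round_keys:
        left, right = right, xor(left, f(right, k))   # Eq. (1) plus L1 = R0
    return right, left                                # final swap, as in Eq. (3)

def feistel_decrypt(left: bytes, right: bytes, round_keys):
    # Same procedure with the round keys in reverse order.
    return feistel_encrypt(left, right, list(reversed(round_keys)))

L0, R0 = b"ABCD", b"EFGH"
keys = [b"k1", b"k2", b"k3", b"k4"]
ciphertext = feistel_encrypt(L0, R0, keys)
assert feistel_decrypt(*ciphertext, keys) == (L0, R0)
```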

Advanced Encryption Standard (AES)


The key length of DES is 56 bits, which is considered weak. Also, the 64-bit DES data block size is too small
for modern applications. To overcome these difficulties, the National Institute of Standards and Technology
(NIST) adopted an adapted version of the Rijndael algorithm, a symmetric block cipher, as the Advanced
Encryption Standard (AES). AES uses a 128-bit data block, and its keys can be 128, 192, or 256 bits long; the
corresponding variants are called AES-128, AES-192, and AES-256, respectively. Like DES, AES has multiple
rounds: AES-128 has 10 rounds, AES-192 has 12 rounds, and AES-256 has 14 rounds. Unlike DES, AES is not
based on the Feistel cipher structure (Fig. 5).
In AES, the 16-byte plaintext block is organized into a 4-by-4 array of bytes, called the State. The plaintext
bytes fill the array column by column: the first column first, then the second, the third, and finally the fourth.
During AES encryption, this State is repeatedly transformed, and the final State is the ciphertext.
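A short Python sketch of this column-wise layout (the helper name is ours, for illustration only):

```python
def bytes_to_state(block: bytes):
    """Arrange a 16-byte block into the 4-by-4 AES State, column by column."""
    assert len(block) == 16
    # state[row][col] = block[4*col + row]
    return [[block[4 * col + row] for col in range(4)] for row in range(4)]

state = bytes_to_state(bytes(range(16)))
# The first column holds bytes 0..3, the second column bytes 4..7, and so on.
assert [state[row][0] for row in range(4)] == [0, 1, 2, 3]
```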

For AES-128, AES-192, and AES-256, from the given cipher key, the AES key expansion operation derives,
respectively, 11, 13, and 15 128-bit round keys.

The AES encryption starts with an initial round key addition, in which the first round key is added to the State
by a simple bitwise XOR operation. Then the State array is transformed by a round function multiple times
(10 times for AES-128, 12 times for AES-192, and 14 times for AES-256), with the final round differing
slightly from the other rounds. Each application of the round function consumes one round key.

Figure 7 shows the details of the AES round function, which comprises four transformations, SubBytes,
ShiftRows, MixColumns, and AddRoundKey. The SubBytes transformation is a nonlinear byte substitution
(called an S-box), and it operates independently on each byte of the State. The ShiftRows transformation
cyclically shifts the bytes within each row of the State. The MixColumns transformation mixes the bytes within the same
column of the State. The AddRoundKey transformation applies the corresponding round key to the State by a
simple bitwise XOR operation. In the last AES round, the MixColumns transformation does not appear.

Fig. 7 AES round function.
All the four AES transformation operations are reversible, which makes AES decryption possible. More
specifically, AddRoundKey is its own inverse; the inverse of SubBytes is InvSubBytes, in which an inverse S-
box is applied to each byte of the State; the inverse of the ShiftRows transformation is InvShiftRows; and
the inverse of the MixColumns transformation is InvMixColumns. The AES decryption is performed through a
sequence of the inverse transformations, in reverse order, with the corresponding round keys generated from
the same key expansion operation. This makes the AES decryption structure completely different from the
AES encryption structure. AES decryption can also be structured in a way similar to the AES encryption, but a
different key expansion operation must then be employed.

Modes of operation
As a rule, block encryption is used to protect keys. A different method, called chained block encryption, is
used to protect data. In chaining, the process of enciphering and deciphering is made dependent on other
(prior) data, plaintext, ciphertext, and the like, also available at the time enciphering and deciphering take
place. Thus every bit in a given output block depends not only on every bit in its respective input block and
every bit in the key, but also on any or all prior data bits, either inputted to or produced during the enciphering
or deciphering process.

Sometimes data to be enciphered contain patterns that extend beyond the cipher's block size. These patterns in
the plaintext can then result in similar patterns in the ciphertext, which would indicate to a cryptanalyst
something about the nature of the plaintext. Thus, chaining is useful because it significantly reduces the
presence of repetitive patterns in the ciphertext. This is because two equal input blocks encipher into unequal
output blocks.

A recommended technique for block chaining, referred to as cipher block chaining (CBC), uses a ciphertext
feedback (Fig. 8). Let X1, X2, …, Xn denote blocks of plaintext to be chained using key K; let Y0 be a nonsecret
quantity defined as the initializing vector; and let Y1, Y2, …, Yn denote the blocks of ciphertext produced.
The ith block of ciphertext (Yi) is produced by EXCLUSIVE ORing Yi−1 with Xi and enciphering the result
with K, as in Eq. (4),

Yi = EK(Xi ⊕ Yi−1)   (4)

where ⊕ denotes the EXCLUSIVE OR operation, or modulo 2 addition. Since every bit in Yi depends
on every bit in X1 through Xi, patterns in the plaintext are not reflected in the ciphertext.

Fig. 8 Block chaining with ciphertext feedback. (a) Enciphering. (b) Deciphering.
The ith block of plaintext (Xi) is recovered by deciphering Yi with K and EXCLUSIVE OR-ing the result with
Yi−1, as in Eq. (5).

Xi = DK(Yi) ⊕ Yi−1   (5)
Since the recovered plaintext Xi depends only on Yi and Yi−1, an error occurring in ciphertext Yj affects
only two blocks of recovered plaintext (Xj and Xj+1).
Other chained block encryption methods include cipher feedback (CFB), output feedback (OFB), and counter
mode (CTR).
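As a concrete illustration of CBC, the sketch below assumes the third-party Python cryptography package (a recent version in which the backend argument may be omitted); the key, initializing vector, and message are arbitrary example values.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                     # AES-128 key K
iv = os.urandom(16)                      # nonsecret initializing vector Y0
plaintext = b"sixteen byte blk" * 2      # two full 16-byte blocks (no padding needed)

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
recovered = decryptor.update(ciphertext) + decryptor.finalize()
assert recovered == plaintext
```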

Stream ciphers
A stream cipher (Fig. 9) employs a bit-stream generator to produce a stream of binary digits R (traditionally
termed the key stream), which is then combined either with plaintext X to produce ciphertext or with
ciphertext Y to recover plaintext. In most stream ciphers, an EXCLUSIVE OR operation, or modulo 2
addition, is used to combine the respective bit streams. Thus encipherment and decipherment are defined by X
⊕ R = Y and Y ⊕ R = X, respectively.

Fig. 9 Stream cipher concept.
If the bit-stream generator were truly random, an unbreakable cipher could be obtained by EXCLUSIVE
ORing the plaintext and the cryptographic bit stream R. The cryptographic bit stream would be used directly as
the key, and would be equal in length to the message. But in that case the cryptographic bit stream must be
provided in advance to the communicants via some independent and secure channel. This introduces
insurmountable logistic problems for heavy data traffic. Hence, for practical reasons, the bit-stream generator
must be implemented as an algorithmic procedure. Then both communicants can generate the same
cryptographic bit stream—provided that their algorithms are identically initialized (Fig. 10).

Fig. 10 Stream cipher using an algorithmic bit stream generator, modulo 2 addition, and seed.
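The sketch below is one illustrative (not vetted) way to realize such an algorithmic bit-stream generator in Python: a hash function expands a secret key and a nonsecret seed into a keystream, which is combined with the data by modulo 2 addition. All names and values are ours.

```python
import hashlib

def keystream(key: bytes, seed: bytes, length: int) -> bytes:
    """Expand a secret key and a nonsecret seed into a keystream (illustrative only)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def stream_xor(data: bytes, key: bytes, seed: bytes) -> bytes:
    ks = keystream(key, seed, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))   # X ⊕ R = Y and Y ⊕ R = X

ct = stream_xor(b"attack at dawn", b"shared-key", b"IV-0001")
assert stream_xor(ct, b"shared-key", b"IV-0001") == b"attack at dawn"
```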
When modulo 2 addition is used as the combining operation, each bit in the output ciphertext (recovered
plaintext) is dependent only upon its corresponding bit in the input plaintext (ciphertext). This is in marked
contrast to the block cipher. Both approaches, however, have comparable strength.
For security purposes, a stream cipher must never predictably start from the same initial condition, thereby
producing the same cryptographic bit stream. This can be avoided by making the cryptographic bit stream
dependent on a nonsecret quantity Z (known as seed, initializing vector, or fill), which is used as an input
parameter to the ciphering algorithm (Fig. 9). See also: Information theory

Perhaps the most famous symmetric stream cipher is RC4, although it is no longer considered secure and has been deprecated in modern protocols.
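For historical interest, RC4 itself is short enough to sketch in full. The Python code below follows the well-known key-scheduling (KSA) and pseudo-random generation (PRGA) steps; it is shown for illustration only and should not be used in new designs.

```python
def rc4_keystream(key: bytes):
    """Generate the RC4 keystream from a key of 1 to 256 bytes."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4(data: bytes, key: bytes) -> bytes:
    ks = rc4_keystream(key)
    return bytes(b ^ next(ks) for b in data)   # same call encrypts and decrypts

ct = rc4(b"Plaintext", b"Key")
assert rc4(ct, b"Key") == b"Plaintext"
```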

RSA public key algorithm


The RSA algorithm (named for the algorithm's inventors, R. L. Rivest, A. Shamir, and L. Adleman) is a public
key encryption algorithm based on the fact that factoring large composite numbers into their prime factors
involves an overwhelming amount of computation.

To describe the RSA algorithm, the following quantities are defined:
• p and q, two large secret prime numbers
• N = p × q, the public modulus
• ϕ(N) = (p − 1) × (q − 1), which is kept secret
• e, the public (enciphering) key, and d, the private (deciphering) key
• m, a plaintext block with 0 ≤ m ≤ N − 1, and c, the corresponding ciphertext block

Based on an extension of Euler's theorem, Eq. (6),

m^(k × ϕ(N) + 1) ≡ m (mod N)   (6)

the algorithm's public and private keys (e and d) are chosen so that Eq. (7) or, equivalently, Eq. (8)

e × d = k × ϕ(N) + 1, for some integer k   (7)

e × d ≡ 1 (mod ϕ(N))   (8)

[which is used here in place of a more complicated equation for ease of understanding] is satisfied.
By selecting two secret prime numbers p and q, the user can calculate N = p × q, which is made
public, and ϕ(N) = (p − 1) × (q − 1), which remains secret and is used to solve Eq. (8). (Tests are
available to determine with a high level of confidence if a number is prime or not.) To obtain a unique
solution for the public key (e), a random number, or secret key (d), is selected that is relatively prime
to ϕ(N). (Integers a and b are relatively prime if their greatest common divisor is 1.) e is the
multiplicative inverse of d, modulo ϕ(N), and e can be calculated from d and ϕ(N) by using Euclid's
algorithm. Equation (6) can therefore be rewritten as Eq. (9),
m^(e × d) ≡ m (mod N)   (9)

which holds true for any plaintext (m). Encipherment and decipherment can now be interpreted as in
Eqs. (10) and (11).
c = Ee(m) = m^e (mod N)   (10)

m = Dd(c) = c^d (mod N)   (11)

Moreover, because multiplication is a commutative operation (e × d = d × e), encipherment followed
by decipherment is the same as decipherment followed by encipherment. Thus the RSA algorithm
can be used for both confidentiality and digital signatures (see below).
Finally, since m^e (mod N) ≡ (m + xN)^e (mod N) for any integer x, Ee(m) = Ee(m + xN). Thus the transformation
from plaintext to ciphertext, which would otherwise be many-to-one, is made one-to-one by restricting m to the set {0, 1, …, N−
1}.
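To tie Eqs. (6)–(11) together, here is a textbook-RSA round trip in Python with deliberately tiny primes; the values are illustrative only (real moduli are thousands of bits), and the modular inverse uses the three-argument pow available since Python 3.8.

```python
from math import gcd

# Toy parameters: p and q are the secret primes.
p, q = 61, 53
N = p * q                      # public modulus, here 3233
phi = (p - 1) * (q - 1)        # phi(N) = 3120, kept secret

d = 2753                       # private key, relatively prime to phi(N)
assert gcd(d, phi) == 1
e = pow(d, -1, phi)            # public key: multiplicative inverse of d mod phi(N)

m = 65                         # plaintext, 0 <= m <= N - 1
c = pow(m, e, N)               # encipherment: c = m^e mod N    (Eq. 10)
assert pow(c, d, N) == m       # decipherment: m = c^d mod N    (Eq. 11)
```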

Message authentication codes


Message authentication codes are based on symmetric key cryptography, and they assume that the message
sender and receiver share a common symmetric key.

The strength of a MAC algorithm depends on its output length, its key length, and the algorithm itself. The
strongest attack against a MAC algorithm is the key recovery attack, in which an attacker can recover the
symmetric key from some messages and their corresponding MACs under that key. A weaker attack is the
forgery attack: an attacker might not be able to find the symmetric key used, but can still forge a valid MAC on
a given new message, using help from some known/chosen messages and their corresponding MACs under a
certain key. A secure MAC algorithm should be secure against both types of attacks.

Two widely used MAC algorithms are the CBC-MAC, a MAC algorithm based on CBC mode of block
ciphers, and HMAC, a MAC algorithm based on cryptographic hash algorithms such as SHA-1. In CBC-
MAC, the symmetric key is used as a block cipher key to encrypt the given message in CBC mode; the last
ciphertext block is the MAC (the other ciphertext blocks are discarded).

In HMAC, a cryptographic hash algorithm h is used. Given a symmetric key k and a message m, its HMAC
code is computed as h((k ⊕ opad) ∥ h((k ⊕ ipad) ∥ m)), where opad and ipad are specified padding constants.
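Computing an HMAC requires no custom code in Python: the standard-library hmac module implements this construction. The sketch below uses SHA-256 rather than SHA-1; the key and message are arbitrary examples.

```python
import hashlib
import hmac

key = b"shared-secret-key"
message = b"transfer 100 to account 42"

# The sender computes the tag and transmits it alongside the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver, knowing the same key, recomputes and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)
```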

Digital signatures
Digital signatures authenticate messages by ensuring that the sender cannot later disavow messages, the
receiver cannot forge messages or signatures, and the receiver can prove to others that the contents of a
message are genuine and that the message originated with that particular sender. The digital signature is a
function of the message, a secret key or keys possessed by the sender, and sometimes data that are nonsecret or
that may become nonsecret as part of the procedure (such as a secret key that is later made public).

Digital signatures are more easily obtained with public key than with conventional algorithms. When a
message is enciphered with a private key (known only to the originator), anyone deciphering the message with
the public key can identify the originator. The originator cannot later deny having sent the message. Receivers
cannot forge messages and signatures, since they do not possess the private key.

Since enciphering and deciphering keys are identical in a conventional algorithm, digital signatures must be
obtained in some other manner. One method is to use a set of keys to produce the signature. Some of the keys
are made known to the receiver to permit signature verification, and the rest of the keys are retained by the
originator in order to prevent forgery.

The RSA algorithm previously discussed can be used for digital signatures. The National Institute of Standards
and Technology (NIST) proposed another method for generating digital signatures, based on a 1985 proposal
by T. El Gamal. Its security relies on the difficulty of computing logarithms over finite fields, or discrete
logarithms, that is, of solving the following problem: Given a prime p and integers g, y, find an
integer x satisfying g^x ≡ y (mod p). In this scheme, all users share a large prime number p and a generator g (that
is, an integer whose powers give all nonzero residues modulo p). Each user A has a secret key xA and a public
key yA ≡ g^(xA) (mod p). To each message m is associated an integer h(m), 0 < h(m) < p. User A “signs” the
message by producing a pair of integers (r, s) between 0 and p satisfying Eq. (12).

g^h(m) ≡ (yA)^r × r^s (mod p)   (12)

[The signature is computed by choosing a random integer k between 0 and p − 1, relatively prime
to p − 1, computing r ≡ g^k (mod p), and solving for s in Eq. (13):

h(m) ≡ xA × r + k × s (mod p − 1)   (13)

Different random choices of k will give different signatures.] This “signature” is verified by using modular
exponentiation of large integers. But a forger, without knowledge of the secret quantity xA, apparently
cannot produce such a signature without first solving the difficult discrete logarithm problem. See
also: Number theory
A signature in the ElGamal scheme consists of two integers, (r, s), each about as large as the prime p.
Since p must have at least 512 bits to make the discrete logarithm problem infeasible, the signature would then
require 1024 bits.
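A toy version of this signature scheme can be sketched in Python with deliberately small parameters (chosen here only for illustration); the signing and verification steps follow Eqs. (13) and (12), respectively, and the modular inverse uses the three-argument pow available since Python 3.8.

```python
import random
from math import gcd

# Toy public parameters (far too small for real use).
p = 467            # prime shared by all users
g = 2              # generator modulo p
x = 127            # user A's secret key xA
y = pow(g, x, p)   # public key yA = g^x mod p

def sign(h: int, p: int, g: int, x: int):
    """Produce (r, s) satisfying h ≡ x*r + k*s (mod p-1), as in Eq. (13)."""
    while True:
        k = random.randrange(2, p - 1)
        if gcd(k, p - 1) == 1:
            break
    r = pow(g, k, p)
    s = ((h - x * r) * pow(k, -1, p - 1)) % (p - 1)
    return r, s

def verify(h: int, r: int, s: int, p: int, g: int, y: int) -> bool:
    """Check g^h(m) ≡ y^r * r^s (mod p), as in Eq. (12)."""
    return pow(g, h, p) == (pow(y, r, p) * pow(r, s, p)) % p

h_m = 100                     # stands in for h(m), with 0 < h(m) < p
r, s = sign(h_m, p, g, x)
assert verify(h_m, r, s, p, g, y)
```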

A modification incorporated into the NIST proposal allows for shorter signatures. Here p is chosen such
that p − 1 has a moderate prime factor q ≃ 2^160, and g is chosen so that its powers mod p form a subgroup of
size q, so that s can be specified with fewer bits (160). The discrete logarithm problem is now restricted to that
subgroup, which may or may not make cryptanalysis easier.

A caution in the design of digital signature schemes is the birthday paradox: if signatures are n bits long, then
among a collection of (2^n)^(1/2) = 2^(n/2) messages there are likely to be two with the same signature. An opponent
who finds it feasible to generate 2^(n/2) messages and their signatures can use the two that match to generate a
signature. To thwart an opponent whose computing power might be 2^64, a digital signature of at least 128 bits is
needed. The ElGamal, NIST, and RSA schemes need larger signatures than that because of algebraic shortcuts.

Xunhua Wang
Don Coppersmith
Stephen M. Matyas
Carl H. Meyer

Bibliography
• N. Ferguson and B. Schneier, Practical Cryptography, Wiley, 2003
• W. Mao, Modern Cryptography: Theory and Practice, Prentice Hall PTR, 2003
• A. Menezes, P. van Oorschot, and S. Vanstone, Handbook of Applied Cryptography, CRC, 1996
• C. H. Meyer and S. M. Matyas, Cryptography: A New Dimension in Computer Data Security, Wiley, 1982
• G. J. Simmons (ed.), Contemporary Cryptology: The Science of Information Integrity, IEEE Press, 1992
• D. R. Stinson, Cryptography: Theory and Practice, CRC, Boca Raton, 1995

Additional Readings
• S. Arora and B. Barak, Computational Complexity: A Modern Approach, Cambridge University Press,
Cambridge, UK, 2009
• D. Boneh, The decision Diffie-Hellman problem, in Proceedings of the 3d Algorithmic Number Theory
Symposium, vol. 1423 of Lecture Notes in Computer Science, pp. 48–63, Springer, 1998
• D. Boneh, Twenty years of attacks on the RSA cryptosystem, Notices Amer. Math. Soc. (AMS), 46(2):203–
213, 1999
• Y. Desmedt, Cryptographic foundation, in M. J. Atallah (ed.), Algorithms and Theory of Computation
Handbook, Chap. 38, pp. 1–14, CRC Press LLC, 1999
• Y. Desmedt, Encryption schemes, in M. J. Atallah (ed.), Algorithms and Theory of Computation Handbook,
Chap. 39, pp. 1–28, CRC Press LLC, 1999
• S. D. Galbraith, Mathematics of Public Key Cryptography, Cambridge University Press, Cambridge, UK, 2012
• S. Hameed et al., Modified advanced encryption standard for text and images, Comput. Sci., 1.3, 2011
• G. Jacob and A. Murugan, DNA based cryptography: An overview and analysis, Int. J. Emerg. Sci., 3(1):36–
42, 2013
• A. Menezes, P. van Oorschot, and S. Vanstone, Handbook of Applied Cryptography, CRC, Boca Raton, 1996
