**Instructor:** Daniele Micciancio

In today's lecture we discuss the papers assigned for reading last time:

**[DH]** Diffie, Hellman, *New
directions in cryptography*, IEEE TIT 22(6):644-654 (1976)

**[RSA]** Rivest, Shamir, Adleman, *A method for obtaining
digital signatures and public-key cryptosystems*, CACM 21(2):120-126
(1978)

[DH,RSA] are among the most influential papers in the whole cryptographic literature, a must-read for anybody interested in cryptography. They are also an excellent read to get an idea of what cryptography looked like in the 70's. The main contribution of [DH] is to introduce the idea of public key cryptography, proposing a solution to the long-standing problem of key distribution and opening up the road for a wide use of cryptographic techniques in commercial applications. Before this paper, cryptography was all based on symmetric primitives, where two parties (say, Alice & Bob) achieve secure communication (or some other security property) by means of a shared secret key k known to both of them. In a large network, in order to achieve security, symmetric cryptography requires the establishment of a very large number of keys (on the order of n^2, one per pair of users, where n is the number of users in the network). For this reason, the use of cryptography before [DH] was essentially restricted to organizations (e.g., military or government) with a hierarchical structure which could be used to distribute keys and limit their required number.

[DH] proposes a notion of public key encryption that addresses the key distribution problem and also realizes a form of digital signatures. The notion of public key encryption introduced in [DH] does not satisfy the indistinguishability security property (IND-CPA) defined in the last lecture, or the modern notion of security for digital signatures (unforgeability under chosen message attack). Rather, it corresponds to what (in modern cryptographic terminology) is called a "trapdoor function family", defined below.

**Definition:** *A trapdoor function family is defined by a triple of PPT algorithms (G,F,T), where*

- *G is the key generation algorithm. On input a security parameter n, it outputs a pair of encryption/decryption keys G(n) = (e,d).*
- *F is the function evaluation algorithm. On input a key e and input x, it outputs a value F(e,x). We think of e as defining a function F(e,\*).*
- *T is the trapdoor algorithm. Given the decryption key d and F(e,x), it recovers the input x = T(d,F(e,x)), for any (e,d) in the range of G and input x in the domain of F(e,\*).*

*The security property is that for any polynomial time adversary A, the probability that A(e,F(e,x)) = x, when e and x are chosen at random, is negligible in the security parameter.*

Typically, F(e,*) is a permutation over a set X(e), so that we also have F(e,T(d,x)) = x for all x in X(e). [DH] suggests two applications of trapdoor permutation families:

- "Public key encryption": Each party (Alice) runs G(n) = (e,d), keeps d secret, and publishes e. Any other party (Bob) can send a message m to Alice using the published key e, encoding m as v = F(e,m). Alice recovers m from v by applying T(d,v) = m. Since F(e,*) is hard to invert without knowing d, the message m is kept secret from anybody other than Alice.
- "Digital signatures": Each party (Alice) runs G(n) = (e,d), keeps d secret, and publishes e. Alice can subsequently "sign" a message m by computing s = T(d,m). Any other user can check that s is a signed copy of m by computing F(e,s) = m. Since F(e,*) is hard to invert without knowing d, this gives evidence that the value s was produced by Alice.

We remark that neither the cryptosystem nor the digital signature scheme obtained in this way satisfies the strong security notions now standard in modern cryptography. Still, trapdoor function families can be used as the basis for the construction of cryptosystems and digital signature schemes satisfying strong notions of security.

The problem of finding a good candidate trapdoor permutation family is left open in [DH]. Instead, [DH] proposes an implementation of a different solution to the key distribution problem: a key agreement protocol. This is a protocol (exchange of messages) that allows two parties (Alice and Bob) to produce a secret that is known to both of them, but to no one else. The Diffie-Hellman key exchange protocol works as follows:

- Let p be a large prime number, and g a generator of the multiplicative group of invertible integers modulo p.
- Alice picks a random x (in the range 1..p-1) and sends g^x mod p to Bob.
- Bob does the same, choosing y and sending g^y mod p to Alice.
- Both parties can compute s = g^(x*y) mod p = (g^x mod p)^y mod p = (g^y mod p)^x mod p.

The secret can be subsequently used to communicate securely using symmetric cryptography. The security of the key exchange protocol is based on the difficulty of the Diffie-Hellman problem: given g^x mod p and g^y mod p, compute g^(x*y) mod p.
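The exchange can be sketched as follows. (Toy parameters: the Mersenne prime 2^127 - 1 and base g = 3 are assumptions for illustration; real deployments use much larger, carefully chosen groups.)

```python
# Minimal sketch of the Diffie-Hellman key exchange (toy parameters).
import secrets

p = 2 ** 127 - 1                   # a Mersenne prime, toy modulus
g = 3                              # assumed base, for illustration

x = secrets.randbelow(p - 1) + 1   # Alice's secret exponent
y = secrets.randbelow(p - 1) + 1   # Bob's secret exponent

A = pow(g, x, p)                   # Alice sends g^x mod p
B = pow(g, y, p)                   # Bob sends g^y mod p

# Each party combines the other's public value with its own secret:
s_alice = pow(B, x, p)             # (g^y)^x mod p
s_bob   = pow(A, y, p)             # (g^x)^y mod p
assert s_alice == s_bob == pow(g, x * y, p)
```

An eavesdropper sees only A and B; recovering s from them is exactly the Diffie-Hellman problem stated above.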

This solves the problem of key establishment and secret communication, though it does not give a trapdoor permutation family. The first implementation of a trapdoor permutation family is given in [RSA]. The RSA function is still among the most widely used cryptographic functions today and has had an enormous impact in practice. Rivest, Shamir and Adleman received the Turing Award for their work [RSA]. We will use the RSA function in some of the examples in this course.

We observed that, using a simple variant of the Diffie-Hellman protocol, it is possible to build a public key encryption scheme satisfying all the desired properties, and in fact achieving even stronger security than that sought in [DH]. This is the ElGamal cryptosystem:

- The key generation algorithm picks a random d and outputs the public key e = G(d) = g^d mod p
- Encryption is randomized: E(e,m;r) = (g^r mod p, e^r * m mod p), where r is chosen uniformly at random
- Ciphertexts are decrypted by D(d,(h,c)) = c / h^d mod p

It is easy to see that for any r, D(d,E(e,m;r)) = m (assuming (e,d) is a valid pair produced by G, and m is in the range {1,...,p-1}). Moreover, the scheme is provably (IND-CPA) secure under the Decisional Diffie-Hellman (DDH) assumption.
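The scheme can be sketched directly from the three equations above. (Toy parameters again: the prime 2^127 - 1 and base g = 3 are illustrative assumptions, and messages are encoded as nonzero residues mod p.)

```python
# Toy ElGamal sketch over Z_p^* (illustrative parameters, not secure sizes).
import secrets

p = 2 ** 127 - 1   # toy prime modulus
g = 3              # assumed base, for illustration

def keygen():
    d = secrets.randbelow(p - 2) + 1
    return pow(g, d, p), d                   # public e = g^d mod p, secret d

def encrypt(e, m):
    r = secrets.randbelow(p - 2) + 1         # fresh randomness per encryption
    return pow(g, r, p), (pow(e, r, p) * m) % p

def decrypt(d, ct):
    h, c = ct
    # c / h^d mod p, computed via the modular inverse of h^d
    return (c * pow(pow(h, d, p), -1, p)) % p

e, d = keygen()
m = 424242
assert decrypt(d, encrypt(e, m)) == m
# Encryption is randomized: re-encrypting the same m gives a fresh ciphertext
# (the two ciphertexts coincide only with negligible probability).
assert encrypt(e, m) != encrypt(e, m)
```

The last assertion highlights the point that matters for IND-CPA security: the ciphertext depends on fresh randomness r, not only on m.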

(Note: both the DH and DDH problems are named after [DH], and are widely used as cryptographic complexity assumptions to prove the security of other protocols.)

**Question:** How could Diffie and Hellman not see this, and
leave it to RSA to come up with the first construction of public key
encryption?

I do not know; it's a good question to ask, but here are some possible answers:

- [DH] was looking for a trapdoor permutation family, and didn't consider randomized encryption procedures. Notice that if ElGamal is made deterministic by fixing the value of r, then the scheme is not secure anymore, i.e., one can easily recover the message m from (h,c) by computing c / e^r mod p.
- [DH] was mostly concerned with digital signatures, and their proposal for digital signatures requires trapdoor permutations. Notice that the ElGamal encryption function is not invertible, even knowing the trapdoor, i.e., one can recover m but not the entire input (m,r). Getting trapdoor permutation families based on the hardness of the DH problem is an open problem in cryptography.
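The first point is worth making concrete: if the randomness r is fixed and public, anyone can decrypt using only public information. A sketch, with the same illustrative toy parameters as before (prime 2^127 - 1, assumed base g = 3, made-up key and message values):

```python
# Why deterministic ElGamal fails: with a fixed, known r, the "one-time pad"
# factor e^r mod p is public, so no trapdoor is needed to recover m.
p = 2 ** 127 - 1
g = 3              # assumed base, for illustration
d = 123456         # Alice's secret key (illustrative value)
e = pow(g, d, p)   # Alice's public key

r = 42             # the (bad) fixed, publicly known randomness
m = 987654321
h, c = pow(g, r, p), (pow(e, r, p) * m) % p      # deterministic "encryption"

# The attacker knows only e, r, and the ciphertext (h, c):
recovered = (c * pow(pow(e, r, p), -1, p)) % p   # c / e^r mod p
assert recovered == m
```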

Definitions in [DH] are rather informal, but still one can recognize in the paper many of the most important ideas that contributed to the development of modern cryptography. In [DH] we find the following:

- Discussion of various cryptographic problems and goals. There is a clear distinction between secrecy and authenticity, as well as between various forms of attack: known ciphertext attack, known plaintext attack, chosen plaintext attack. Talking about security in general makes no sense. In order to meaningfully talk about security, one needs to identify a security goal and an attacker model. An application is secure if it achieves a certain security goal under a specified attack model.
- The notion that security should rely on the secrecy of the key and not the encryption algorithm (dating back to the 1880's)
- An interesting historical perspective on the role of provable security and cryptanalysis in validating cryptographic functions. In the 16th and 17th century "proofs" of security (involving counting the number of keys) were common, but proved of little value as "provably secure" systems were repeatedly broken. In the 19th century, it became common practice that systems be evaluated by cryptanalysis. Developments in information theory and complexity theory in the 20th century started providing a solid mathematical basis for security proofs, which are now standard in cryptography.
- Discussion of the relevance of complexity theory to the study of cryptography: impossibility of computational encryption if P=NP. Need for hard-on-average problems in cryptography, as opposed to the worst-case hardness established by NP-completeness: when you choose your key at random, it is not enough to know that some other key is hard to break. You want to be confident that your random key is hard to break with high probability.
- The notion of establishing relations between cryptographic primitives
via reductions: one can use primitive A to implement primitive B. One
specific example considered is the construction of one-way functions from
a cryptosystem or block cipher. Consider the following constructions, where
E(key,msg) represents the application of a block cipher (typically
modeled as a pseudorandom function):
  1. Fix a message m, and define f(x) = E(x,m) [proposed in this paper, and claimed provably secure under some informally given assumption]
  2. f(x) = E(x,x) [proposed by Evans et al. and reported in this paper as a reasonable, but not provably secure, function]

The above constructions are examined in the homework assignment. The goal of the homework is to demonstrate the importance of provable security. While [DH] only says that the security proof for (2) breaks down and still considers it a reasonable proposal, it turns out that it is provably insecure: there are good block ciphers such that construction (2) is not one way and can be broken in a very strong sense. On the other hand, construction (1) can be proved secure under standard cryptographic assumptions. Both issues are explored in the homework. The insecurity of construction (2) also pinpoints the danger of encryption cycles, e.g., encrypting a key with itself. This is a general limitation of modern computational cryptographic primitives and techniques that has often been overlooked in symbolic treatments of cryptography. We will get back to the problem of encryption cycles later in the course.
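To fix notation for the homework, the two candidate constructions can be written down as follows. (Here HMAC-SHA256 is only a convenient stand-in for the keyed function E(key,msg); it is not one of the ciphers considered in [DH], and this sketch shows the constructions, not the attack.)

```python
# The two candidate one-way functions from [DH], with a toy stand-in for
# the block cipher E(key, msg).
import hmac
import hashlib

def E(key: bytes, msg: bytes) -> bytes:
    # Illustrative keyed function standing in for a block cipher / PRF.
    return hmac.new(key, msg, hashlib.sha256).digest()

M0 = b"\x00" * 32   # arbitrary fixed public message for construction (1)

def f1(x: bytes) -> bytes:
    # Construction (1): f(x) = E(x, m) for a fixed public m.
    # Provable under standard PRF-style assumptions on E.
    return E(x, M0)

def f2(x: bytes) -> bytes:
    # Construction (2): f(x) = E(x, x). The key is also the input -- an
    # "encryption cycle" -- and PRF security says nothing about this case.
    return E(x, x)
```

Note how the only difference between the two is whether the secret x appears once (as the key) or twice (as both key and message); that second use is exactly what takes (2) outside the scope of the standard security definition.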