# Lecture 4: Oblivious Transfer for semi-honest parties

### Oblivious Transfer: definition

Last time we studied ZK proof systems, and defined the ZK property with respect to "honest verifiers" (HVZK), and "cheating verifiers" (general ZK). The definition of zero knowledge is based on a general simulation paradigm which is the basis of the definition of general secure computation. In fact, zero knowledge can be considered a special case of secure function evaluation, where the function to be evaluated is the defining relation of an NP language L. Today we start studying another important example of a two-party cryptographic protocol: oblivious transfer. As for zero knowledge, we will first give definitions of security for semi-honest parties, and then define security with respect to possibly cheating parties. Then we present a simple protocol meeting the definition of secure OT for semi-honest parties. Finally, we show how ZK proofs can be used as a general tool to transform protocols which are secure when the parties are semi-honest into protocols that are secure with respect to arbitrary adversarial behaviour.

Oblivious transfer is a protocol where one party S (the sender) transmits part of its input to a receiver (R), in such a way that both parties are protected: the sender does not learn which part of the input was transmitted, and the receiver does not gain any additional knowledge about the sender's input, besides the transmitted information. Several variants of oblivious transfer have been considered, and proved equivalent in a strong information theoretic sense. In this lecture we concentrate on a specific variant: 1-out-of-2 OT.

• Sender input: two values v_0, v_1
• Receiver input: a single bit c
• Sender output: nothing
• Receiver output: the value v_c

The output of the function specifies that the receiver learns one of the two inputs of the sender. The receiver can choose which input to learn. However, it shouldn't learn anything about the other value. Similarly, the sender should get no information about which of the two inputs was chosen by the receiver. The formal definition follows. A (1-out-of-2) OT protocol is a two party protocol (S,R) such that Output(S(v_0,v_1),R(c)) = (?,v_c). The protocol is secure if the following properties hold:

• The ensembles {View_S(S(v_0,v_1),R(0))} and {View_S(S(v_0,v_1),R(1))} are indistinguishable (ensembles indexed by v_0 and v_1).
• There is a simulator Sim such that the ensembles {Sim(c,v_c)} and {View_R(S(v_0,v_1),R(c))} are indistinguishable (ensembles indexed by v_0, v_1 and c).

As usual, the notion of indistinguishability used can be equality, statistical closeness, or computational indistinguishability. As with commitment protocols, at least one of the two properties can hold only in a computational sense. The above definition refers to the case where both the sender and the receiver follow the protocol. More generally, we require:

• For any (possibly cheating) PPT sender S', the ensembles {View_S'[S',R(0)]} and {View_S'[S',R(1)]} are indistinguishable.
• For any (possibly cheating) PPT receiver R', there is an "ideal" adversary that simulates the interaction between R' and S. Here an ideal adversary is a probabilistic machine that, given randomness r, outputs a query bit q = Q(r). Then, on input v_q, it outputs the simulated view for R'. We require that {Sim(r,v_{Q(r)})} is computationally indistinguishable from {View_R'[S(v_0,v_1),R']}.

### OT Protocol for semi-honest parties

Let f be a (family of) trapdoor one-way permutation(s), and B a hard core predicate for f. Let v_0,v_1 be the sender's input bits, and c the receiver's input.

• The sender chooses a one-way permutation f, together with the corresponding trapdoor t, and sends f to R
• The receiver, on input c, chooses a random x_c and y_{not c}, computes y_c = f(x_c), and sends y_0,y_1 to the sender
• The sender uses the trapdoor t to compute x_0 and x_1 such that f(x_i) = y_i, and sends u_0 = v_0 + B(x_0) mod 2 and u_1 = v_1 + B(x_1) mod 2 to R
• The receiver outputs u_c + B(x_c) mod 2
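The four steps above can be sketched as a toy program. This is a minimal illustration, not a secure implementation: it instantiates the trapdoor permutation with a textbook RSA instance on tiny parameters, and uses the least significant bit as the hard-core predicate B. All names (`keygen`, `f_inv`, `ot`, ...) are ours, chosen for the sketch.

```python
import random

def keygen():
    # Tiny textbook RSA instance (n = 61 * 53): e*d = 1 (mod lcm(60, 52)),
    # so x -> x^e mod n is a permutation on Z_n with trapdoor d.
    n, e, d = 3233, 17, 413
    return (n, e), d

def f(pub, x):
    n, e = pub
    return pow(x, e, n)          # the trapdoor permutation

def f_inv(pub, d, y):
    n, _ = pub
    return pow(y, d, n)          # inversion using the trapdoor

def B(x):
    return x & 1                 # hard-core predicate stand-in: lsb

def ot(v0, v1, c):
    # Sender: choose the permutation f together with its trapdoor.
    pub, d = keygen()
    n = pub[0]
    # Receiver: y_c = f(x_c) for a random x_c; y_{1-c} sampled directly.
    x_c = random.randrange(n)
    y = [0, 0]
    y[c] = f(pub, x_c)
    y[1 - c] = random.randrange(n)
    # Sender: invert both values and mask each input bit with a hard-core bit.
    u0 = (v0 + B(f_inv(pub, d, y[0]))) % 2
    u1 = (v1 + B(f_inv(pub, d, y[1]))) % 2
    # Receiver: unmask the chosen bit using the known preimage x_c.
    return ((u0, u1)[c] + B(x_c)) % 2
```

Correctness follows because f_inv(f(x_c)) = x_c, so the hard-core bit the sender adds to v_c is exactly the one the receiver subtracts.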

### Security

Notice that the sender's view of the interaction is independent of c because for any permutation f, both y_0 and y_1 are uniformly distributed. This guarantees security for the receiver, even if the sender tries to cheat, provided the trapdoor permutation family is "certified", i.e., there is no way for the sender to choose a key such that f is not a permutation.
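The claim that y_0 and y_1 are identically distributed rests on the fact that a permutation maps the uniform distribution to itself. A quick sanity check on a toy RSA permutation of our own choosing, enumerating the whole (tiny) domain:

```python
# For a permutation f on Z_n, the image {f(x) : x in Z_n} is all of Z_n,
# so f maps the uniform distribution on Z_n to itself. Hence y_c = f(x_c)
# for uniform x_c is distributed exactly like the directly sampled y_{not c},
# and the sender's view carries no information about c.
n, e = 3233, 17                  # toy RSA modulus 61*53 and public exponent
image = sorted(pow(x, e, n) for x in range(n))
assert image == list(range(n))   # f is a bijection on Z_n
```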

We prove security for the sender when the receiver is semi-honest, i.e., it follows the protocol. We need to define a simulator Sim(c,v_c) whose output is indistinguishable from View_R[S(v_0,v_1),R]. The simulator proceeds as follows.

• Chooses f at random
• Chooses x_c and y_{not c} at random
• Computes y_c = f(x_c)
• Sets u_c = B(x_c) + v_c (mod 2), and chooses u_{not c} at random
• Outputs ((x_c,y_{not c}), f, u_0, u_1 ), where the first pair is the randomness of R, and the other elements are the received messages.
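The simulator can be sketched directly from these steps. As before, this is an illustrative toy that instantiates f with a tiny RSA permutation and B with the least significant bit; note the simulator never needs the trapdoor.

```python
import random

def B(x):
    return x & 1                     # hard-core predicate stand-in: lsb

def sim(c, v_c):
    # Simulated view of the semi-honest receiver, built from c and v_c only.
    n, e = 3233, 17                  # toy RSA permutation; no trapdoor needed
    x_c = random.randrange(n)        # receiver's randomness for the chosen slot
    y_other = random.randrange(n)    # randomness playing the role of y_{not c}
    u = [0, 0]
    u[c] = (B(x_c) + v_c) % 2        # u_c decodes to v_c, as in a real run
    u[1 - c] = random.randrange(2)   # a uniform bit: the only deviation from the
                                     # real view's B(f^{-1}(y_{not c})) + v_{not c}
    return ((x_c, y_other), (n, e), u[0], u[1])
```

A simulated view always decodes to v_c, exactly like a real one; only the unchosen slot u_{not c} is sampled differently.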

Notice that the output of Sim is identically distributed to View_R, except for the value of u_{not c}, which is random in the output of Sim, and equals B(f^{-1}(y_{not c})) + v_{not c} in the real interaction View_R.

We want to show that if one can distinguish the two distributions above, then one can predict the hard-core predicate B. Let D be a distinguisher between R's view of the interaction and the view simulated by Sim. Formally, there are v_0, v_1, c and a polynomial p(k) such that for infinitely many values of the security parameter k,
Pr{D(Sim(c,v_c)) = 1} - Pr{D(View_R(S(v_0,v_1),R(c))) = 1} > 1/p(k).

We build a predictor P that on input f and y = f(x) tries to guess B(x). The predictor works as follows.

1. Choose x_c and b at random
2. Run D on input ((x_c,y), f, u_0, u_1) where u_c = B(x_c) + v_c mod 2, u_{not c} = b + v_{not c} mod 2, and y plays the role of y_{not c}.
3. If D outputs 1, output {not b}, otherwise output b.
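These three steps can be sketched as follows, again over a toy RSA permutation with lsb as the hard-core predicate. To exercise the predictor we plug in a "cheating" distinguisher that inverts f using the trapdoor (something no real distinguisher could do) and outputs 1 exactly when the guess b is wrong; with such a D the predictor recovers B(x) every time. All names and parameters are illustrative.

```python
import random

N, E, T = 3233, 17, 413              # toy RSA permutation (61*53) with trapdoor T

def B(x):
    return x & 1                     # hard-core predicate stand-in: lsb

def predictor(y, dist, v, c):
    # On input y = f(x), try to guess B(x) using the distinguisher `dist`.
    x_c = random.randrange(N)        # step 1: random x_c and a guess bit b
    b = random.randrange(2)
    u = [0, 0]                       # step 2: assemble a view and run dist on it
    u[c] = (B(x_c) + v[c]) % 2
    u[1 - c] = (b + v[1 - c]) % 2    # the challenge y plays the role of y_{not c}
    view = ((x_c, y), (N, E), u[0], u[1])
    return (b + dist(view)) % 2      # step 3: flip the guess when dist outputs 1

def make_cheating_dist(v, c):
    # Testing aid only: a "perfect" distinguisher that uses the trapdoor to
    # output 1 exactly when u_{not c} differs from the real-interaction value
    # B(f^{-1}(y_{not c})) + v_{not c}, i.e., exactly when the guess b is wrong.
    def dist(view):
        (x_c, y), (n, e), u0, u1 = view
        x = pow(y, T, n)             # cheat: invert f using the trapdoor
        return 0 if (B(x) + v[1 - c]) % 2 == (u0, u1)[1 - c] else 1
    return dist
```

With the cheating distinguisher, b + D(view) mod 2 equals B(x) with certainty: the guess is kept when it is right and flipped when it is wrong.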

We want to compute the probability that the output of P equals B(x). The output of P is correct if either b = B(x) and D outputs 1, or b != B(x) and D outputs 0. Notice that b = B(x) with probability exactly 1/2. Let I be the conditional distribution of the input to D, given b = B(x), and J be the conditional distribution, given b != B(x). Notice that distribution I is identical to the view of a real interaction. Algorithm P can be described as follows: Sample X according to distribution (1/2)I + (1/2)J, and output b + D(X) mod 2. So, the condition on D gives

Pr{ D((1/2)I + (1/2)J) = 1 } - Pr{D(I) = 1} > 1/p(k)

which can be simplified to

(1/2)Pr{ D(J) = 1 } - (1/2)Pr{ D(I) = 1} > 1/p(k).

So, D is more likely to output 1 when the guess b is wrong. The probability that P outputs B(x) is

Pr{P(y) = B(x)} = (1/2) Pr{ D(I) = 0 } + (1/2) Pr{ D(J) = 1 } = (1/2) + (1/2)Pr{ D(J) = 1 } - (1/2)Pr{ D(I) = 1 } > (1/2) + 1/p(k).

This proves that P guesses the hard-core predicate B with non-negligible advantage.

### References

Most of the material presented in this lecture can be extracted from the survey/tutorial

O. Goldreich, Secure Multi-Party Computation.

Specifically, our definitions of secure oblivious transfer can be obtained as special cases of secure two party computation as given in Section 2.1. It is a good exercise to see how the definitions used here follow from the general definition of secure computation. The OT protocol secure against semi-honest parties is presented in Section 2.2.2.