In the last lectures we studied the oblivious transfer problem, and how to solve it using trapdoor permutations and zero knowledge proofs of knowledge. Oblivious transfer is an example of a secure function evaluation protocol. Today we start studying secure evaluation of general functions. Let f(x_1,...,x_n)=(y_1,...,y_n) be a (possibly randomized) function mapping n inputs to n outputs. We want to design protocols for n parties P1,...,Pn (where each Pi holds one input x_i) to evaluate the function in such a way that each Pi learns y_i, but nothing else. The protocol should also ensure that no party can influence the output of the function other than by changing his own input. To sum up, we want the protocol to satisfy privacy and correctness requirements. Formulating a good definition for multiparty protocols is not a trivial task, and it is easy to get bad definitions. In order to illustrate what makes a definition good, we will first consider some bad definitions. But first, we introduce some notation about the execution of a protocol.

Let P = (p1,...,pn) be a set of interactive programs to be run by the parties. Each program takes as input a value x_i and some randomness r_i, and can send and receive messages with the other parties. We assume a secure point-to-point communication network, i.e., the channels are private and authenticated. Secure channels can be built using standard cryptographic tools, like encryption and digital signatures.

Let I be a set of corrupted parties, and \I its complement. For simplicity we consider only static adversaries, so that you can think of I as fixed in advance. However, the protocol should be secure for every I, i.e., P should not depend on the set I of corrupted parties, and honest parties do not know who the bad guys are. Dishonest parties will try to gain something using all available means. In particular, they can communicate with each other through the secure channels, and coordinate their adversarial strategy. In order to make this more explicit, we assume there is an external entity, the "adversary", masterminding the attack and telling the dishonest parties how to act. We denote this adversary A. The dishonest parties P[I] will simply relay messages back and forth to A. The adversary also gets some auxiliary input x_0 and randomness r_0. Let X = (x_0,x_1,...,x_n) be the inputs to the parties and R = (r_0,r_1,...,r_n) the randomness. The execution of protocol P in the presence of adversary A proceeds as follows.

- A learns the inputs X[I] and randomness R[I] of the bad parties. We assume index 0 always belongs to set I, so that X[I] and R[I] include the auxiliary input and randomness of the adversary.
- Honest parties P[\I] execute their programs p[\I] using inputs X[\I] and randomness R[\I], exchanging messages among themselves and with the adversary, who plays the role of the bad players P[I].
- Honest parties do not know who's bad and who's not. They execute their instructions as if everybody were honest, and at the end of the execution they output their values y[\I].
- The output of the bad parties is decided by the adversary, based on all information collected during the execution. Without loss of generality, one can assume the bad parties do not output anything, and A outputs its entire view.

The output of the protocol is denoted EXEC_{P,A,I}(X,R). We write EXEC_{P,A,I}(X) for the probability distribution induced by a random R, and EXEC_{P,A,I}(X)[S] for the part of the output corresponding to a subset of players S.

If no adversary is present (i.e., I = {0}), then for any possible input X it should be EXEC_{P,A,{0}}(X) = f(X), i.e., the output produced by the protocol has the same distribution as the one specified by the function f.

In the presence of an adversary, we want to limit what can be achieved by A to the bare minimum. There are three things that parties P[I] cannot be prevented from doing:

- Learning their own inputs X[I]
- Changing their inputs to different values X'[I] = A(X[I]), thereby affecting the output of honest parties f(X'[I],X[\I])[\I]
- Learning their part of the output of f(X'[I],X[\I])[I]

Informally, we want to enforce that malicious parties cannot do anything more than this. In particular, they should not learn the inputs of the honest parties beyond what is implied by the value of the function f, and they cannot influence the output of the function other than by changing their own inputs.
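The three capabilities above amount to an "ideal" evaluation in which corrupted parties can only substitute their own inputs before f is applied. A minimal sketch of this ideal model (all function names here are ours, not standard notation):

```python
def ideal_evaluation(f, x, I, substitute):
    """Evaluate f when corrupted parties (indices in I) may only
    replace their own inputs before f is applied."""
    x_prime = dict(x)
    for i in I:
        # capability 2: a bad party changes its input, based only on X[I]
        x_prime[i] = substitute(i, {j: x[j] for j in I})
    y = f(x_prime)                            # f applied to substituted inputs
    leaked = {i: y[i] for i in I}             # capability 3: bad parties see Y[I]
    honest_out = {j: y[j] for j in y if j not in I}
    return honest_out, leaked

# Example: XOR of two bits, party 2 corrupted and flipping its input.
f_xor = lambda x: {1: x[1] ^ x[2], 2: x[1] ^ x[2]}
honest, leaked = ideal_evaluation(f_xor, {1: 0, 2: 1}, {2},
                                  lambda i, xs: 1 - xs[i])
# honest == {1: 0}; leaked == {2: 0}
```

Note that the substitution function sees only the corrupted inputs X[I], never the honest ones; this is exactly the point the flawed protocols below will violate.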

It is natural to formulate a privacy and correctness requirement as follows:

- Privacy: for every adversary A there exists a simulator S such that for all inputs X, EXEC_{P,A,I}(X)[I] = S(X[I], f(S(X[I]), X[\I])[I]).

Informally, whatever the bad parties learn (and output) can be produced by a simulator that gets only the legitimate information.

- Correctness: for every adversary A there exists a simulator S such that for all inputs X, EXEC_{P,A,I}(X)[\I] = f(S(X[I]), X[\I])[\I].

Informally, the influence of the bad parties to the good parties output is limited to contributing different inputs to function f.

It turns out that this definition is not strong enough, and the problem is that privacy and correctness cannot be separated.

Consider the following two-party protocol. P[1] and P[2] want to compute the exclusive or of their input bits f(X) = X[1] + X[2]. To this end, P[1] sends X[1] to P[2], and then P[2] sends X[2] to P[1]. They both output the exclusive or of the two bits. Intuitively, the protocol is private because if you know your own input and the function output, then you also know the other party's input. At the same time, by changing your input you can force the output to be any bit.
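With honest parties the protocol is just two message exchanges; a sketch (the function name is ours):

```python
def xor_protocol(x1, x2):
    m1 = x1                    # P1 sends its input bit to P2
    m2 = x2                    # P2 replies with its own input bit
    return m1 ^ m2, m1 ^ m2    # both parties output the exclusive or

# Honest runs compute exactly f(X) = X[1] + X[2] (mod 2).
assert xor_protocol(0, 1) == (1, 1)
assert xor_protocol(1, 1) == (0, 0)
```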

**EXERCISE (1):** Does the protocol satisfy the definition of
security?

The protocol is intuitively insecure. P[2] can influence the output and make P[1] always output 1: after receiving X[1], it sends X[2] = 1 + X[1]. This is not possible if the players are interacting with a trusted party for the evaluation of the function. The problem with the above protocol is that it should not be possible to affect the output of the function after learning information about the other parties' inputs. Even if P[2] does not learn anything about X[1] that is not implied by X[2] and f(X), and the influence of P[2] on the output of P[1] can always be described as P[2] changing his own input, the previous definition allows P[2] to influence the output of P[1] in a way that depends on X[1].
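The attack fits in a few lines: because P[1] speaks first, a cheating P[2] can pick its "input" as a function of X[1], something no input substitution in the ideal model can reproduce (function name is ours):

```python
def xor_protocol_cheating_p2(x1):
    m1 = x1         # P1 sends first, exactly as the protocol prescribes
    m2 = 1 ^ m1     # P2 chooses its "input" only AFTER seeing m1
    return m1 ^ m2  # P1's output: x1 ^ (1 ^ x1) == 1, always

assert xor_protocol_cheating_p2(0) == 1
assert xor_protocol_cheating_p2(1) == 1
```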

Whether or not the protocol satisfies the definition, it seems prudent to make the definition stronger, asking that for every adversary A there exists a simulator S such that both the privacy and correctness properties are met. The simulator should be the same for both properties.

- Privacy + Correctness: for every adversary A there exists a simulator S such that for all inputs X, EXEC_{P,A,I}(X)[I] = S(X[I], f(S(X[I]), X[\I])[I]) and EXEC_{P,A,I}(X)[\I] = f(S(X[I]), X[\I])[\I].

Still, even in this stronger version, this is not a good definition. Let's consider another protocol:

P[1] tosses a random coin b, sends b to P[2], and outputs b. P[2] ignores b and outputs nothing.

This protocol computes the randomized function f(X) = [random bit, nothing].

The protocol is intuitively insecure because party P[2] learns the random output of P[1], which he shouldn't learn. Notice that this protocol is insecure even if all parties are semi-honest!
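The coin protocol also fits in a few lines; the message P[2] receives is identical to P[1]'s output, so P[2]'s view and the honest output are perfectly correlated (a sketch; the function name is ours):

```python
import random

def coin_protocol(rng=random):
    b = rng.randrange(2)   # P1 tosses a random coin b
    view_p2 = b            # P1 sends b to P2; this is P2's entire view
    output_p1 = b          # P1 outputs b; P2 outputs nothing
    return output_p1, view_p2

out1, view2 = coin_protocol()
assert out1 == view2       # P2 always learns P1's output exactly
```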

**EXERCISE (2):** Prove that the last protocol satisfies the
definition of security, even in the strong version requiring a single
simulator for both properties.

The problem with the last protocol is that although the two distributions EXEC_{P,A,I}(X)[I] and EXEC_{P,A,I}(X)[\I] can be individually simulated, even using the same simulator, the joint simulated distribution is not the right one, i.e., [S(X[I], f(S(X[I]), X[\I])[I]), f(S(X[I]), X[\I])[\I]] is not equal to EXEC_{P,A,I}(X). In order to get a good definition of secure function evaluation, we need to completely merge the issues of privacy and correctness and simply ask that whatever can be done by A can be simulated.
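For the coin protocol with P[2] corrupted, the failure can be tabulated directly: in a real execution P[2]'s view always matches P[1]'s output, while any simulator, given only X[2] and P[2]'s empty output, must produce a bit independent of f's coin. A sampling sketch (the uniform simulator below is just one possible choice):

```python
import itertools
import random
from collections import Counter

rng = random.Random(0)
N = 10_000

# Real execution: P2's view is the very coin that P1 outputs.
real = Counter()
for _ in range(N):
    b = rng.randrange(2)
    real[(b, b)] += 1          # (P2's view, honest output) pair

# Simulation: S sees only X[2] and P2's (empty) output, so its bit is
# necessarily independent of the honest party's coin.
sim = Counter()
for _ in range(N):
    s = rng.randrange(2)       # simulator's guess for P2's view
    b = rng.randrange(2)       # f's random bit, i.e., P1's output
    sim[(s, b)] += 1

# Real mass sits only on the matched pairs; sim spreads over all four.
assert set(real) == {(0, 0), (1, 1)}
assert set(sim) == set(itertools.product((0, 1), (0, 1)))
```

Each marginal is a uniform bit in both experiments, which is exactly why the separated privacy and correctness conditions are fooled; only the joint distribution exposes the correlation.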

**Definition:** A protocol P securely computes function f if
for every adversary A there is a simulator S such that for all inputs X

(S(X[I], f(S(X[I]), X[\I])[I]), f(S(X[I]), X[\I])[\I]) = (EXEC_{P,A,I}(X)[I],
EXEC_{P,A,I}(X)[\I]).

It turns out (modulo some issues regarding the ability of bad players to possibly stop the protocol early) that this is a good definition of security. We will get some confidence that we have finally formulated a good definition next time, when we prove that this definition is compositional, i.e., combining protocols that satisfy the definition results in protocols that are also secure according to the definition.

The recommended reference for the material presented in this lecture is

R. Canetti, Security and Composition of Multiparty Cryptographic Protocols, Journal of Cryptology, 13(1), 2000. (Special issue on secure computation)

The paper also contains an exhaustive bibliography about previous definitional efforts to model secure function evaluation. Another good reference is

O. Goldreich, Secure Multi-Party Computation. manuscript

which provides more details about the two-party case and the problem of early termination mentioned above.