We assume familiarity with all basic notions from complexity theory: standard models of computation (deterministic, non-deterministic, randomized, non-uniform circuit families), basic complexity classes (P, NP, RP, BPP, P/poly), the theory of NP-completeness, and the Cook-Levin theorem.

The statistical distance between two discrete probability distributions A
and B over a common set X is defined as the sum

dist(A,B) = (1/2) Sum_x |Pr(A=x) - Pr(B=x)|

where x ranges over the set X. It is easy to verify that the statistical distance is a metric, i.e., it satisfies

- dist(A,B)=dist(B,A)
- dist(A,B) ≥ 0, with equality if and only if A and B are identically distributed
- dist(A,C) ≤ dist(A,B)+dist(B,C)
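The definition is easy to compute directly when the distributions have small, explicit supports. Below is a minimal sketch (the function name and dict-based representation are our own choices, not notation from the text):

```python
def stat_dist(pa, pb):
    """Statistical distance between two finite distributions,
    each given as a dict mapping outcome x -> Pr[x]."""
    support = set(pa) | set(pb)          # x ranges over all of X
    return 0.5 * sum(abs(pa.get(x, 0.0) - pb.get(x, 0.0)) for x in support)

# A fair coin vs. a biased coin:
fair = {0: 0.5, 1: 0.5}
biased = {0: 0.75, 1: 0.25}
print(stat_dist(fair, biased))  # 0.25
```

Note that the three metric properties above can be checked numerically on examples like this one (symmetry is immediate from the absolute value).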

A *probability ensemble* is a set (A_i) of discrete probability
distributions, where i ranges over some set of indices I (typically, all
unary, or binary strings).

A function f(n) is called *negligible* if it is asymptotically
smaller than any inverse polynomial, i.e., for every constant c>0, there
is an index m such that |f(n)| < 1/n^c for all n>m.
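As a quick numeric sanity check (a toy illustration, not part of the formal definition): 2^(-n) is negligible, while 1/n^2 is not, since it violates the bound already for c = 3.

```python
# Compare candidate functions against the inverse-polynomial bound 1/n^c.
negligible = lambda n: 2.0 ** -n       # 2^{-n}: negligible
not_negligible = lambda n: 1.0 / n**2  # 1/n^2: not negligible (fails for c = 3)

n, c = 100, 3
print(negligible(n) < 1.0 / n**c)      # True:  2^{-100} is far below 10^{-6}
print(not_negligible(n) < 1.0 / n**c)  # False: 10^{-4} > 10^{-6}
```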

Two distribution ensembles (A_i) and (B_i) are statistically close if dist(A_i,B_i) is a negligible function of |i|.

Two distribution ensembles (A_i) and (B_i) are computationally
indistinguishable if for any probabilistic polynomial time algorithm D,

|Pr(D(A_i) = 1) - Pr(D(B_i) = 1)|

is a negligible function of |i|.
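The quantity above is the distinguisher's *advantage*. For intuition, it can be estimated empirically by sampling; the sketch below (with hypothetical names, and Bernoulli distributions standing in for the ensembles) is a Monte Carlo illustration, not a formal attack:

```python
import random

def advantage(dist_a, dist_b, d, trials=100_000):
    """Estimate |Pr[D(A)=1] - Pr[D(B)=1]| by sampling each distribution."""
    hits_a = sum(d(dist_a()) for _ in range(trials))
    hits_b = sum(d(dist_b()) for _ in range(trials))
    return abs(hits_a - hits_b) / trials

# Toy example: D outputs 1 exactly when it sees the outcome 1.
A = lambda: random.random() < 0.6   # Bernoulli(0.6)
B = lambda: random.random() < 0.5   # Bernoulli(0.5)
D = lambda sample: 1 if sample else 0
print(advantage(A, B, D))           # roughly 0.1
```

For computationally indistinguishable ensembles, no efficient D achieves a non-negligible advantage; the toy distributions here are of course easily distinguishable.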

We assume familiarity with basic cryptographic primitives, e.g., secret and public key encryption, digital signatures, one-way functions, trapdoor function families, commitment schemes, pseudo-random generators, pseudo-random functions.

Below, we recall some of these definitions.

A one-way function is a function f such that:

- f is efficiently computable, i.e., there is a polynomial time algorithm that on input x outputs f(x). We also assume some standard regularity conditions, e.g., the length of the output f(x) depends only on the length of the input x, |f(x)| is polynomially related to |x|, the length of the input |x| can be easily deduced from the length of the output |f(x)|, etc.
- f is hard to invert (on the average), i.e., for any probabilistic polynomial time algorithm I, if x is chosen uniformly at random among all strings of length s (the security parameter), y = f(x), and z = I(y), then the probability that f(z) = y is negligible in s.
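The inversion experiment in the second condition can be sketched concretely. Below, SHA-256 is used only as a heuristic stand-in for a one-way function (no proof of one-wayness is known for it), and the "inverter" is a trivial random guesser; all names are our own:

```python
import hashlib
import secrets

def f(x: bytes) -> bytes:
    """Heuristic stand-in for a one-way function (SHA-256)."""
    return hashlib.sha256(x).digest()

def inversion_experiment(inverter, s: int) -> bool:
    """One run of the inversion game at security parameter s (in bytes here)."""
    x = secrets.token_bytes(s)   # x uniform over strings of length s
    y = f(x)
    z = inverter(y, s)
    return f(z) == y             # the inverter wins by finding *any* preimage

# A trivial inverter that just guesses a random preimage:
guess = lambda y, s: secrets.token_bytes(s)
wins = sum(inversion_experiment(guess, 16) for _ in range(1000))
print(wins)  # almost certainly 0
```

Note that the inverter need not recover x itself; producing any z with f(z) = y counts as success.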

A bit-commitment function is a function commit(b,r) that takes as input a bit b and a random string r (of length s, the security parameter). The following security definitions apply:

- commit(b,r) is computable in polynomial time
- commit is perfectly binding if the images of commit(0,.) and commit(1,.) are disjoint
- commit is computationally binding if no probabilistic polynomial time algorithm F can find strings r,r' such that commit(0,r) = commit(1,r') with non-negligible probability
- commit is perfectly (resp. statistically, computationally) hiding if the distributions commit(0,r) and commit(1,r), over a uniformly random r, are identical (resp. statistically close, computationally indistinguishable)

It is easy to see that a commitment scheme cannot be both perfectly binding and perfectly hiding. Typical commitment schemes are perfectly binding and computationally hiding, or computationally binding and perfectly hiding.
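The commit/open interface can be sketched as follows. The hash-based construction below is only heuristically binding and hiding (it is not one of the provably secure schemes alluded to above), and the function names are our own:

```python
import hashlib
import secrets

def commit(b: int, r: bytes) -> bytes:
    """Heuristic hash-based bit commitment: commit(b,r) = SHA-256(b || r).
    Binding/hiding hold only under heuristic assumptions on the hash."""
    return hashlib.sha256(bytes([b]) + r).digest()

def open_commitment(c: bytes, b: int, r: bytes) -> bool:
    """Verify that (b, r) is a valid opening of the commitment c."""
    return commit(b, r) == c

r = secrets.token_bytes(32)      # security parameter s = 32 bytes
c = commit(1, r)                 # commit phase: send c, keep (1, r) secret
print(open_commitment(c, 1, r))  # True  -- valid opening
print(open_commitment(c, 0, r))  # False -- the committer cannot claim the other bit
```

Breaking binding here would require finding r, r' with SHA-256(0 || r) = SHA-256(1 || r'), i.e., a hash collision.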

An excellent reference for the material presented in this and the next lecture is

Oded Goldreich, Foundations of
Cryptography, vol. 1 (Basic Tools), Cambridge University Press, 2001.

(Fragments of a preliminary version of the book are available online.)

Specific sections of the book related to the material presented today in class are Section 1.3 (Computational model, complexity classes, etc.), Section 2.2 (One-way functions), Section 2.5 (Hard-core predicates), Section 3.2 (Computational indistinguishability), and Section 4.4.1 (Commitment schemes).