### UPP

Bounded-error Probabilistic Polynomial time (BPP) is the class of all languages L for which some polynomial-time randomized Turing machine M has the following property: for each input x, if x is in L, then the probability that M(x) accepts is >= 2/3, and if x is not in L, then the probability that M(x) accepts is <= 1/3.

The numbers 2/3 and 1/3 are not special. What is special is that these two numbers are a positive distance apart. This is where the Bounded part of BPP comes from: the probability of error is bounded away from 1/2 by some positive constant.
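
The constants are interchangeable because of amplification: run M independently t times and take the majority vote, and a Chernoff bound drives the error down exponentially in t. A quick exact calculation (with an assumed per-run acceptance probability of 0.6 on yes-instances and 0.4 on no-instances):

```python
from math import comb

def majority_accept_prob(p, t):
    """Probability that at least (t+1)//2 of t independent runs,
    each accepting with probability p, accept."""
    return sum(comb(t, k) * p**k * (1 - p)**(t - k)
               for k in range((t + 1) // 2, t + 1))

# A machine with a modest 0.6 / 0.4 gap becomes sharply bounded
# after a 101-trial majority vote.
yes = majority_accept_prob(0.6, 101)  # acceptance prob on yes-instances
no = majority_accept_prob(0.4, 101)   # acceptance prob on no-instances
print(yes, no)
```

So any constant gap can be stretched to 2/3 versus 1/3 (or much better) with polynomially many repetitions.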

So what if we erased the boundedness requirement of BPP? Let 0 < p < 1 be a real number and define Unbounded Probabilistic Polynomial time of p (UPP(p)) to be the class of languages L for which there exists a polynomial-time randomized Turing machine M with the following property: for each input x, if x is in L, then the probability that M(x) accepts is > p, and if x is not in L, then the probability that M(x) accepts is < p.

The following essay (pdf) explores these classes of languages. The major result is that they are all the same independent of p!

### Dual Vector Space

Let V be a vector space over a field F. The dual V* of V is the vector space of F-linear maps from V to F. When V is finite dimensional, V* is isomorphic to V, with each element of V* acting as an inner product against some fixed vector of V. But if V is infinite dimensional, this may fail! The following essay (pdf) presents a good example of this phenomenon.
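
A standard example of the failure (which may or may not be the one in the essay): take V to be the finitely supported sequences over F. A functional is freely determined by its values on the basis e_1, e_2, ..., so

```latex
V = \bigoplus_{i \in \mathbb{N}} F
\qquad\Longrightarrow\qquad
V^{*} \;\cong\; \prod_{i \in \mathbb{N}} F,
\qquad \varphi \;\mapsto\; \bigl(\varphi(e_1), \varphi(e_2), \dots\bigr),
```

and by the Erdős–Kaplansky theorem the product has strictly larger dimension than the countably-dimensional V, so V* cannot be isomorphic to V.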

### CDF Bounds

Let X be a random variable, and suppose that we wish to bound P(X<=a) by P(Y<=a) for all a, where Y is a random variable with a simpler CDF. We may be tempted to choose the parameters of Y so that E(X)=E(Y) to get a tight bound, but as the following essay (pdf) shows, this is impossible unless X and Y have the same distribution.
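
The obstruction in miniature: if E(X) = E(Y) but the distributions differ, the two CDFs must cross, so neither can bound the other everywhere. A tiny illustration (my own toy example) with X uniform on {0, 1, 2} and Y constant at the matched mean 1:

```python
from fractions import Fraction

# X uniform on {0, 1, 2}; Y constant at 1.  Both have mean 1.
def cdf_X(a):
    return Fraction(sum(1 for v in (0, 1, 2) if v <= a), 3)

def cdf_Y(a):
    return Fraction(1) if a >= 1 else Fraction(0)

# The CDFs cross: at a = 0, F_X > F_Y; at a = 1, F_X < F_Y.
print(cdf_X(0), cdf_Y(0))  # 1/3 vs 0
print(cdf_X(1), cdf_Y(1))  # 2/3 vs 1
```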

### Dimension of R^N

Let V be the infinite dimensional R-vector space of all maps from the natural numbers, N, to R. The dimension of V is not the cardinality of N. To give some idea of why this is, notice that the countable set of tuples B={e1,e2,...} where ei has a 1 in the ith position and 0's elsewhere does not span R^N, as it can only generate tuples with finitely many nonzero components. The following essay (pdf) finds the actual dimension.

### Infinite Matrices

What are the basic properties of infinite matrices? What do their eigenspaces look like? This essay (pdf) explores a few such questions, though it asks more questions than it answers.

### What is Security?

This short essay explores just what is meant by "system A is secure".

### Isoperimetric Problem

Suppose we wish to find a subset S of the Boolean cube 2^n of size m so as to minimize the size of the boundary of S, i.e. the number of vertices outside S at Hamming distance 1 from S. This is achieved by taking S to be a Hamming ball. The proof is quite hard and I could not find a complete version of it anywhere, so I wrote my own (pdf). It should also be noted that to minimize the number of boundary edges instead, S should be taken to be a subcube.
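
For a cube small enough to search exhaustively, the claim can be verified directly; here the 4-cube with m = 5, where the radius-1 Hamming ball turns out to achieve the minimum vertex boundary:

```python
from itertools import combinations

n = 4  # work in the 4-dimensional Boolean cube: 16 vertices

def boundary(S):
    """Vertices outside S at Hamming distance 1 from some vertex of S."""
    S = set(S)
    return {v ^ (1 << i) for v in S for i in range(n)} - S

# Hamming ball of radius 1 around 0000: the center plus its 4 neighbors.
ball = [0, 1, 2, 4, 8]

# Exhaustively check all 5-element subsets of the cube.
best = min(len(boundary(S)) for S in combinations(range(2 ** n), 5))
print(len(boundary(ball)), best)
```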

### Nonmeasurable Sets

Let X be a sample space. A probability measure can be defined on P(X) provided X is countable, but if X is a continuum, then we may run into contradictions unless we restrict the probability measure. This essay (pdf) gives an example of a nonmeasurable set.

### Self Reducibility of NPC Languages

All NP-complete languages are self-reducible, as this short essay (pdf) shows.
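
For SAT in particular, the self-reduction is the classic decision-to-search trick: repeatedly ask the decision oracle whether the formula remains satisfiable with one more variable pinned down. A sketch with a brute-force stand-in for the oracle:

```python
from itertools import product

def satisfiable(clauses, n):
    """Brute-force decision oracle.  Clauses are lists of nonzero ints,
    literal +i / -i meaning variable i true / false (1-indexed)."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def find_assignment(clauses, n):
    """Self-reduction: fix one variable at a time with unit clauses,
    re-asking the decision oracle after each choice."""
    fixed = []
    for v in range(1, n + 1):
        if satisfiable(clauses + [[l] for l in fixed] + [[v]], n):
            fixed.append(v)       # v = True keeps the formula satisfiable
        else:
            fixed.append(-v)      # otherwise v must be False
    return fixed

clauses = [[1, 2], [-1, 3], [-2, -3]]
assignment = find_assignment(clauses, 3)
print(assignment)
```

With a polynomial-time oracle this makes only n oracle calls, which is the point of self-reducibility.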

### Extension of Minimax to Infinite Matrices

Consider a zero-sum game between players Row and Column with an m x n payoff matrix A, so that if Row chooses i and Column chooses j, then Column pays A(i,j) to Row. Von Neumann's minimax theorem states that for finite m and n, the maximum over Row's mixed strategies p of the minimum over Column's mixed strategies q of the expected payoff equals the minimum over q of the maximum over p of the expected payoff.
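
As a sanity check of the finite case, the two quantities can be compared numerically on a small game; here matching pennies, with mixed strategies approximated on a coarse grid (so the optimum is only found approximately):

```python
# Matching pennies: Row wins 1 when the choices agree, loses 1 otherwise.
A = [[1, -1], [-1, 1]]

def expected_payoff(p, q):
    """Row plays row 0 with probability p; Column plays column 0 with q."""
    return (p * q * A[0][0] + p * (1 - q) * A[0][1]
            + (1 - p) * q * A[1][0] + (1 - p) * (1 - q) * A[1][1])

grid = [i / 100 for i in range(101)]
max_min = max(min(expected_payoff(p, q) for q in grid) for p in grid)
min_max = min(max(expected_payoff(p, q) for p in grid) for q in grid)
print(max_min, min_max)  # both approximately 0, the value of the game
```

Both optima are attained at the fifty-fifty mix, and the two quantities agree, as the theorem promises.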

But what happens when m or n may be infinite? If both are infinite, then the max min may not equal the min max, but if exactly one is infinite and A is bounded, then the max min does equal the min max, as the following essay (pdf) proves.

### Probability Counterexample

If you were worried about your house being destroyed, would you rather hear news that there was a tornado, hear news that there was a flood, or hear news that there was a tornado or a flood? One might think that surely one of the first two options is better than the third. Wouldn't it be better to pin down the natural disaster, especially given that you might know that your house is resistant to flooding (because you live on a hill) or resistant to tornadoes (because you live in an underground bunker)? Oddly, the probabilities might work out so that the best news to hear is "there was a tornado or a flood", as this essay (pdf) shows.
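
A minimal instance of how this can happen (my own illustrative numbers, not necessarily the essay's): concentrate the destruction on the event that both disasters strike. Conditioning on "tornado or flood" then dilutes the dangerous overlap more than conditioning on either disaster alone.

```python
from fractions import Fraction as F

# Atoms: (tornado, flood, destroyed) -> probability.  Destruction
# happens only when both disasters strike.
atoms = {
    (True,  False, False): F(1, 10),   # tornado only, house survives
    (False, True,  False): F(1, 10),   # flood only, house survives
    (True,  True,  True):  F(1, 10),   # both, house destroyed
    (False, False, False): F(7, 10),   # neither
}

def pr(event):
    return sum(p for a, p in atoms.items() if event(a))

tornado = lambda a: a[0]
flood = lambda a: a[1]
destroyed = lambda a: a[2]
either = lambda a: a[0] or a[1]

p_d_given_t = pr(lambda a: destroyed(a) and tornado(a)) / pr(tornado)
p_d_given_f = pr(lambda a: destroyed(a) and flood(a)) / pr(flood)
p_d_given_either = pr(lambda a: destroyed(a) and either(a)) / pr(either)
print(p_d_given_t, p_d_given_f, p_d_given_either)  # 1/2 1/2 1/3
```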

### k-Universal Family

A set of n random variables is k-universal iff every k of them have the same joint distribution as k independent random variables, each uniform over some common domain. Constructing such families over small probability spaces has important applications, especially in the derandomization of algorithms. The following essay (pdf) gives one such construction.
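
One standard construction (possibly, though not necessarily, the essay's) draws a random degree-(k-1) polynomial over a prime field; for k = 2 this is h(x) = ax + b mod p. The 2-universal property can be verified by brute force over the whole probability space:

```python
from itertools import product
from collections import Counter

p = 5  # prime field size; random variables X_x = (a*x + b) mod p

# Draw (a, b) uniformly from GF(p)^2.  For any two distinct indices
# x1, x2, the pair (X_x1, X_x2) should be uniform over GF(p)^2.
x1, x2 = 0, 3
counts = Counter(((a * x1 + b) % p, (a * x2 + b) % p)
                 for a, b in product(range(p), repeat=2))
print(sorted(counts.values()))  # every one of the 25 pairs hit exactly once
```

The probability space here has size p^2, versus the p^p needed for fully independent uniform variables.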

### Natural Proofs

All known lower bound proofs take the form "function family f has property C, and no family with property C lies in the complexity class Lambda". Moreover, most have the properties that whether f has C is easily checkable given the truth table of f, and that a random function has non-negligible probability of having C. Such "natural" proofs cannot provide nontrivial lower bounds unless there are no strong pseudo-random generators. The following essay (pdf) gives all the necessary definitions, a major result with proof, and much discussion of this idea.

The original paper by Razborov and Rudich leaves certain ideas ambiguous, and its proof is missing a major step. Part of the goal of this essay is to fill those holes. As a bonus, the essay also includes a definition, and proofs of useful properties, of a nice new notation for "ultimately" and "infinitely often" quantification over the natural numbers.

### Hard Input Distribution

There is a single input distribution D on which no algorithm solving an NP-complete problem runs in polytime unless P=NP, as this short essay (pdf) shows. But D is almost certainly not efficiently samplable. This shows that average case hardness needs to be defined in terms of efficiently samplable distributions only, since arbitrary distributions can easily diagonalize against a class of algorithms.

### CSP Duality

In this essay (pdf) we show a reduction from the (a,b)-Constraint Satisfaction Problem (CSP) of size (n,m) to the (b,a)-CSP of size (m, O(m^a)); the reduction runs in time O(m^a).

### Complexity of FSAs

In this essay (pdf) we show a few simple facts about finite state automata. (1) For a given NFA N, a minimal equivalent DFA D may have exponentially more states. (2) A shortest string rejected by an NFA may have length exponential in the size of the NFA; of course, a shortest string accepted has length at most linear in the size.
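
Fact (1) can be checked mechanically on the classic witness language "the n-th symbol from the end is 1": the NFA needs n+1 states, while subset construction yields 2^n reachable DFA states. A small sketch (this family is a standard example, not necessarily the essay's):

```python
from collections import deque

def determinize(n):
    """Subset-construct the NFA for 'the n-th symbol from the end is 1':
    states 0..n; state 0 loops on both symbols and guesses the marked 1."""
    def step(states, sym):
        out = set()
        for s in states:
            if s == 0:
                out.add(0)
                if sym == 1:
                    out.add(1)
            elif s < n:
                out.add(s + 1)
        return frozenset(out)

    start = frozenset([0])
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        for sym in (0, 1):
            nxt = step(cur, sym)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)   # number of reachable DFA states

# The (n+1)-state NFA determinizes to 2^n subset-states.
for n in (1, 2, 3, 4):
    print(n, determinize(n))
```

Intuitively, the DFA must remember the last n symbols, and there are 2^n possibilities.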

### Finding Interesting Palindromes

In this essay (pdf) we show that it is decidable whether a given dictionary can generate a palindrome, though it is not clear whether this can be done efficiently. As a bonus, we show how to efficiently decide whether a given sentence is generable from a given dictionary.
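
The bonus problem is the classic word-break question; one standard dynamic program (not necessarily the essay's method) checks which prefixes of the sentence are generable:

```python
def generable(sentence, dictionary):
    """Can `sentence` (no spaces) be segmented into dictionary words?
    Dynamic programming over prefixes, O(len^2) dictionary lookups."""
    words = set(dictionary)
    ok = [True] + [False] * len(sentence)  # ok[i]: prefix of length i works
    for i in range(1, len(sentence) + 1):
        ok[i] = any(ok[j] and sentence[j:i] in words for j in range(i))
    return ok[-1]

print(generable("amanaplan", ["a", "man", "plan", "nap"]))  # True
print(generable("amanaplan", ["man", "plan"]))              # False
```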

### Solving the 15-Puzzle

The 15-puzzle is played on a 4x4 matrix containing labeled tiles and a single blank. A move is to interchange the blank with an adjacent tile and the goal is to order the tiles. In this essay (pdf) we give necessary and sufficient conditions for solvability and show how to compute this in linear time. We also give an algorithm to solve the game using a number of moves that achieves the average case minimum number of moves to within a constant factor.
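
For the solvability test, the standard invariant (presumably what the essay proves) compares the parity of the tile permutation with the parity of the blank's taxicab distance from its home corner, since every move is one transposition and one blank step. A sketch in Python, with parity computed in linear time via cycle decomposition:

```python
def solvable(board):
    """board: tuple of 16 ints, 0 = blank; goal is (1, 2, ..., 15, 0).
    Solvable iff the permutation parity (relative to the goal) equals
    the parity of the blank's taxicab distance from the bottom-right
    corner.  Parity via cycles, so the whole check is linear time."""
    goal = tuple(range(1, 16)) + (0,)
    pos = {v: i for i, v in enumerate(board)}
    seen, transpositions = [False] * 16, 0
    for i in range(16):          # decompose the permutation into cycles
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = pos[goal[j]]
                length += 1
            transpositions += length - 1
    b = board.index(0)
    blank_dist = abs(b // 4 - 3) + abs(b % 4 - 3)
    return transpositions % 2 == blank_dist % 2

print(solvable(tuple(range(1, 16)) + (0,)))        # solved board: True
print(solvable((2, 1) + tuple(range(3, 16)) + (0,)))  # two tiles swapped: False
```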

### Jacob's Ladder

Jacob's ladder is a toy constructed from n blocks linked so that block i is always adjacent to block i+1. By holding it up by block 1, letting the rest hang down, and rotating block 1 by pi, you can flip all of the blocks in a cascading motion. But you can also create certain graphs with a Jacob's ladder toy. This essay (pdf) classifies exactly which graphs can be so constructed.

### Nim

The classic game of somewhat unknown origins is fully analyzed here (pdf) in terms of computational complexity and game length.
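
The heart of the analysis is presumably Bouton's classical theorem: a Nim position is a first-player win iff the XOR (nim-sum) of the heap sizes is nonzero, and a winning move is one that zeroes the nim-sum. A quick sketch:

```python
from functools import reduce
from operator import xor

def winning_move(heaps):
    """Bouton's analysis: return (heap index, stones to remove) for a
    move that zeroes the nim-sum, or None if every move loses."""
    s = reduce(xor, heaps, 0)
    if s == 0:
        return None                  # second player wins with best play
    for i, h in enumerate(heaps):
        if (h ^ s) < h:              # shrink heap i down to h ^ s
            return i, h - (h ^ s)

print(winning_move([3, 4, 5]))   # nim-sum 2: a winning move exists
print(winning_move([1, 2, 3]))   # nim-sum 0: None
```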

### Carry Lookahead Addition

The ripple carry addition method requires Theta(n) depth and size for bounded fanin circuits. By contrast, carry lookahead addition requires Theta(lg n) depth and Theta(n^2) size, as this essay (pdf) shows.

### Robots Take Over the World

This essay (pdf) shows one of the ridiculous things that can happen if P=NP, namely that we can construct efficient mistake-bounded learners. The essay also proves that NP witnesses can be sampled uniformly at random given an NP oracle, and that BPP is in Sigma_2.

### Relativization Fallacy

Beginning students of complexity theory often think that for complexity classes A,B,C, A <= B implies A^C <= B^C. When they are told that this is not valid, they seem not to believe it at first; and who can blame them? It seems plausible, and usually mathematical notation is context insensitive. Further compounding the problem is the lack of handy counterexamples. This essay (pdf) gives a compelling counterexample.

### Small Space = No Space

A very small amount of space is computationally useless because one cannot even compute how much space one has to work with. In this essay (pdf) we show that SPACE(lg lg lg n) = SPACE(0) = the regular languages.

### 2-TQBF is in P

In 1979, Aspvall, Plass, and Tarjan showed that 2-TQBF is decidable in linear time, but their paper is pretty hard to find online; in any case, I independently showed that it is in P here (pdf).
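
For flavor, here is the existential special case, 2-SAT, solved by the implication-graph method that the linear-time 2-TQBF algorithm builds on: each clause (a or b) gives implications not-a => b and not-b => a, and the formula is unsatisfiable exactly when some variable shares a strongly connected component with its negation. A sketch using Kosaraju's algorithm (names and representation are my own):

```python
from collections import defaultdict

def two_sat(n, clauses):
    """Variables 1..n; clause (a, b) with negative ints for negation.
    Returns True iff the conjunction of the clauses is satisfiable."""
    def node(l):                       # literal -> graph node index
        return 2 * (abs(l) - 1) + (1 if l < 0 else 0)

    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:               # (a or b): ~a => b and ~b => a
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            graph[u].append(v)
            rgraph[v].append(u)

    # Kosaraju: order nodes by finish time, then sweep the reverse graph.
    order, visited = [], [False] * (2 * n)
    def dfs1(u):
        visited[u] = True
        for v in graph[u]:
            if not visited[v]:
                dfs1(v)
        order.append(u)
    for u in range(2 * n):
        if not visited[u]:
            dfs1(u)

    comp = [-1] * (2 * n)
    def dfs2(u, c):
        comp[u] = c
        for v in rgraph[u]:
            if comp[v] == -1:
                dfs2(v, c)
    c = 0
    for u in reversed(order):
        if comp[u] == -1:
            dfs2(u, c)
            c += 1

    # Unsatisfiable iff some x and ~x share a strongly connected component.
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n))

print(two_sat(2, [(1, 2), (-1, 2), (1, -2)]))  # satisfiable
print(two_sat(1, [(1, 1), (-1, -1)]))          # x and ~x forced: unsatisfiable
```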