Investigator: D. Micciancio
Ph.D. students: V. Lyubashevsky, Yi-Kai Liu, (B. Warinschi)
Other collaborators: V. Guruswami, O. Regev, U. Feige, O. Goldreich, S. Goldwasser, S. Safra, J.-P. Seifert
Funding: Micciancio’s NSF CAREER Award, Sloan Research Fellowship, (Hellman Fellowship)
Summary: The goal of the Complexity of Lattice Problems project is to identify computational problems on lattices that are intractable, e.g., NP-hard. Identifying and studying computationally hard problems is important for two different reasons:
As part of this project we are investigating the most important computational problems on lattices, and have proved that many of them are NP-hard to solve even approximately. Computational problems considered so far include:
For an introduction to the computational complexity of lattice problems, we refer the reader to the book Complexity of Lattice Problems: A Cryptographic Perspective (Kluwer, 2002) by D. Micciancio and S. Goldwasser.
The shortest vector problem (SVP) is the most famous computational problem on lattices, and it has been studied by mathematicians since the time of Gauss (in the equivalent formulation of minimizing a quadratic form). The NP-hardness of SVP was conjectured by van Emde Boas in 1981, and the question remained one of the most important open problems in the area for almost two decades.
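In dimension two the problem is easy, and the classical reduction procedure usually attributed to Lagrange and Gauss already finds a shortest vector exactly. The following sketch (ours, for illustration only; the hard instances arise as the dimension grows) shows the two-dimensional case.

```python
# Illustrative sketch (not part of the project's results): exact SVP in
# dimension 2 via Lagrange-Gauss reduction.

def shortest_vector_2d(b1, b2):
    """Return a shortest nonzero vector of the lattice spanned by b1 and b2."""
    def norm2(v):
        return v[0] * v[0] + v[1] * v[1]

    while True:
        if norm2(b1) > norm2(b2):
            b1, b2 = b2, b1                      # keep b1 as the shorter vector
        # Integer multiple of b1 that best shortens b2.
        m = round((b1[0] * b2[0] + b1[1] * b2[1]) / norm2(b1))
        if m == 0:
            return b1                            # the pair is reduced: b1 is shortest
        b2 = (b2[0] - m * b1[0], b2[1] - m * b1[1])

print(shortest_vector_2d((47, 31), (95, 63)))    # -> (1, 1)
```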
Another long-standing open problem in the area was the relation between SVP and its inhomogeneous counterpart, the Closest Vector Problem (CVP). Although it was commonly believed that CVP is a harder problem than SVP, no theoretical evidence supporting this belief was known until recently. In fact, in 1986 Babai posed the problem of reducing SVP to CVP even in a very weak sense: show that if CVP can be solved almost exactly in polynomial time, then SVP can be solved in polynomial time within factors subexponential in the dimension. We resolved this question in the affirmative, and in a very strong sense, by giving a dimension preserving and approximation preserving polynomial time reduction from SVP to CVP.
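The published construction is not reproduced here, but the following sketch (ours) conveys the flavor of such a reduction, based on the coefficient-doubling idea. The function cvp_oracle(basis, target) is a hypothetical black box returning the lattice point closest to the target.

```python
# Sketch (ours) of a dimension-preserving SVP-to-CVP reduction based on the
# coefficient-doubling idea. `cvp_oracle(basis, target)` is a hypothetical
# black box returning the point of L(basis) closest to `target`.

import numpy as np

def svp_from_cvp(basis, cvp_oracle):
    """basis: n linearly independent integer vectors (rows of a matrix)."""
    B = np.array(basis, dtype=np.int64)
    best = None
    for i in range(len(B)):
        Bi = B.copy()
        Bi[i] *= 2                    # sublattice of points with even i-th coefficient
        w = cvp_oracle(Bi, B[i])      # point of L(Bi) closest to b_i
        v = w - B[i]                  # lattice vector with odd i-th coefficient, hence nonzero
        if best is None or v @ v < best @ best:
            best = v
    return best                       # a shortest nonzero vector of L(basis)
```

The point is that every shortest lattice vector has at least one odd coefficient with respect to the basis (otherwise halving it would give a shorter nonzero lattice vector), so one of the n oracle calls is guaranteed to recover a vector of minimal length, and both the dimension and the approximation quality are essentially preserved.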
The Closest Vector Problem (CVP) is the inhomogeneous counterpart of SVP: given a lattice L and a target point y in space, find the lattice point of L closest to y. This is a classic NP-hard combinatorial optimization problem: it was proved NP-hard to solve exactly by van Emde Boas in 1981, and the best currently known inapproximability result (by Dinur, Kindler, Raz, Safra, 2003) shows that the problem is NP-hard to approximate even within almost polynomial factors of the form n^(c / log log n) for some constant c > 0.
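For very small instances CVP can of course be solved by exhaustive search. The toy solver below (ours; the coefficient bound r is a simplifying assumption, not a rigorous bound) can also serve as the cvp_oracle in the previous sketch.

```python
# Toy exact CVP solver by exhaustive search (exponential time; usable only in
# tiny dimensions). The coefficient bound `r` is a simplifying assumption; a
# rigorous bound would have to be derived from the basis and the target.

import itertools
import numpy as np

def cvp_bruteforce(basis, target, r=5):
    """Return the lattice point sum_i c_i*basis[i], |c_i| <= r, closest to target."""
    B = np.array(basis, dtype=float)
    t = np.array(target, dtype=float)
    best, best_dist = None, float("inf")
    for coeffs in itertools.product(range(-r, r + 1), repeat=len(B)):
        w = np.array(coeffs) @ B
        d = np.linalg.norm(w - t)
        if d < best_dist:
            best, best_dist = w, d
    return best
```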
The Closest Vector Problem with Preprocessing (CVPP) asks whether the hardness of CVP comes from the fact that the lattice L is not known in advance, or whether CVP can be hard to solve even for a fixed family of lattices. Namely, given a fixed sequence of lattices L1, …, Ln, …, where the n-th lattice Ln has dimension n, CVPP asks whether there is a polynomial time algorithm that, on input an n-dimensional target vector y, finds the lattice point of Ln closest to y. For certain families of lattices CVPP can be solved efficiently: for example, if Ln = Z^n is the lattice of all integer vectors, then the lattice point closest to y is found simply by rounding each coordinate of y to the nearest integer.
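The easy case just mentioned amounts to nothing more than coordinate-wise rounding:

```python
# The easy CVPP instance mentioned above: for the integer lattice Z^n, the
# closest lattice point is obtained by rounding each coordinate of the target.

def decode_Zn(y):
    return [round(c) for c in y]

print(decode_Zn([2.7, -0.3, 1.49]))   # -> [3, 0, 1]
```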
A natural question is whether CVPP is not only NP-hard to solve exactly, but also NP-hard to solve in an approximate sense.
The best currently known inapproximability result for CVPP is by O. Regev, for factors up to sqrt(3). Proving the NP-hardness of CVPP for any constant approximation factor is an important open problem in the area. On the positive side, it is interesting to construct families of lattices that are easily decodable, i.e., such that CVPP can be solved efficiently.
The ratio between the covering radius and the packing radius of the lattices constructed in (Micciancio, SICOMP 2004) is sqrt(n log log n / log n). It is an interesting open problem (both for cryptographic and coding theory applications) to find better constructions achieving a smaller (possibly constant) ratio between the two radii.
The covering radius is the smallest radius such that spheres centered at all lattice points cover the entire space. The covering radius of lattices has been extensively studied by mathematicians, but so far it has received little attention from a computational point of view. Recently, (Micciancio, SICOMP 2004) brought attention to this problem because of its potential cryptographic applications. The problem is especially interesting because the covering radius of a lattice is not even known to be computable in non-deterministic polynomial time (i.e., the problem is not known to be in NP), so the exact version of the problem appears to be extremely hard.
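The reason is visible from the standard definition (stated here for completeness; notation ours): the covering radius of a lattice L with linear span S is

\[ \rho(L) \;=\; \max_{y \in S} \; \min_{v \in L} \; \| y - v \| , \]

a max-min quantity. A short certificate for an upper bound on rho(L) would have to account for all points y simultaneously, which is why membership in NP is unclear and why the problem is naturally placed in the second level of the polynomial hierarchy.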
In The complexity of the covering radius problem (Computational Complexity, 2005), D. Micciancio, with collaborators V. Guruswami and O. Regev, started the systematic study of the computational complexity of approximating the covering radius of point lattices and proved a number of results for various values of the approximation factor. These results include:
Proving the NP-hardness of the covering radius problem for lattices (even in its exact version) is currently an open problem. The analogous problem for linear codes is known to be NP-hard for any constant approximation factor, and Π2-hard for some constant approximation factor. (See the Complexity of Coding Problems project page.)