Hardness of approximating the minimum distance of a linear code

Authors: Ilya Dumer, Daniele Micciancio and Madhu Sudan

IEEE Transactions on Information Theory, 49(1):22-37 (Jan. 2003)

[BibTeX] [PostScript] [PDF] [doi:10.1109/TIT.2002.806118]

Abstract: We show that the minimum distance d of a linear code is not approximable to within any constant factor in random polynomial time (RP), unless NP (nondeterministic polynomial time) equals RP. We also show that the minimum distance is not approximable to within an additive error that is linear in the block length n of the code. Under the stronger assumption that NP is not contained in RQP (random quasi-polynomial time), we show that the minimum distance is not approximable to within the factor exp(log^{1-ε}(n)), for any ε > 0. Our results hold for codes over any finite field, including binary codes. In the process, we show that it is hard to find approximately nearest codewords even if the number of errors exceeds the unique decoding radius d/2 by only an arbitrarily small fraction εd. We also prove the hardness of the nearest codeword problem for asymptotically good codes, provided the number of errors exceeds (2/3)d. Our results for the minimum distance problem strengthen (though using stronger assumptions) a previous result of Vardy, who showed that the minimum distance cannot be computed exactly in deterministic polynomial time (P), unless P = NP. Our results are obtained by adapting proofs of analogous results for integer lattices due to Ajtai and Micciancio. A critical component in the adaptation is our use of linear codes that perform better than random (linear) codes.
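To make the problem concrete: the minimum distance of a linear code equals the smallest Hamming weight of a nonzero codeword. The sketch below (an illustrative example, not a construction from the paper) computes it by brute force over all messages of a generator matrix G; the function min_distance and the example matrix G_hamming are hypothetical names chosen here. The enumeration is exponential in the code dimension k, which is consistent with the hardness results stated in the abstract.

```python
import itertools

def min_distance(G, q=2):
    """Brute-force minimum distance of the linear code generated by G over GF(q).

    For a linear code, the minimum distance equals the minimum Hamming
    weight of a nonzero codeword, so we enumerate all q^k messages.
    (Exponential in k; illustrative only.)
    """
    k = len(G)      # code dimension (rows of the generator matrix)
    n = len(G[0])   # block length
    best = n
    for msg in itertools.product(range(q), repeat=k):
        if all(m == 0 for m in msg):
            continue  # skip the zero codeword
        # codeword = msg * G over GF(q)
        codeword = [sum(m * g for m, g in zip(msg, col)) % q
                    for col in zip(*G)]
        best = min(best, sum(1 for c in codeword if c != 0))
    return best

# Example: a generator matrix of the [7,4] binary Hamming code,
# which has minimum distance d = 3 and unique decoding radius 1.
G_hamming = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]
print(min_distance(G_hamming))  # -> 3
```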

Notes

Preliminary versions in ECCC TR99-029, FOCS 1999 and ISIT 2000.