# Error correction via linear programming

46th Annual IEEE Symposium on Foundations of Computer Science (FOCS), Pittsburgh, PA, pp. 668–681, 2005

Abstract

Suppose we wish to transmit a vector f ∈ Rn reliably. A frequently discussed approach consists in encoding f with an m by n coding matrix A. Assume now that a fraction of the entries of Af are corrupted in a completely arbitrary fashion. We do not know which entries are affected nor do we know how they are affected. Is it possible to recover f exactly?

Introduction

- 1.1 The error correction problem.
- This paper considers the model problem of recovering an input vector f ∈ Rn from corrupted measurements y = Af + e.
- In its abstract form, the problem is equivalent to the classical error-correcting problem which arises in coding theory, since A may be viewed as a linear code; a linear code is a given collection of codewords which are vectors a1, . . . , an ∈ Rm, the columns of the matrix A.
- The question is: given the coding matrix A and Af + e, can one recover f exactly?
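The decoding step described above can be sketched in code: minimizing the ℓ1 norm of the residual y − Ag over g is a linear program, which off-the-shelf solvers handle directly. The sketch below is not the authors' code; the helper name `l1_decode` and the slack-variable formulation are illustrative choices, using `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """Recover f from y = A f + e by minimizing ||y - A g||_1 over g.

    Cast as an LP in the variables (g, t): minimize sum(t) subject to
    -t <= y - A g <= t, so t bounds the residual entrywise.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])  # objective: sum of t
    # Constraints:  A g - t <= y   and   -A g - t <= -y
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + m))
    return res.x[:n]
```

When the number of corrupted entries is small enough, the minimizer g coincides with the transmitted f, which is the phenomenon the paper analyzes.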

Highlights

- 1.1 The error correction problem

- This paper considers the model problem of recovering an input vector f ∈ Rn from corrupted measurements y = Af + e
- We report on numerical experiments suggesting that ℓ1-minimization is amazingly effective; f is recovered exactly even in situations where a very significant fraction of the output is corrupted
- In its abstract form, our problem is equivalent to the classical error correcting problem which arises in coding theory as we may think of A as a linear code; a linear code is a given collection of codewords which are vectors a1, . . . , an ∈ Rm—the columns of the matrix A
- A first impulse to find the sparsest solution to an underdetermined system of linear equations might be to solve the combinatorial problem (P0)
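To see why solving (P0) directly is impractical, a brute-force search over candidate supports makes the combinatorial cost explicit. The function below is a hypothetical illustration, not from the paper: it enumerates supports of increasing size and solves a least-squares subproblem on each, which is exponential in the sparsity level.

```python
import itertools
import numpy as np

def p0_sparsest(B, z, tol=1e-8):
    """Brute-force (P0): find the sparsest d with B d = z.

    Tries supports of increasing size; the number of supports of size k
    is C(N, k), which is why the convex (LP) relaxation matters.
    """
    m, N = B.shape
    for k in range(N + 1):
        for support in itertools.combinations(range(N), k):
            d = np.zeros(N)
            if k:
                cols = list(support)
                coef, *_ = np.linalg.lstsq(B[:, cols], z, rcond=None)
                d[cols] = coef
            if np.linalg.norm(B @ d - z) < tol:
                return d
    return None
```

The paper's point is that, under suitable conditions on the matrix, the ℓ1 (linear programming) relaxation finds the same sparsest solution in polynomial time.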

Results

- The authors' experiments show that the linear program recovers the input vector every time as long as the fraction of corrupted entries is at most 22.5% in the case where m = 2n, and less than about 35% in the case where m = 4n
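A Monte Carlo sketch in the spirit of these experiments (parameter values and helper names are illustrative, not the authors' setup): draw a Gaussian coding matrix, corrupt a fraction of the entries of Af, decode by ℓ1 minimization, and count how often f comes back exactly.

```python
import numpy as np
from scipy.optimize import linprog

def decode(A, y):
    # Minimize ||y - A g||_1 via an LP in the variables (g, t).
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    return linprog(c, A_ub=A_ub, b_ub=b_ub,
                   bounds=[(None, None)] * (n + m)).x[:n]

def recovery_rate(n=10, ratio=2, frac=0.1, trials=5, seed=1):
    """Fraction of random trials in which l1 decoding recovers f exactly
    when a fraction `frac` of the m = ratio*n outputs is corrupted."""
    rng = np.random.default_rng(seed)
    m = ratio * n
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n))
        f = rng.standard_normal(n)
        e = np.zeros(m)
        idx = rng.choice(m, int(frac * m), replace=False)
        e[idx] = 10 * rng.standard_normal(len(idx))
        hits += np.allclose(decode(A, A @ f + e), f, atol=1e-4)
    return hits / trials
```

Sweeping `frac` upward for fixed `ratio` would trace out the empirical breakdown points (roughly 22.5% for m = 2n and 35% for m = 4n) reported above.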

Conclusion

- The paper establishes deterministic results showing that exact decoding occurs provided the coding matrix A obeys the conditions of Theorem 1.1
- It is of interest because the authors' own work [8, 10] shows that the condition of Theorem 1.1 holds with large values of r for many other types of matrices, especially matrices obtained by sampling rows or columns of larger Fourier matrices.
- The paper links solutions to sparse underdetermined systems to a linear programming problem for error correction, which the authors believe is new

Funding

- C. is partially supported by National Science Foundation grant DMS 01-40698 (FRG) and by an Alfred P. Sloan Fellowship
- R. is partially supported by NSF grant DMS 0245380
- T. is supported by a grant from the Packard Foundation
- V. is an Alfred P. Sloan Research Fellow; he was also partially supported by NSF grant DMS 0401032 and by a Miller Scholarship

Reference

- S. Artstein. Proportional concentration phenomena on the sphere. Israel J. Math. 132: 337–358, 2002.
- D. Amir, and V. D. Milman. Unconditional and symmetric sets in n-dimensional normed spaces. Israel J. Math. 37: 3–20, 1980.
- B. Beferull-Lozano, and A. Ortega. Efficient quantization for overcomplete expansions in Rn. IEEE Trans. Inform. Theory 49: 129–150, 2003.
- S. Boyd, and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
- P. G. Casazza, and J. Kovacevic. Equal-norm tight frames with erasures. Adv. Comput. Math. 18: 387–430, 2003.
- E. J. Candes, and J. Romberg, Quantitative robust uncertainty principles and optimally sparse decompositions. To appear Foundations of Computational Mathematics, November 2004.
- E. J. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. To appear IEEE Transactions on Information Theory, June 2004. Available on the ArXiV preprint server: math.NA/0409186.
- E. J. Candes, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. To appear Comm. Pure Appl. Math. Available on the ArXiV preprint server: math.NA/0409186.
- E. J. Candes, and T. Tao. Near optimal signal recovery from random projections: universal encoding strategies? Submitted to IEEE Transactions on Information Theory, October 2004. Available on the ArXiV preprint server: math.CA/0410542.
- E. J. Candes, and T. Tao. Decoding by linear programming. Submitted, December 2004. Available on the ArXiV preprint server: math.MG/0502327.
- S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM J. Scientific Computing 20: 33–61, 1998.
- I. Daubechies. Ten lectures on wavelets. SIAM, Philadelphia, 1992.
- D. L. Donoho. For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution. Manuscript, September 2004.
- D. L. Donoho. For most large underdetermined systems of linear equations the minimal ℓ1-norm near-solution is also the sparsest near-solution. Manuscript, September 2004.
- D. L. Donoho. Compressed sensing. Manuscript, September 2004.
- D. L. Donoho, and M. Elad. Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proc. Natl. Acad. Sci. USA 100: 2197–2202, 2003.
- D. Donoho, M. Elad, and V. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. Manuscript, 2004.
- D. L. Donoho, and X. Huo. Uncertainty principles and ideal atomic decomposition. IEEE Transactions on Information Theory, 47:2845–2862, 2001.
- D. L. Donoho, and Y. Tsaig. Extensions of compressed sensing. Preprint, 2004.
- D. Donoho, and Y. Tsaig. Breakdown of equivalence between the minimal ℓ1-norm solution and the sparsest solution. Preprint, 2004.
- N. El Karoui. New Results about Random Covariance Matrices and Statistical Applications. Ph.D. Thesis, Stanford University, August 2004.
- M. Elad, and A. Bruckstein. A generalized uncertainty principle and sparse representation in pairs of bases. IEEE Trans. Inform. Theory 48: 2558–2567, 2002.
- J. Feldman. Decoding Error-Correcting Codes via Linear Programming. Ph.D. Thesis, Massachusetts Institute of Technology, 2003.
- J. Feldman. LP decoding achieves capacity. 2005 ACM-SIAM Symposium on Discrete Algorithms (SODA), preprint, 2005.
- J. Feldman, T. Malkin, C. Stein, R. A. Servedio, and M. J. Wainwright, LP decoding corrects a constant fraction of errors. Proc. IEEE International Symposium on Information Theory (ISIT), June 2004.
- A. Feuer, and A. Nemirovski. On sparse representation in pairs of bases. IEEE Trans. Inform. Theory 49: 1579–1581, 2003.
- A. Yu. Garnaev, E. D. Gluskin, The widths of a Euclidean ball (Russian), Dokl. Akad. Nauk SSSR 277: 1048–1052, 1984. English translation: Soviet Math. Dokl. 30: 200–204, 1984.
- V. K. Goyal. Theoretical foundations of transform coding. IEEE Signal Processing Magazine 18(5): 9–21, 2001.
- V. K. Goyal. Multiple description coding: compression meets the network. IEEE Signal Processing Magazine 18(5): 74–93, 2001.
- V. K. Goyal, J. Kovacevic, and J. A. Kelner. Quantized frame expansions with erasures. Applied and Computational Harmonic Analysis 10: 203–233, 2001.
- V. K. Goyal, M. Vetterli, and N. T. Thao. Quantized overcomplete expansions in RN: analysis, synthesis and algorithms, IEEE Trans. on Information Theory 44: 16–31, 1998.
- R. Gribonval, and M. Nielsen. Sparse representations in unions of bases. IEEE Trans. Inform. Theory 49: 3320–3325, 2003.
- Handbook of coding theory. Vol. I, II. Edited by V. S. Pless, W. C. Huffman and R. A. Brualdi. North-Holland, Amsterdam, 1998.
- I. M. Johnstone. On the distribution of the largest eigenvalue in principal components analysis. Ann. Statist. 29: 295–327, 2001.
- J. Kovacevic, P. Dragotti, and V. Goyal. Filter bank frame expansions with erasures. IEEE Trans. on Information Theory, 48: 1439–1450, 2002.
- M. Ledoux. The concentration of measure phenomenon. Mathematical Surveys and Monographs 89, American Mathematical Society, Providence, RI, 2001.
- M. A. Lifshits, Gaussian random functions. Mathematics and its Applications, 322. Kluwer Academic Publishers, Dordrecht, 1995.
- V. A. Marchenko, and L. A. Pastur. Distribution of eigenvalues in certain sets of random matrices. Mat. Sb. (N.S.) 72: 407–535, 1967 (in Russian).
- J. Matousek. Lectures on discrete geometry. Graduate Texts in Mathematics, 212. Springer-Verlag, New York, 2002.
- S. Mendelson. Geometric parameters in learning theory. Geometric aspects of functional analysis. Lecture Notes in Mathematics 1850: 193–235, Springer, Berlin, 2004.
- B. K. Natarajan. Sparse approximate solutions to linear systems. SIAM J. Comput. 24: 227–234, 1995.
- M. Rudelson, and R. Vershynin. Geometric approach to error correcting codes and reconstruction of signals. Submitted, 2005. Available on the ArXiV preprint server: math.FA/0502299.
- S. J. Szarek. Condition numbers of random matrices. J. Complexity 7:131–149, 1991.
- J. Tropp. Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inform. Theory 50(10): 2231–2242, October 2004.
- J. Tropp. Just relax: convex programming methods for subset selection and sparse approximation. ICES Report 04-04, UT-Austin, 2004.
