Basis pursuit
Basis pursuit is the mathematical optimization problem of the form
$$\min_{x} \|x\|_{1} \quad \text{subject to} \quad y = Ax,$$
where x is an N-dimensional solution vector (the signal), y is an M-dimensional vector of observations (the measurements), A is an M × N transform matrix (usually called the measurement matrix), and M < N. The variant of basis pursuit that instead minimizes the L0 norm (the number of nonzero entries of x) is NP-hard.
It is usually applied in cases where there is an underdetermined system of linear equations y = Ax that must be exactly satisfied, and the sparsest solution in the L1 sense is desired.
When it is desirable to trade off exact equality of Ax and y in exchange for a sparser x, basis pursuit denoising is preferred.
Basis pursuit problems can be converted to linear programming problems in polynomial time and vice versa, making the two types of problems polynomially equivalent.
Equivalence to Linear Programming
A basis pursuit problem can be converted to a linear programming problem by first noting that
$$\begin{aligned}\|x\|_{1} &= |x_{1}| + |x_{2}| + \ldots + |x_{n}| \\ &= (x_{1}^{+} + x_{1}^{-}) + (x_{2}^{+} + x_{2}^{-}) + \ldots + (x_{n}^{+} + x_{n}^{-})\end{aligned}$$
where $x_{i}^{+}, x_{i}^{-} \geq 0$. This construction is derived from the constraint $x_{i} = x_{i}^{+} - x_{i}^{-}$, where the value of $|x_{i}|$ is intended to be stored in $x_{i}^{+}$ or $x_{i}^{-}$ depending on whether $x_{i}$ is greater or less than zero, respectively. Although a range of $x_{i}^{+}$ and $x_{i}^{-}$ values can potentially satisfy this constraint, solvers using the simplex algorithm will find solutions where one or both of $x_{i}^{+}$ or $x_{i}^{-}$ is zero, resulting in the relation $|x_{i}| = x_{i}^{+} + x_{i}^{-}$.
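The split can be illustrated with a minimal NumPy sketch (the example vector is an arbitrary choice for illustration):

```python
import numpy as np

x = np.array([3.0, -1.5, 0.0, 2.0])

# Split x into non-negative parts so that x = x_plus - x_minus.
x_plus = np.maximum(x, 0.0)    # x_i^+: positive part
x_minus = np.maximum(-x, 0.0)  # x_i^-: negative part

# With this choice at most one of x_i^+, x_i^- is nonzero for each i,
# so their sum recovers |x_i|, and summing over i gives the L1 norm.
assert np.allclose(x_plus - x_minus, x)
assert np.allclose(x_plus + x_minus, np.abs(x))
print(np.sum(x_plus + x_minus))  # 6.5
```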
From this expansion, the problem can be recast in canonical form as:
$$\begin{aligned}
&\text{Find a vector} && (\mathbf{x}^{+}, \mathbf{x}^{-}) \\
&\text{that minimizes} && \mathbf{1}^{T}\mathbf{x}^{+} + \mathbf{1}^{T}\mathbf{x}^{-} \\
&\text{subject to} && A\mathbf{x}^{+} - A\mathbf{x}^{-} = \mathbf{y} \\
&\text{and} && \mathbf{x}^{+}, \mathbf{x}^{-} \geq \mathbf{0}.
\end{aligned}$$
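This reduction can be sketched with SciPy's general-purpose LP solver; the problem sizes, random seed, and choice of nonzero entries below are arbitrary choices for illustration:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
M, N = 15, 30
A = rng.standard_normal((M, N))

# A sparse ground-truth signal and its measurements y = A x.
x_true = np.zeros(N)
x_true[[2, 11, 25]] = [1.5, -2.0, 0.7]
y = A @ x_true

# Canonical LP: minimize 1^T x_plus + 1^T x_minus
# subject to [A, -A] @ (x_plus; x_minus) = y and x_plus, x_minus >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))

# Recover x from the two non-negative halves of the LP solution.
x_hat = res.x[:N] - res.x[N:]
print(np.allclose(A @ x_hat, y, atol=1e-6))  # True: constraints hold
# With enough measurements relative to the sparsity level, x_hat
# typically coincides with the sparse signal x_true.
```

The L1 objective value of the LP solution can be no larger than that of any feasible point, so ‖x̂‖₁ ≤ ‖x_true‖₁ always holds; exact recovery of x_true additionally requires suitable conditions on A, as studied in compressed sensing.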
See also
- Basis pursuit denoising
- Compressed sensing
- Frequency spectrum
- Group testing
- Lasso (statistics)
- Least-squares spectral analysis
- Matching pursuit
- Sparse approximation
References & further reading
- Stephen Boyd, Lieven Vandenberghe: Convex Optimization, Cambridge University Press, 2004, ISBN 9780521833783, p. 337
- Simon Foucart, Holger Rauhut: A Mathematical Introduction to Compressive Sensing. Springer, 2013, ISBN 9780817649487, pp. 77–110
External links
- Shaobing Chen, David Donoho:
- Terence Tao: . Mahler Lecture Series (slides)