Saturday, July 11, 2009

Gradient Descent with Sparsification: An Iterative Algorithm for Sparse Recovery with Restricted Isometry Property


This paper addresses the following optimization problem, which arises in compressive sampling:
\min_x \quad \| x \|_0 \qquad \text{s.t.} \quad \Phi x = y,
which is usually approximated by the convex relaxation
\min_x \quad \| x \|_1 \qquad \text{s.t.} \quad \Phi x = y.
Quite a few algorithms already solve this optimization under different conditions. This paper proposes the following gradient-based iteration:
x \leftarrow H_s \left( x + \frac{1}{\gamma} \cdot \Phi^\top (y - \Phi x)\right)
where the hard-thresholding operator H_s keeps the s largest-magnitude elements of x and sets the rest to 0.
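
A minimal NumPy sketch of this iteration, for concreteness. The step size gamma = 2, the Gaussian Phi, and the fixed iteration budget are my own illustrative assumptions (the post leaves gamma unspecified):

import numpy as np

def hard_threshold(x, s):
    # H_s: keep the s largest-magnitude entries of x, zero out the rest
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]  # indices of the s largest |x_i|
    out[idx] = x[idx]
    return out

def grades(Phi, y, s, gamma=2.0, n_iter=100):
    # Iterate x <- H_s(x + (1/gamma) * Phi^T (y - Phi x))
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + Phi.T @ (y - Phi @ x) / gamma, s)
    return x

# Toy usage: recover a 5-sparse vector from 80 Gaussian measurements.
rng = np.random.default_rng(0)
m, n, s = 80, 200, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = Phi @ x_true
x_hat = grades(Phi, y, s)
print("recovery error:", np.linalg.norm(x_hat - x_true))

Note that, as the update formula shows, each iteration costs only one multiplication by \Phi, one by \Phi^\top, and a partial sort for H_s.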

The theoretical results are a small computation cost (K per iteration), a fast convergence rate (2L / \log(\frac{1 - \delta_{2s}}{2\delta_{2s}})), and a lenient recovery condition (\delta_{2s} < \frac{1}{3}). It might be a good algorithm if we later want to run some lasso experiments.
