Thursday, July 10, 2008

CS: Restricted isometry constants where lp-minimization can fail for 0 < p ≤ 1, and a talk at Rice.


We investigate conditions under which the solution of an underdetermined linear system with minimal ℓp norm, 0 < p ≤ 1, is guaranteed to be also the sparsest one. Our results highlight the pessimistic nature of sparse recovery analysis when recovery is predicted based on the restricted isometry constants (RIC) of the associated matrix. We construct matrices with RIC δ2m arbitrarily close to 1/√2 ≈ 0.707 where sparse recovery with p = 1 fails for at least one m-sparse vector. This indicates that there is limited room for improving over the best known positive results of Foucart and Lai (mentioned here), which guarantee that ℓ1-minimisation recovers all m-sparse vectors for any matrix with δ2m < 2(3−√2)/7 ≈ 0.4531. Another consequence of our construction is that recovery conditions expressed uniformly for all matrices in terms of the RIC must require that all 2m-column submatrices are extremely well conditioned (condition numbers less than 2.5). In contrast, we also construct matrices with δ2m arbitrarily close to one and δ2m+1 = 1 where ℓ1-minimisation succeeds for any m-sparse vector. This illustrates the limits of the RIC as a tool to predict the behaviour of ℓ1 minimisation. These constructions are a by-product of tight conditions for ℓp recovery (0 < p ≤ 1) with matrices of unit spectral norm, which are expressed in terms of the minimal singular values of 2m-column submatrices. The results show that, compared to ℓ1-minimisation, ℓp-minimisation recovery failure is only slightly delayed in terms of the RIC values. Furthermore, in this case the minimisation is nonconvex and it is important to consider the specific minimisation algorithm being used. We show that when ℓp optimisation is attempted using an iterative reweighted ℓ1 scheme, failure can still occur for δ2m arbitrarily close to 1/√2.
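For readers less familiar with the notation, here is a quick reminder, in LaTeX, of the two objects the abstract relies on. This is my paraphrase of the standard definitions (the symbol Δ_p for the decoder is mine), not text from the paper:

% Restricted isometry constant of order k: the smallest delta_k such that,
% for every k-sparse vector x,
\[
(1 - \delta_k)\,\|x\|_2^2 \;\le\; \|\Phi x\|_2^2 \;\le\; (1 + \delta_k)\,\|x\|_2^2 .
\]
% The lp decoder, 0 < p <= 1, applied to the measurements y = Phi x:
\[
\Delta_p(y) \;=\; \operatorname*{arg\,min}_{z \in \mathbb{R}^N} \|z\|_p
\quad \text{subject to} \quad \Phi z = y .
\]

Recovery of an m-sparse x means Δ_p(Φx) = x, and the conditions are stated in terms of δ2m because the difference of two m-sparse vectors is 2m-sparse.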

Remi Gribonval also sent me the following summarizing comments on the subject:
The main contributions are as follows:

1) Weakness of the prediction of l1 recovery in terms of RIC

We highlight the pessimistic nature of the RIC as a predictor of sparse recovery by constructing matrices with RIC delta_2m ~ 0.707 where l1 minimization fails to recover at least one m-sparse vector, as well as tight frames with RIC delta_2m ~ 1 where l1 is nevertheless always successful.

2) Tight characterizations of lp recovery

Furthermore, we obtain tight characterizations of the success of lp recovery, 0 < p ≤ 1, in terms of asymmetric versions of the RIC. The results show that lp failure is only slightly delayed compared to l1 failure.

3) Iterative reweighted l1

Last, we prove that when an iterative re-weighted l1 scheme is used to attempt to solve the non-convex lp-minimization problem, failure can still occur for an RIC arbitrarily close to 0.707.

While this may seem to contradict empirical evidence indicating the substantial benefits of iterative re-weighted l1 over plain l1, it suggests that the typical performance of these algorithms is not fully captured by a worst-case analysis based on the RIC, and that a more subtle, coefficient-dependent analysis is required to fully understand it.
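To make the third point concrete, here is a minimal sketch, in Python with cvxpy, of the kind of iterative re-weighted l1 scheme being referred to. It is my own illustration, not code from the paper: the weight update (|x_i| + eps)^(p-1) is one common way of attacking a smoothed lp objective, and the names A, y, p, eps and n_iter are mine.

import numpy as np
import cvxpy as cp

def reweighted_l1(A, y, p=0.5, eps=1e-3, n_iter=5):
    """Sketch of an iterative re-weighted l1 scheme for lp minimization.

    Each iteration solves the convex weighted l1 problem
        min sum_i w_i |x_i|   subject to   A x = y,
    then updates w_i = (|x_i| + eps)^(p - 1), which linearizes the
    smoothed lp penalty sum_i (|x_i| + eps)^p around the current iterate.
    """
    n = A.shape[1]
    w = np.ones(n)                      # first pass is plain l1 minimization
    x_val = np.zeros(n)
    for _ in range(n_iter):
        x = cp.Variable(n)
        problem = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))),
                             [A @ x == y])
        problem.solve()
        x_val = x.value
        w = (np.abs(x_val) + eps) ** (p - 1.0)   # re-weight toward lp
    return x_val

The point of the result above is that even a scheme of this kind can, in the worst case, fail for delta_2m arbitrarily close to 1/√2, which is why a coefficient-dependent analysis is needed to explain its good typical behaviour.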




Presentation at Rice (Thursday, July 17, 2008, 4:00 PM to 5:00 PM, 1049 Duncan Hall, Rice University, 6100 Main St, Houston, Texas) by Rachel Ward entitled: Cross Validation in Compressed Sensing via the Johnson-Lindenstrauss Lemma.


Compressed Sensing decoding algorithms aim to reconstruct an unknown N-dimensional vector x from m < N linear measurements, with an assumed sparsity constraint on x. All such algorithms are presently iterative in nature, producing a sequence of approximations (s_j) until a certain algorithm-specific stopping criterion is reached at iteration J, at which point the estimate x* = s_J is returned as an approximation to x. In many algorithms, the error ||x - x*|| measured in the Hilbert norm is bounded above by a function of the error between x and the best k-term approximation to x. However, as x is unknown, such estimates provide no numerical bounds on the error.

In this talk, we demonstrate that tight numerical upper and lower bounds on the error ||x - s_j|| for the iterates of a compressed sensing decoding algorithm are attainable by simply reserving 4 log p of the original m measurements to cross-validate the sequence of approximants s_j obtained from the remaining measurements, using the Johnson-Lindenstrauss lemma as the underlying tool. As a result, a numerical upper bound on the error between x and the best k-term approximation to x can be estimated at almost no cost. Our observation has applications outside of Compressed Sensing as well.
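To fix ideas, here is a small Python sketch of the cross-validation mechanism described above. It is my own illustration, not code from the talk: the names Phi, y, decode and r are mine, and the exact rescaling of the held-out rows (which is what the Johnson-Lindenstrauss argument controls) is glossed over.

import numpy as np

def cv_decode(Phi, y, decode, r):
    """Cross-validated compressed sensing decoding (sketch).

    Phi, y : m x N measurement matrix and measurements y = Phi x
    decode : any iterative CS decoder; decode(A, b) returns the list of
             iterates s_1, ..., s_J computed from measurements (A, b)
    r      : number of measurements held out for cross validation

    The held-out rows act as a Johnson-Lindenstrauss map, so the residual
    ||y_cv - Phi_cv s_j|| concentrates (up to a normalization constant
    depending on how Phi is scaled) around the true error ||x - s_j||.
    """
    A, b = Phi[:-r], y[:-r]            # rows used for decoding
    Phi_cv, y_cv = Phi[-r:], y[-r:]    # rows reserved for cross validation
    iterates = decode(A, b)
    errors = [np.linalg.norm(y_cv - Phi_cv @ s) for s in iterates]
    best = int(np.argmin(errors))      # pick the iterate with smallest estimate
    return iterates[best], errors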





Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M. Before Scraping Wonderland as seen from Phoenix, Sol 45.

