Wednesday, September 12, 2007

Compressed Sensing: Oil, Curvelets, Missing Data and a Condition on Strong CS


There is a new batch of Compressed Sensing articles showing up on the Rice Compressed Sensing site. Mark Davenport tells me that the site is going to change at some point and will allow for an RSS feed. This is good news.

Three subjects caught my attention in this batch.

First, as usual, the oil people have been pushing the envelope in devising and using the newest applied mathematics to get things done, so I was not overly surprised to see curvelets being used to figure out subsurface layers from seismic measurements. In Non-parametric seismic data recovery with curvelet frames, Felix J. Herrmann and Gilles Hennenfent describe a curvelet-based recovery of seismic signals by a sparsity-promoting inversion technique. This is interesting because it is the first time I have seen an additional matrix inserted into the inverse problem in order to get rid of data that the physics shows would otherwise lead to an ill-posed problem.
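
In rough terms (my notation, not necessarily the one used in the paper), the recovery program looks something like

min ||x||1 subject to ||R S^H x - y||2 ≤ ε,

where S^H is the curvelet synthesis operator, R is the restriction matrix that keeps only the traces actually acquired, and y is the recorded (incomplete) seismic data. It is this extra matrix R that removes the offending data from the inversion.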

With regard to the missing data problem mentioned earlier, Yin Zhang at Rice asks, and begins to answer, the question "When is missing data recoverable?" He shows how the data you are given in the missing data problem can be construed as random projections of the real data, with the hope that from these measurements you can do a good job of inverting the problem at hand.
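
To make that view concrete, here is a minimal numpy sketch of my own (not Yin's code): observing a subset of entries of a signal is exactly applying a row-selection matrix, i.e. a very structured projection, to the complete data.

import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 4                                  # full length, number of observed entries
x = rng.standard_normal(n)                    # the complete ("real") data
keep = rng.choice(n, size=m, replace=False)   # indices that were not lost

M = np.eye(n)[keep]                           # selection matrix: rows of the identity
y = M @ x                                     # the measurements the problem hands you
assert np.allclose(y, x[keep])                # same thing as plain indexing

Recovering x from y then amounts to inverting an underdetermined system, which is where the compressed sensing machinery comes in.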

Yin is also a co-author of the Fixed-Point Continuation (FPC) Matlab code, an algorithm for large-scale image and data processing applications of l1-minimization.

General l1-regularized minimization problems of the form

(1) min ||x||1 + μ f(x),

where f is a convex, but not necessarily strictly convex, function, can be solved with a globally convergent fixed-point iteration scheme.
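
To give a flavor of how such a fixed-point scheme works, here is a bare-bones numpy sketch of mine for the common choice f(x) = 0.5 ||Ax - b||^2 (the real FPC code adds continuation on μ and many other refinements):

import numpy as np

def soft(z, t):
    """Soft-thresholding: the shrinkage operator attached to the l1 term."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fixed_point_l1(A, b, mu, iters=500):
    """Minimize ||x||1 + mu * 0.5 * ||Ax - b||^2 by shrinkage iterations."""
    tau = 1.0 / (mu * np.linalg.norm(A, 2) ** 2)  # step size below 1/L, L = mu*||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = mu * A.T @ (A @ x - b)             # gradient of the smooth part
        x = soft(x - tau * grad, tau)             # gradient step, then shrink
    return x

Calling fixed_point_l1(A, b, mu=100.0) returns a sparse approximate minimizer of (1); a larger mu weighs the data fit more heavily against the l1 term.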

Last but not least, in A remark on compressed sensing, Boris S. Kashin and Vladimir N. Temlyakov give, in their equation 1.4, a condition relating the L1 norm, the L2 norm, and the sparsity number that I had never seen before, and that permits what they call Weak or Strong Compressed Sensing. What we have generally heard before seemed to involve only the sparsity number, so I think this is new.

2 comments:

Anonymous said...

There is one new example (Chapter 4) on compressed sensing in the book by Dattorro:

http://convexoptimization.com

It shows how to solve a minimum rank problem (which is like l_0 projection).

Igor said...

Thank you very much. This is also an example that I mentioned in a previous entry:

http://arxiv.org/abs/0706.4138

Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
Authors: Benjamin Recht, Maryam Fazel, Pablo A. Parrilo

http://nuit-blanche.blogspot.com/2007/08/cs-is-not-just-compressed-sampling-nor.html

They make a parallel between the sparsity issue and the rank minimization issue, and they use the distance problem as one of their examples as well. Dattorro goes deeper into the explanation, which is very nice.
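
The parallel is easy to see numerically (a toy illustration of mine): the rank of a matrix is the l_0 "norm" of its singular value spectrum, while the nuclear norm is the l_1 norm of that same spectrum, hence the convex surrogate.

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # a rank-3 matrix

s = np.linalg.svd(X, compute_uv=False)  # singular values
rank = int(np.sum(s > 1e-10))           # l_0 of the spectrum
nuclear = float(np.sum(s))              # l_1 of the spectrum
print(rank, nuclear)                    # prints 3 and the nuclear norm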

It also looks like the machine learning folks have come up with speedier algorithms to do this, so there may be a use for this type of algorithm in compressed sensing applications.

Thanks for the note.

Igor.
