Monday, July 12, 2010

CS: Higher order BP, Robust Sampling with impulsive noise, Reed-Muller deterministic CS matrices, Greedy algos convergence rate and more.

We have some interesting papers starting the week, enjoy:

High-Order Methods for Basis Pursuit by Tom Goldstein and Simon Setzer. The abstract reads:
We introduce a second-order method for solving L1 regularized minimization problems. In most situations, this method finds the exact solution to an L1 regularized minimization in a finite number of steps. This is done by exploiting the fact that, when recovering a sparse signal, the basis pursuit problem is equivalent to a quadratic minimization problem involving a low-rank matrix. The novel method takes advantage of this property by applying a conjugate gradient partan scheme to the underlying low-rank problem. This new algorithm has several advantages over other basis-pursuit schemes. In particular, it requires no time-step parameters as input, and the speed of the algorithm is almost completely insensitive to the condition number of the problem. This allows for the fast solution of basis pursuit problems involving convolution matrices.
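For readers who want to play with the underlying problem: basis pursuit is the program min ||x||_1 subject to Ax = b, which can be recast as a linear program. The sketch below solves that LP with SciPy's generic solver — this is not the authors' conjugate gradient partan scheme, just a baseline illustrating the problem being solved, with a made-up sensing matrix and signal:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. Ax = b by splitting x = u - v with u, v >= 0,
    turning the l1 objective into the linear objective sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

# Recover a 1-sparse signal from 8 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16))
x_true = np.zeros(16)
x_true[3] = 2.0
x_hat = basis_pursuit(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))   # small: exact recovery up to solver tolerance
```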

Robust Sampling and Reconstruction Methods for Sparse Signals In Presence of Impulsive Noise by Rafael Carrillo, Kenneth Barner and Tuncer Can Aysal. The abstract reads:
Recent results in compressed sensing show that a sparse or compressible signal can be reconstructed from a few incoherent measurements. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with current reconstruction algorithms, fail to recover a close approximation of the signal. In this paper, we propose robust methods for sampling and reconstructing sparse signals in the presence of impulsive noise. To solve the problem of impulsive noise embedded in the underlying signal prior to the measurement process, we propose a robust nonlinear measurement operator based on the weighted myriad estimator. In addition, we introduce a geometric optimization problem based on l_1 minimization employing a Lorentzian norm constraint on the residual error to recover sparse signals from noisy measurements. Analysis of the proposed methods shows that in impulsive environments, even when the noise possesses infinite variance, we have a finite reconstruction error, and furthermore these methods yield successful reconstruction of the desired signal. Simulations demonstrate that the proposed methods significantly outperform commonly employed compressed sensing sampling and reconstruction techniques in impulsive environments, while providing comparable performance in less demanding, light-tailed environments.
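The Lorentzian norm mentioned in the abstract is easy to appreciate numerically: it grows only logarithmically in each residual component, so a single impulse cannot dominate the data-fit term the way it does under an l2 constraint. A small sketch (the gamma value and residual entries are illustrative choices, not from the paper):

```python
import numpy as np

def lorentzian_norm(u, gamma=1.0):
    """Lorentzian (LL2) norm: sum_i log(1 + u_i^2 / gamma^2).
    Each component contributes only logarithmically, so one gross
    outlier barely moves the total, unlike the squared l2 norm."""
    return np.sum(np.log1p((u / gamma) ** 2))

r = np.array([0.1, -0.2, 0.05])
r_impulse = np.append(r, 1000.0)        # one gross outlier
print(lorentzian_norm(r_impulse) - lorentzian_norm(r))   # log(1 + 1e6), about 13.8
print(np.sum(r_impulse**2) - np.sum(r**2))               # a full 1e6 under squared l2
```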

Using Reed-Muller codes as deterministic compressed sensing matrices for image reconstruction by Kangyu Ni, Somantika Datta, Prasun Mahanti, Svetlana Roudenko, and Douglas Cochran. The abstract reads:
An image reconstruction algorithm using compressed sensing (CS) with deterministic matrices of second-order Reed-Muller (RM) sequences is introduced. The 1D algorithm of Howard et al. using CS with RM sequences suffers significant loss in speed and accuracy when the degree of sparsity is not high, making it impractical for 2D signals. This paper describes an efficient 2D CS algorithm using RM sequences, provides medical image reconstruction examples, and compares it with the original 2D CS using noiselets. This algorithm entails several innovations that enhance its suitability for images: initial best approximation, a greedy algorithm for the nonzero locations, and a new approach in the least-squares step. These enhancements improve fidelity, execution time, and stability in the context of image reconstruction.
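The "greedy algorithm for the nonzero locations" followed by a least-squares step is in the same family as orthogonal matching pursuit. For context, here is a generic OMP loop — this is not the authors' RM-specific algorithm, and the matrix below is Gaussian rather than a Reed-Muller construction:

```python
import numpy as np

def omp(A, y, k):
    """Generic orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then re-fit coefficients by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(40)
x_true[[5, 17]] = [1.0, -0.5]
x_hat = omp(A, A @ x_true, 2)
print(np.max(np.abs(x_hat - x_true)))   # tiny: correct support found, then exact least squares
```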
Of note, Prasun works with LROC.


Convergence rates for greedy algorithms in reduced basis methods by Peter Binev, Albert Cohen, Wolfgang Dahmen, Ronald DeVore, Guergana Petrova, and Przemyslaw Wojtaszczyk. The abstract reads:
The reduced basis method was introduced for the accurate online evaluation of solutions to a parameter dependent family of elliptic partial differential equations. Abstractly, it can be viewed as determining a "good" n-dimensional space H_n to be used in approximating the elements of a compact set F in a Hilbert space H. One, by now popular, computational approach is to find H_n through a greedy strategy. It is natural to compare the approximation performance of the H_n generated by this strategy with that of the Kolmogorov widths d_n(F), since the latter gives the smallest error that can be achieved by subspaces of fixed dimension n. The first such comparisons, given in [1], show that the approximation error σ_n(F) := dist(F, H_n) obtained by the greedy strategy satisfies σ_n(F) ≤ C n 2^n d_n(F). In this paper, various improvements of this result will be given. Among these, it is shown that whenever d_n(F) ≤ M n^(−α), for all n > 0 and some M, α > 0, we also have σ_n(F) ≤ C_α M n^(−α) for all n > 0, where C_α depends only on α. Similar results are derived for generalized exponential rates of the form M e^(−a n^α). The exact greedy algorithm is not always computationally feasible, and a commonly used computationally friendly variant can be formulated as a "weak greedy algorithm". The results of this paper are established for this version as well.
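The greedy strategy analyzed here is simple to state: at each step, add to the basis the element of F that the current space H_n approximates worst. A toy sketch on a finite discrete F (the set and its decay profile are invented for illustration):

```python
import numpy as np

def greedy_basis(F, n_steps):
    """Greedy selection for reduced bases: F is a (num_elements, dim) array;
    at each step pick the row farthest (in l2) from the span of those
    already selected, tracking the worst-case error along the way."""
    selected, errors = [], []
    Q = np.zeros((F.shape[1], 0))               # orthonormal basis of H_n
    for _ in range(n_steps):
        proj = F @ Q @ Q.T                      # projection of each f onto H_n
        dist = np.linalg.norm(F - proj, axis=1)
        errors.append(dist.max())               # worst-case error on this discrete F
        i = int(np.argmax(dist))
        selected.append(i)
        r = F[i] - Q @ (Q.T @ F[i])             # Gram-Schmidt step
        Q = np.hstack([Q, (r / np.linalg.norm(r))[:, None]])
    return selected, errors

rng = np.random.default_rng(2)
F = rng.standard_normal((50, 10)) @ np.diag(2.0 ** -np.arange(10))  # decaying directions
_, errs = greedy_basis(F, 5)
print(all(errs[i] >= errs[i + 1] for i in range(4)))   # nested spaces: error never grows
```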
We only have abstracts of the following:

Stable Takens' Embedding for Linear Dynamical Systems by H.L. Yap and Christopher Rozell. The abstract reads:
Takens' Embedding Theorem gives theoretical justification for the use of delay coordinate maps in characterizing and predicting nonlinear dynamical systems. However, in practice imperfections such as system and measurement noise may render these results unusable. In this paper, we consider conditions allowing for a stable version of Takens' Embedding Theorem in the restricted case of linear dynamical systems. Our work is inspired by results from the field of Compressive Sensing, where signals from a low-dimensional signal family residing in a high-dimensional space can be robustly recovered from compressive measurements only if the measurements form a stable embedding of the signal family. In particular, we show that a stable embedding of the attractor of the dynamical system is indeed possible, and give sufficient conditions on the number of delays and the observation function for the delay coordinate maps to be stabilized. In addition, we also show that when the attractor is an ellipse, the conditioning of the embedding is lower bounded by a positive constant dependent only on the dynamical system and not within control of the experimentalist. We illustrate our results with an example linear dynamical system converging to an elliptical attractor. Our analysis in this paper will give insights into stable Takens' Embedding of general dynamical systems.
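As a reminder, the delay coordinate map at the heart of Takens' theorem just stacks time-shifted copies of a scalar observation into vectors. A minimal sketch (the series and delay parameters below are arbitrary choices for illustration):

```python
import numpy as np

def delay_map(x, num_delays, tau=1):
    """Takens-style delay coordinate map: stack num_delays copies of the
    scalar series x, each shifted by tau samples, as columns of a matrix."""
    n = len(x) - (num_delays - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(num_delays)])

# Scalar observation of a system on an elliptical orbit
t = np.linspace(0, 20, 400)
x = np.cos(2 * t)                       # observation function: one coordinate only
X = delay_map(x, num_delays=3, tau=5)
print(X.shape)                          # (390, 3): each row is one delay vector
```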
Analog sparse approximation for compressed sensing recovery by Christopher Rozell and P. Garrigues. The abstract reads:
Non-smooth convex optimization programs such as L1 minimization produce state-of-the-art results in many signal and image processing applications. Despite the progress in algorithms to solve these programs, they are still too computationally expensive for many real-time applications. Using recent results describing dynamical systems that efficiently solve these types of programs, we demonstrate through simulation that custom analog IC implementations of this system could potentially perform compressed sensing recovery for real-time applications approaching 500 kHz. Furthermore, we show that this architecture can implement several other optimization programs of recent interest, including reweighted L1 and group L1 minimization.
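The dynamical systems alluded to are in the spirit of Rozell et al.'s locally competitive algorithm (LCA). The sketch below is a crude forward-Euler software simulation of LCA-style dynamics for the LASSO objective — a stand-in, with made-up parameters, for what the authors propose doing in analog hardware:

```python
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(Phi, y, lam=0.05, dt=0.05, steps=2000):
    """Euler-integrate LCA-style dynamics for min 0.5||y - Phi a||^2 + lam||a||_1:
    u' = Phi^T y - u - (Phi^T Phi - I) a,  with output a = soft_threshold(u, lam)."""
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition between nodes
    b = Phi.T @ y                            # feedforward drive
    u = np.zeros(Phi.shape[1])
    for _ in range(steps):
        a = soft_threshold(u, lam)
        u += dt * (b - u - G @ a)
    return soft_threshold(u, lam)

rng = np.random.default_rng(3)
Phi = rng.standard_normal((15, 30))
Phi /= np.linalg.norm(Phi, axis=0)
a_true = np.zeros(30)
a_true[[4, 21]] = [1.5, -1.0]
a_hat = lca(Phi, Phi @ a_true)
print(np.argsort(-np.abs(a_hat))[:2])   # indices of the two largest coefficients
```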

Credit: ESA 2010 MPS for OSIRIS Team MPS / UPD / LAM / IAA / RSSD / INTA / UPM / DASP / IDA, via the Planetary society blog. Lutetia - and Saturn!
