Friday, November 20, 2009

CS: Brain TF, Sampling subspaces, MRFM, Wideband Signal Acquisition Receivers, Fine Grained Processor Performance Monitoring.


Have you ever wondered how the transfer function between the brain's electrical circuitry and the rest of the body's sensorimotor system is evaluated? You can get a sense of how this calibration is done in the following video of a surgery performed on a Parkinson's patient, where simple movements can be seen in the EEG-like readings from the brain. It looks as though only specific parts of the brain are responsible for specific movements. Hmmm, it looks like this technique could be improved by some blind sparse deconvolution, and this is also a clear (at least to me) sign that a compressive EEG system should not be difficult to build for a Brain-Computer Interface.

Oh well, let us go back to the new findings on the interwebs. I found the following potentially important paper on arxiv: Sampling and reconstructing signals from a union of linear subspaces by Thomas Blumensath. The abstract reads:
In this note we study the problem of sampling and reconstructing signals which are assumed to lie on or close to one of several subspaces of a Hilbert space. Importantly, we here consider a very general setting in which we allow infinitely many subspaces in infinite dimensional Hilbert spaces. This general approach allows us to unify many results derived recently in areas such as compressed sensing, affine rank minimisation and analog compressed sensing. Our main contribution is to show that a conceptually simple iterative projection algorithm is able to recover signals from a union of subspaces whenever the sampling operator satisfies a bi-Lipschitz embedding condition. Importantly, this result holds for all Hilbert spaces and unions of subspaces, as long as the sampling procedure satisfies the condition for the set of subspaces considered. In addition to recent results for finite unions of finite dimensional subspaces and infinite unions of subspaces in finite dimensional spaces, we also show that this bi-Lipschitz property can hold in an analog compressed sensing setting in which we have an infinite union of infinite dimensional subspaces living in infinite dimensional space.
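
For readers who want a feel for what such an iterative projection scheme looks like, here is a small numpy sketch specialized to the most familiar union of subspaces, the set of k-sparse vectors, in which case the iteration reduces to iterative hard thresholding. This is only a toy illustration of the idea, not the general Hilbert-space algorithm analyzed in the paper, and all the problem sizes below are made up:

    import numpy as np

    def project_k_sparse(x, k):
        # projection onto the union of k-sparse coordinate subspaces:
        # keep the k largest-magnitude entries, zero the rest
        out = x.copy()
        out[np.argsort(np.abs(x))[:-k]] = 0.0
        return out

    def iterative_projection(y, A, k, n_iter=200):
        # gradient step on ||y - A x||^2 followed by the projection above
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = project_k_sparse(x + step * A.T @ (y - A @ x), k)
        return x

    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    print(np.linalg.norm(iterative_projection(A @ x_true, A, k) - x_true))
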
While I was looking at the most recent entries on the Rice Compressive Sensing site, I noticed that I probably left out some recent entries. Here they are:

On the incoherence of noiselet and Haar bases by Tomas Tuma, Paul Hurley. The abstract reads:
Noiselets are a family of functions completely uncompressible using Haar wavelet analysis. The resultant perfect incoherence to the Haar transform, coupled with the existence of a fast transform has resulted in their interest and use as a sampling basis in compressive sampling. We derive a recursive construction of noiselet matrices and give a short matrix-based proof of the incoherence.
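
The paper derives the noiselet matrices through a recursive construction that I will not attempt to reproduce here, but the incoherence being measured is easy to compute once both bases are in hand. Here is a small numpy sketch that builds the Haar matrix recursively and evaluates the mutual coherence against a random orthonormal basis standing in for the noiselet basis (mu = 1 would be perfect incoherence, sqrt(n) the worst case):

    import numpy as np

    def haar_matrix(n):
        # orthonormal Haar transform (rows are the basis vectors), n a power of two
        if n == 1:
            return np.array([[1.0]])
        h = haar_matrix(n // 2)
        return np.vstack([np.kron(h, [1.0, 1.0]),
                          np.kron(np.eye(n // 2), [1.0, -1.0])]) / np.sqrt(2.0)

    def mutual_coherence(Phi, Psi):
        # mu = sqrt(n) * max |<phi_i, psi_j>| over all pairs of basis vectors
        return np.sqrt(Phi.shape[0]) * np.max(np.abs(Phi.conj() @ Psi.T))

    n = 64
    Psi = haar_matrix(n)
    Phi, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((n, n)))  # stand-in basis
    print(mutual_coherence(Phi, Psi))   # a few units, well below the maximum sqrt(n) = 8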

On the Applicability of Compressive Sampling in Fine Grained Processor Performance Monitoring by Tomas Tuma, Sean Rooney, Paul Hurley. The abstract reads:
Real-time performance analysis of processor behaviour requires the efficient gathering of micro-architectural information from processor cores. Such information can be expected to be highly structured, allowing it to be compressed, but the computational burden of conventional compression techniques excludes their use in this environment. We consider the use of new mathematical techniques that allow a signal to be compressed and recovered from a relatively small number of samples. These techniques, collectively termed Compressive Sampling, are asymmetric in that compression is simple, but recovery is complex. This makes them appropriate for applications in which the simplicity of the sensor can be offset against complexity at the ultimate recipient of the sensed information. We evaluate the practicality of using such techniques in the transfer of signals representing one or more micro-architectural counters from a processor core. We show that compressive sampling can be used to recover such performance signals, and we evaluate the trade-off between efficiency, accuracy and practicability across its variants.
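
To see the asymmetry the authors exploit, here is a toy numpy sketch: the "sensor" side merely accumulates the counter trace against random +/-1 sequences (cheap enough to imagine on-chip), while the "recipient" side runs a greedy solver (orthogonal matching pursuit) offline. The DCT sparsity model, the sizes and the sensing matrix are my own illustrative assumptions, not those of the paper:

    import numpy as np

    def omp(A, y, k):
        # Orthogonal Matching Pursuit: recover a k-sparse coefficient vector
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(6)
    n, m, k = 512, 64, 5                       # trace length, measurements, assumed sparsity

    # orthonormal DCT-II basis (rows) as the assumed sparsifying transform
    grid = np.arange(n)
    Psi = np.sqrt(2.0 / n) * np.cos(np.pi * np.outer(grid, grid + 0.5) / n)
    Psi[0, :] /= np.sqrt(2.0)

    coeffs = np.zeros(n)
    coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    trace = Psi.T @ coeffs                     # synthetic performance-counter signal

    Phi = rng.choice([-1.0, 1.0], (m, n))      # cheap measurement: +/-1 accumulations
    y = Phi @ trace
    recovered = Psi.T @ omp(Phi @ Psi.T, y, k)
    print(np.linalg.norm(recovered - trace) / np.linalg.norm(trace))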

Bayesian orthogonal component analysis for sparse representation by Nicolas Dobigeon, Jean-Yves Tourneret. The abstract reads:
This paper addresses the problem of identifying a lower dimensional space where observed data can be sparsely represented. This under-complete dictionary learning task can be formulated as a blind separation problem of sparse sources linearly mixed with an unknown orthogonal mixing matrix. This issue is formulated in a Bayesian framework. First, the unknown sparse sources are modeled as Bernoulli-Gaussian processes. To promote sparsity, a weighted mixture of an atom at zero and a Gaussian distribution is proposed as prior distribution for the unobserved sources. A non-informative prior distribution defined on an appropriate Stiefel manifold is elected for the mixing matrix. The Bayesian inference on the unknown parameters is conducted using a Markov chain Monte Carlo (MCMC) method. A partially collapsed Gibbs sampler is designed to generate samples asymptotically distributed according to the joint posterior distribution of the unknown model parameters and hyperparameters. These samples are then used to approximate the joint maximum a posteriori estimator of the sources and mixing matrix. Simulations conducted on synthetic data are reported to illustrate the performance of the method for recovering sparse representations. An application to sparse coding on under-complete dictionary is finally investigated.
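
For the curious, here is what the forward model of that abstract looks like in a few lines of numpy: Bernoulli-Gaussian ("spike-and-slab") sources mixed by a random orthonormal matrix. The hyperparameter values are made up, and the partially collapsed Gibbs sampler doing the actual inference is not reproduced:

    import numpy as np

    rng = np.random.default_rng(1)
    n_sources, n_obs, n_samples = 8, 16, 500
    w, sigma = 0.2, 1.0                        # weight of the Gaussian atom, slab std

    # Bernoulli-Gaussian sources: zero with probability 1 - w, Gaussian otherwise
    spikes = rng.random((n_sources, n_samples)) < w
    sources = spikes * rng.normal(0.0, sigma, (n_sources, n_samples))

    # random orthonormal mixing matrix (columns on the Stiefel manifold, via QR)
    H, _ = np.linalg.qr(rng.standard_normal((n_obs, n_sources)))
    observations = H @ sources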

Hierarchical Bayesian Sparse Image Reconstruction With Application to MRFM by Nicolas Dobigeon, Alfred O. Hero, and Jean-Yves Tourneret. The abstract reads:
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
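
Again, just to fix ideas, here is a numpy sketch of the generative model behind that prior: each pixel is zero with some probability and otherwise drawn from a positive exponential, then observed through a linear operator plus white Gaussian noise. The operator below is a random stand-in for the MRFM point spread function, the hyperparameters are arbitrary, and the Gibbs sampler is not reproduced:

    import numpy as np

    rng = np.random.default_rng(2)
    n_pixels, n_meas = 64 * 64, 1024
    w, scale, noise_std = 0.05, 1.0, 0.01      # assumed hyperparameter values

    # sparse, positive image: mass at zero mixed with a positive exponential
    active = rng.random(n_pixels) < w
    image = np.where(active, rng.exponential(scale, n_pixels), 0.0)

    # linear observation plus additive white Gaussian noise
    H = rng.standard_normal((n_meas, n_pixels)) / np.sqrt(n_meas)
    y = H @ image + noise_std * rng.standard_normal(n_meas)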

Application of Compressive Sensing to the Design of Wideband Signal Acquisition Receivers by John Treichler, Mark Davenport, Richard Baraniuk. The abstract reads:
Compressive sensing (CS) exploits the sparsity present in many signals to reduce the number of measurements needed for digital acquisition. With this reduction would come, in theory, commensurate reductions in the size, weight, power consumption, and/or monetary cost of both signal sensors and any associated communication links. This paper examines the use of CS in environments where the input signal takes the form of a sparse combination of narrowband signals of unknown frequencies that appear anywhere in a broad spectral band. We formulate the problem statement for such a receiver and establish a reasonable set of requirements that a receiver should meet to be practically useful. The performance of a CS receiver for this application is then evaluated in two ways: using the applicable (and still evolving) CS theory and using a set of computer simulations carefully constructed to compare the CS receiver against the performance expected from a conventional implementation. This sets the stage for work in a sequel that will use these results to produce comparisons of the size, weight, and power consumption of a CS receiver against an exemplar of a conventional design.
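
A toy version of the signal model is easy to write down: a handful of tones of unknown frequency sitting in a wide band, acquired with far fewer random measurements than Nyquist samples. The +/-1 projection below is only a stand-in for the analog front end the paper studies, and the sizes are illustrative; recovery would use any Fourier-domain sparse solver such as the iterative projection sketch earlier in this post:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 4096                                   # Nyquist-rate samples over the observation window
    k, m = 3, 400                              # number of tones, number of compressive measurements

    # sparse combination of narrowband signals at unknown on-grid frequencies
    t = np.arange(n) / n
    freqs = rng.choice(np.arange(1, n // 2), k, replace=False)
    signal = sum(np.cos(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)) for f in freqs)

    # random +/-1 projections standing in for the analog acquisition front end
    Phi = rng.choice([-1.0, 1.0], (m, n)) / np.sqrt(m)
    y = Phi @ signal
    occupied = np.count_nonzero(np.round(np.abs(np.fft.rfft(signal)), 6))
    print(m, "measurements for", occupied, "occupied frequency bins out of", n // 2 + 1)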

On Support Sizes of Restricted Isometry Constants by Jeffrey D. Blanchard, Andrew Thompson. The abstract reads:
A generic tool for analyzing sparse approximation algorithms is the restricted isometry property (RIP) introduced by Candes and Tao. For qualitative comparison of sufficient conditions derived from an RIP analysis, the support size of the RIP constants is generally reduced as much as possible with the goal of achieving a support size of twice the sparsity of the target signal. Using a quantitative comparison via phase transitions for Gaussian measurement matrices, three examples from the literature of such support size reduction are investigated. In each case, utilizing a larger support size for the RIP constants results in a sufficient condition for exact sparse recovery satisfied, with high probability, by a significantly larger subset of Gaussian matrices.
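
For those who have never looked at RIP constants of a given support size numerically, here is a small Monte Carlo sketch for a Gaussian matrix: delta_s is probed through the extreme squared singular values of m x s column submatrices, and random subsets only give a lower bound on the true worst-case constant. The sizes are arbitrary; the point is simply that the estimate grows with the support size:

    import numpy as np

    def estimate_delta(A, s, trials=2000, rng=None):
        # lower bound on the RIP constant of support size s via random column subsets
        if rng is None:
            rng = np.random.default_rng()
        n = A.shape[1]
        delta = 0.0
        for _ in range(trials):
            cols = rng.choice(n, s, replace=False)
            sv = np.linalg.svd(A[:, cols], compute_uv=False)
            delta = max(delta, abs(sv[0] ** 2 - 1.0), abs(sv[-1] ** 2 - 1.0))
        return delta

    rng = np.random.default_rng(4)
    m, n, k = 128, 512, 8
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    print(estimate_delta(A, 2 * k, rng=rng), estimate_delta(A, 3 * k, rng=rng))  # support 2k vs 3k
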
The following paper is available behind a paywall. The application of compressed sensing for photo-acoustic tomography by Provost J, Lesage F. The abstract reads:
Photo-acoustic (PA) imaging has been developed for different purposes, but recently, the modality has gained interest with applications to small animal imaging. As a technique it is sensitive to endogenous optical contrast present in tissues and, contrary to diffuse optical imaging, it promises to bring high resolution imaging for in vivo studies at midrange depths (3-10 mm). Because of the limited amount of radiation tissues can be exposed to, existing reconstruction algorithms for circular tomography require a great number of measurements and averaging, implying long acquisition times. Time-resolved PA imaging is therefore possible only at the cost of complex and expensive electronics. This paper suggests a new reconstruction strategy using the compressed sensing formalism which states that a small number of linear projections of a compressible image contain enough information for reconstruction. By directly sampling the image to recover in a sparse representation, it is possible to dramatically reduce the number of measurements needed for a given quality of reconstruction.
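
Since the idea is the by-now familiar one, here is a one-dimensional numpy stand-in: an object sparse in the Haar domain, a small number of random linear projections, and recovery by the same iterative hard thresholding scheme sketched earlier in this post. The circular tomography geometry of the paper is not modeled; the Gaussian measurement matrix and all sizes are illustrative only:

    import numpy as np

    def haar(n):
        # orthonormal Haar transform (rows are the basis vectors), n a power of two
        if n == 1:
            return np.array([[1.0]])
        h = haar(n // 2)
        return np.vstack([np.kron(h, [1.0, 1.0]),
                          np.kron(np.eye(n // 2), [1.0, -1.0])]) / np.sqrt(2.0)

    rng = np.random.default_rng(5)
    n, k, m = 256, 6, 80                       # pixels, nonzero Haar coefficients, measurements
    W = haar(n)
    coeffs = np.zeros(n)
    coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    image = W.T @ coeffs                       # object, sparse in the Haar domain

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    y = Phi @ image                            # m << n linear measurements
    A = Phi @ W.T                              # effective sensing matrix on the sparse coefficients

    # iterative hard thresholding on the Haar coefficients
    a = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(300):
        a = a + step * A.T @ (y - A @ a)
        a[np.argsort(np.abs(a))[:-k]] = 0.0
    print(np.linalg.norm(W.T @ a - image))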
