Browsing by Author "Baron, Dror"
Now showing 1 - 12 of 12
Item: Analysis of the DCS one-stage Greedy Algorithm for Common Sparse Supports (2005-11-01)
Baron, Dror; Duarte, Marco F.; Wakin, Michael; Sarvotham, Shriram; Baraniuk, Richard G.; Digital Signal Processing (http://dsp.rice.edu/)

Item: Compressing Piecewise Smooth Multidimensional Functions Using Surflets: Rate-Distortion Analysis (2004-03-01)
Chandrasekaran, Venkat; Wakin, Michael; Baron, Dror; Baraniuk, Richard G.; Digital Signal Processing (http://dsp.rice.edu/)
Discontinuities in data often represent the key information of interest. Efficient representations for such discontinuities are important for many signal processing applications, including compression, but standard Fourier and wavelet representations fail to efficiently capture the structure of the discontinuities. These issues have been most notable in image processing, where progress has been made on modeling and representing one-dimensional edge discontinuities along C² curves. Little work, however, has been done on efficient representations for higher dimensional functions or on handling higher orders of smoothness in discontinuities. In this paper, we consider the class of N-dimensional Horizon functions containing a C^K smooth singularity in N-1 dimensions, which serves as a manifold boundary between two constant regions; we first derive the optimal rate-distortion function for this class. We then introduce the surflet representation for approximation and compression of Horizon-class functions. Surflets enable a multiscale, piecewise polynomial approximation of the discontinuity. We propose a compression algorithm using surflets that achieves the optimal asymptotic rate-distortion performance for this function class. Equally important, the algorithm can be implemented using knowledge of only the N-dimensional function, without explicitly estimating the (N-1)-dimensional discontinuity.
This technical report is a supplement to a CISS 2004 paper, "Compression of Higher Dimensional Functions Containing Smooth Discontinuities". The body of the paper is the same, while the appendices contain additional details and proofs for all theorems.

Item: Compression of Higher Dimensional Functions Containing Smooth Discontinuities (2004-03-01)
Chandrasekaran, Venkat; Wakin, Michael; Baron, Dror; Baraniuk, Richard G.; Digital Signal Processing (http://dsp.rice.edu/)
Discontinuities in data often represent the key information of interest. Efficient representations for such discontinuities are important for many signal processing applications, including compression, but standard Fourier and wavelet representations fail to efficiently capture the structure of the discontinuities. These issues have been most notable in image processing, where progress has been made on modeling and representing one-dimensional edge discontinuities along C² curves. Little work, however, has been done on efficient representations for higher dimensional functions or on handling higher orders of smoothness in discontinuities. In this paper, we consider the class of N-dimensional Horizon functions containing a C^K smooth singularity in N-1 dimensions, which serves as a manifold boundary between two constant regions; we first derive the optimal rate-distortion function for this class. We then introduce the surflet representation for approximation and compression of Horizon-class functions. Surflets enable a multiscale, piecewise polynomial approximation of the discontinuity. We propose a compression algorithm using surflets that achieves the optimal asymptotic rate-distortion performance for this function class.
Equally important, the algorithm can be implemented using knowledge of only the N-dimensional function, without explicitly estimating the (N-1)-dimensional discontinuity.

Item: Distributed Compressed Sensing of Jointly Sparse Signals (2005-11-01)
Sarvotham, Shriram; Baron, Dror; Wakin, Michael; Duarte, Marco F.; Baraniuk, Richard G.; Digital Signal Processing (http://dsp.rice.edu/)
Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we expand our theory for distributed compressed sensing (DCS), which enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We present a second new model for jointly sparse signals that allows joint recovery of multiple signals from incoherent projections through simultaneous greedy pursuit algorithms. We also characterize, theoretically and empirically, the number of measurements per sensor required for accurate reconstruction.

Item: Faster Sequential Universal Coding via Block Partitioning (2006-04-01)
Baron, Dror; Baraniuk, Richard G.; Digital Signal Processing (http://dsp.rice.edu/)
Rissanen provided a sequential universal coding algorithm based on a block partitioning scheme, where the source model is estimated at the beginning of each block. This approach asymptotically approaches the entropy at the fastest possible rate of (1/2)log(n) bits per unknown parameter. We show that the complexity of this algorithm is Ω(n log(n)), which is comparable to existing sequential universal algorithms. We provide a sequential O(n log(log(n))) algorithm by modifying Rissanen's block partitioning scheme.
The redundancy of our approach exceeds that of Rissanen's block partitioning scheme by a multiplicative factor of 1 + O(1/log(log(n))); hence it too asymptotically approaches the entropy at the fastest possible rate.

Item: How Quickly Can We Approach Channel Capacity? (2004-11-01)
Baron, Dror; Khojastepour, Mohammad; Baraniuk, Richard G.; Digital Signal Processing (http://dsp.rice.edu/)
Recent progress in code design has made it crucial to understand how quickly communication systems can approach their limits. To address this issue for the channel capacity C, we define the nonasymptotic capacity C_NA(n, ε) as the maximal rate of codebooks that achieve a probability ε of codeword error while using codewords of length n. We prove for the binary symmetric channel that C_NA(n, ε) = C - K(ε)/√n + o(1/√n), where K(ε) is available in closed form. We also describe similar results for the Gaussian channel. These results may lead to more efficient resource usage in practical communication systems.

Item: Measurements vs. Bits: Compressed Sensing meets Information Theory (2006-09-01)
Sarvotham, Shriram; Baron, Dror; Baraniuk, Richard G.; Digital Signal Processing (http://dsp.rice.edu/)
Compressed sensing is a new framework for acquiring sparse signals, based on the revelation that a small number of linear projections (measurements) of the signal contain enough information for its reconstruction. The foundation of compressed sensing is built on the availability of noise-free measurements. However, measurement noise is unavoidable in analog systems and must be accounted for. We demonstrate that measurement noise is the crucial factor that dictates the number of measurements needed for reconstruction. To establish this result, we evaluate the information contained in the measurements by viewing the measurement system as an information theoretic channel.
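As a toy illustration of the noisy measurement model that this channel view refers to (a minimal sketch with made-up dimensions and noise level, not code from the paper): a k-sparse signal is observed through a small number of random linear projections corrupted by additive noise.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 5   # signal length, number of measurements, sparsity (assumed values)
sigma = 0.05           # measurement-noise standard deviation (assumed)

# k-sparse signal: k nonzero entries at random locations
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Random measurement matrix: the "channel" carrying information about x
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Noisy measurements: each entry of y is a linear projection of x plus noise
y = Phi @ x + sigma * rng.standard_normal(m)

print(y.shape)  # m measurements stand in for n samples
```

The question the abstract poses is how the noise level sigma, together with n, m, and k, limits the fidelity with which x can be recovered from y.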
Combining the capacity of this channel with the rate-distortion function of the sparse signal, we lower bound the rate-distortion performance of a compressed sensing system. Our approach concisely captures the effect of measurement noise on the performance limits of signal reconstruction, thus enabling us to benchmark the performance of specific reconstruction algorithms.

Item: Random Filters for Compressive Sampling and Reconstruction (2006-05-01)
Baraniuk, Richard G.; Wakin, Michael; Duarte, Marco F.; Tropp, Joel A.; Baron, Dror; Digital Signal Processing (http://dsp.rice.edu/)
We propose and study a new technique for efficiently acquiring and reconstructing signals based on convolution with a fixed FIR filter having random taps. The method is designed for sparse and compressible signals, i.e., ones that are well approximated by a short linear combination of vectors from an orthonormal basis. Signal reconstruction involves a nonlinear Orthogonal Matching Pursuit algorithm that we implement efficiently by exploiting the nonadaptive, time-invariant structure of the measurement process. While simpler and more efficient than other random acquisition techniques like compressed sensing, random filtering is sufficiently generic to summarize many types of compressible signals and generalizes to streaming and continuous-time signals. Extensive numerical experiments demonstrate its efficacy for acquiring and reconstructing signals sparse in the time, frequency, and wavelet domains, as well as piecewise smooth signals and Poisson processes.

Item: Representation and Compression of Multi-Dimensional Piecewise Functions Using Surflets (2006-03-01)
Chandrasekaran, Venkat; Wakin, Michael; Baron, Dror; Baraniuk, Richard G.; Digital Signal Processing (http://dsp.rice.edu/)
We study the representation, approximation, and compression of functions in M dimensions that consist of constant or smooth regions separated by smooth (M-1)-dimensional discontinuities.
Examples include images containing edges, video sequences of moving objects, and seismic data containing geological horizons. For both function classes, we derive the optimal asymptotic approximation and compression rates based on Kolmogorov metric entropy. For piecewise constant functions, we develop a multiresolution predictive coder that achieves the optimal rate-distortion performance; for piecewise smooth functions, our coder has near-optimal rate-distortion performance. Our coder for piecewise constant functions employs surflets, a new multiscale geometric tiling consisting of M-dimensional piecewise constant atoms containing polynomial discontinuities. Our coder for piecewise smooth functions uses surfprints, which wed surflets to wavelets for piecewise smooth approximation. Both of these schemes achieve the optimal asymptotic approximation performance. Key features of our algorithms are that they carefully control the potential growth in surflet parameters at higher smoothness and do not require explicit estimation of the discontinuity. We also extend our results to the corresponding discrete function spaces for sampled data. We provide asymptotic performance results for both discrete function spaces and relate this asymptotic performance to the sampling rate and smoothness orders of the underlying functions and discontinuities. For approximation of discrete data we propose a new scale-adaptive dictionary that contains few elements at coarse and fine scales, but many elements at medium scales. 
Simulation results demonstrate that surflets provide superior compression performance when compared to other state-of-the-art approximation schemes.

Item: Surflets: A Sparse Representation for Multidimensional Functions Containing Smooth Discontinuities (2004-07-01)
Chandrasekaran, Venkat; Wakin, Michael; Baron, Dror; Baraniuk, Richard G.; Digital Signal Processing (http://dsp.rice.edu/)
Discontinuities in data often provide vital information, and representing these discontinuities sparsely is an important goal for approximation and compression algorithms. Little work has been done on efficient representations for higher dimensional functions containing arbitrarily smooth discontinuities. We consider the N-dimensional Horizon class -- N-dimensional functions containing a C^K smooth (N-1)-dimensional singularity separating two constant regions. We derive the optimal rate-distortion function for this class and introduce the multiscale surflet representation for sparse piecewise approximation of these functions. We propose a compression algorithm using surflets that achieves the optimal asymptotic rate-distortion performance for Horizon functions. This algorithm can be implemented using knowledge of only the N-dimensional function, without explicitly estimating the (N-1)-dimensional discontinuity.

Item: Universal Distributed Sensing via Random Projections (2006-04-01)
Wakin, Michael; Duarte, Marco F.; Baraniuk, Richard G.; Baron, Dror; Digital Signal Processing (http://dsp.rice.edu/)
This paper develops a new framework for distributed coding and compression in sensor networks based on distributed compressed sensing (DCS). DCS exploits both intra-signal and inter-signal correlations through the concept of joint sparsity; just a few measurements of a jointly sparse signal ensemble contain enough information for reconstruction.
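The joint sparsity idea just mentioned can be sketched with a toy ensemble (illustrative only, with made-up dimensions; this is not the paper's recovery algorithm): several sensors observe signals that share one sparse support, and each sensor independently takes a few random projections of its own signal.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k, sensors = 128, 32, 4, 3   # length, measurements/sensor, sparsity, sensor count (assumed)

# Jointly sparse ensemble: all signals share a common sparse support,
# but each sensor's nonzero coefficient values differ
support = rng.choice(n, size=k, replace=False)
X = np.zeros((sensors, n))
for j in range(sensors):
    X[j, support] = rng.standard_normal(k)

# Each sensor projects its signal with its own random matrix -- no
# inter-sensor communication is needed to take the measurements
Y = [rng.standard_normal((m, n)) @ X[j] for j in range(sensors)]

# The ensemble is summarized by sensors*m numbers instead of sensors*n samples
print(len(Y), Y[0].shape)
```

A joint decoder that knows all the projection matrices can exploit the shared support to reconstruct the ensemble from fewer measurements per sensor than separate recovery would require.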
DCS is well-suited for sensor network applications, thanks to its simplicity, universality, computational asymmetry, tolerance to quantization and noise, robustness to measurement loss, and scalability. It also requires absolutely no inter-sensor collaboration. We apply our framework to several real-world datasets to validate its performance.

Item: Variable-Rate Universal Slepian-Wolf Coding with Feedback (2005-11-01)
Sarvotham, Shriram; Baron, Dror; Baraniuk, Richard G.; Digital Signal Processing (http://dsp.rice.edu/)
Traditional Slepian-Wolf coding assumes known statistics and relies on asymptotically long sequences. However, in practice the statistics are unknown, and the input sequences are of finite length. In this finite regime, we must allow a non-zero probability ε of codeword error and also pay a penalty by adding redundant bits in the encoding process. In this paper, we develop a universal scheme for Slepian-Wolf coding that allows encoding at variable rates close to the Slepian-Wolf limit. We illustrate our scheme in a setup where we encode a uniform Bernoulli source sequence and the second sequence, which is correlated to the first via a binary symmetric correlation channel, is available as side information at the decoder. This specific setup is easily extended to more general settings. For length-n source sequences and a fixed ε, we show that the redundancy of our scheme is O(√n Φ^(-1)(ε)) bits over the Slepian-Wolf limit. The prior art for Slepian-Wolf coding with known statistics shows that the redundancy is Ω(√n Φ^(-1)(ε)). Therefore, we infer that for Slepian-Wolf coding, the penalty needed to accommodate universality is Θ(√n Φ^(-1)(ε)).
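As a small numerical companion to the last abstract (an illustrative sketch, not the paper's coding scheme): for a uniform Bernoulli source X with side information Y = X ⊕ Z, where Z is Bernoulli(p), the Slepian-Wolf limit is the conditional entropy H(X|Y) = H_b(p), the binary entropy of the correlation channel's crossover probability.

```python
import math

def binary_entropy(p: float) -> float:
    """H_b(p) in bits: the Slepian-Wolf limit H(X|Y) for a uniform
    Bernoulli source observed through a BSC(p) correlation channel."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Rate needed per source bit when the decoder holds the correlated
# side information, versus 1 bit/symbol without it
for p in (0.05, 0.11, 0.5):
    print(f"p = {p}: H(X|Y) = {binary_entropy(p):.3f} bits")
```

The redundancy results quoted in the abstract measure how many extra bits beyond n·H_b(p) a finite-length, universal encoder must spend to keep the error probability at ε.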