Browsing by Author "Davenport, Mark A."
Now showing 1 - 20 of 25
Item: The 2nu-SVM: A Cost-Sensitive Extension of the nu-SVM (2005-12-01)
Davenport, Mark A.; Digital Signal Processing (http://dsp.rice.edu/)
Standard classification algorithms aim to minimize the probability of making an incorrect classification. In many important applications, however, some kinds of errors are more important than others. In this report we review cost-sensitive extensions of standard support vector machines (SVMs). In particular, we describe cost-sensitive extensions of the C-SVM and the nu-SVM, which we denote the 2C-SVM and 2nu-SVM, respectively. The C-SVM and the nu-SVM are known to be closely related, and we prove that the 2C-SVM and 2nu-SVM share a similar relationship. This demonstrates that the 2C-SVM and 2nu-SVM explore the same space of possible classifiers, and gives us a clear understanding of the parameter space for both versions.

Item: An Introduction to Compressive Sensing (Rice University, 2014-08-26)
Baraniuk, Richard; Davenport, Mark A.; Duarte, Marco F.; Hegde, Chinmay

Item: Compressive Sensing (Rice University, 2007-09-21)
Davenport, Mark A.; Baraniuk, Richard; DeVore, Ronald

Item: Controlling False Alarms with Support Vector Machines (2006-05-01)
Davenport, Mark A.; Baraniuk, Richard G.; Scott, Clayton D.; Digital Signal Processing (http://dsp.rice.edu/)
We study the problem of designing support vector classifiers with respect to a Neyman-Pearson criterion. Specifically, given a user-specified level alpha, 0 < alpha < 1, how can we ensure a false alarm rate no greater than alpha while minimizing the miss rate? We examine two approaches, one based on shifting the offset of a conventionally trained SVM and the other based on the introduction of class-specific weights. Our contributions include a novel heuristic for improved error estimation and a strategy for efficiently searching the parameter space of the second method. We also provide a characterization of the feasible parameter set of the 2nu-SVM on which the second approach is based.
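The class-specific weighting referred to above (the basis of the 2C-SVM and, via the parameter correspondence, of the 2nu-SVM) can be sketched as a cost-sensitive modification of the standard SVM primal. This is the conventional formulation with weighting parameter gamma, shown for illustration rather than quoted from the report:

```latex
% Cost-sensitive (2C-SVM-style) primal: slack on the two classes is
% penalized by different amounts, controlled by gamma in [0, 1].
\begin{aligned}
\min_{w,\,b,\,\xi}\quad & \tfrac{1}{2}\|w\|^2
  + C\gamma \sum_{i:\,y_i=+1} \xi_i
  + C(1-\gamma) \sum_{i:\,y_i=-1} \xi_i \\
\text{s.t.}\quad & y_i\left(\langle w, x_i\rangle + b\right) \ge 1 - \xi_i,
  \qquad \xi_i \ge 0.
\end{aligned}
```

Setting gamma = 1/2 recovers the usual C-SVM; pushing gamma toward 1 penalizes misses on the positive class more heavily, trading miss rate against false alarm rate.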
The proposed methods are compared on four benchmark datasets.

Item: Detection and estimation with compressive measurements (2006-11-01)
Baraniuk, Richard G.; Davenport, Mark A.; Wakin, Michael B.
The recently introduced theory of compressed sensing enables the reconstruction of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist rate samples. Interestingly, it has been shown that random projections are a satisfactory measurement scheme. This has inspired the design of physical systems that directly implement similar measurement schemes. However, despite the intense focus on the reconstruction of signals, many (if not most) signal processing problems do not require a full reconstruction of the signal; we are often interested only in solving some sort of detection problem or in the estimation of some function of the data. In this report, we show that the compressed sensing framework is useful for a wide range of statistical inference tasks. In particular, we demonstrate how to solve a variety of signal detection and estimation problems given the measurements without ever reconstructing the signals themselves. We provide theoretical bounds along with experimental results.

Item: Error control for support vector machines (2007)
Davenport, Mark A.; Baraniuk, Richard G.
In binary classification there are two types of errors, and in many applications these may have very different costs. We consider two learning frameworks that address this issue: minimax classification, where we seek to minimize the maximum of the false alarm and miss rates, and Neyman-Pearson (NP) classification, where we seek to minimize the miss rate while ensuring the false alarm rate is less than a specified level alpha. We show that our approach, based on cost-sensitive support vector machines, significantly outperforms methods typically used in practice.
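The offset-shifting approach mentioned in these abstracts amounts to retuning the decision threshold of an already-trained classifier. A minimal sketch, assuming held-out classifier scores for the negative class and a target false alarm level alpha; the function name and score model are illustrative:

```python
import numpy as np

def np_threshold(scores_neg, alpha):
    """Smallest threshold whose empirical false alarm rate on held-out
    negative-class scores is at most alpha (declare a detection when the
    classifier score exceeds the threshold)."""
    s = np.sort(np.asarray(scores_neg))
    n = len(s)
    # order statistic chosen so at most alpha * n scores lie strictly above it
    k = min(max(int(np.ceil((1 - alpha) * n)) - 1, 0), n - 1)
    return s[k]

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 1000)   # scores under the null (no target present)
pos = rng.normal(2.0, 1.0, 1000)   # scores when the target is present
t = np_threshold(neg, alpha=0.1)
false_alarm = np.mean(neg > t)     # at most 0.1 on the held-out set
miss = np.mean(pos <= t)
```

On the held-out set the false alarm constraint holds by construction; the point made in the abstracts is that on fresh data the quality of this guarantee hinges on how accurately the error rates are estimated.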
Our results also illustrate the importance of heuristics for improving the accuracy of error rate estimation in this setting. We then reduce anomaly detection to NP classification by considering a second class of points, allowing us to estimate minimum volume sets using algorithms for NP classification. Comparing this approach with traditional one-class methods, we find that our approach has several advantages.

Item: Learning minimum volume sets with support vector machines (2006-09-01)
Davenport, Mark A.; Baraniuk, Richard G.; Scott, Clayton D.
Given a probability law P on d-dimensional Euclidean space, the minimum volume set (MV-set) with mass beta, 0 < beta < 1, is the set with smallest volume enclosing a probability mass of at least beta. We examine the use of support vector machines (SVMs) for estimating an MV-set from a collection of data points drawn from P, a problem with applications in clustering and anomaly detection. We investigate both one-class and two-class methods. The two-class approach reduces the problem to Neyman-Pearson (NP) classification, where we artificially generate a second class of data points according to a uniform distribution. The simple approach to generating the uniform data suffers from the curse of dimensionality. In this paper we (1) describe the reduction of MV-set estimation to NP classification, (2) devise improved methods for generating artificial uniform data for the two-class approach, (3) advocate a new performance measure for systematic comparison of MV-set algorithms, and (4) establish a set of benchmark experiments to serve as a point of reference for future MV-set algorithms.
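The two-class reduction described above can be sketched in a few lines: draw artificial points uniformly over the bounding box of the data and hand the resulting two-class problem to an NP classifier. This is the naive box construction that the abstract notes degrades in high dimensions; the function and variable names are illustrative:

```python
import numpy as np

def uniform_second_class(X, n_uniform, seed=None):
    """Sample artificial 'outlier' points uniformly over the bounding box
    of the data X. Labeling X as +1 and these points as -1 turns MV-set
    estimation into a Neyman-Pearson classification problem."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return rng.uniform(lo, hi, size=(n_uniform, X.shape[1]))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))            # data drawn from the unknown law P
U = uniform_second_class(X, 200, seed=1)  # artificial uniform class
```

Because the box volume grows exponentially with dimension, most uniform samples land far from the data mass in high dimensions, which is why the paper devises improved generation schemes.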
We find that, in general, the two-class method performs more reliably.

Item: Method and apparatus for automatic gain control for nonzero saturation rates (2013-07-16)
Baraniuk, Richard G.; Laska, Jason N.; Boufounos, Petros T.; Davenport, Mark A.; Rice University; United States Patent and Trademark Office
A method for automatic gain control comprising the steps of measuring a signal using compressed sensing to produce a sequence of blocks of measurements, applying a gain to one of the blocks of measurements, adjusting the gain based upon a deviation of a saturation rate of the one of the blocks of measurements from a predetermined nonzero saturation rate, and applying the adjusted gain to a second of the blocks of measurements. Alternatively, a method for automatic gain control comprising the steps of applying a gain to a signal, computing a saturation rate of the signal, and adjusting the gain based upon a difference between the saturation rate of the signal and a predetermined nonzero saturation rate.

Item: Method and apparatus for compressive domain filtering and interference cancellation (2014-05-13)
Davenport, Mark A.; Boufounos, Petros T.; Baraniuk, Richard G.; Rice University; United States Patent and Trademark Office
A method for compressive domain filtering and interference cancellation processes compressive measurements to eliminate or attenuate interference while preserving the information or geometry of the set of possible signals of interest. A signal processing apparatus assumes that the interfering signal lives in or near a known subspace that is partially or substantially orthogonal to the signal of interest, and then projects the compressive measurements into an orthogonal subspace, thus eliminating or attenuating the interference.
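The projection step just described can be sketched with basic linear algebra, under the simplifying assumption that the interference lies exactly in a known subspace with basis B. The names (Phi, B) and dimensions are illustrative, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 256, 64, 4
Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # compressive measurement matrix
B = rng.normal(size=(n, d))                 # basis for the interference subspace

# Orthonormal basis for Phi applied to the interference subspace.
Q, _ = np.linalg.qr(Phi @ B)

def cancel_interference(y):
    """Project measurements onto the orthogonal complement of Phi(span B)."""
    return y - Q @ (Q.T @ y)

x = rng.normal(size=n)            # signal of interest
v = B @ rng.normal(size=d)        # interference living in span(B)
y = Phi @ (x + v)                 # contaminated compressive measurements
y_clean = cancel_interference(y)  # interference component removed
```

Because Q spans Phi applied to the interference subspace, any measurement component generated by the interference is removed exactly, while the part of Phi x outside that subspace survives, in line with the stable-embedding claim in the abstract.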
This apparatus yields a modified set of measurements that can provide a stable embedding of the set of signals of interest, in which case it is guaranteed that the processed measurements retain sufficient information to enable the direct recovery of the signal of interest, or alternatively to enable the use of efficient compressive-domain algorithms for further processing. The method and apparatus operate directly on the compressive measurements to remove or attenuate unwanted signal components.

Item: Method and apparatus for compressive parameter estimation and tracking (2013-10-22)
Baraniuk, Richard G.; Boufounos, Petros T.; Schnelle, Stephen R.; Davenport, Mark A.; Laska, Jason N.; Rice University; United States Patent and Trademark Office
A method for estimating and tracking locally oscillating signals. The method comprises the steps of taking measurements of an input signal that approximately preserve the inner products among signals in a class of signals of interest, and computing an estimate of parameters of the input signal from its inner products with other signals. The step of taking measurements may be linear and approximately preserve inner products, or may be non-linear and approximately preserve inner products. Further, the step of taking measurements is nonadaptive and may comprise compressive sensing. In turn, the compressive sensing may comprise projection using one of a random matrix, a pseudorandom matrix, a sparse matrix, and a code matrix.
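The "approximately preserve inner products" property that the patent relies on is easy to check empirically for one of the listed matrix choices, a random Gaussian projection. Dimensions and the scaling are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 2000
# Scaling by 1/sqrt(m) makes E[<Phi x, Phi y>] = <x, y>.
Phi = rng.normal(size=(m, n)) / np.sqrt(m)

x = rng.normal(size=n)
y = rng.normal(size=n)
true_ip = x @ y                    # inner product in the signal domain
est_ip = (Phi @ x) @ (Phi @ y)     # inner product in the measurement domain
```

The relative fluctuation shrinks like 1/sqrt(m), which is why parameter estimates computed from inner products of the measurements become reliable as the number of measurements grows.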
The step of tracking said signal of interest with a phase-locked loop may comprise, for example, operating on compressively sampled data, or operating on compressively sampled frequency modulated data, tracking phase and frequency.

Item: Method and apparatus for distributed compressed sensing (2009-03-31)
Baraniuk, Richard G.; Baron, Dror Z.; Duarte, Marco F.; Sarvotham, Shriram; Wakin, Michael B.; Davenport, Mark A.; Rice University; United States Patent and Trademark Office
A method for approximating a plurality of digital signals or images using compressed sensing. In a scheme where a common component xc of said plurality of digital signals or images and an innovative component xi of each of said plurality of digital signals are each represented as a vector with m entries, the method comprises the steps of making a measurement yc, where yc comprises a vector with only ni entries, where ni is less than m; making a measurement yi for each of said correlated digital signals, where yi comprises a vector with only ni entries, where ni is less than m; and from each said innovative component yi, producing an approximate reconstruction of each m-vector xi using said common component yc and said innovative component yi.

Item: Method and apparatus for distributed compressed sensing (2007-09-18)
Baraniuk, Richard G.; Baron, Dror Z.; Duarte, Marco F.; Sarvotham, Shriram; Wakin, Michael B.; Davenport, Mark A.; Rice University; United States Patent and Trademark Office
A method for approximating a plurality of digital signals or images using compressed sensing.
In a scheme where a common component xc of said plurality of digital signals or images and an innovative component xi of each of said plurality of digital signals are each represented as a vector with m entries, the method comprises the steps of making a measurement yc, where yc comprises a vector with only ni entries, where ni is less than m; making a measurement yi for each of said correlated digital signals, where yi comprises a vector with only ni entries, where ni is less than m; and from each said innovative component yi, producing an approximate reconstruction of each m-vector xi using said common component yc and said innovative component yi.

Item: Method and apparatus for on-line compressed sensing (2014-04-01)
Baraniuk, Richard G.; Baron, Dror Z.; Duarte, Marco F.; Elnozahi, Mohamed; Wakin, Michael B.; Davenport, Mark A.; Laska, Jason N.; Tropp, Joel A.; Massoud, Yehia; Kirolos, Sami; Ragheb, Tamer; Rice University; United States Patent and Trademark Office
A typical data acquisition system takes periodic samples of a signal, image, or other data, often at the so-called Nyquist/Shannon sampling rate of two times the data bandwidth, in order to ensure that no information is lost. In applications involving wideband signals, the Nyquist/Shannon sampling rate is very high, even though the signals may have a simple underlying structure. Recent developments in mathematics and signal processing have uncovered a solution to this Nyquist/Shannon sampling rate bottleneck for signals that are sparse or compressible in some representation. We demonstrate and reduce to practice methods to extract information directly from an analog or digital signal based on altering our notion of sampling to replace uniform time samples with more general linear functionals. One embodiment of our invention is a low-rate analog-to-information converter that can replace the high-rate analog-to-digital converter in certain applications involving wideband signals.
Another embodiment is an encoding scheme for wideband discrete-time signals that condenses their information content.

Item: Method and apparatus for signal detection, classification, and estimation from compressive measurements (2013-07-09)
Baraniuk, Richard G.; Duarte, Marco F.; Davenport, Mark A.; Wakin, Michael B.; Rice University; United States Patent and Trademark Office
The recently introduced theory of Compressive Sensing (CS) enables a new method for signal recovery from incomplete information (a reduced set of "compressive" linear measurements), based on the assumption that the signal is sparse in some dictionary. Such compressive measurement schemes are desirable in practice for reducing the costs of signal acquisition, storage, and processing. However, the current CS framework considers only a certain task (signal recovery) and only in a certain model setting (sparsity). We show that compressive measurements are in fact information scalable, allowing one to answer a broad spectrum of questions about a signal when provided only with a reduced set of compressive measurements. These questions range from complete signal recovery at one extreme down to a simple binary detection decision at the other. (Questions in between include, for example, estimation and classification.) We provide techniques such as a "compressive matched filter" for answering several of these questions given the available measurements, often without needing to first reconstruct the signal. In many cases, these techniques can succeed with far fewer measurements than would be required for full signal recovery, and such techniques can also be computationally more efficient.
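A minimal sketch of the matched-filter idea behind the abstract: detect a known template s directly from compressive measurements, without reconstructing the signal, by correlating the measurements with the compressed template. The statistic, dimensions, and noise model are illustrative assumptions, not the patent's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 64
Phi = rng.normal(size=(m, n)) / np.sqrt(m)      # compressive measurement matrix
s = np.sin(2 * np.pi * 5 * np.arange(n) / n)    # known signal template
template = Phi @ s                              # template in the measurement domain

def detect_stat(y):
    """Correlate the measurements with the compressed template."""
    return (y @ template) / np.linalg.norm(template)

y_present = Phi @ s + 0.05 * rng.normal(size=m)  # signal plus noise
y_absent = 0.05 * rng.normal(size=m)             # noise only
```

Thresholding this statistic gives a binary detection decision using m measurements rather than n samples, the simplest point on the information-scalability spectrum described above.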
Based on additional mathematical insight, we discuss information scalable algorithms in several model settings, including sparsity (as in CS), but also in parametric or manifold-based settings and in model-free settings for generic statements of detection, classification, and estimation problems.

Item: Method and apparatus for signal reconstruction from saturated measurements (2013-06-04)
Baraniuk, Richard G.; Laska, Jason N.; Boufounos, Petros T.; Davenport, Mark A.; Rice University; United States Patent and Trademark Office
A method for recovering a signal by measuring the signal to produce a plurality of compressive sensing measurements, discarding saturated measurements from the plurality of compressive sensing measurements, and reconstructing the signal from the remaining measurements. Alternatively, a method for recovering a signal comprising the steps of measuring a signal to produce a plurality of compressive sensing measurements, identifying saturated measurements in the plurality of compressive sensing measurements, and reconstructing the signal from the plurality of compressive sensing measurements, wherein the recovered signal is constrained such that the magnitudes of values corresponding to the identified saturated measurements are greater than a predetermined value.

Item: Minimax support vector machines (2007-08-01)
Davenport, Mark A.; Baraniuk, Richard G.; Scott, Clayton D.
We study the problem of designing support vector machine (SVM) classifiers that minimize the maximum of the false alarm and miss rates. This is a natural classification setting in the absence of prior information regarding the relative costs of the two types of errors or the true frequency of the two classes in nature.
Examining two approaches, one based on shifting the offset of a conventionally trained SVM and the other based on the introduction of class-specific weights, we find that when proper care is taken in selecting the weights, the latter approach significantly outperforms the strategy of shifting the offset. We also find that the magnitude of this improvement depends chiefly on the accuracy of the error estimation step of the training procedure. Furthermore, comparison with the minimax probability machine (MPM) illustrates that our SVM approach can outperform the MPM even when the MPM parameters are set by an oracle.

Item: Multiscale random projections for compressive classification (2007-09-01)
Duarte, Marco F.; Davenport, Mark A.; Wakin, Michael B.; Laska, Jason N.; Takhar, Dharmpal; Kelly, Kevin F.; Baraniuk, Richard G.
We propose a framework for exploiting dimension-reducing random projections in detection and classification problems. Our approach is based on the generalized likelihood ratio test; in the case of image classification, it exploits the fact that a set of images of a fixed scene under varying articulation parameters forms a low-dimensional, nonlinear manifold. Exploiting recent results showing that random projections stably embed a smooth manifold in a lower-dimensional space, we develop the multiscale smashed filter as a compressive analog of the familiar matched filter classifier. In a practical target classification problem using a single-pixel camera that directly acquires compressive image projections, we achieve high classification rates using many fewer measurements than the dimensionality of the images.

Item: Random observations on random observations: Sparse signal acquisition and processing (2010)
Davenport, Mark A.; Baraniuk, Richard G.
In recent years, signal processing has come under mounting pressure to accommodate the increasingly high-dimensional raw data generated by modern sensing systems.
Despite extraordinary advances in computational power, processing the signals produced in application areas such as imaging, video, remote surveillance, spectroscopy, and genomic data analysis continues to pose a tremendous challenge. Fortunately, in many cases these high-dimensional signals contain relatively little information compared to their ambient dimensionality. For example, signals can often be well-approximated as a sparse linear combination of elements from a known basis or dictionary. Traditionally, sparse models have been exploited only after acquisition, typically for tasks such as compression. Recently, however, the applications of sparsity have greatly expanded with the emergence of compressive sensing, a new approach to data acquisition that directly exploits sparsity in order to acquire analog signals more efficiently via a small set of more general, often randomized, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist-rate samples. A common theme in this research is the use of randomness in signal acquisition, inspiring the design of hardware systems that directly implement random measurement protocols. This thesis builds on the field of compressive sensing and illustrates how sparsity can be exploited to design efficient signal processing algorithms at all stages of the information processing pipeline, with a particular focus on the manner in which randomness can be exploited to design new kinds of acquisition systems for sparse signals. 
Our key contributions include: (i) exploration and analysis of the appropriate properties for a sparse signal acquisition system; (ii) insight into the useful properties of random measurement schemes; (iii) analysis of an important family of algorithms for recovering sparse signals from random measurements; (iv) exploration of the impact of noise, both structured and unstructured, in the context of random measurements; and (v) algorithms that process random measurements to directly extract higher-level information or solve inference problems without resorting to full-scale signal recovery, reducing both the cost of signal acquisition and the complexity of the post-acquisition processing.

Item: Regression level set estimation via cost-sensitive classification (2007-06-01)
Scott, Clayton D.; Davenport, Mark A.
Regression level set estimation is an important yet understudied learning task. It lies somewhere between regression function estimation and traditional binary classification, and in many cases is a more appropriate setting for questions posed in these more common frameworks. This note explains how estimating the level set of a regression function from training examples can be reduced to cost-sensitive classification. We discuss the theoretical and algorithmic benefits of this learning reduction, demonstrate several desirable properties of the associated risk, and report experimental results for histograms, support vector machines, and nearest neighbor rules on synthetic and real data.

Item: A simple proof of the restricted isometry property for random matrices (2007-01-18)
Baraniuk, Richard G.; Davenport, Mark A.; DeVore, Ronald A.; Wakin, Michael B.
We give a simple technique for verifying the Restricted Isometry Property (as introduced by Candès and Tao) for random matrices that underlie Compressed Sensing.
Our approach has two main ingredients: (i) concentration inequalities for random inner products that have recently provided algorithmically simple proofs of the Johnson–Lindenstrauss lemma; and (ii) covering numbers for finite-dimensional balls in Euclidean space. This leads to an elementary proof of the Restricted Isometry Property and brings out connections between Compressed Sensing and the Johnson–Lindenstrauss lemma. As a result, we obtain simple and direct proofs of Kashin’s theorems on widths of finite balls in Euclidean space (and their improvements due to Gluskin) and proofs of the existence of optimal Compressed Sensing measurement matrices. In the process, we also prove that these measurements have a certain universality with respect to the sparsity-inducing basis.
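The concentration phenomenon this proof builds on is easy to observe numerically: for a random Gaussian matrix, the squared norm of the projection of a sparse unit vector stays close to 1. The dimensions and tolerance below are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, trials = 400, 200, 5, 100
# Scaling by 1/sqrt(m) makes E||Phi x||^2 = ||x||^2.
Phi = rng.normal(size=(m, n)) / np.sqrt(m)

deviations = []
for _ in range(trials):
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.normal(size=k)
    x /= np.linalg.norm(x)          # random k-sparse unit vector
    deviations.append(abs(np.linalg.norm(Phi @ x) ** 2 - 1.0))
deviations = np.array(deviations)
```

The fluctuations scale like 1/sqrt(m); combining this concentration with a covering argument over the set of sparse vectors is exactly the two-ingredient strategy the abstract describes.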