Browsing by Author "Nowak, Robert D."
Now showing 1 - 10 of 10
Item: A coding theoretic approach to image segmentation (2001)
Ndili, Unoma Ifeyinwa; Nowak, Robert D.
Using a coding-theoretic approach, we achieve unsupervised image segmentation by implementing Rissanen's concept of Minimum Description Length (MDL) for estimating piecewise homogeneous regions in images. MDL offers a mathematical foundation for balancing the brevity of descriptions against their fidelity to the data by penalizing overly complex representations. Our image model is a Gaussian random field whose mean and variance functions are piecewise constant; the image pixels are conditionally independent and Gaussian given these functions. The model is aimed at identifying regions of constant intensity (mean) and texture (variance). We adopt a multiscale encoding approach to the segmentation problem and develop two schemes: one based on adaptive (greedy) rectangular partitioning, the other an optimally pruned, wedgelet-decorated dyadic partitioning. We compare the two algorithms with the more common signal-plus-constant-noise schemes, which account for variations in mean only. We explore applications of our algorithms to Synthetic Aperture Radar (SAR) imagery and, based on our segmentation scheme, implement a robust Constant False Alarm Rate (CFAR) detector for Automatic Target Recognition (ATR) on Laser Radar (LADAR) and Infra-Red (IR) images.
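To make the encoding idea concrete, here is a minimal sketch of a greedy dyadic variant of this scheme: each block is coded by its own Gaussian mean and variance, and a block is split into quadrants only when that shortens the total description length. The code-length formula and penalty value are illustrative stand-ins, not the thesis's exact criterion, and the wedgelet decorations are omitted.

```python
import numpy as np

def code_length(block, penalty):
    # Idealized code length for one homogeneous region: the negative
    # Gaussian log-likelihood at the region's own MLE mean/variance,
    # plus a fixed (illustrative) penalty for the two parameters.
    var = block.var() + 1e-12              # guard against zero variance
    n = block.size
    return 0.5 * n * (np.log(2 * np.pi * var) + 1.0) + penalty

def mdl_segment(img, r0=0, c0=0, penalty=8.0, min_side=4):
    """Greedy dyadic MDL segmentation: split a block into quadrants
    only when that shortens the total description length.
    Returns a list of (row, col, height, width) regions and the cost."""
    h, w = img.shape
    keep = code_length(img, penalty)
    if h <= min_side or w <= min_side:
        return [(r0, c0, h, w)], keep
    regions, split = [], 0.0
    for rs, re in ((0, h // 2), (h // 2, h)):
        for cs, ce in ((0, w // 2), (w // 2, w)):
            sub, cost = mdl_segment(img[rs:re, cs:ce],
                                    r0 + rs, c0 + cs, penalty, min_side)
            regions += sub
            split += cost
    return (regions, split) if split < keep else ([(r0, c0, h, w)], keep)

# Toy usage: two constant-intensity halves plus noise.
rng = np.random.default_rng(0)
img = np.hstack([np.zeros((32, 16)), 4.0 * np.ones((32, 16))])
img += rng.normal(scale=0.5, size=img.shape)
regions, total = mdl_segment(img)
print(f"{len(regions)} regions, description length {total:.1f}")
```

Blocks away from the intensity boundary stay whole, while blocks straddling it keep splitting, so the partition adapts to the region geometry exactly as the penalized code length dictates.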
Item: A hierarchical wavelet-based framework for pattern analysis and synthesis (2000)
Scott, Clayton Dean; Nowak, Robert D.
Despite their success in other areas of statistical signal processing, current wavelet-based image models are inadequate for modeling patterns in images, due to the unknown transformations inherent in most pattern observations. In this thesis we introduce a hierarchical wavelet-based framework for modeling patterns in digital images. This framework takes advantage of the efficient image representations afforded by wavelets while accounting for unknown pattern transformations. Given a trained model, we can use this framework to synthesize pattern observations. If the model parameters are unknown, we can infer them from labeled training data using TEMPLAR, a novel template learning algorithm with linear complexity. TEMPLAR employs minimum description length (MDL) complexity regularization to learn a template with a sparse representation in the wavelet domain. Given several trained models for different patterns, our framework provides a low-dimensional subspace classifier that is invariant to unknown pattern transformations as well as background clutter.

Item: Dyadic decision trees (2004)
Scott, Clayton; Nowak, Robert D.
This thesis introduces a new family of classifiers called dyadic decision trees (DDTs) and develops their theoretical properties within the framework of statistical learning theory. First, we show that DDTs achieve optimal rates of convergence for a broad range of classification problems and are adaptive in three important respects: they automatically (1) adapt to favorable conditions near the Bayes decision boundary, (2) focus on data distributed on lower-dimensional manifolds, and (3) reject irrelevant features. DDTs are selected by penalized empirical risk minimization using a new data-dependent penalty and may be computed exactly and efficiently. DDTs are the first practical classifier known to achieve optimal rates for the diverse class of distributions studied here, and this is also the first study (of which we are aware) to consider rates for adaptation to data dimension and relevant features. Second, we develop the theory of statistical learning under the Neyman-Pearson (NP) criterion, showing that concepts from learning with a Bayes error criterion have counterparts in the NP context. In particular, we consider constrained versions of empirical risk minimization and structural risk minimization (NP-SRM), prove performance guarantees for both, and provide a general condition under which NP-SRM leads to strong universal consistency. We then apply NP-SRM to dyadic decision trees, deriving rates of convergence and providing an explicit algorithm to implement NP-SRM in this setting. Third, we study the problem of pruning a binary tree by minimizing an objective function that sums an additive cost with a non-additive penalty depending only on tree size. We focus on sub-additive penalties, which are motivated by theoretical results for dyadic and other decision trees. Considering the family of optimal prunings generated by varying the scalar multiplier of a sub-additive penalty, we show that this family is a subset of the analogous family produced by an additive penalty. This implies, by known results for additive penalties, that the trees generated by a sub-additive penalty (1) are nested, (2) are unique, and (3) can be computed efficiently. It also implies that an additive penalty is preferable when using cross-validation to select from the family of possible prunings.

Item: Loss inference in unicast network tomography based on TCP traffic monitoring (2001)
Tsang, Yau-Yau Yolanda; Nowak, Robert D.
Network tomography is a promising technique for characterizing the internal behavior of large-scale networks based solely on end-to-end measurements. Most network loss tomography methods rely on active probing; although efficient, these measurements impose an additional burden on the network in terms of bandwidth and resources, and the estimated performance parameters can therefore differ substantially from the losses suffered by existing TCP traffic flows. In this thesis, we propose a passive measurement framework based on sampling existing TCP flows and demonstrate its performance using extensive ns-2 simulations, observing accurate estimates of link losses (2% mean absolute error). We also describe an Expectation-Maximization (EM) algorithm that computes maximum likelihood (ML) estimates of individual link loss rates by treating them as an incomplete-data problem. Finally, we present a new method for simultaneously visualizing network connectivity and network performance parameters.
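As a concrete illustration of the incomplete-data formulation, the sketch below runs EM on the simplest possible topology: one shared link feeding two receivers, with independent Bernoulli losses observed through idealized back-to-back probe pairs. Whether each probe survived the shared link is the hidden variable. This is a toy stand-in of my own construction; the thesis infers the corresponding quantities passively from sampled TCP flows rather than from dedicated probes.

```python
import numpy as np

def em_link_loss(n11, n10, n01, n00, iters=100):
    """EM for a two-leaf tree: the shared link passes packets with
    probability a, the branch links with probabilities b1 and b2.
    nXY counts probes seen by (receiver 1, receiver 2)."""
    N = n11 + n10 + n01 + n00
    a, b1, b2 = 0.9, 0.9, 0.9                      # initial pass rates
    for _ in range(iters):
        # E-step: P(survived shared link | neither receiver saw it).
        q = a * (1 - b1) * (1 - b2)
        q = q / (q + (1 - a))
        passed = n11 + n10 + n01 + q * n00         # expected survivors
        # M-step: closed-form updates given the expected counts.
        a = passed / N
        b1 = (n11 + n10) / passed
        b2 = (n11 + n01) / passed
    return a, b1, b2

# Simulate 10,000 probe pairs on the toy topology and recover the rates.
rng = np.random.default_rng(1)
N = 10_000
shared = rng.random(N) < 0.95                      # true a  = 0.95
r1 = shared & (rng.random(N) < 0.90)               # true b1 = 0.90
r2 = shared & (rng.random(N) < 0.80)               # true b2 = 0.80
n11 = int((r1 & r2).sum()); n10 = int((r1 & ~r2).sum())
n01 = int((~r1 & r2).sum()); n00 = N - n11 - n10 - n01
print(em_link_loss(n11, n10, n01, n00))
```

The key point the abstract makes survives even in this toy: per-link rates are not directly observable, but the expected complete-data counts make the M-step updates closed-form.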
Item: Multiple-source network tomography (2003)
Rabbat, Michael Gabriel; Nowak, Robert D.
Assessing and predicting internal network performance is of fundamental importance in problems ranging from routing optimization to anomaly detection. The problem of estimating internal network structure and link-level performance from end-to-end measurements is called network tomography. This thesis investigates the general network tomography problem involving multiple sources and receivers, building on existing single-source techniques. Using multiple sources potentially provides a more accurate and refined characterization of the internal network. The general problem is decomposed into a set of smaller components, each involving just two sources and two receivers. A novel measurement procedure is proposed that uses a packet arrival order metric to classify two-source, two-receiver topologies according to their associated model order. A decision-theoretic framework is then developed, enabling the joint characterization of topology and internal performance, and a statistical test is designed that quantifies the tradeoff between network topology complexity and network performance estimation.

Item: Multiresolution methods for recovering signals and sets from noisy observations (2005)
Willett, Rebecca M.; Nowak, Robert D.
The nonparametric multiscale partition-based estimators presented in this thesis are powerful tools for signal reconstruction and set estimation from noisy observations. Unlike traditional wavelet-based multiscale methods, the spatially adaptive and computationally efficient methods presented here are (a) able to achieve near minimax optimal error convergence rates for broad classes of signals, including Besov spaces, and to adapt to arbitrary levels of smoothness in one dimension; (b) capable of optimally reconstructing images consisting of smooth surfaces separated by smooth boundaries; (c) well suited to a variety of observation models, including Poisson count statistics and unbinned observations of a point process, as well as the signal-plus-additive-white-Gaussian-noise model; (d) amenable to energy-efficient decentralized estimation; and (e) flexible enough to facilitate the optimization of specialized criteria for tasks such as accurate set extraction. Set estimation differs from signal reconstruction in that, while the goal of the latter is to estimate a function, the goal of the former is to determine where in its support the function meets some criterion. Because of their distinct objectives, the two problems require different estimator evaluation metrics and analysis tools, yet both can be solved accurately and efficiently using the partition-based framework developed here. These methods are a key component of effective iterative solvers for challenging inverse problems such as deblurring, tomographic reconstruction, and superresolution image reconstruction. Both signal reconstruction and set estimation arise routinely in scientific and engineering applications, and this thesis demonstrates the effectiveness of the proposed methods in the context of medical and astrophysical imaging, density estimation, distributed field estimation using wireless sensor networks, network traffic analysis, and digital elevation map processing.

Item: Multiscale analysis for intensity and density estimation (2002)
Willett, Rebecca M.; Nowak, Robert D.
The nonparametric multiscale polynomial and platelet algorithms presented in this thesis are powerful new tools for signal and image denoising and reconstruction. Unlike traditional wavelet-based multiscale methods, these algorithms are both well suited to processing Poisson and multinomial data and capable of preserving image edges. At the heart of these algorithms lie multiscale signal decompositions based on polynomials in one dimension and multiscale image decompositions based on platelets in two dimensions. This thesis introduces platelets: localized atoms at various locations, scales, and orientations that can produce highly accurate, piecewise-linear approximations to images consisting of smooth regions separated by smooth boundaries. Polynomial- and platelet-based maximum penalized likelihood methods for signal and image analysis are both tractable and computationally efficient. Simulations establish the practical effectiveness of these algorithms in applications such as medical and astronomical imaging, density estimation, and networking; statistical risk bounds establish their theoretical near-optimality.
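A minimal one-dimensional sketch of the penalized-likelihood recursion behind these estimators: counts on each dyadic interval are modeled by a constant Poisson rate, and two child intervals are kept separate only if that improves the penalized log-likelihood. Constant leaf fits here stand in for the thesis's polynomial and platelet fits, the penalty value is illustrative, and the signal length is assumed to be a power of two.

```python
import numpy as np

def poisson_ll(counts):
    # Maximized Poisson log-likelihood of a segment under one constant
    # rate (the segment mean); log(k!) terms are constant and dropped.
    lam = counts.mean()
    return 0.0 if lam == 0 else float(np.sum(counts * np.log(lam) - lam))

def dyadic_poisson(counts, penalty=3.0):
    """Piecewise-constant Poisson intensity estimate on a recursive
    dyadic partition, pruned bottom-up by penalized likelihood."""
    n = len(counts)
    merged = poisson_ll(counts) - penalty
    if n == 1:
        return counts.astype(float), merged
    left, ll_l = dyadic_poisson(counts[: n // 2], penalty)
    right, ll_r = dyadic_poisson(counts[n // 2:], penalty)
    if ll_l + ll_r > merged:                       # splitting pays off
        return np.concatenate([left, right]), ll_l + ll_r
    return np.full(n, counts.mean()), merged       # merge the children

# Toy usage: a two-level intensity observed through Poisson counts.
rng = np.random.default_rng(2)
intensity = np.concatenate([np.full(64, 3.0), np.full(64, 12.0)])
estimate, _ = dyadic_poisson(rng.poisson(intensity))
print(np.unique(np.round(estimate, 2)))
```

Because the likelihood is Poisson rather than Gaussian, no variance-stabilizing preprocessing of the counts is needed, which is the point the abstract makes about handling Poisson data directly.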
Item: Network tomography in theory and practice (2005)
Tsang, Yau-Yau Yolanda; Nowak, Robert D.
Network tomography has recently emerged as a promising method for indirectly inferring network state information from end-to-end measurements. In this thesis, I present novel methodologies for several challenging network inference problems, tackle practical difficulties faced in deploying tomographic techniques in the Internet, and provide solutions to overcome some of these difficulties. The contributions are four-fold. First, a passive monitoring technique for estimating internal link-level drop rates is proposed. This approach requires only TCP traces from the end hosts, and it is more effective and less invasive than other tomography schemes; its effectiveness is demonstrated using ns-2 simulations and corroborated by a theoretical queuing analysis. Second, for delay distribution estimation, a non-parametric wavelet-based approach is developed for estimating link-level queuing delay characteristics. The approach overcomes the bias-variance tradeoff caused by delay quantization, a problem associated with most existing delay estimation methods; realistic ns-2 simulations demonstrate the accuracy of the estimation procedure. Third, to make tomographic inference techniques more practical, I investigate a Round Trip Time (RTT) based measurement technique. This technique requires neither clock synchronization nor special-purpose cooperation from receivers, enabling deployment of the tomographic tool from any host in the Internet; it is shown to be effective under a wide range of operating conditions, both in an emulation environment and in the Internet. Finally, to make inference techniques more reliable and robust, I formulate the tomographic data collection process as an optimal experimental design problem, in which a fixed number of network probes are distributed so as to minimize the squared estimation error of the tomographic reconstruction. Explicit forms for the estimation errors are derived in terms of topology, noise levels, and the number and distribution of probes; this analysis reveals the dominant sources of ill-conditioning and scalability problems in network tomography.
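The experimental-design idea can be illustrated with a deliberately simplified version: if each path's loss rate is estimated independently from n_i Bernoulli probes, the summed estimator variance sum_i p_i(1-p_i)/n_i is minimized under a fixed probe budget by allocating probes in proportion to sqrt(p_i(1-p_i)). The thesis's formulation also accounts for topology and shared links; this sketch of mine ignores both.

```python
import numpy as np

def allocate_probes(p_prior, budget):
    """Minimize sum_i p_i(1-p_i)/n_i subject to sum_i n_i = budget.
    A Lagrange-multiplier argument gives n_i proportional to
    sqrt(p_i(1-p_i)): noisier paths earn more probes."""
    s = np.sqrt(p_prior * (1.0 - p_prior))
    n = budget * s / s.sum()
    return np.maximum(1, np.rint(n)).astype(int)   # at least one probe each

# Rough prior loss rates of 1%, 5%, and 20% across three paths.
print(allocate_probes(np.array([0.01, 0.05, 0.20]), 1000))
```

Even this stripped-down version shows the qualitative behavior the abstract describes: the optimal design concentrates measurements where the reconstruction error is most sensitive.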
Item: Network tomography using closely-spaced unicast packets (2005-01-04)
Nowak, Robert D.; Coates, Mark J.; Rice University; United States Patent and Trademark Office
This work discloses a unicast, end-to-end network performance measurement process capable of determining internal network losses, delays, and probability mass functions for these characteristics. The process uses groups of closely-spaced communication packets to gather the information needed to infer the performance characteristics of communication links internal to the network. Computationally efficient estimation algorithms are provided.

Item: Wavelet-based signal modeling and processing algorithms with applications (2003)
Wan, Yi; Nowak, Robert D.
Good signal representations and the corresponding processing algorithms lie at the heart of signal processing research. Since the 1980s, wavelet analysis has matured into a standard tool in many applications, such as image compression, owing to key advantages over traditional Fourier analysis. In this thesis we first develop a wavelet-based statistical framework and an efficient algorithm for solving linear inverse problems, with application to image restoration. The result is an efficient method that produces state-of-the-art results for such problems and has potential applications in other areas. To overcome issues such as the blocking artifacts that arise with orthogonal wavelets, we next investigate the design of more flexible basis representations based on frames. In particular, we develop a quasi image rotation method that is based on pixel reassignment and hence retains the original image statistics; combined with translation operators, this method provides efficient and desirable frames for image processing. Given a frame, the key issue is how to implement a frame-based algorithm efficiently despite the large number of redundant basis functions. We illustrate this through the example of optimal signal denoising in the presence of additive zero-mean white noise: the optimal solution exists, but computing it exactly is very expensive. We therefore develop a framework that allows fast approximations to the optimal solution and has a clear physical interpretation. This method differs in essence from other approximate approaches such as basis pursuit and has applications in other areas, such as image segmentation. We also develop a complexity-regularized iterative algorithm for obtaining sparse solutions to the frame-based signal denoising problem.
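As a sketch of sparse frame-based denoising, the code below solves the l1-penalized least-squares problem min_c ||y - Dc||^2 + lam*||c||_1 by iterative soft-thresholding, with D an overcomplete frame formed from the identity (spikes) and an orthonormal Haar basis. The l1 penalty and ISTA iteration are standard stand-ins I have chosen for exposition, not the thesis's complexity-regularized algorithm or its translation-augmented frames.

```python
import numpy as np

def haar_matrix(n):
    # Orthonormal Haar transform for n a power of two, built recursively.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        top = np.kron(H, [1.0, 1.0])
        bot = np.kron(np.eye(H.shape[0]), [1.0, -1.0])
        H = np.vstack([top, bot]) / np.sqrt(2.0)
    return H

def ista_denoise(y, D, lam, iters=500):
    """Iterative soft-thresholding for min_c ||y - Dc||^2 + lam*||c||_1."""
    L = 2.0 * np.linalg.norm(D, 2) ** 2            # gradient Lipschitz constant
    c = np.zeros(D.shape[1])
    for _ in range(iters):
        c = c - 2.0 * (D.T @ (D @ c - y)) / L      # gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)  # shrink
    return D @ c, c

# Overcomplete frame: spikes (identity) plus the Haar basis.
n = 64
D = np.hstack([np.eye(n), haar_matrix(n).T])
rng = np.random.default_rng(3)
signal = np.concatenate([np.zeros(32), np.ones(32)])
signal[10] = 3.0                    # a spike the Haar part handles poorly
y = signal + rng.normal(scale=0.2, size=n)
denoised, coeffs = ista_denoise(y, D, lam=0.5)
print(f"{np.count_nonzero(coeffs)} active atoms of {D.shape[1]}")
```

The redundancy pays off in exactly the way the abstract suggests: the step edge is captured compactly by Haar atoms while the isolated spike is captured by an identity atom, so the recovered coefficient vector is sparse even though the frame is twice overcomplete.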