Browsing by Author "Hand, Paul"
Now showing 1 - 4 of 4
Item: A Spectrum-based Regularization Approach to Linear Inverse Problems: Models, Learned Parameters and Algorithms (2015-04-20)
Castanon, Jorge Castanon Alberto; Zhang, Yin; Tapia, Richard; Hand, Paul; Kelly, Kevin

In this thesis, we study the problem of recovering signals, in particular images, that approximately satisfy severely ill-conditioned or underdetermined linear systems. For example, such a linear system may represent a set of under-sampled and noisy linear measurements. It is well known that the quality of the recovery critically depends on the choice of an appropriate regularization model that incorporates prior information about the target solution. Two of the most successful regularization models are the Tikhonov and Total Variation (TV) models, each of which is used in a wide range of applications. We design and investigate a class of spectrum-based models that generalize and improve upon both the Tikhonov and the TV methods, as well as their combinations, or so-called hybrids. The proposed models contain "spectrum parameters" that are learned from training data sets by solving optimization problems. This parameter-learning feature gives the models the flexibility to adapt to desired target solutions. We devise efficient algorithms for all the proposed models and conduct comprehensive numerical experiments to evaluate their performance against established models. Numerical results show generally superior quality in the images recovered by our approach from under-sampled linear measurements. Using the proposed algorithms, one can often obtain much-improved quality at a moderate increase in computational time.

Item: Blind Demodulation via Convex and Non-Convex Programming (2019-04-18)
Joshi, Babhru; Hand, Paul; Hicks, Illya

We consider the bilinear inverse problem of recovering two vectors, $x$ and $w$, in $\mathbb{R}^L$ from their entrywise product. In this dissertation, we consider three different priors on these unknown signals: a subspace prior, a sparsity prior, and a generative prior. For both the subspace prior and the sparsity prior, we assume the signs of $x$ and $w$ are known, which admits intuitive convex programs. For the generative prior, we study a non-convex program, the empirical risk minimization program. For the case where the vectors have known signs and belong to known subspaces, we introduce the convex program BranchHull, which is posed in the natural parameter space and does not require an approximate solution or initialization in order to be stated or solved. Under the structural assumptions that $x$ and $w$ are members of known $K$- and $N$-dimensional random subspaces, we present recovery guarantees for the noiseless case and a noisy case. In the noiseless case, we prove that BranchHull recovers $x$ and $w$ up to the inherent scaling ambiguity with high probability when $L \gg 2(K+N)$. The analysis provides a precise upper bound on the coefficient in the sample complexity. In the noisy case, we show that with high probability BranchHull is robust to small dense noise when $L = \Omega(K+N)$. We reformulate the BranchHull program and introduce the $\ell_1$-BranchHull program for the case where $w$ and $x$ are sparse with respect to known dictionaries of size $K$ and $N$, respectively. Here, $K$ and $N$ may be larger than, smaller than, or equal to $L$. The $\ell_1$-BranchHull program is also a convex program posed in the natural parameter space.
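To make the subspace case concrete, below is a minimal CVXPY sketch of a BranchHull-style convex program. The matrices B and C, the sign vector s, and the relaxation of each bilinear equation to the convex hull of one hyperbola branch are illustrative choices; the exact objective and constraints in the dissertation may differ.

```python
# A minimal sketch, assuming measurements y_l = (B h0)_l (C m0)_l with the
# sign of (C m0)_l known. Each bilinear equation is relaxed to the convex
# hull of one hyperbola branch, written as a second-order cone constraint.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
L, K, N = 120, 10, 10                    # L well above 2(K + N), as in the theory
B = rng.standard_normal((L, K))          # known random subspace for x = B h0
C = rng.standard_normal((L, N))          # known random subspace for w = C m0
h0, m0 = rng.standard_normal(K), rng.standard_normal(N)
y = (B @ h0) * (C @ m0)                  # entrywise-product measurements
s = np.sign(C @ m0)                      # assumed known sign information

h, m = cp.Variable(K), cp.Variable(N)
# Fold the known signs in so both factors are nonnegative at the true solution.
u = cp.multiply(s * np.sign(y), B @ h)
v = cp.multiply(s, C @ m)
constraints = [u >= 0, v >= 0]
for l in range(L):
    # u_l * v_l >= |y_l| with u_l, v_l >= 0, as the SOC constraint
    # ||(2 sqrt(|y_l|), u_l - v_l)||_2 <= u_l + v_l.
    constraints.append(
        cp.norm(cp.hstack([2 * np.sqrt(np.abs(y[l])), u[l] - v[l]])) <= u[l] + v[l]
    )
# Minimizing the norms selects a balanced point on the feasible hyperbolas.
prob = cp.Problem(cp.Minimize(cp.sum_squares(h) + cp.sum_squares(m)), constraints)
prob.solve()
# (h.value, m.value) should match (h0, m0) up to the inherent scaling ambiguity.
```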
We study the case where $x$ and $w$ are $S_1$- and $S_2$-sparse with respect to a random dictionary, with the sparse vectors satisfying an effective sparsity condition, and present a recovery guarantee requiring a number of measurements $L = \Omega((S_1+S_2)\log^{2}(K+N))$. We also introduce variants of $\ell_1$-BranchHull for tolerating noise and outliers and for recovering piecewise constant signals. We provide an ADMM implementation of these variants and show that they can extract piecewise constant behavior from real images. Finally, we examine the theoretical properties of enforcing priors provided by generative deep neural networks on the unknown signals via empirical risk minimization. We establish that for networks of suitable dimensions, under a randomness assumption on the network weights, the non-convex objective function given by empirical risk minimization has a favorable landscape. That is, we show that at any point away from small neighborhoods around four hyperbolic curves, the objective function has a descent direction. We also characterize the local maximizers of the empirical risk objective and hence show that no other stationary points exist outside of the four hyperbolic neighborhoods and the set of local maximizers.

Item: Learned Generative Priors for Imaging Inverse Problems (2021-04-30)
Leong, Oscar; Hicks, Illya; Hand, Paul

A ubiquitous and fundamental task across the natural sciences is the imaging inverse problem, where the goal is to reconstruct a desired image from a small number of noisy measurements. Due to the ill-posed nature of such problems, it is desirable to enforce that the reconstructed image obeys particular structural properties believed to be obeyed by the image of interest. To minimize the number of measurements required, the desired properties often have a low-dimensional structure. Such properties are known as priors, and the dominant paradigm over the last two decades or so has been to exploit the sparsity of natural images in a hand-crafted basis. In recent years, however, the field of machine learning, and deep learning in particular, has demonstrated the effectiveness of data-driven priors in the form of generative models. These models represent signals as lying on an explicitly parameterized low-dimensional manifold and have been shown to generate highly realistic, yet synthetic, images from a number of complex image classes, ranging from human faces to proteins. This dissertation proposes a novel framework for image recovery that exploits these data-driven priors, and offers three main contributions. First, we rigorously prove that these learned models can help recover images from fewer nonlinear measurements than traditional hand-crafted techniques in the challenging inverse problem of phase retrieval. Second, we discuss how the developed theory has broader applicability to more general settings without structural information on the image. Finally, we present a method using invertible generative models to overcome dataset biases and representational issues in previous generative-prior-based approaches, and we theoretically analyze the method's recovery performance in compressive sensing.
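To make the empirical risk minimization setup concrete, here is a minimal PyTorch sketch of recovery under a generative prior in a compressive sensing setting. The toy random ReLU generator G, the Gaussian measurement matrix A, and all dimensions are illustrative assumptions standing in for the trained models studied in the thesis.

```python
# A minimal sketch: recover a latent z such that A G(z) matches measurements
# y = A G(z0), by gradient descent on the empirical risk over latent space.
import torch

torch.manual_seed(0)
k, n, m = 5, 100, 40                       # latent dim, signal dim, measurements

# Toy expansive generator G: R^k -> R^n with fixed random weights (untrained).
W1 = torch.randn(50, k)
W2 = torch.randn(n, 50)
def G(z):
    return W2 @ torch.relu(W1 @ z)

A = torch.randn(m, n) / m**0.5             # random Gaussian measurement matrix
z0 = torch.randn(k)
y = A @ G(z0)                              # noiseless compressive measurements

z = torch.randn(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    loss = ((A @ G(z) - y) ** 2).sum()     # empirical risk objective
    loss.backward()
    opt.step()

# Relative reconstruction error of the recovered signal G(z).
print(torch.norm(G(z.detach()) - G(z0)) / torch.norm(G(z0)))
```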
This thesis, more broadly, offers a new paradigm for image recovery under deep generative priors and gives concrete empirical and theoretical evidence for the benefits of utilizing such learned priors in a variety of inverse problems.

Item: Phase Retrieval Under a Generative Prior (2019-04-11)
Leong, Oscar; Hicks, Illya; Hand, Paul

The phase retrieval problem, arising in X-ray crystallography and medical imaging, asks us to recover a signal given intensity-only measurements. When the number of measurements is less than the dimensionality of the signal, solving the problem requires additional assumptions, or priors, on its structure in order to guarantee recovery. Many techniques enforce a sparsity prior, meaning that the signal has very few non-zero entries; however, these methods have faced various computational bottlenecks. We sidestep this issue by enforcing a generative prior: the assumption that the signal lies in the range of a generative neural network. By formulating an empirical risk minimization problem and directly optimizing over the domain of the generator, we show that the objective's energy landscape exhibits favorable global geometry for gradient descent, with information-theoretically optimal sample complexity. Based on this geometric result, we introduce a gradient descent algorithm that converges to the true solution. We corroborate these results with experiments showing that exploiting generative models in phase retrieval tasks outperforms sparse phase retrieval methods.
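For concreteness, a minimal PyTorch sketch of this empirical risk formulation for phase retrieval follows. The random ReLU generator, the measurement count, and the use of Adam in place of the analyzed gradient scheme are illustrative assumptions, not the thesis's exact setup.

```python
# A minimal sketch: intensity-only data b = |A G(z0)|, with recovery by
# descending the empirical risk over the generator's latent domain.
import torch

torch.manual_seed(0)
k, n, m = 5, 100, 60                       # latent dim, signal dim, measurements

# Toy expansive generator with fixed random weights, standing in for a
# trained generative model.
W1, W2 = torch.randn(50, k), torch.randn(n, 50)
def G(z):
    return W2 @ torch.relu(W1 @ z)

A = torch.randn(m, n) / m**0.5             # random measurement matrix
z0 = torch.randn(k)
b = (A @ G(z0)).abs()                      # intensity-only (phaseless) data

z = torch.randn(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(3000):
    opt.zero_grad()
    loss = (((A @ G(z)).abs() - b) ** 2).sum()   # empirical risk objective
    loss.backward()
    opt.step()

# For real-valued signals, success means matching G(z0) up to global sign.
err = min(torch.norm(G(z.detach()) - s * G(z0)) for s in (1.0, -1.0))
print(err / torch.norm(G(z0)))
```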