Blind Demodulation via Convex and Non-Convex Programming

Date
2019-04-18
Abstract

We consider the bilinear inverse problem of recovering two vectors, x and w, in R^L from their entrywise product. In this dissertation, we consider three different priors on these unknown signals: a subspace prior, a sparsity prior, and a generative prior. For both the subspace prior and the sparsity prior, we assume the signs of x and w are known, which admits intuitive convex programs. For the generative prior, we study a non-convex program, the empirical risk minimization program.
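
In symbols, we observe

    y = x ⊙ w,   i.e.,   y_l = x_l w_l for l = 1, ..., L,

where ⊙ denotes the entrywise (Hadamard) product. Note that (x, w) and (cx, w/c) produce the same measurements for any c > 0, which is the inherent scaling ambiguity referred to below.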

For the case where the vectors have known signs and belong to known subspaces, we introduce the convex program BranchHull, which is posed in the natural parameter space and does not require an approximate solution or initialization in order to be stated or solved. Under the structural assumptions that x and w are members of known K- and N-dimensional random subspaces, we present recovery guarantees for the noiseless case and a noisy case. In the noiseless case, we prove that BranchHull recovers x and w up to the inherent scaling ambiguity with high probability when L >> 2(K+N). The analysis provides a precise upper bound on the coefficient in the sample complexity. In the noisy case, we show that, with high probability, BranchHull is robust to small dense noise when L = Ω(K+N).
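
The abstract does not reproduce the program itself, but a minimal sketch of a BranchHull-style convex program can be written in Python with cvxpy. Here B and C are assumed bases for the known subspaces, sx and sw encode the known signs, and each hyperbolic constraint p_l q_l >= |y_l| over the known branch is written in its second-order cone form; the exact objective and constraints in the dissertation may differ.

import cvxpy as cp
import numpy as np

# Sizes: L measurements, K- and N-dimensional random subspaces.
L, K, N = 200, 10, 10
rng = np.random.default_rng(0)
B, C = rng.standard_normal((L, K)), rng.standard_normal((L, N))
h_true, m_true = rng.standard_normal(K), rng.standard_normal(N)
x, w = B @ h_true, C @ m_true
y = x * w                          # entrywise-product measurements
sx, sw = np.sign(x), np.sign(w)    # known sign information

# Sketch: minimize ||h||^2 + ||m||^2 subject to each pair
# (p_l, q_l) = (sx_l * b_l'h, sw_l * c_l'm) lying on or above the
# branch of the hyperbola p*q = |y_l| with p, q >= 0.
h, m = cp.Variable(K), cp.Variable(N)
p, q = cp.multiply(sx, B @ h), cp.multiply(sw, C @ m)
cons = [p >= 0, q >= 0]
for l in range(L):
    # p_l*q_l >= |y_l|  <=>  ||(2*sqrt(|y_l|), p_l - q_l)||_2 <= p_l + q_l
    cons.append(cp.norm(cp.hstack([2 * np.sqrt(abs(y[l])), p[l] - q[l]])) <= p[l] + q[l])
prob = cp.Problem(cp.Minimize(cp.sum_squares(h) + cp.sum_squares(m)), cons)
prob.solve()

Minimizing the norms pulls each (p_l, q_l) down onto its hyperbola branch and balances the scale between the two factors, which is one natural way to resolve the scaling ambiguity.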

We reformulate the BranchHull program and introduce the l1-BranchHull program for the case where w and x are sparse with respect to known dictionaries of size K and N, respectively. Here, K and N may be larger than, smaller than, or equal to L. The l1-BranchHull program is also a convex program posed in the natural parameter space. We study the case where x and w are S1- and S2-sparse with respect to a random dictionary, with the sparse vectors satisfying an effective sparsity condition, and present a recovery guarantee that requires the number of measurements to satisfy L = Ω((S1 + S2) log^2(K + N)). We also introduce variants of l1-BranchHull for the purposes of tolerating noise and outliers, and for recovering piecewise constant signals. We provide an ADMM implementation of these variants and show that they can extract piecewise constant behavior from real images.
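
As an illustration of these variants, the following sketch (same setup style as above, now with sparse coefficients) swaps the squared-norm objective for an l1 objective and optionally adds a total variation term for piecewise constant structure. The weights and the use of cvxpy's generic solver in place of a custom ADMM implementation are illustrative choices, not the dissertation's formulation.

import cvxpy as cp
import numpy as np

L, K, N, S1, S2 = 200, 50, 50, 5, 5
rng = np.random.default_rng(1)
B, C = rng.standard_normal((L, K)), rng.standard_normal((L, N))

# S1- and S2-sparse ground-truth coefficients.
h_true, m_true = np.zeros(K), np.zeros(N)
h_true[rng.choice(K, S1, replace=False)] = rng.standard_normal(S1)
m_true[rng.choice(N, S2, replace=False)] = rng.standard_normal(S2)
x, w = B @ h_true, C @ m_true
y, sx, sw = x * w, np.sign(x), np.sign(w)

h, m = cp.Variable(K), cp.Variable(N)
p, q = cp.multiply(sx, B @ h), cp.multiply(sw, C @ m)
cons = [p >= 0, q >= 0]
for l in range(L):
    cons.append(cp.norm(cp.hstack([2 * np.sqrt(abs(y[l])), p[l] - q[l]])) <= p[l] + q[l])

# The l1 objective promotes sparse h and m; a total variation term on the
# synthesized signal C @ m (weight lam_tv, switched off here) would promote
# piecewise constant behavior, as in the image experiments.
lam_tv = 0.0
obj = cp.norm1(h) + cp.norm1(m) + lam_tv * cp.tv(C @ m)
cp.Problem(cp.Minimize(obj), cons).solve()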

We also examine the theoretical properties of enforcing priors provided by generative deep neural networks on the unknown signals via empirical risk minimization. We establish that, for networks of suitable dimensions and under a randomness assumption on the network weights, the non-convex objective function given by empirical risk minimization has a favorable landscape. That is, we show that at any point away from small neighborhoods around four hyperbolic curves, the objective function has a descent direction. We also characterize the local maximizers of the empirical risk objective and hence show that no stationary points exist outside of the four hyperbolic neighborhoods and the set of local maximizers.
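
For concreteness, here is a minimal numpy sketch of the empirical risk objective under generative priors. The two-layer ReLU generators with Gaussian weights and the chosen dimensions are illustrative assumptions, not the specific architecture analyzed in the dissertation.

import numpy as np

rng = np.random.default_rng(2)
L, n1, n2, k1, k2 = 100, 50, 50, 5, 5   # output, hidden, latent sizes
relu = lambda z: np.maximum(z, 0.0)

# Random-weight ReLU generators G1: R^k1 -> R^L and G2: R^k2 -> R^L.
W1, W2 = rng.standard_normal((n1, k1)), rng.standard_normal((L, n1))
V1, V2 = rng.standard_normal((n2, k2)), rng.standard_normal((L, n2))
G1 = lambda h: W2 @ relu(W1 @ h)
G2 = lambda m: V2 @ relu(V1 @ m)

# Measurements: entrywise product of the two generated signals.
h_true, m_true = rng.standard_normal(k1), rng.standard_normal(k2)
y = G1(h_true) * G2(m_true)

# Empirical risk over the latent codes; the landscape result says this
# non-convex function has a descent direction at any point away from small
# neighborhoods of four hyperbolic curves and from the local maximizers.
def empirical_risk(h, m):
    return 0.5 * np.linalg.norm(G1(h) * G2(m) - y) ** 2

print(empirical_risk(h_true, m_true))          # 0 at the truth
# ReLU is positively homogeneous, so (c * h_true, m_true / c) with c > 0 is
# also a global minimizer: the scaling ambiguity traces out a hyperbola in
# the latent space.
print(empirical_risk(2 * h_true, m_true / 2))  # ~0 as well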

Degree
Doctor of Philosophy
Type
Thesis
Keywords
Blind Demodulation, Convex programming, Non-convex programming, Sparsity, Convex relaxation
Citation

Joshi, Babhru. "Blind Demodulation via Convex and Non-Convex Programming." (2019) Diss., Rice University. https://hdl.handle.net/1911/106016.

Rights
Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder.