Browsing by Author "Joshi, Babhru"
Item: A Convex Algorithm for Mixed Linear Regression (2017-03-22)
Joshi, Babhru; Hand, Paul E.
Mixed linear regression is a high-dimensional affine space clustering problem in which the goal is to find the parameters of multiple affine spaces that best fit a collection of points. We introduce a convex second-order cone program (based on an $\ell_1$/fused lasso penalty) that allows us to reformulate mixed linear regression as a clustering problem in $\mathbb{R}^d$. The convex program is parameter-free and does not require prior knowledge of the number of clusters, which makes the clustering step in $\mathbb{R}^d$ more tractable. In the noiseless case, we prove that the convex program recovers the regression coefficients exactly under narrow technical conditions of well-separation and balance. We demonstrate numerical performance on BikeShare data and on music tone perception data.

Item: Blind Demodulation via Convex and Non-Convex Programming (2019-04-18)
Joshi, Babhru; Hand, Paul; Hicks, Illya
We consider the bilinear inverse problem of recovering two vectors, $x$ and $w$, in $\mathbb{R}^L$ from their entrywise product. In this dissertation, we consider three different priors on these unknown signals: a subspace prior, a sparsity prior, and a generative prior. For both the subspace prior and the sparsity prior, we assume the signs of $x$ and $w$ are known, which admits intuitive convex programs. For the generative prior, we study a non-convex program, the empirical risk minimization program. For the case where the vectors have known signs and belong to known subspaces, we introduce the convex program BranchHull, which is posed in the natural parameter space and does not require an approximate solution or initialization in order to be stated or solved. Under the structural assumptions that $x$ and $w$ are members of known $K$- and $N$-dimensional random subspaces, we present recovery guarantees for the noiseless and noisy cases. In the noiseless case, we prove that BranchHull recovers $x$ and $w$ up to the inherent scaling ambiguity with high probability when $L \gg 2(K+N)$. The analysis provides a precise upper bound on the coefficient in the sample complexity. In the noisy case, we show that with high probability BranchHull is robust to small dense noise when $L = \Omega(K+N)$. We reformulate the BranchHull program and introduce the $\ell_1$-BranchHull program for the case where $w$ and $x$ are sparse with respect to known dictionaries of size $K$ and $N$, respectively. Here, $K$ and $N$ may be larger than, smaller than, or equal to $L$. The $\ell_1$-BranchHull program is also a convex program posed in the natural parameter space. We study the case where $x$ and $w$ are $S_1$- and $S_2$-sparse with respect to a random dictionary, with the sparse vectors satisfying an effective sparsity condition, and present a recovery guarantee in which the number of measurements scales as $L = \Omega\big((S_1+S_2)\log^{2}(K+N)\big)$. We also introduce variants of $\ell_1$-BranchHull for tolerating noise and outliers and for recovering piecewise constant signals. We provide an ADMM implementation of these variants and show that they can extract piecewise constant behavior from real images. We also examine the theoretical properties of enforcing priors provided by generative deep neural networks on the unknown signals via empirical risk minimization.
We establish that for networks of suitable dimensions with a randomness assumption on the network weights, the non-convex objective function given by empirical risk minimization has a favorable landscape. That is, we show that at any point away from small neighborhoods around four hyperbolic curves, the objective function has a descent direction. We also characterize the local maximizers of the empirical risk objective and, hence, show that there does not exist any other stationary point outside of the four hyperbolic neighborhoods and the set of local maximizers.
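One way to picture the kind of program described in the first item is a convex-clustering formulation: each sample gets its own regression vector, its response is interpolated exactly, and a fused-lasso-style sum of pairwise $\ell_2$ differences pulls the per-sample vectors together so they cluster around the mixture components, turning mixed linear regression into clustering in $\mathbb{R}^d$. The cvxpy sketch below is only an illustration of that idea; the dimensions, variable names, and exact objective are hypothetical, not taken from the paper.

```python
# Hypothetical convex-clustering sketch of the idea in the first abstract.
# Each sample i gets its own regression vector B[i]; exact interpolation plus a
# fused-lasso-type pairwise penalty encourages the B[i] to cluster.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d = 40, 2
beta_true = np.array([[1.0, 2.0], [-2.0, 0.5]])   # two unknown regression vectors
labels = rng.integers(0, 2, size=n)               # hidden mixture assignment
A = rng.standard_normal((n, d))
y = np.einsum("ij,ij->i", A, beta_true[labels])   # noiseless mixed linear responses

B = cp.Variable((n, d))                           # one regression vector per sample
fit = [A[i] @ B[i] == y[i] for i in range(n)]     # interpolate each response exactly
penalty = sum(cp.norm(B[i] - B[j], 2)             # sum of pairwise l2 differences
              for i in range(n) for j in range(i + 1, n))
cp.Problem(cp.Minimize(penalty), fit).solve()

# Under well-separation and balance conditions like those in the abstract, the
# rows of B should concentrate near the two true components.
print(np.round(B.value[:5], 2))
print(beta_true)
```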
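The BranchHull program in the dissertation abstract works in the natural parameter space: for each measurement $y_\ell = w_\ell x_\ell$, the known signs select one branch of the hyperbola $pq = y_\ell$, and the convex hull of that branch is a second-order-cone-representable set. The sketch below encodes those per-measurement hull constraints in cvxpy and minimizes a norm-squared objective to resolve the unbounded directions; the exact objective, constants, and dimensions are illustrative assumptions, not the dissertation's precise formulation.

```python
# Hypothetical BranchHull-style sketch: recover w = B h and x = C m (known random
# subspaces, known signs) from the entrywise product y = w .* x.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
L, K, N = 200, 10, 10                      # L >> 2(K + N), as in the noiseless guarantee

B = rng.standard_normal((L, K))            # known K-dimensional subspace for w
C = rng.standard_normal((L, N))            # known N-dimensional subspace for x
h_true, m_true = rng.standard_normal(K), rng.standard_normal(N)
w_true, x_true = B @ h_true, C @ m_true
y = w_true * x_true                        # entrywise product measurements
s, t = np.sign(w_true), np.sign(x_true)    # known sign information

u, v = cp.Variable(K), cp.Variable(N)
p = cp.multiply(s, B @ u)                  # sign-corrected coordinates p_l = s_l (B u)_l
q = cp.multiply(t, C @ v)

# Convex hull of one hyperbola branch per measurement: p_l >= 0, q_l >= 0, p_l q_l >= |y_l|.
hulls = [p >= 0, q >= 0]
hulls += [cp.quad_over_lin(np.sqrt(abs(y[l])), q[l]) <= p[l] for l in range(L)]

cp.Problem(cp.Minimize(cp.sum_squares(u) + cp.sum_squares(v)), hulls).solve()

# Recovery is only defined up to the scaling ambiguity (h, m) -> (c h, m / c),
# so compare normalized directions rather than raw vectors.
w_hat = B @ u.value
print(np.linalg.norm(w_hat / np.linalg.norm(w_hat) - w_true / np.linalg.norm(w_true)))
```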
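The landscape result at the end of the abstract concerns the empirical risk objective under the generative prior: the two signals are modeled as outputs of expansive, bias-free ReLU networks, and the risk measures the misfit of their entrywise product. The toy numpy sketch below (architectures and dimensions are made up for illustration) evaluates that objective and shows how positive homogeneity of bias-free ReLU networks traces out a hyperbolic curve of global minimizers, one of the curves the descent-direction analysis must exclude.

```python
# Toy illustration of the empirical risk objective for blind demodulation with
# generative priors; networks and sizes here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def relu_net(weights, h):
    """Forward pass of a bias-free ReLU network: relu(W_d ... relu(W_1 h))."""
    z = h
    for W in weights:
        z = np.maximum(W @ z, 0.0)
    return z

k1, k2, L = 5, 5, 400                      # latent dimensions and signal length
G1 = [rng.standard_normal((80, k1)), rng.standard_normal((L, 80))]   # expansive net for w
G2 = [rng.standard_normal((80, k2)), rng.standard_normal((L, 80))]   # expansive net for x

h1_true, h2_true = rng.standard_normal(k1), rng.standard_normal(k2)
y = relu_net(G1, h1_true) * relu_net(G2, h2_true)   # entrywise product measurements

def empirical_risk(h1, h2):
    """0.5 * || G1(h1) .* G2(h2) - y ||^2, the non-convex ERM objective."""
    return 0.5 * np.sum((relu_net(G1, h1) * relu_net(G2, h2) - y) ** 2)

print(empirical_risk(h1_true, h2_true))              # zero at the true latent codes
print(empirical_risk(2.0 * h1_true, 0.5 * h2_true))  # also (numerically) zero along the
                                                     # curve (c h1, h2 / c), c > 0, by
                                                     # positive homogeneity of ReLU nets
```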