Browsing by Author "Osher, Stanley"
Now showing 1 - 4 of 4
Item: A Comparison of Three Total Variation Based Texture Extraction Models (2007-01)
Authors: Yin, Wotao; Goldfarb, Donald; Osher, Stanley
This paper qualitatively compares three recently proposed models for signal/image texture extraction based on total variation minimization: the Meyer, Vese-Osher, and TV-L1 models. We formulate discrete versions of these models as second-order cone programs (SOCPs), which can be solved efficiently by interior-point methods. Our experiments with these models on 1D oscillating signals and 2D images reveal their differences: the Meyer model tends to extract oscillation patterns in the input, the TV-L1 model performs a strict multiscale decomposition, and the Vese-Osher model has properties falling in between the other two models.

Item: Error Forgetting of Bregman Iteration (2012-01)
Authors: Yin, Wotao; Osher, Stanley
This short article analyzes an interesting property of the Bregman iterative procedure for minimizing a convex piecewise-linear function J(x) subject to linear constraints Ax = b. The procedure obtains its solution by solving a sequence of unconstrained subproblems, each minimizing J(x) + (1/2)||Ax - b^k||^2, and iteratively updating b^k. In practice, the subproblems are solved to finite accuracy. Let w^k denote the numerical error at iteration k. If all w^k are sufficiently small, Bregman iteration identifies the optimal face in finitely many iterations, and afterward it enjoys an interesting error-forgetting property: the distance between the current point and the optimal solution set is bounded by ||w^{k+1} - w^k||, independent of the numerical errors at previous iterations. This property partially explains why the Bregman iterative procedure works well for sparse optimization and ||x||_1 minimization.
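The subproblem-plus-update structure of the Bregman procedure described in this abstract can be sketched for the sparse case J(x) = ||x||_1. This is a minimal illustration, not the paper's code: the inner FISTA solver, the weight mu, and the iteration counts are arbitrary choices made here for the sketch.

```python
import numpy as np

def fista(A, b, mu, x0, n_iter=300):
    """Approximately solve the Bregman subproblem
    min_x mu*||x||_1 + 0.5*||Ax - b||^2 by accelerated proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = x0.copy(); y = x.copy(); t = 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ y - b)              # gradient of 0.5*||Ay - b||^2
        z = y - g / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - mu / L, 0.0)  # soft-threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def bregman_l1(A, b, mu=0.1, n_outer=20):
    """Bregman iteration for min ||x||_1 s.t. Ax = b (add-back form):
    solve the subproblem inexactly, then update b^k by adding the residual back."""
    x = np.zeros(A.shape[1])
    bk = b.copy()
    for _ in range(n_outer):
        x = fista(A, bk, mu, x)            # inexact subproblem solve (warm start)
        bk = bk + (b - A @ x)              # b^{k+1} = b^k + (b - A x^{k+1})
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60); x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = bregman_l1(A, b)
```

Even though each subproblem is solved only approximately, the iterates home in on the sparse solution, consistent with the error-forgetting behavior the article analyzes.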
The error-forgetting property is unique to piecewise-linear (i.e., polyhedral) functions J(x), and it is new to the literature on the augmented Lagrangian method.

Item: Learning Circulant Sensing Kernels (2012-01)
Authors: Xu, Yangyang; Yin, Wotao; Osher, Stanley
In signal acquisition, Toeplitz and circulant matrices are widely used as sensing operators. They correspond to discrete convolutions and are easily, or even naturally, realized in various applications. For compressive sensing, recent work has used random Toeplitz and circulant sensing matrices and proved their efficiency in theory, in computer simulations, and in physical optical experiments. Motivated by recent work of Duarte-Carvajalino and Sapiro, we propose models to learn a circulant sensing matrix/operator for one- and higher-dimensional signals. Given the dictionary of the signal(s) to be sensed, the learned circulant sensing matrix/operator is more effective than a randomly generated one, and even slightly more effective than a Gaussian random sensing matrix. In addition, by exploiting the circulant structure, we extend the learning from the patch scale of Duarte-Carvajalino and Sapiro to the much larger image scale. Furthermore, we test learning the circulant sensing matrix/operator and the nonparametric dictionary jointly and obtain even better performance. We demonstrate these results using both synthetic sparse signals and real images.

Item: The Total Variation Regularized L1 Model for Multiscale Decomposition (2006-11)
Authors: Yin, Wotao; Goldfarb, Donald; Osher, Stanley
This paper studies the total variation regularization model with an L1 fidelity term (TV-L1) for decomposing an image into features of different scales. We first show that the images produced by this model can be formed from the minimizers of a sequence of decoupled geometry subproblems.
Using this result, we show that the TV-L1 model is able to separate image features according to their scales, where the scale is analytically defined by the G-value. A number of other properties, including the geometric and morphological invariance of the TV-L1 model, are also proved, and their applications are discussed.
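The scale-based separation described in the TV-L1 abstract above can be illustrated in 1D, where the model reads min_u ||Du||_1 + lam*||u - f||_1 and a pulse of width w survives roughly when lam > 2/w. This sketch uses a generic Chambolle-Pock primal-dual scheme rather than the paper's SOCP or geometry-subproblem approach; the signal, lam, and step sizes are illustrative choices.

```python
import numpy as np

def tv_l1_1d(f, lam, n_iter=3000):
    """Chambolle-Pock sketch for the 1D TV-L1 model
    min_u ||Du||_1 + lam*||u - f||_1, with (Du)_i = u_{i+1} - u_i."""
    tau = sigma = 0.45                     # steps satisfy tau*sigma*||D||^2 < 1
    u = f.copy(); u_bar = u.copy()
    p = np.zeros(len(f) - 1)               # dual variable for Du
    for _ in range(n_iter):
        # dual ascent, then projection onto [-1, 1] (conjugate of ||.||_1)
        p = np.clip(p + sigma * np.diff(u_bar), -1.0, 1.0)
        # primal descent: subtract tau * D^T p, then prox of lam*||. - f||_1
        div = np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))  # D^T p
        v = u - tau * div
        u_new = f + np.sign(v - f) * np.maximum(np.abs(v - f) - tau * lam, 0.0)
        u_bar = 2 * u_new - u              # over-relaxation
        u = u_new
    return u

# A narrow pulse (width 3) and a wide pulse (width 20) of the same height:
f = np.zeros(100); f[10:13] = 1.0; f[40:60] = 1.0
u = tv_l1_1d(f, lam=0.3)                   # 2/20 < 0.3 < 2/3
```

With lam = 0.3 the narrow pulse is removed (lam < 2/3) while the wide pulse is kept (lam > 2/20), matching the scale-selection behavior the paper proves.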
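The circulant sensing operators in the "Learning Circulant Sensing Kernels" item above correspond to circular convolutions, so they can be applied in O(n log n) via the FFT; compressive measurements keep only a subset of the outputs. A minimal sketch, with illustrative names and a random (rather than learned) kernel:

```python
import numpy as np

def circulant_sense(c, x, keep):
    """Apply the circulant matrix generated by kernel c to x via the FFT,
    then keep only the measurement indices in `keep`."""
    y_full = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real  # circular conv
    return y_full[keep]

rng = np.random.default_rng(1)
n, m = 64, 16
c = rng.standard_normal(n)                 # one kernel generates the whole matrix
x = rng.standard_normal(n)
keep = rng.choice(n, size=m, replace=False)
y = circulant_sense(c, x, keep)

# Cross-check against the explicit circulant matrix C[i, j] = c[(i - j) mod n]
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
```

The learning problem in the paper optimizes the kernel c (given a dictionary) instead of drawing it at random, but the sensing operation itself has this convolutional form.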