
### Browsing by Author "Zhang, Yin"

Now showing 1 - 20 of 110


#### A Branch-and-Cut Method for Solving the Bilevel Clique Interdiction Problem (2015-04-20)

Becker, Timothy J.; Hicks, Illya; Zhang, Yin; McLurkin, James

I introduce an algorithm to solve the current formulation of the bilevel clique interdiction problem. Interdiction, a military term, describes the removal of enemy resources. The single-level clique interdiction problem describes the attempt of an attacker to interdict a maximum number of cliques. The bilevel form of the problem introduces a defender who attempts to minimize the number of cliques interdicted by the attacker. Previous authors have developed algorithms for the single-level clique interdiction problem, as well as for bilevel formulations of other problems. However, an algorithm for the bilevel clique interdiction problem has not previously been created. The algorithm presented in this thesis uses a branch-and-cut approach to solve the proposed problem. This algorithm is expected to be usable on any social network, thereby improving the study of many network problems, including terrorist cells and marketing strategies.

#### A Compressive Sensing and Unmixing Scheme for Hyperspectral Data Processing (2011-01)

Li, Chengbo; Sun, Ting; Kelly, Kevin; Zhang, Yin

Hyperspectral data processing typically demands enormous computational resources in terms of storage, computation, and I/O throughput, especially when real-time processing is desired. In this paper, we investigate a low-complexity scheme for hyperspectral data compression and reconstruction. In this scheme, compressed hyperspectral data are acquired directly by a device similar to the single-pixel camera, based on the principle of compressive sensing. To decode the compressed data, we propose a numerical procedure to directly compute the unmixed abundance fractions of given endmembers, completely bypassing high-complexity tasks involving the hyperspectral data cube itself.
The reconstruction model minimizes the total variation of the abundance fractions subject to a pre-processed fidelity equation of significantly reduced size, along with other side constraints. An augmented Lagrangian type algorithm is developed to solve this model. We conduct extensive numerical experiments to demonstrate the feasibility and efficiency of the proposed approach, using both synthetic data and hardware-measured data. Experimental and computational evidence obtained from this study indicates that the proposed scheme has high potential in real-world applications.

#### A Computational Study of a Gradient-Based Log-Barrier Algorithm for a Class of Large-Scale SDPs (2001-06)

Burer, Samuel; Monteiro, Renato D.C.; Zhang, Yin

The authors of this paper recently introduced a transformation that converts a class of semidefinite programs (SDPs) into nonlinear optimization problems free of matrix-valued constraints and variables. This transformation enables the application of nonlinear optimization techniques to the solution of certain SDPs that are too large for conventional interior-point methods to handle efficiently. Based on the transformation, they proposed a globally convergent, first-order (i.e., gradient-based) log-barrier algorithm for solving a class of linear SDPs. In this paper, we discuss an efficient implementation of the proposed algorithm and report computational results on semidefinite relaxations of three types of combinatorial optimization problems.
Our results demonstrate that the proposed algorithm is indeed capable of solving large-scale SDPs and is particularly effective for problems with a large number of constraints.

#### A Cubically Convergent Method for Locating a Nearby Vertex in Linear Programming (1989-12)

Tapia, R.A.; Zhang, Yin

#### A Fast Algorithm for Edge-Preserving Variational Multichannel Image Restoration (2008-07)

Yang, Junfeng; Yin, Wotao; Zhang, Yin; Wang, Yilun

We generalize the alternating minimization algorithm recently proposed in [32] to efficiently solve a general, edge-preserving, variational model for recovering multichannel images degraded by within- and cross-channel blurs, as well as additive Gaussian noise. This general model allows the use of localized weights and higher-order derivatives in regularization, and includes a multichannel extension of total variation (MTV) regularization as a special case. In the MTV case, we show that the model can be derived from an extended half-quadratic transform of Geman and Yang [14]. For color images with three channels, and when applied to the MTV model (either locally weighted or not), the per-iteration computational complexity of this algorithm is dominated by nine fast Fourier transforms. We establish strong convergence results for the algorithm, including finite convergence for some variables and fast q-linear convergence for the others. Numerical results on various types of blurs are presented to demonstrate the performance of our algorithm compared to that of the MATLAB deblurring functions.
We also present experimental results on regularization models using weighted MTV and higher-order derivatives to demonstrate improvements in image quality provided by these models over the plain MTV model.

#### A Fast Newton's Algorithm for Entropy Maximization in Phase Determination (1999-05)

Wu, Zhijun; Phillips, George; Tapia, Richard; Zhang, Yin

A long-standing problem in X-ray crystallography, known as the phase problem, is to determine the phases for a large set of complex variables, called the structure factors of the crystal, given their magnitudes obtained from X-ray diffraction experiments. We introduce a statistical phase estimation approach to the problem. This approach requires solving a special class of entropy maximization problems repeatedly to obtain the joint probability distribution of the structure factors. The entropy maximization problem is a semi-infinite convex program, which can be solved in a finite dual space by using a standard Newton's method. The standard Newton's method converges quadratically but is costly in general, requiring O(n^3) floating point operations in every iteration, where n is the number of variables. We present a fast Newton's algorithm for solving the entropy maximization problem. The algorithm requires only O(n log n) floating point operations for each of its iterates, yet has the same convergence rate as the standard Newton's method. We describe the algorithm and discuss related computational issues. Numerical results on simple test cases will also be presented to demonstrate the behavior of the algorithm.

#### A Fast TVL1-L2 Minimization Algorithm for Signal Reconstruction from Partial Fourier Data (2008-10)

Yang, Junfeng; Zhang, Yin; Yin, Wotao

Recent compressive sensing results show that it is possible to accurately reconstruct certain compressible signals from relatively few linear measurements by solving nonsmooth convex optimization problems.
In this paper, we propose a simple and fast algorithm for signal reconstruction from partial Fourier data. The algorithm minimizes the sum of three terms corresponding to total variation, $\ell_1$-norm regularization, and least squares data fitting. It uses an alternating minimization scheme in which the main computation involves shrinkage and fast Fourier transforms (FFTs), or alternatively discrete cosine transforms (DCTs) when the available data are in the DCT domain. We analyze the convergence properties of this algorithm and compare its numerical performance with that of two recently proposed algorithms. Our numerical simulations on recovering magnetic resonance images (MRI) indicate that the proposed algorithm is highly efficient, stable, and robust.

#### A Fixed-Point Continuation Method for L_1-Regularization with Application to Compressed Sensing (2007-05)

Hale, Elaine T.; Yin, Wotao; Zhang, Yin

We consider solving minimization problems with $L_1$-regularization: $\min \|x\|_1 + \mu f(x)$, particularly for $f(x) = \frac{1}{2}\|Ax-b\|_M^2$, where $A$ is $m \times n$ and $m < n$. Our goal is to construct efficient and robust algorithms for solving large-scale problems with dense data, and our approach is based on two powerful algorithmic ideas: operator splitting and continuation. This paper establishes q-linear convergence rates for our algorithm applied to problems with $f(x)$ convex, but not necessarily strictly convex. We present numerical results for several types of compressed sensing problems, and show that our algorithm compares favorably with three state-of-the-art algorithms when applied to large-scale problems with noisy data.

#### A General Robust-Optimization Formulation for Nonlinear Programming (2004-07)

Zhang, Yin

Most research in robust optimization has so far been focused on inequality-only, convex conic programming with simple linear models for uncertain parameters. Many practical optimization problems, however, are nonlinear and non-convex.
Even in linear programming, coefficients may still be nonlinear functions of uncertain parameters. In this paper, we propose robust formulations that extend the robust-optimization approach to a general nonlinear programming setting with parameter uncertainty involving both equality and inequality constraints. The proposed robust formulations are valid in a neighborhood of a given nominal parameter value and are robust to first order, and are thus suitable for applications where reasonable parameter estimates are available and uncertain variations are moderate.

#### A Geometric Approach to Fluence Map Optimization in IMRT Cancer Treatment Planning (2004-07)

Zhang, Yin; Merritt, Michael

Intensity-modulated radiation therapy (IMRT) is a state-of-the-art technique for administering radiation to cancer patients. The goal of a treatment is to deliver a prescribed amount of radiation to the tumor while limiting the amount absorbed by the surrounding healthy and critical organs. Planning an IMRT treatment requires determining fluence maps, each consisting of hundreds or more beamlet intensities. Since it is difficult or impossible to deliver a sufficient dose to a tumor without irradiating nearby critical organs, radiation oncologists have developed guidelines to allow tradeoffs by introducing so-called dose-volume constraints (DVCs), which specify, for each critical organ, a given percentage of volume that can be sacrificed if necessary. Such constraints, however, are combinatorial in nature and pose significant challenges to the fluence map optimization problem. In this paper, we describe a new geometric approach to the fluence map optimization problem. Contrary to the traditional view, we regard dose distributions as our primary independent variables, while treating beamlet intensities as secondary ones.
We consider two sets in the dose space: (i) the physical set, consisting of physically realizable dose distributions, and (ii) the prescription set, consisting of dose distributions meeting the prescribed tumor doses and satisfying the given dose-volume constraints. We seek a suitable dose distribution by successively projecting between these two sets. A crucial observation is that the projection onto the prescription set, which is non-convex, can be properly defined and easily computed. The projection onto the physical set, on the other hand, requires solving a nonnegative least squares problem. We show that this alternating projection algorithm is actually equivalent to a greedy algorithm driven by local sensitivity information readily available in our formulation. Moreover, the availability of such local sensitivity information will enable us to devise greedy algorithms to search for a desirable plan even when a "good and achievable" prescription is unknown.

#### A Global Optimization Method for the Molecular Replacement Problem in X-ray Crystallography (2002-06)

Jamrog, Diane C.; Phillips, George N. Jr.; Tapia, Richard A.; Zhang, Yin

The primary technique for determining the three-dimensional structure of a protein molecule is X-ray crystallography, in which the molecular replacement (MR) problem often arises as a critical step. The MR problem is a global optimization problem: locate an optimal position of a model protein, whose structure is similar to the unknown protein structure to be determined, so that at this position the model protein produces calculated intensities closest to those observed in an X-ray crystallography experiment. Improving the applicability and robustness of MR methods is an important research topic because commonly used traditional MR methods, though often successful, have their limitations in solving difficult problems.
We introduce a new global optimization strategy that combines a coarse-grid search, using a surrogate function, with extensive multi-start local optimization. A new MR code based on this strategy, called SOMoRe, is developed and tested on four realistic problems, including two difficult problems that traditional MR codes failed to solve directly. SOMoRe was able to solve each test problem without any complication, and it solved an MR problem using a less complete model than the models required by three other programs. These results indicate that the new method is promising and should enhance the applicability and robustness of the MR methodology.

#### A New Alternating Minimization Algorithm for Total Variation Image Reconstruction (2007-06)

Wang, Yilun; Yang, Junfeng; Yin, Wotao; Zhang, Yin

We propose, analyze, and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new half-quadratic model applicable to not only the anisotropic but also the isotropic forms of total variation discretizations. The per-iteration computational complexity of the algorithm is three fast Fourier transforms (FFTs). We establish strong convergence properties for the algorithm, including finite convergence for some variables and relatively fast exponential (or q-linear, in optimization terminology) convergence for the others. Furthermore, we propose a continuation scheme to accelerate the practical convergence of the algorithm. Extensive numerical results show that our algorithm performs favorably in comparison to several state-of-the-art algorithms. In particular, it runs orders of magnitude faster than the Lagged Diffusivity algorithm for total-variation-based deblurring.
Some extensions of our algorithm are also discussed.

#### A new global optimization strategy for the molecular replacement problem (2002)

Jamrog, Diane Christine; Zhang, Yin; Phillips, George N., Jr.; Tapia, Richard A.

The primary technique for determining the three-dimensional structure of a protein is X-ray crystallography, in which the molecular replacement (MR) problem arises as a critical step. Knowledge of protein structures is extremely useful for medical research, including discovering the molecular basis of disease and designing pharmaceutical drugs. This thesis proposes a new strategy to solve the MR problem, which is a global optimization problem to find the optimal orientation and position of a structurally similar model protein that will produce calculated intensities closest to those observed in an X-ray crystallography experiment. Improving the applicability and the robustness of MR methods is an important research goal because commonly used traditional MR methods, though often successful, have difficulty solving certain classes of MR problems. Moreover, the use of MR methods is only expected to increase as more structures are deposited into the Protein Data Bank. The new strategy has two major components: a six-dimensional global search and multi-start local optimization. The global search uses a low-frequency surrogate objective function and samples a coarse grid to identify good starting points for multi-start local optimization, which uses a more accurate objective function. As a result, the global search is relatively quick, and the local optimization efforts are focused on promising regions of the MR variable space where solutions are likely to exist, in contrast to the traditional search strategy, which exhaustively samples a uniformly fine grid of the variable space. In addition, the new strategy is deterministic, in contrast to stochastic search methods that randomly sample the variable space.
This dissertation introduces a new MR program called SOMoRe that implements the new global optimization strategy. When tested on seven problems, SOMoRe was able to straightforwardly solve every test problem, including three problems that could not be directly solved by traditional MR programs. Moreover, SOMoRe also solved an MR problem using a less complete model than those required by two traditional programs and a stochastic 6D program. Based on these results, this new strategy promises to extend the applicability and robustness of MR.

#### A Nonlinear Differential Semblance Algorithm for Waveform Inversion (2013-07-24)

Sun, Dong; Symes, William W.; Heinkenschloss, Matthias; Zhang, Yin; Zelt, Colin A.

This thesis proposes a nonlinear differential semblance approach to full waveform inversion as an alternative to standard least squares inversion, which cannot guarantee a reliable solution because of the existence of many spurious local minima of the objective function for typical data that lack low-frequency energy. Nonlinear differential semblance optimization combines the ability of full waveform inversion to account for nonlinear physical effects, such as multiple reflections, with the tendency of differential semblance migration velocity analysis to avoid local minima. It borrows the gather-flattening concept from migration velocity analysis and updates the velocity by flattening primaries-only gathers obtained via nonlinear inversion. I describe a general formulation of this algorithm, its main components, and its implementation.
Numerical experiments show that for simple layered models, standard least squares inversion fails, whereas nonlinear differential semblance succeeds in constructing a kinematically correct model and fitting the data rather precisely.

#### A sensitivity-driven greedy approach to fluence map optimization in intensity-modulated radiation therapy (2006)

Merritt, Michael S.; Zhang, Yin

Intensity-modulated radiation therapy (IMRT) is a state-of-the-art technique for administering radiation to cancer patients. The goal of a treatment is to maximize the radiation absorbed by the tumor and minimize that absorbed by the surrounding critical organs. Since a plan can almost never be found that both kills the tumor and completely avoids irradiating critical organs, the medical physics community has quantified the sacrifices that can be tolerated in so-called dose-volume constraints. Efficiently imposing such constraints, which are fundamentally combinatorial in nature, poses a major challenge due to the large amount of data. Also, the IMRT technology limits which dose distributions are actually deliverable. So, we seek a physically deliverable dose distribution that at the same time meets the minimum tumor dose prescription and satisfies the dose-volume constraints. We propose a new greedy algorithm and show that it converges to a local minimum of the stated formulation of the fluence map problem. Numerical comparison is made to an approach representative of the leading commercial software for IMRT planning. We find that our method produces plans of competitive quality with a notable improvement in computational performance. Our efficiency gain is most aptly attributed to a new interior-point gradient algorithm for solving the nonnegative least squares subproblem at every iteration.
Convergence is proven, and numerical comparisons are made to other leading methods, demonstrating that this solver is well suited for the subproblem.

#### A Simple Proof for Recoverability of L1-Minimization (II): the Nonnegativity Case (2005-09)

Zhang, Yin

When using L1 minimization to recover a sparse, nonnegative solution to an under-determined linear system of equations, what is the highest sparsity level at which recovery can still be guaranteed? Recently, Donoho and Tanner discovered, by invoking classic results from the theory of convex polytopes, that the highest sparsity level equals half of the number of equations. In this report, we provide a completely self-contained, yet short and elementary, proof of this remarkable result. We also connect the dots among recoverability conditions obtained from different spaces.

#### A Simple Proof for Recoverability of L1-Minimization: Go Over or Under? (2005-08)

Zhang, Yin

It is well known by now that L1 minimization can help recover sparse solutions to under-determined linear equations or sparsely corrupted solutions to over-determined equations, and the two problems are equivalent under appropriate conditions. So far, almost all theoretical results have been obtained by studying the "under-determined side" of the problem. In this note, we take a different approach from the "over-determined side" and show that a recoverability result (with the best available order) follows almost immediately from an inequality of Garnaev and Gluskin.
We also connect the dots with recoverability conditions obtained from different spaces.

#### A Spectrum-based Regularization Approach to Linear Inverse Problems: Models, Learned Parameters and Algorithms (2015-04-20)

Castanon, Jorge Castanon Alberto; Zhang, Yin; Tapia, Richard; Hand, Paul; Kelly, Kevin

In this thesis, we study the problem of recovering signals, in particular images, that approximately satisfy severely ill-conditioned or underdetermined linear systems. For example, such a linear system may represent a set of under-sampled and noisy linear measurements. It is well known that the quality of the recovery critically depends on the choice of an appropriate regularization model that incorporates prior information about the target solution. Two of the most successful regularization models are the Tikhonov and total variation (TV) models, each of which is used in a wide range of applications. We design and investigate a class of spectrum-based models that generalize and improve upon both the Tikhonov and the TV methods, as well as their combinations, or so-called hybrids. The proposed models contain "spectrum parameters" that are learned from training data sets by solving optimization problems. This parameter-learning feature gives these models the flexibility to adapt to desired target solutions. We devise efficient algorithms for all the proposed models and conduct comprehensive numerical experiments to evaluate their performance in comparison with established models. Numerical results show generally superior quality in images recovered by our approach from under-sampled linear measurements.
Using the proposed algorithms, one can often obtain much improved quality at a moderate increase in computational time.

#### A study on conditions for sparse solution recovery in compressive sensing (2008)

Eydelzon, Anatoly; Zhang, Yin

It is well known by now that under suitable conditions $\ell_1$ minimization can recover sparse solutions to under-determined linear systems of equations. More precisely, by solving the convex optimization problem $\min\{\|x\|_1 : Ax = b\}$, where $A$ is an $m \times n$ measurement matrix with $m < n$, one can obtain the sparsest solution $x^*$ to $Ax = b$, provided that the measurement matrix $A$ has certain properties and the sparsity level $k$ of $x^*$ is sufficiently small. This fact has led to active research in the area of compressive sensing and other applications. The central question for this problem is the following: given a type of measurements, a signal's length $n$, and a sparsity level $k$, what is the minimum measurement size $m$ that ensures recovery? Or, equivalently, given a type of measurements, a signal length $n$, and a measurement size $m$, what is the maximum recoverable sparsity level $k$? This fundamental question has been answered, with varying degrees of precision, by a number of researchers for a number of different random or semi-random measurement matrices. However, all the existing results still involve unknown constants of some kind and thus are unable to provide precise answers in specific situations. For example, let $A$ be an $m \times n$ partial DCT matrix with $n = 10^7$ and $m = 5 \times 10^5$ ($n/m = 20$). Can we provide a reasonably good estimate of the maximum recoverable sparsity $k$? In this research we attempt to provide a more precise answer to the central question raised above. By studying new sufficient conditions for exact recovery of sparse solutions, we propose a new technique to estimate the recoverable sparsity for different kinds of deterministic, random, and semi-random matrices.
We will present empirical evidence to show the practical success of our approach, though further research is still needed to formally establish its effectiveness.

#### A Successive Linear Programming Approach to IMRT Optimization Problem (2002-12)

Merritt, Michael; Zhang, Yin; Liu, Helen; Mohan, Radhe

We propose to solve the IMRT optimization problem through a successive linear programming approach. Taking advantage of the sensitivity information in linear programming and the re-optimization ability of simplex methods, the proposed approach provides an affordable methodology for efficiently solving problems with dose-volume constraints. Preliminary computational results indicate that, compared to the standard weighted least squares approach, the new approach leads to higher tumor dosage escalation and better conformation.
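Several of the reconstruction methods listed above (the TVL1-L2 algorithm, the fixed-point continuation method, and the alternating minimization deblurring algorithms) rely on the soft-thresholding ("shrinkage") operator combined with gradient or splitting steps. The following Python/NumPy code is a minimal illustrative sketch, not any of the papers' actual implementations: it applies a plain shrinkage-based fixed-point iteration to the model $\min \|x\|_1 + \mu \cdot \frac{1}{2}\|Ax-b\|^2$ from the fixed-point continuation abstract, taking the identity weighting in place of the $M$-norm and omitting the continuation strategy; the step size `tau` and iteration count are assumptions chosen for illustration.

```python
import numpy as np

def shrink(y, t):
    """Soft-thresholding (shrinkage): componentwise argmin_z t*|z| + (1/2)(z - y)^2."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def fpc_sketch(A, b, mu, tau=None, iters=400):
    """Illustrative fixed-point iteration for min ||x||_1 + mu*(1/2)||Ax-b||^2.

    Each step is a gradient step on the smooth term followed by shrinkage:
        x <- shrink(x - tau*mu*A^T(Ax - b), tau)
    """
    n = A.shape[1]
    if tau is None:
        # Step size 1/L, where L = mu*||A||_2^2 is the Lipschitz constant
        # of the gradient of the smooth term; guarantees monotone descent.
        tau = 1.0 / (mu * np.linalg.norm(A, 2) ** 2)
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of (1/2)||Ax-b||^2
        x = shrink(x - tau * mu * grad, tau)
    return x
```

The actual method described in the abstract pairs this operator-splitting iteration with continuation (solving a sequence of problems with increasing `mu`) to accelerate convergence; the sketch above runs a single fixed `mu` for simplicity.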