CMOR Publications
CMOR Faculty Publications
Browsing CMOR Publications by Issue Date
Now showing 1 - 20 of 85
Item An Algebraic Exploration of Dominating Sets and Vizing's Conjecture (The Electronic Journal of Combinatorics, 2012) Margulies, S.; Hicks, I.V.
Systems of polynomial equations are commonly used to model combinatorial problems such as independent set, graph coloring, Hamiltonian path, and others. We formulate the dominating set problem as a system of polynomial equations in two different ways: first, as a single, high-degree polynomial, and second as a collection of polynomials based on the complements of domination-critical graphs. We then provide a sufficient criterion for demonstrating that a particular ideal representation is already the universal Gröbner basis of an ideal, and show that the second representation of the dominating set ideal in terms of domination-critical graphs is the universal Gröbner basis for that ideal. We also present the first algebraic formulation of Vizing's conjecture, and discuss the theoretical and computational ramifications for this conjecture when using either of the two dominating set representations described above.

Item Filtering Deterministic Layer Effects in Imaging (Society for Industrial and Applied Mathematics, 2012) Borcea, L.; del Cueto, F. Gonzalez; Papanicolaou, G.; Tsogka, C.
Sensor array imaging arises in applications such as nondestructive evaluation of materials with ultrasonic waves, seismic exploration, and radar. The sensors probe a medium with signals and record the resulting echoes, which are then processed to determine the location and reflectivity of remote reflectors. These could be defects in materials such as voids, fault lines or salt bodies in the earth, and cars, buildings, or aircraft in radar applications. Imaging is relatively well understood when the medium through which the signals propagate is smooth, and therefore nonscattering. But in many problems the medium is heterogeneous, with numerous small inhomogeneities that scatter the waves.
We refer to the collection of inhomogeneities as clutter, which introduces an uncertainty in imaging because it is unknown and impossible to estimate in detail. We model the clutter as a random process. The array data is measured in one realization of the random medium, and the challenge is to mitigate cumulative clutter scattering so as to obtain robust images that are statistically stable with respect to different realizations of the inhomogeneities. Scatterers that are not buried too deep in clutter can be imaged reliably with the coherent interferometric (CINT) approach. But in heavy clutter the signal-to-noise ratio (SNR) is low and CINT alone does not work. The “signal,” the echoes from the scatterers to be imaged, is overwhelmed by the “noise,” the strong clutter reverberations. There are two existing approaches for imaging at low SNR: The first operates under the premise that data are incoherent so that only the intensity of the scattered field can be used. The unknown coherent scatterers that we want to image are modeled as changes in the coefficients of diffusion or radiative transport equations satisfied by the intensities, and the problem becomes one of parameter estimation. Because the estimation is severely ill-posed, the results have poor resolution, unless very good prior information is available and large arrays are used. The second approach recognizes that if there is some residual coherence in the data, that is, some reliable phase information is available, it is worth trying to extract it and use it with well-posed coherent imaging methods to obtain images with better resolution. This paper takes the latter approach and presents a first attempt at enhancing the SNR of the array data by suppressing medium reverberations. It introduces filters, or annihilators of layer backscatter, that are designed to remove primary echoes from strong, isolated layers in a medium with additional random layering at small, subwavelength scales. 
These strong layers are called deterministic because they can be imaged from the data. However, our goal is not to image the layers, but to suppress them and thus enhance the echoes from compact scatterers buried deep in the medium. Surprisingly, the layer annihilators work better than intended, in the sense that they suppress not only the echoes from the deterministic layers, but also multiply scattered ones in the randomly layered structure. Following the layer annihilators presented here, other filters for general, nonlayered heavy clutter have been developed. We review these more recent developments and the challenges of imaging in heavy clutter in the introduction, in order to place the research presented here in context. We then present the layer annihilators in detail and show with analysis and numerical simulations how they work.

Item Ritz Value Localization for Non-Hermitian Matrices (Society for Industrial and Applied Mathematics, 2012) Carden, Russell L.; Embree, Mark
Rayleigh-Ritz eigenvalue estimates for Hermitian matrices obey Cauchy interlacing, which has helpful implications for theory, applications, and algorithms. In contrast, few results about the Ritz values of non-Hermitian matrices are known, beyond their containment within the numerical range. To show that such Ritz values enjoy considerable structure, we establish regions within the numerical range in which certain Ritz values of general matrices must be contained.
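As a concrete aside (our illustration, not the authors' code; the name `arnoldi_ritz` is ours), Ritz values of a general matrix are the eigenvalues of its Rayleigh-Ritz projection onto a Krylov subspace, computable with the Arnoldi process:

```python
import numpy as np

def arnoldi_ritz(A, b, k):
    """Run k Arnoldi steps and return the Ritz values of A with respect
    to the Krylov subspace span{b, Ab, ..., A^(k-1) b}."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1), dtype=complex)
    H = np.zeros((k + 1, k), dtype=complex)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = np.vdot(Q[:, i], w)
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:          # breakdown: invariant subspace
            return np.linalg.eigvals(H[:j + 1, :j + 1])
        Q[:, j + 1] = w / H[j + 1, j]
    return np.linalg.eigvals(H[:k, :k])  # Ritz values
```

Run with a Hermitian matrix, the returned Ritz values fall inside the spectral interval, consistent with Cauchy interlacing; for non-Hermitian inputs they are only known to lie in the numerical range, which is the setting studied here.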
To demonstrate that localization occurs even for extreme examples, we carefully analyze possible Ritz value combinations for a three-dimensional Jordan block.

Item Short-term Recurrence Krylov Subspace Methods for Nearly Hermitian Matrices (Society for Industrial and Applied Mathematics, 2012) Embree, Mark; Sifuentes, Josef A.; Soodhalter, Kirk M.; Szyld, Daniel B.; Xue, Fei
The progressive GMRES algorithm, introduced by Beckermann and Reichel in 2008, is a residual-minimizing short-recurrence Krylov subspace method for solving a linear system in which the coefficient matrix has a low-rank skew-Hermitian part. We analyze this algorithm, observing a critical instability that makes the method unsuitable for some problems. To work around this issue we introduce a different short-term recurrence method based on Krylov subspaces for such matrices, which can be used as either a solver or a preconditioner. Numerical experiments compare this method to alternative algorithms.

Item Edge Guided Reconstruction for Compressive Imaging (Society for Industrial and Applied Mathematics, 2012-07-03) Guo, Weihong; Yin, Wotao; National Science Foundation; Office of Naval Research; Alfred P. Sloan Foundation
We propose EdgeCS, an edge guided compressive sensing reconstruction approach, to recover images of higher quality from fewer measurements than current methods require. Edges are important image features that are used in various ways in image recovery, analysis, and understanding. In compressive sensing, the sparsity of image edges has been successfully utilized to recover images. However, edge detectors have not been used on compressive sensing measurements to improve the edge recovery and subsequently the image recovery. This motivates us to propose EdgeCS, which alternately performs edge detection and image reconstruction in a mutually beneficial way.
The edge detector of EdgeCS is designed to faithfully return partial edges from intermediate image reconstructions even though these reconstructions may still have noise and artifacts. For complex-valued images, it incorporates joint sparsity between the real and imaginary components. EdgeCS has been implemented with both isotropic and anisotropic discretizations of total variation and tested on incomplete k-space (spectral Fourier) samples. It applies to other types of measurements as well. Experimental results on large-scale real/complex-valued phantom and magnetic resonance (MR) images show that EdgeCS is fast and returns high-quality images. For example, it exactly recovers the 256×256 Shepp–Logan phantom from merely 7 radial lines (3.03% of k-space), which is impossible for most existing algorithms. It is able to accurately reconstruct a 512×512 MR image with 0.05 white noise from 20.87% radial samples. On complex-valued MR images, it obtains recoveries with faithful phases, which are important in many medical applications. Each of these tests took around 30 seconds on a standard PC. Finally, the algorithm is GPU friendly.

Item Local Error Analysis of Discontinuous Galerkin Methods for Advection-Dominated Elliptic Linear-Quadratic Optimal Control Problems (Society for Industrial and Applied Mathematics, 2012-08-15) Leykekhman, Dmitriy; Heinkenschloss, Matthias; National Science Foundation; Air Force Office of Scientific Research
This paper analyzes the local properties of the symmetric interior penalty upwind discontinuous Galerkin (SIPG) method for the numerical solution of optimal control problems governed by linear reaction-advection-diffusion equations with distributed controls.
The theoretical and numerical results presented in this paper show that for advection-dominated problems the convergence properties of the SIPG discretization can be superior to those of stabilized finite element discretizations such as the streamline upwind Petrov-Galerkin (SUPG) method. For example, we show that for a small diffusion parameter the SIPG method is optimal in the interior of the domain. This is in sharp contrast to SUPG discretizations, for which it is known that the existence of boundary layers can pollute the numerical solution of optimal control problems everywhere, even into domains where the solution is smooth, and, as a consequence, in general reduces the convergence rates to only first order. In order to prove the favorable convergence properties of the SIPG discretization for optimal control problems, we first improve local error estimates of the SIPG discretization for single advection-dominated equations by showing that the size of the numerical boundary layer is controlled not by the mesh size but rather by the size of the diffusion parameter. As a result, for small diffusion, the boundary layers are too “weak” to pollute the SIPG solution into domains of smoothness in optimal control problems. This favorable property of the SIPG method is due to the weak treatment of boundary conditions, which is natural for discontinuous Galerkin methods, whereas for SUPG methods strong imposition of boundary conditions is more conventional. The importance of the weak treatment of boundary conditions for the solution of advection-dominated optimal control problems with distributed controls is also supported by our numerical results.

Item Synthetic Aperture Radar Imaging and Motion Estimation via Robust Principal Component Analysis (arXiv, 2012-08-22) Borcea, Liliana; Callaghan, Thomas; Papanicolaou, George
We consider the problem of synthetic aperture radar (SAR) imaging and motion estimation of complex scenes.
By complex we mean scenes with multiple targets, both stationary and in motion. We use the usual setup with one moving antenna emitting and receiving signals. We address two challenges: (1) the detection of moving targets in the complex scene and (2) the separation of the echoes from the stationary targets and those from the moving targets. Such separation allows high-resolution imaging of the stationary scene and motion estimation with the echoes from the moving targets alone. We show that the robust principal component analysis (PCA) method, which decomposes a matrix into two parts, one low rank and one sparse, can be used for motion detection and data separation. The matrix that is decomposed is the pulse- and range-compressed SAR data indexed by two discrete time variables: the slow time, which parametrizes the location of the antenna, and the fast time, which parametrizes the echoes received between successive emissions from the antenna. We present an analysis of the rank of the data matrix to motivate the use of the robust PCA method. We also show with numerical simulations that successful data separation with robust PCA requires proper data windowing. Results of motion estimation and imaging with the separated data are presented as well.

Item A mathematical framework for inverse wave problems in heterogeneous media (IOP Publishing, 2013) Blazek, Kirk D.; Stolk, Christiaan; Symes, William W.; The Rice Inversion Project
This paper provides a theoretical foundation for some common formulations of inverse problems in wave propagation, based on hyperbolic systems of linear integro-differential equations with bounded and measurable coefficients. The coefficients of these time-dependent partial differential equations represent parametrically the spatially varying mechanical properties of materials.
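Returning to the SAR entry above: the low-rank-plus-sparse splitting that robust PCA computes can be sketched generically with the inexact augmented Lagrange multiplier scheme (a standard textbook variant for real-valued data, not the authors' implementation; all names and parameter choices here are illustrative):

```python
import numpy as np

def robust_pca(M, lam=None, tol=1e-7, max_iter=500):
    """Split M into a low-rank part L and a sparse part S (M ~ L + S)
    via the inexact augmented Lagrange multiplier method."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))     # common default weight
    norm_M = np.linalg.norm(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    mu = 1.25 / np.linalg.norm(M, 2)       # penalty parameter
    mu_bar, rho = mu * 1e7, 1.5
    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: entrywise soft thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Z = M - L - S                      # constraint residual
        Y = Y + mu * Z
        mu = min(rho * mu, mu_bar)
        if np.linalg.norm(Z) <= tol * norm_M:
            break
    return L, S
```

In the SAR setting described above, the low-rank part plays the role of the stationary-scene echoes and the sparse part the moving-target echoes; as the abstract notes, successful separation on real data also requires proper windowing.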
Rocks, manufactured materials, and other wave propagation environments often exhibit spatial heterogeneity in mechanical properties at a wide variety of scales, and coefficient functions representing these properties must mimic this heterogeneity. We show how to choose domains (classes of nonsmooth coefficient functions) and data definitions (traces of weak solutions) so that optimization formulations of inverse wave problems satisfy some of the prerequisites for application of Newton's method and its relatives. These results follow from the properties of a class of abstract first-order evolution systems, of which various physical wave systems appear as concrete instances. Finite speed of propagation for linear waves with bounded, measurable mechanical parameter fields is one of the by-products of this theory.

Item Limited Memory Block Krylov Subspace Optimization for Computing Dominant Singular Value Decompositions (SIAM, 2013) Liu, Xin; Wen, Zaiwen; Zhang, Yin
In many data-intensive applications, the use of principal component analysis and other related techniques is ubiquitous for dimension reduction, data mining, or other transformational purposes. Such transformations often require efficiently, reliably, and accurately computing dominant singular value decompositions (SVDs) of large and dense matrices. In this paper, we propose and study a subspace optimization technique for significantly accelerating the classic simultaneous iteration method. We analyze the convergence of the proposed algorithm and numerically compare it with several state-of-the-art SVD solvers in the MATLAB environment.
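The classic simultaneous iteration being accelerated in this entry can be sketched as follows (a baseline illustration under our own naming, not the paper's algorithm):

```python
import numpy as np

def dominant_svd(A, k, iters=200, seed=0):
    """Simultaneous (block power) iteration for the k dominant singular
    triplets of A: alternately multiply a block by A and A^T with
    re-orthonormalization, then refine with a small k-by-k SVD."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((A.shape[1], k))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ X)        # orthonormal left block
        X, _ = np.linalg.qr(A.T @ Q)      # orthonormal right block
    # Rayleigh-Ritz refinement on the projected matrix
    B = Q.T @ A @ X
    Ub, s, Vbt = np.linalg.svd(B)
    return Q @ Ub, s, X @ Vbt.T
```

Each sweep costs two block multiplications with A plus two QR factorizations; subspace optimization methods such as the one proposed here aim to reach the same dominant triplets in far fewer sweeps.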
Extensive computational results show that on a wide range of large unstructured dense matrices, the proposed algorithm can often provide improved efficiency or robustness over existing algorithms.

Item Time-Dependent Coupling of Navier-Stokes and Darcy Flows (Cambridge University Press, 2013) Cesmelioglu, Aycil; Girault, Vivette; Riviere, Beatrice
A weak solution of the coupling of time-dependent incompressible Navier-Stokes equations with Darcy equations is defined. The interface conditions include the Beavers-Joseph-Saffman condition. Existence and uniqueness of the weak solution are obtained by a constructive approach. The analysis is valid for weak regularity interfaces.

Item A Trust-Region Algorithm with Adaptive Stochastic Collocation for PDE Optimization under Uncertainty (SIAM, 2013) Kouri, D.P.; Heinkenschloss, M.; Ridzal, D.; van Bloemen Waanders, B.G.
The numerical solution of optimization problems governed by partial differential equations (PDEs) with random coefficients is computationally challenging because of the large number of deterministic PDE solves required at each optimization iteration. This paper introduces an efficient algorithm for solving such problems based on a combination of adaptive sparse-grid collocation for the discretization of the PDE in the stochastic space and a trust-region framework for optimization and fidelity management of the stochastic discretization. The overall algorithm adapts the collocation points based on the progress of the optimization algorithm and the impact of the random variables on the solution of the optimization problem. It frequently uses few collocation points initially and increases the number of collocation points only as necessary, thereby keeping the number of deterministic PDE solves low while guaranteeing convergence. Currently an error indicator is used to estimate gradient errors due to adaptive stochastic collocation.
The algorithm is applied to three examples, and the numerical results demonstrate a significant reduction in the total number of PDE solves required to obtain an optimal solution when compared with a Newton conjugate gradient algorithm applied to a fixed high-fidelity discretization of the optimization problem.

Item A New Compressive Video Sensing Framework for Mobile Broadcast (IEEE, 2013-03) Li, Chengbo; Jiang, Hong; Wilford, Paul; Zhang, Yin; Scheutzow, Mike
A new video coding method based on compressive sampling is proposed. In this method, a video is coded using compressive measurements on video cubes. Video reconstruction is performed by minimization of the total variation (TV) of the pixelwise discrete cosine transform coefficients along the temporal direction. A new reconstruction algorithm is developed from TVAL3, an efficient TV minimization algorithm based on the alternating minimization and augmented Lagrangian methods. Video coding with this method is inherently scalable, and has applications in mobile broadcast.

Item A Posteriori Error Estimation for DEIM Reduced Nonlinear Dynamical Systems (SIAM, 2014) Wirtz, D.; Sorensen, D.C.; Haasdonk, B.
In this work an efficient approach for a posteriori error estimation for POD-DEIM reduced nonlinear dynamical systems is introduced. The considered nonlinear systems may also include time- and parameter-affine linear terms as well as parametrically dependent inputs and outputs. The reduction process involves a Galerkin projection of the full system and approximation of the system's nonlinearity by the DEIM method [S. Chaturantabut and D. C. Sorensen, SIAM J. Sci. Comput., 32 (2010), pp. 2737-2764]. The proposed a posteriori error estimator can be efficiently decomposed in an offline/online fashion and is obtained by a one-dimensional auxiliary ODE during reduced simulations. Key elements for efficient online computation are partial similarity transformations and matrix-DEIM approximations of the nonlinearity Jacobians.
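The DEIM approximation underlying this entry samples the nonlinearity at greedily chosen points; a compact sketch of the standard point-selection step from the cited Chaturantabut-Sorensen paper (variable names are ours):

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection from a basis U (n x m): at each step,
    interpolate the next basis vector on the points chosen so far and
    take the location of the largest residual as the new point."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        c = np.linalg.solve(U[idx, :l], U[idx, l])  # interpolation coeffs
        r = U[:, l] - U[:, :l] @ c                  # residual is zero at idx
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)
```

Given the indices, a snapshot f lying in the span of U is recovered from only len(idx) of its entries via U @ solve(U[idx, :], f[idx]), which is what makes evaluating reduced nonlinearities cheap.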
The theoretical results are illustrated by application to an unsteady Burgers equation and a cell apoptosis model.

Item Model reduction of strong-weak neurons (Frontiers Media, 2014) Du, Bosen; Sorensen, Danny; Cox, Steven J.
We consider neurons with large dendritic trees that are weakly excitable, in the sense that back-propagating action potentials are severely attenuated as they travel from the small, strongly excitable spike initiation zone. In previous work we have shown that the computational size of weakly excitable cell models may be reduced by two or more orders of magnitude, and that the size of strongly excitable models may be reduced by at least one order of magnitude, without sacrificing the spatio-temporal nature of their inputs (in the sense that we reproduce the cell's precise mapping of inputs to outputs). We combine the best of these two strategies via a predictor-corrector decomposition scheme and achieve a drastically reduced, highly accurate model of a caricature of the neuron responsible for collision detection in the locust.

Item The Effects of Theta Precession on Spatial Learning and Simplicial Complex Dynamics in a Topological Model of the Hippocampal Spatial Map (Public Library of Science, 2014) Arai, Mamiko; Brandt, Vicky; Dabaghian, Yuri
Learning arises through the activity of large ensembles of cells, yet most of the data neuroscientists accumulate is at the level of individual neurons; we need models that can bridge this gap. We have taken spatial learning as our starting point, computationally modeling the activity of place cells using methods derived from algebraic topology, especially persistent homology. We previously showed that ensembles of hundreds of place cells could accurately encode topological information about different environments ("learn" the space) within certain values of place cell firing rate, place field size, and cell population; we called this parameter space the learning region.
Here we advance the model both technically and conceptually. To make the model more physiological, we explored the effects of theta precession on spatial learning in our virtual ensembles. Theta precession, which is believed to influence learning and memory, did in fact enhance learning in our model, increasing both its speed and the size of the learning region. Interestingly, theta precession also increased the number of spurious loops during simplicial complex formation. We next explored how downstream readout neurons might define co-firing by grouping together cells within different windows of time and thereby capturing different degrees of temporal overlap between spike trains. Our model's optimum coactivity window correlates well with experimental data, falling in the range of ~150-200 msec. We further studied the relationship between learning time, window width, and theta precession. Our results validate our topological model for spatial learning and open new avenues for connecting data at the level of individual neurons to behavioral outcomes at the neuronal ensemble level. Finally, we analyzed the dynamics of simplicial complex formation and loop transience to propose that the simplicial complex provides a useful working description of the spatial learning process.

Item Genetic Suppression of Transgenic APP Rescues Hypersynchronous Network Activity in a Mouse Model of Alzheimer's Disease (Society for Neuroscience, 2014) Born, Heather A.; Kim, Ji-Yoen; Savjani, Ricky R.; Das, Pritam; Dabaghian, Yuri A.; Guo, Qinxi; Yoo, Jong W.; Schuler, Dorothy R.; Cirrito, John R.; Zheng, Hui; Golde, Todd E.; Noebels, Jeffrey L.; Jankowsky, Joanna L.
Alzheimer's disease (AD) is associated with an elevated risk for seizures that may be fundamentally connected to cognitive dysfunction. Supporting this link, many mouse models for AD exhibit abnormal electroencephalogram (EEG) activity in addition to the expected neuropathology and cognitive deficits.
Here, we used a controllable transgenic system to investigate how network changes develop and are maintained in a model characterized by amyloid β (Aβ) overproduction and progressive amyloid pathology. EEG recordings in tet-off mice overexpressing amyloid precursor protein (APP) from birth display frequent sharp wave discharges (SWDs). Unexpectedly, we found that withholding APP overexpression until adulthood substantially delayed the appearance of epileptiform activity. Together, these findings suggest that juvenile APP overexpression altered cortical development to favor synchronized firing. Regardless of the age at which EEG abnormalities appeared, the phenotype was dependent on continued APP overexpression and abated over several weeks once transgene expression was suppressed. Abnormal EEG discharges were independent of plaque load and could be extinguished without altering deposited amyloid. Selective reduction of Aβ with a γ-secretase inhibitor had no effect on the frequency of SWDs, indicating that another APP fragment or the full-length protein was likely responsible for maintaining the EEG abnormalities. Moreover, transgene suppression normalized the ratio of excitatory to inhibitory innervation in the cortex, whereas secretase inhibition did not. Our results suggest that APP overexpression, and not Aβ overproduction, is responsible for the EEG abnormalities in our transgenic mice and can be rescued independently of pathology.

Item A Matrix-Free Trust-Region SQP Method for Equality Constrained Optimization (SIAM, 2014) Heinkenschloss, Matthias; Ridzal, Denis
We develop and analyze a trust-region sequential quadratic programming (SQP) method for the solution of smooth equality constrained optimization problems, which allows the inexact and hence iterative solution of linear systems.
Iterative solution of linear systems is important in large-scale applications, such as optimization problems with partial differential equation constraints, where direct solves are either too expensive or not applicable. Our trust-region SQP algorithm is based on a composite-step approach that decouples the step into a quasi-normal step and a tangential step. The algorithm includes critical modifications of the substep computations needed to cope with the inexact solution of linear systems. The global convergence of our algorithm is guaranteed under rather general conditions on the substeps. We propose algorithms to compute the substeps and prove that these algorithms satisfy the global convergence conditions. All components of the resulting algorithm are specified in such a way that they can be directly implemented. Numerical results indicate that our algorithm converges even for very coarse linear system solves.

Item Numerical method of characteristics for one-dimensional blood flow (Elsevier, 2015) Acosta, Sebastian; Puelz, Charles; Rivière, Béatrice; Penny, Daniel J.; Rusin, Craig G.
Mathematical modeling at the level of the full cardiovascular system requires the numerical approximation of solutions to a one-dimensional nonlinear hyperbolic system describing flow in a single vessel. This model is often simulated by computationally intensive methods like finite elements and discontinuous Galerkin, while some recent applications require more efficient approaches (e.g., for real-time clinical decision support, phenomena occurring over multiple cardiac cycles, iterative solutions to optimization/inverse problems, and uncertainty quantification). Further, the high speed of pressure waves in blood vessels greatly restricts the time step needed for stability in explicit schemes. We address both cost and stability by presenting an efficient and unconditionally stable method for approximating solutions to diagonal nonlinear hyperbolic systems.
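The unconditional stability claimed here is easiest to see in the simplest setting, constant-coefficient linear advection; a semi-Lagrangian method-of-characteristics step (a sketch of the general idea only, not the paper's scheme for nonlinear diagonal systems) looks like:

```python
import numpy as np

def characteristics_step(u, a, dt, dx):
    """One semi-Lagrangian step for u_t + a*u_x = 0 on a periodic grid:
    trace each characteristic back a distance a*dt and linearly
    interpolate the old solution at the departure point."""
    n = u.size
    x = np.arange(n) * dx
    xb = (x - a * dt) % (n * dx)          # departure points, wrapped
    i = np.floor(xb / dx).astype(int) % n
    w = xb / dx - np.floor(xb / dx)       # linear interpolation weight
    return (1.0 - w) * u[i] + w * u[(i + 1) % n]
```

Because the update is an interpolation at the foot of the characteristic, each new value is a convex combination of old values, so the step remains bounded for any dt; explicit schemes, by contrast, are limited by a CFL condition tied to the fast pressure-wave speed.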
Theoretical analysis of the algorithm is given, along with a comparison of our method to a discontinuous Galerkin implementation. Lastly, we demonstrate the utility of the proposed method by implementing it on small and large arterial networks of vessels whose elastic and geometrical parameters are physiologically relevant.

Item An Efficient Gauss-Newton Algorithm for Symmetric Low-Rank Product Matrix Approximations (Society for Industrial and Applied Mathematics, 2015) Liu, Xin; Wen, Zaiwen; Zhang, Yin
We derive and study a Gauss-Newton method for computing a symmetric low-rank product $XX^{T}$, where $X \in \mathbb{R}^{n\times k}$ for $k<n$, that approximates a given symmetric matrix.

Item A Comparison of High Order Interpolation Nodes for the Pyramid (Society for Industrial and Applied Mathematics, 2015) Chan, Jesse; Warburton, T.
The use of pyramid elements is crucial to the construction of efficient hex-dominant meshes [M. Bergot, G. Cohen, and M. Duruflé, J. Sci. Comput., 42 (2010), pp. 345-381]. For conforming nodal finite element methods with mixed element types, it is advantageous for nodal distributions on the faces of the pyramid to match those on the faces and edges of hexahedra and tetrahedra. We adapt existing procedures for constructing optimized tetrahedral nodal sets for high order interpolation to the pyramid with constrained face nodes, including two generalizations of the explicit warp and blend construction of nodes on the tetrahedron [T. Warburton, J. Engrg. Math., 56 (2006), pp. 247-262]. Comparisons between nodal sets show that the lowest Lebesgue constants are given by warp and blend nodes for order $N\leq 7$ and Fekete nodes for $N>7$, though numerical experiments show little variation in the conditioning and accuracy of all surveyed nonequidistant points.
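For context on the Gauss-Newton entry above: when the matrix being approximated is symmetric positive semidefinite, the Frobenius-norm-best product $XX^{T}$ is available in closed form from the top-$k$ eigenpairs (Eckart-Young); iterative methods aim to reach such an $X$ without the cost of a full eigendecomposition. A sketch of that closed-form baseline (our illustration, not the paper's algorithm):

```python
import numpy as np

def best_sym_lowrank_factor(A, k):
    """For symmetric positive semidefinite A, return an n x k factor X
    such that X @ X.T is the Frobenius-norm-best rank-k approximation,
    built from the k largest eigenpairs of A."""
    w, V = np.linalg.eigh(A)             # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]        # indices of the k largest
    # scale each selected eigenvector by the square root of its eigenvalue
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

Applied to a matrix that is exactly a rank-k product, this recovers the product exactly (up to an orthogonal transformation of the factor), which makes it a convenient correctness check for any iterative low-rank product solver.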