Browsing by Author "Sorensen, Danny"
Now showing 1 - 3 of 3
Item
Model order reduction and domain decomposition for large-scale dynamical systems (2008)
Sun, Kai; Sorensen, Danny
Domain decomposition and model order reduction are both important techniques for scientific and engineering computing. Both aim to speed up computation, but by different means. Domain decomposition is based on the general divide-and-conquer principle: it partitions a large-scale problem into a sequence of smaller, easier-to-solve problems. Model order reduction relieves the simulation burden of dynamical systems by dramatically reducing their size. In this thesis, I investigate problems arising from these two areas and propose some potential applications. First, I give a sensitivity analysis of the Smith method implemented with iterative solvers; the Smith method is an important tool for balanced truncation model order reduction. Second, we introduce a new, effective approach to computing the reduced-order model based on balanced truncation for a class of descriptor systems; computational results are presented which indicate that this approach is promising and computationally efficient. Third, by combining balanced truncation model order reduction with domain decomposition, reduced-order models are derived for discretized partial differential equations with spatially localized nonlinearities. Finally, I present fully parallel domain decomposition techniques for another class of problems: fast simulation of large-scale linear circuits such as power grids.

Item
Model reduction of strong-weak neurons (Frontiers Media, 2014)
Du, Bosen; Sorensen, Danny; Cox, Steven J.
We consider neurons with large dendritic trees that are weakly excitable, in the sense that back-propagating action potentials are severely attenuated as they travel from the small, strongly excitable spike initiation zone. In previous work we have shown that the computational size of weakly excitable cell models may be reduced by two or more orders of magnitude, and that the size of strongly excitable models may be reduced by at least one order of magnitude, without sacrificing the spatio-temporal nature of their inputs (in the sense that we reproduce the cell's precise mapping of inputs to outputs). We combine the best of these two strategies via a predictor-corrector decomposition scheme and achieve a drastically reduced, highly accurate model of a caricature of the neuron responsible for collision detection in the locust.

Item
Nonnormality in Lyapunov Equations (2016-04-22)
Baker, Jonathan; Sorensen, Danny; Embree, Mark
The singular values of the solution to a Lyapunov equation determine the potential accuracy of the low-rank approximations constructed by iterative methods. Low-rank solutions are more accurate if most of the singular values are small, so a priori bounds that relate coefficient matrix properties to rapid singular value decay are valuable. Previous bounds take similar forms, all of which weaken (quadratically) as the coefficient matrix departs from normality. Such bounds suggest that the more nonnormal the coefficient matrix becomes, the more slowly the singular values of the solution will decay. However, simple examples typically exhibit an eventual acceleration of decay once the coefficient becomes very nonnormal.
We will show that this principle is universal: decay always improves as departure from normality increases beyond a given threshold, specifically as the numerical range of the coefficient matrix extends farther into the right half-plane. We also give examples showing that similar behavior can occur for general Sylvester equations, though the right-hand side plays a more important role.
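
To make the first abstract's reference to the Smith method concrete, here is a minimal single-shift sketch for the Lyapunov equation A P + P A^T + B B^T = 0, written in plain NumPy. It is only a generic textbook version, not the iterative-solver-based variant whose sensitivity the thesis analyzes; the shift p = -2, the tridiagonal test matrix, and the tolerance are illustrative choices.

```python
import numpy as np

def smith_lyapunov(A, B, p=-2.0, tol=1e-10, max_iter=200):
    """Approximate P solving A P + P A^T + B B^T = 0 for stable A.

    A single real shift p < 0 and the Cayley transform turn the continuous
    Lyapunov equation into the Stein equation P = A_p P A_p^T + B_p B_p^T,
    which the Smith (fixed-point) iteration then solves.
    """
    n = A.shape[0]
    ApI = A + p * np.eye(n)
    A_p = np.linalg.solve(ApI, A - p * np.eye(n))        # (A + pI)^{-1} (A - pI), spectral radius < 1
    B_p = np.sqrt(-2.0 * p) * np.linalg.solve(ApI, B)    # sqrt(-2p) (A + pI)^{-1} B
    P = np.zeros((n, n))
    for _ in range(max_iter):
        P_next = A_p @ P @ A_p.T + B_p @ B_p.T
        if np.linalg.norm(P_next - P, "fro") <= tol * np.linalg.norm(P_next, "fro"):
            return P_next
        P = P_next
    return P

# Toy test (illustrative data, not from the thesis): stable tridiagonal A, one input column.
n = 50
A = -2.0 * np.eye(n) + 0.5 * (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
B = np.ones((n, 1))
P = smith_lyapunov(A, B)
print("Lyapunov residual:", np.linalg.norm(A @ P + P @ A.T + B @ B.T))
```

In practice the shift (or a cycle of shifts, as in ADI) is chosen from the spectrum of A; p = -2 simply suits this toy spectrum, which lies between -3 and -1.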
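Both of the first two items build on balanced truncation. The following square-root sketch of standard balanced truncation for a stable system (A, B, C) is included only to pin down that baseline method; it assumes SciPy's dense Lyapunov solver and a made-up random test system, and does not reflect the descriptor-system or neuron-specific algorithms developed in the items above.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

def psd_factor(M):
    """Return F with F @ F.T ~= M for a numerically symmetric PSD matrix M."""
    w, V = np.linalg.eigh((M + M.T) / 2.0)
    return V * np.sqrt(np.clip(w, 0.0, None))

def balanced_truncation(A, B, C, r):
    """Reduce the stable system (A, B, C) to order r by square-root balanced truncation."""
    P = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian: A P + P A^T + B B^T = 0
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian:  A^T Q + Q A + C^T C = 0
    R, L = psd_factor(P), psd_factor(Q)
    U, s, Vt = svd(L.T @ R)                          # s holds the Hankel singular values
    Sr = np.diag(s[:r] ** -0.5)
    T = R @ Vt[:r].T @ Sr                            # right projection basis
    W = L @ U[:, :r] @ Sr                            # left projection basis, W.T @ T = I
    return W.T @ A @ T, W.T @ B, C @ T, s

# Toy usage: reduce a random stable single-input, single-output system of order 100 to order 10.
rng = np.random.default_rng(1)
n, r = 100, 10
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(n)   # push the spectrum into the left half-plane
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
Ar, Br, Cr, hankel_sv = balanced_truncation(A, B, C, r)
```

The size of the neglected Hankel singular values hankel_sv[r:] bounds the model-reduction error, which is why the singular value decay studied in the third item matters for how small r can be taken.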
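The decay behavior discussed in the third abstract can be probed with a short experiment. The family A = -I + alpha*N below (N the nilpotent shift) and the parameter values are illustrative choices, not examples from the thesis; the script solves A X + X A^T + b b^T = 0 with SciPy and reports how the singular values of X fall off as the off-diagonal weight alpha, a rough proxy for departure from normality, grows.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 50
N = np.diag(np.ones(n - 1), 1)        # nilpotent shift: the source of nonnormality
b = np.ones((n, 1))

for alpha in (0.0, 1.0, 5.0):         # illustrative values only
    A = -np.eye(n) + alpha * N        # stable for every alpha; nonnormal once alpha > 0
    X = solve_continuous_lyapunov(A, -b @ b.T)    # solves A X + X A^T = -b b^T
    s = np.linalg.svd(X, compute_uv=False)
    num_rank = int(np.sum(s > 1e-12 * s[0]))
    print(f"alpha = {alpha:4.1f}   numerical rank = {num_rank:3d}   "
          f"sigma_10/sigma_1 = {s[9] / s[0]:.2e}")
```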