
# ECE Theses and Dissertations



### Browsing ECE Theses and Dissertations by Title

Now showing 1 - 20 of 597


#### 3-D segmentation and volume estimation of radiologic images by a novel, feature-driven, region-growing technique (1992)

Agris, Jacob Martin; de Figueiredo, Rui J. P.

Magnetic Resonance (MR) imaging is a 3-D, multi-slice radiological technique that acquires multiple intensities for each voxel. The longitudinal relaxation time, $T_1$, and the transverse relaxation time, $T_2$, are two commonly acquired intensities that tend to be orthogonal. Automated segmentation of 3-D regions is difficult because some borders may be delineated only in $T_1$ images and others only in $T_2$ images. Classical segmentation techniques based on either global histogram segmentation or local edge detection often fail because of the non-unique and random nature of MR intensities. A 3-D, neighborhood-based segmentation method was therefore developed that uses both spatial and intensity criteria. The spatial criterion admits for consideration only voxels connected by an edge or face to a voxel already known to be in the region, so the region "grows" outward from an initial voxel. An intensity criterion that balances local and global properties must also be satisfied: it measures the vector distance between the intensity of the candidate voxel and a characteristic intensity of the neighboring in-region voxels, and voxels whose intensities fall within a 95% confidence interval of the characteristic intensity are accepted into the region. The kernel size used to compute the characteristic intensity sets the balance between global and local properties. Segmentation terminates when no additional voxels satisfy both the spatial and the intensity criteria. Some regions, such as the brain compartments, are highly convoluted, so a large number of border voxels contain a mixture of adjoining tissues; a sub-voxel estimate of the fractional composition is necessary for accurate quantification.
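
As a rough sketch of the region-growing rule described above (an illustrative simplification, not the thesis implementation: scalar intensities and a running region mean stand in for the vector distance to a kernel-local characteristic intensity):

```python
# Sketch of face-connected 3-D region growing with a confidence-interval
# intensity test. Hypothetical simplification: scalar intensities and a
# global running region mean instead of a kernel-local characteristic
# intensity vector.
from collections import deque
import numpy as np

def grow_region(volume, seed, sigma, z=1.96):
    """Grow a region from `seed`, accepting face-connected voxels whose
    intensity lies within a z*sigma band around the running region mean."""
    region = {seed}
    frontier = deque([seed])
    total = float(volume[seed])
    while frontier:
        v = frontier.popleft()
        for axis in range(3):
            for step in (-1, 1):
                n = list(v)
                n[axis] += step
                n = tuple(n)
                if n in region or not all(0 <= n[i] < volume.shape[i] for i in range(3)):
                    continue
                mean = total / len(region)
                if abs(float(volume[n]) - mean) <= z * sigma:  # 95% CI test
                    region.add(n)
                    total += float(volume[n])
                    frontier.append(n)
    return region

# Toy volume: a bright 2x2x2 block embedded in a dark background.
vol = np.zeros((4, 4, 4))
vol[1:3, 1:3, 1:3] = 10.0
seg = grow_region(vol, (1, 1, 1), sigma=1.0)
```

Seeded inside the bright block, the growth stops at the block boundary because background voxels fall far outside the confidence band.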
A least-squares estimator was derived for the fractional composition of each voxel. In addition, a maximum likelihood estimator was derived to estimate the fractions of all mixture voxels globally; unlike the least-squares estimator, it is a minimum-variance estimator. Combined with the 3-D, neighborhood-based segmentation method, these estimators yield an automated, highly accurate quantification technique shown to succeed even for the brain compartments. The wide applicability of the methods was further demonstrated by segmenting kidneys in CT images.

#### A behavioral approach to positive interpolation (2005)

Mayo, Andrew; Antoulas, Athanasios C.

We study interpolation by positive functions from a behavioral point of view. In particular, by introducing the notion of mirror-image data, the interpolation problem with a passivity constraint is transformed into an unconstrained behavioral modeling problem. It is shown that the generating system for this problem must be unitary with respect to an indefinite matrix. Using this approach, several results in the theory of interpolation by positive functions are derived in a very natural manner. The use of generating systems also leads naturally to the recent results of Byrnes et al. on parametrizing the set of interpolants by their spectral zeros. The same approach is then applied to interpolation on the boundary.

#### A closed-loop model of the ovine cardiovascular system (2003)

Qian, Junhui; Clark, John W., Jr.

The conscious sheep is an important large-animal model for the study of the human cardiovascular and cardiopulmonary systems. In this study we develop a closed-loop mathematical model of its cardiovascular system. A distributed approach is taken in describing the systemic circulation, which is divided into cerebral, coronary, foreleg, thoracic, abdominal, and hind-limb circulations.
Nonlinear aspects of the systemic venous system are described, including the nonlinear pressure-volume characteristics of small and large veins and pressure-operated valves in the large veins. The complete integrated model mimics typical steady-state hemodynamic data in the supine position. It is also used to predict the blood-volume shifts and hemodynamic changes that accompany standing up, including the short-term, neurally mediated cardiovascular response to orthostatic stress. Additional studies predict the circulatory response to an increased afterload (balloon inflation) presented to the right ventricle. The model is further used to predict the response of the ovine cardiovascular system to implantation of the para-corporeal artificial lung (PAL) device and to test the putative effectiveness of different PAL device designs.

#### A coding theoretic approach to image segmentation (2001)

Ndili, Unoma Ifeyinwa; Nowak, Robert D.

Using a coding-theoretic approach, we achieve unsupervised image segmentation by applying Rissanen's Minimum Description Length (MDL) principle to estimate piecewise homogeneous regions in images. MDL offers a mathematical foundation for balancing the brevity of descriptions against their fidelity to the data by penalizing overly complex representations. Our image model is a Gaussian random field whose mean and variance functions are piecewise constant; given those functions, the image pixels are conditionally independent and Gaussian. The model is aimed at identifying regions of constant intensity (mean) and texture (variance). We adopt a multi-scale encoding approach to the segmentation problem and develop two schemes: one based on adaptive (greedy) rectangular partitioning, and one based on optimally pruned, wedgelet-decorated dyadic partitioning.
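
The two-part MDL trade-off described above, brevity of the model versus fidelity to the data, can be sketched for a 1-D piecewise-constant Gaussian signal (the (k/2) log n model-cost term is a common textbook choice assumed here, not necessarily the thesis's exact codelength):

```python
# Sketch of a two-part MDL criterion for piecewise-constant Gaussian
# models: total description length = cost of the partition parameters
# + cost of the data given the model (negative log-likelihood, in nats).
import math

def description_length(data, breakpoints):
    """MDL cost of modeling `data` as constant-mean, constant-variance
    Gaussian pieces split at the given breakpoint indices."""
    n = len(data)
    edges = [0] + sorted(breakpoints) + [n]
    nll = 0.0
    k = 0
    for a, b in zip(edges, edges[1:]):
        seg = data[a:b]
        m = sum(seg) / len(seg)
        var = max(sum((x - m) ** 2 for x in seg) / len(seg), 1e-12)
        # Gaussian negative log-likelihood of the segment.
        nll += 0.5 * len(seg) * (math.log(2 * math.pi * var) + 1.0)
        k += 2  # one mean + one variance per piece
    return nll + 0.5 * k * math.log(n)  # data cost + model cost

signal = [0.0, 0.1, -0.1, 0.05, 5.0, 5.1, 4.9, 5.05]
# Splitting at the true jump describes the data more briefly than not splitting.
better = description_length(signal, [4]) < description_length(signal, [])
```

Overly fine partitions are discouraged because each extra piece adds a fixed parameter cost that must be repaid by a shorter data description.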
We compare the two algorithms with the more common signal-plus-constant-noise schemes, which account for variations in mean only, and explore applications to Synthetic Aperture Radar (SAR) imagery. Based on our segmentation scheme, we implement a robust Constant False Alarm Rate (CFAR) detector for Automatic Target Recognition (ATR) on Laser Radar (LADAR) and infrared (IR) images.

#### A communications and interaction model for intelligent cooperating robots (1993)

Ciscon, Lawrence Albert; Johnson, Don H.

In complex robotic operating environments where robots must cooperate in a flexible, event-driven manner, a cooperative distributed environment for intelligent control is required. We develop a realistic technique for going beyond the central-controller model of a multi-robot environment, replacing it with a schema of interacting, reconfigurable, cooperating robots. This schema provides the following main features: an open model of cooperation capable of supporting a wide variety of representations and algorithms for planning and executing tasks; a dynamic environment in which physical and reasoning capabilities can be added, removed, and reconfigured during task execution to make the best use of limited resources; the ability to detect and correct errors and failures; a rich interaction model able to handle the complexity and variety of communication and cooperation required between intelligent agents; and a realistic method of achieving global goals through localized actions. We formulate this model of interacting robots as a social system, defined by specifying the members of the society, their interactions, and the fundamental guidelines by which the society judges the members' actions. We implemented a prototype system incorporating these concepts and demonstrate it on example situations involving multiple cooperating robots.
Using the results of these examples, we also develop a qualitative analysis of this model against two other common models of intelligent control for multi-robot systems.

#### A Compressive Phase-Locked Loop (2011)

Schnelle, Stephen; Baraniuk, Richard G.

We develop a new method for tracking narrowband signals acquired through compressive sensing, called the compressive sensing phase-locked loop (CS-PLL). The CS-PLL enables one to track oscillating signals over very large bandwidths using a small number of measurements. Not only can the CS-PLL operate below the Nyquist rate, it can extract phase and frequency information without the computational complexity normally associated with compressive sensing signal reconstruction. The CS-PLL has a wide variety of applications, including but not limited to communications, phase tracking, robust control, sensing, and FM demodulation; we particularly emphasize its advantages in wideband surveillance systems. Our design modifies classical PLL designs to operate with CS-based sampling systems. Performance results are shown for PLLs operating on both real and complex data.
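
For orientation, a classical discrete-time PLL of the kind the CS-PLL modifies can be sketched as follows (full-rate sampling with a proportional-integral loop filter; the loop gains are illustrative assumptions):

```python
# Sketch of a classical discrete-time PLL: a numerically controlled
# oscillator (NCO) driven by a phase detector and a proportional-integral
# loop filter. The CS-PLL replaces this full-rate front end with
# compressive measurements.
import cmath
import math

def pll_track(samples, fs, f0, kp=0.15, ki=0.01):
    """Track a complex exponential; return the final frequency estimate (Hz)."""
    phase, integ, freq = 0.0, 0.0, f0
    for x in samples:
        err = cmath.phase(x * cmath.exp(-1j * phase))   # phase detector
        integ += ki * err                               # integral path
        freq = f0 + (kp * err + integ) * fs / (2 * math.pi)
        phase += 2 * math.pi * freq / fs                # advance the NCO
    return freq

fs, f_true = 1000.0, 52.0
sig = [cmath.exp(2j * math.pi * f_true * n / fs) for n in range(2000)]
f_est = pll_track(sig, fs, f0=50.0)  # locks onto the 2 Hz offset
```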
In addition to explaining general performance tradeoffs, we explore implementations using several different CS sampling systems.

#### A contribution to the field of ultrashort light-pulse generation and detection (1973)

Ruiz-Cardenas, Hector de Jesus

#### A contribution to the linear coding for two-way channels (1971)

Caprihan, Arvind

#### A contribution to the stability theory of distributed parameter systems (1968)

Chao, Kwong Shu

#### A Data and Platform-Aware Framework for Large-Scale Machine Learning (2015-04-24)

Mirhoseini, Azalia; Koushanfar, Farinaz; Aazhang, Behnaam; Baraniuk, Richard; Jermaine, Christopher

This thesis introduces a novel framework for executing a broad class of iterative machine learning algorithms on massive, dense (non-sparse) datasets. Several classes of critical and fast-growing data, including image and video content, contain dense dependencies. Current pursuits are overwhelmed by the excessive computation, memory access, and inter-processor communication overhead incurred in processing dense data. On the one hand, solutions that employ data-aware processing techniques produce transformations that are oblivious to the overhead created on the underlying computing platform; on the other hand, solutions that leverage platform-aware approaches do not exploit the non-apparent data geometry. My work is the first to develop a comprehensive data- and platform-aware solution that provably optimizes the cost (in runtime, energy, power, and memory usage) of iterative learning analysis on dense data. My solution is founded on a novel tunable data-transformation methodology that can be customized to the underlying computing resources and constraints.
My key contributions include: (i) a scalable, parametric data-transformation methodology that leverages coarse-grained parallelism in the data to create versatile, tunable data representations; (ii) automated methods for quantifying platform-specific computing costs in distributed settings; (iii) optimally bounded partitioning and distributed flow-scheduling techniques for running iterative updates on dense correlation matrices; (iv) methods for transforming and learning on streaming dense data; and (v) user-friendly open-source APIs that facilitate adoption of my solution on multiple platforms, including (multi-core and many-core) CPUs and FPGAs. Several learning algorithms, such as regularized regression, cone optimization, and power iteration, can be solved readily using my APIs. My solutions are evaluated on a number of learning applications, including image classification, super-resolution, and denoising. I perform experiments on real-world datasets with up to 5 billion non-zeros on a range of computing platforms, including Intel i7 CPUs, Amazon EC2, IBM iDataPlex, and Xilinx Virtex-6 FPGAs, and demonstrate that my framework can achieve up to two orders of magnitude performance improvement over current state-of-the-art solutions.

#### A detailed analysis of bulk instabilities in semiconductor devices with nonuniform boundary conditions (1970)

Shah, Pradeep Lilachand

#### A distribution-free model order estimation technique using entropy (1986)

Kumar, Anand Ramachandran

#### A dual acousto-optic laser scanning microscope system for the study of dendritic integration: Design, construction, and preliminary results (2003)

Iyer, Vijay; Saggau, Peter

Recent research has highlighted the vital role dendrites play in shaping the computational properties of single neurons in the central nervous system (CNS).
An ultraviolet (UV) acousto-optic laser scanning microscope system was developed that delivers UV laser pulses to multiple user-selected sites in the microscope's specimen plane with high spatial (<10 μm) and temporal (<20 μs) resolution. By employing "caged" neurotransmitters, the system can apply physiologically realistic spatio-temporal patterns of "synaptic" stimulation to the dendrites of a single cultured neuron. This system was combined with a previously developed acousto-optic laser scanning system for fast, multi-site optical recording of electrical activity (Bullen et al. 1999). This combination, the "Dual Scanner", allows the study of important questions of dendritic integration, such as the mechanisms underlying spatial and temporal summation. The thesis describes several current outstanding questions of dendritic integration, the design and construction of the system, and some promising preliminary results.

#### A globally convergent algorithm for training multilayer perceptrons for data classification and interpolation (1991)

Madyastha, Raghavendra K.; Aazhang, Behnaam

This thesis addresses the application of a "globally" convergent optimization scheme to the training of multilayer perceptrons, a class of artificial neural networks, for the detection and classification of signals in single- and multi-user communication systems. The research is motivated by the fact that a multilayer perceptron is theoretically capable of approximating any nonlinear function to within any specified accuracy. The objective function to which we apply the optimization algorithm is the error function of the multilayer perceptron, i.e., the average of the sum of squared differences between the actual and desired outputs for specified inputs.
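
The objective just described, the average sum of squared differences between actual and desired outputs, can be written out for a tiny one-hidden-layer perceptron (the network size, tanh activations, and XOR-style data are illustrative assumptions, not the thesis configuration):

```python
# The squared-error training objective for a small multilayer perceptron,
# written out explicitly. All shapes and data here are illustrative.
import math

def mlp(x, W1, b1, W2, b2):
    """One-hidden-layer perceptron: x -> tanh(W1 x + b1) -> W2 h + b2."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

def error(params, data):
    """Average sum of squared differences between actual and desired outputs."""
    W1, b1, W2, b2 = params
    total = 0.0
    for x, d in data:
        y = mlp(x, W1, b1, W2, b2)
        total += sum((yi - di) ** 2 for yi, di in zip(y, d))
    return total / len(data)

# XOR-style training pairs; a perfect network would drive this error to 0.
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
params = ([[2.0, 2.0], [-2.0, -2.0]], [-1.0, 3.0], [[1.0, 1.0]], [-1.0])
e = error(params, data)
```

Training minimizes this scalar over the weights and biases, whether by steepest descent (backpropagation) or by the globally convergent scheme the thesis develops.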
Until recently, the most widely used training algorithm has been the backward error propagation algorithm, which is based on steepest descent and hence is at best linearly convergent. The algorithm discussed here combines the merits of two well-known "global" algorithms: conjugate gradients and trust-region methods. A further technique known as preconditioning is used to speed up convergence by clustering the eigenvalues of the "effective Hessian". The resulting preconditioned conjugate gradients--trust region algorithm is found to be superlinearly convergent and hence outperforms the standard backpropagation routine.

#### A hierarchical wavelet-based framework for pattern analysis and synthesis (2000)

Scott, Clayton Dean; Nowak, Robert D.

Despite their success in other areas of statistical signal processing, current wavelet-based image models are inadequate for modeling patterns in images because of the unknown transformations inherent in most pattern observations. In this thesis we introduce a hierarchical wavelet-based framework for modeling patterns in digital images that takes advantage of the efficient image representations afforded by wavelets while accounting for unknown pattern transformations. Given a trained model, the framework can synthesize pattern observations. If the model parameters are unknown, we can infer them from labeled training data using TEMPLAR, a novel template-learning algorithm with linear complexity. TEMPLAR employs minimum description length (MDL) complexity regularization to learn a template with a sparse representation in the wavelet domain.
If we are given several trained models for different patterns, the framework provides a low-dimensional subspace classifier that is invariant to unknown pattern transformations as well as background clutter.

#### A hierarchical, restructurable multi-microprocessor architecture (1976)

Arnold, Robert Glen

#### A high resolution data-adaptive time-frequency representation (1987)

Jones, Douglas Llewellyn; Parks, Thomas

The short-time Fourier transform and the Wigner distribution are the time-frequency representations that have received the most attention. The Wigner distribution has a number of desirable properties, but it introduces nonlinearities, called cross-terms, that make it difficult to interpret when applied to real multi-component signals. The short-time Fourier transform has achieved widespread use in applications, but it often resolves signal components poorly and can bias estimates of signal parameters. A need therefore exists for a time-frequency representation without the shortcomings of the current techniques. This dissertation develops a data-adaptive time-frequency representation that overcomes the often poor resolution of the traditional short-time Fourier transform while avoiding the nonlinearities that make the Wigner distribution and other bilinear representations difficult to interpret and use. The new method uses an adaptive Gaussian basis whose parameters vary across time-frequency locations to maximize the local signal concentration in time-frequency. Two methods for selecting the Gaussian parameters are presented: one maximizes a measure of local signal concentration, the other takes a parameter-estimation approach.
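
The concentration-maximizing parameter choice can be sketched as follows (the fourth-power concentration measure and the candidate widths are illustrative assumptions, not necessarily those used in the dissertation):

```python
# Sketch of choosing a Gaussian window width by maximizing a local
# concentration measure: for one time location, score several widths by
# the "peakiness" of the windowed DFT magnitudes and keep the best.
import cmath
import math

def concentration(spectrum):
    """Normalized fourth-power measure in (0, 1]; 1 means one occupied bin."""
    mags = [abs(c) for c in spectrum]
    s2 = sum(m ** 2 for m in mags)
    return sum(m ** 4 for m in mags) / (s2 ** 2)

def best_gaussian_width(signal, center, widths, nfft=64):
    scores = {}
    for w in widths:
        seg = [signal[(center + k) % len(signal)] * math.exp(-0.5 * (k / w) ** 2)
               for k in range(-nfft // 2, nfft // 2)]
        spec = [sum(seg[n] * cmath.exp(-2j * math.pi * f * n / nfft)
                    for n in range(nfft)) for f in range(nfft)]
        scores[w] = concentration(spec)
    return max(scores, key=scores.get)

# A stationary pure tone is concentrated best by the widest window on offer.
tone = [cmath.exp(2j * math.pi * 0.1 * n) for n in range(256)]
w_star = best_gaussian_width(tone, 128, widths=[2, 8, 32])
```

For a chirp or transient, shorter windows would win at the appropriate time-frequency locations, which is exactly the adaptivity the representation exploits.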
The new representation provides much better performance than any of the currently known techniques in the analysis of multi-modal dispersive waveforms.

#### A hybrid relaying protocol for the parallel-relay network (2010)

Summerson, Samantha Rose; Aazhang, Behnaam

Cooperation among radios in wireless networks has been shown to improve communication in several respects. We analyze a wireless network that employs multiple parallel relay transceivers to assist communication between a single source-destination pair, demonstrating that gains are achieved when a random subset of relays is selected. We derive threshold values for the received signal-to-noise ratios (SNRs) at the relays based on outage probabilities; these thresholds determine the active subset of relays in each time frame, and because of the random nature of wireless channels, this active subset is random. Two established forwarding protocols for the relays, amplify-and-forward and decode-and-forward, are combined into a hybrid relaying protocol, which is analyzed in conjunction with both regenerative coding and distributed space-time coding at the relays. Finally, the allocation of power resources to minimize the end-to-end outage probability is considered.

#### A mathematical model for relating EEG to certain stimulus fields (1964)

Welch, Ashley James

#### A mathematical model of left ventricular function and its sympathetic control (1973)

Greene, Michael Edward; Clark, John W., Jr.
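
The SNR-threshold rule in the hybrid relaying entry above can be illustrated under a standard Rayleigh-fading assumption, where the received SNR is exponentially distributed and the active subset is empty exactly when every relay falls below the decoding threshold (the fading model and link independence are assumptions for illustration, not details from the thesis):

```python
# Sketch of outage-based relay activation: a relay is "active" when its
# received SNR clears the decoding threshold 2^R - 1 for spectral
# efficiency R. Rayleigh fading (exponential SNR) and independent links
# are illustrative assumptions.
import math

def snr_threshold(rate_bps_hz):
    """Minimum SNR for reliable decoding at the given spectral efficiency."""
    return 2.0 ** rate_bps_hz - 1.0

def p_relay_active(mean_snr, rate_bps_hz):
    """P(SNR > threshold) for exponentially distributed SNR."""
    return math.exp(-snr_threshold(rate_bps_hz) / mean_snr)

def p_no_active_relay(mean_snrs, rate_bps_hz):
    """Probability the random active subset is empty (independent links)."""
    p = 1.0
    for g in mean_snrs:
        p *= 1.0 - p_relay_active(g, rate_bps_hz)
    return p

# Four relays with 10 dB (= 10x) average SNR at 1 bit/s/Hz.
p_empty = p_no_active_relay([10.0] * 4, rate_bps_hz=1.0)
```

Adding relays drives the probability of an empty active subset down geometrically, which is the diversity gain cooperation provides.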