Rice University Theses and Dissertations
Permanent URI for this collection
Rice University makes all graduate theses and dissertations (1916-present) available online at no cost to end users.
Occasionally a thesis or dissertation may be missing from the repository. If you are unable to find a specific dissertation, please let us know and we will attempt to make it available through the repository, provided that the author has not elected for it to be embargoed.
News
Visit the website of Rice University's Office of Graduate and Postdoctoral Studies for more information about Rice's requirements for graduate theses and dissertations.
Browse
Browsing Rice University Theses and Dissertations by Author "Aazhang, Behnaam"
Now showing 1 - 20 of 95
Item A Cross-Tier Scheduling Scheme for Multi-tier mmWave Wireless Networks (2017-12-01) Fan, Boqiang; Aazhang, Behnaam
Due to abundant frequency resources, millimeter wave (mmWave) spectrum draws much attention as a solution to bandwidth scarcity. However, characteristics of mmWave transmissions, such as blockage and reduced coverage, make conventional network architectures inefficient for use in mmWave communications. A recently proposed multi-tier mmWave network architecture allows for relaying around blockages and enlarges the coverage of each backhaul at an acceptable deployment cost. Nevertheless, this architecture introduces major challenges to scheduling. The necessity of both enabling flexible user association and fully exploiting the wireless backhaul requires cross-tier consideration of multi-tier mmWave networks. This thesis comprehensively analyzes the scheduling of downlink multi-tier mmWave networks by jointly regulating transmissions in all tiers. The cross-tier optimization problem is NP-hard, but a sub-optimal scheme which iteratively optimizes schedules of different network tiers is proposed with polynomial computational complexity. Simulations show that our algorithm significantly outperforms benchmarks in spectral efficiency and fairness with various user distributions.

Item A Data and Platform-Aware Framework For Large-Scale Machine Learning (2015-04-24) Mirhoseini, Azalia; Koushanfar, Farinaz; Aazhang, Behnaam; Baraniuk, Richard; Jermaine, Christopher
This thesis introduces a novel framework for execution of a broad class of iterative machine learning algorithms on massive and dense (non-sparse) datasets. Several classes of critical and fast-growing data, including image and video content, contain dense dependencies. Current pursuits are overwhelmed by the excessive computation, memory access, and inter-processor communication overhead incurred by processing dense data.
On the one hand, solutions that employ data-aware processing techniques produce transformations that are oblivious to the overhead created on the underlying computing platform. On the other hand, solutions that leverage platform-aware approaches do not exploit the non-apparent data geometry. My work is the first to develop a comprehensive data- and platform-aware solution that provably optimizes the cost (in terms of runtime, energy, power, and memory usage) of iterative learning analysis on dense data. My solution is founded on a novel tunable data transformation methodology that can be customized with respect to the underlying computing resources and constraints. My key contributions include: (i) introducing a scalable and parametric data transformation methodology that leverages coarse-grained parallelism in the data to create versatile and tunable data representations, (ii) developing automated methods for quantifying platform-specific computing costs in distributed settings, (iii) devising optimally-bounded partitioning and distributed flow scheduling techniques for running iterative updates on dense correlation matrices, (iv) devising methods that enable transforming and learning on streaming dense data, and (v) providing user-friendly open-source APIs that facilitate adoption of my solution on multiple platforms including (multi-core and many-core) CPUs and FPGAs. Several learning algorithms such as regularized regression, cone optimization, and power iteration can be readily solved using my APIs. My solutions are evaluated on a number of learning applications including image classification, super-resolution, and denoising. I perform experiments on various real-world datasets with up to 5 billion non-zeros on a range of computing platforms including Intel i7 CPUs, Amazon EC2, IBM iDataPlex, and Xilinx Virtex-6 FPGAs. 
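Power iteration, one of the algorithms the abstract says its APIs can solve, reduces to repeated dense matrix-vector products over a correlation matrix; a minimal, generic NumPy sketch (function names are ours, not the thesis's API):

```python
import numpy as np

def power_iteration(A, iters=200, seed=0):
    """Estimate the dominant eigenpair of a symmetric matrix A by
    repeatedly applying A and renormalizing (generic textbook form)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v                  # the dense matrix-vector product dominates cost
        v = w / np.linalg.norm(w)
    return v @ A @ v, v            # Rayleigh quotient, eigenvector estimate

# Dense correlation matrix built from hypothetical data
X = np.random.default_rng(1).standard_normal((50, 200))
C = np.corrcoef(X)                 # 50 x 50 dense correlation matrix
lam, v = power_iteration(C)
print(np.isclose(lam, np.linalg.eigvalsh(C)[-1], rtol=1e-5))   # True
```

The per-iteration cost is exactly the dense matrix-vector product, which is why a data- and platform-aware transformation of the matrix can pay off for this class of updates.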
I demonstrate that my framework can achieve up to 2 orders of magnitude performance improvement in comparison with current state-of-the-art solutions.

Item A Data-Driven Information Theoretic Approach for Neural Network Connectivity Inference (2017-04-21) Cai, Zhiting; Aazhang, Behnaam
A major challenge in neuroscience is to develop effective tools that infer the circuit connectivity from large-scale recordings of neuronal activity patterns, such that we can study how structures of neural networks enable brain functioning. To tackle this challenge, we used context tree maximizing (CTM) to estimate directed information (DI), which measures causal influences among neural spike trains in order to infer synaptic connections. In contrast to existing methods, our method is data-driven and can readily identify both linear and nonlinear relations between neurons. This CTM-DI method reliably identified circuit structures underlying simulations of realistic conductance-based networks. It detected direct connections, eliminated indirect connections, quantified the amount of information flow, reliably distinguished synaptic excitation from inhibition, and inferred the time course of the synaptic influence. From voltage-sensitive dye recordings of the buccal ganglion of Aplysia, our method detected many putative motifs and patterns. This method can be applied to other large-scale recordings as well. It offers a systematic tool to map network connectivity and to track changes in network structure, such as synaptic strengths and the degrees of connectivity of individual neurons, which in turn could provide insights into how modifications produced by learning are distributed in a neural network.
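The directed-information idea can be illustrated with a far simpler estimator than the context-tree method the abstract describes: a plug-in estimate for binary spike trains with one-step histories. This is a hedged sketch under those simplifying assumptions (names and setup are ours, not the CTM-DI estimator):

```python
import numpy as np
from collections import Counter

def plugin_directed_info(x, y):
    """Plug-in estimate of a directed-information rate I(X -> Y) for binary
    sequences, drastically simplified to one-step histories:
    I(X_{t-1}; Y_t | Y_{t-1}), estimated from empirical triple counts."""
    triples = Counter(zip(x[:-1], y[1:], y[:-1]))
    n = len(x) - 1
    p = {k: v / n for k, v in triples.items()}
    def marg(idx):                       # marginalize the joint onto idx
        m = Counter()
        for k, v in p.items():
            m[tuple(k[i] for i in idx)] += v
        return m
    pyz, pxz, pz = marg((1, 2)), marg((0, 2)), marg((2,))
    di = 0.0
    for (a, b, c), pabc in p.items():
        di += pabc * np.log2(pabc * pz[(c,)] / (pxz[(a, c)] * pyz[(b, c)]))
    return di

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)                        # y copies x with a one-step delay
y[0] = 0
di_fwd = plugin_directed_info(x, y)      # strong X -> Y influence (~1 bit)
di_rev = plugin_directed_info(y, x)      # no reverse influence (~0 bits)
print(di_fwd > 0.9, di_rev < 0.05)       # True True
```

The asymmetry between the two directions is what lets DI-style measures distinguish causal influence from mere correlation.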
Furthermore, this information theoretic approach can be extended to the analysis of other recordings that can be modeled as point processes, such as internet traffic, disease outbreaks, and seismic activity.

Item A globally convergent algorithm for training multilayer perceptrons for data classification and interpolation (1991) Madyastha, Raghavendra K.; Aazhang, Behnaam
This thesis addresses the issue of applying a "globally" convergent optimization scheme to the training of multilayer perceptrons, a class of artificial neural networks, for the detection and classification of signals in single- and multi-user communication systems. The research is motivated by the fact that a multilayer perceptron is theoretically capable of approximating any nonlinear function to within any specified accuracy. The objective function to which we apply the optimization algorithm is the error function of the multilayer perceptron, i.e., the average of the sum of the squares of the differences between the actual and the desired outputs to specified inputs. Until recently, the most widely used training algorithm has been the Backward Error Propagation algorithm, which is based on "steepest descent" and hence is at best linearly convergent. The algorithm discussed here combines the merits of two well known "global" algorithms--the Conjugate Gradients and the Trust Region algorithms. A further technique known as preconditioning is used to speed up convergence by clustering the eigenvalues of the "effective Hessian". The Preconditioned Conjugate Gradients--Trust Regions algorithm is found to be superlinearly convergent and hence outperforms the standard backpropagation routine.

Item A hybrid relaying protocol for the parallel-relay network (2010) Summerson, Samantha Rose; Aazhang, Behnaam
Cooperation among radios in wireless networks has been shown to improve communication in several aspects.
We analyze a wireless network which employs multiple parallel relay transceivers to assist in communication between a single source-destination pair, demonstrating that gains are achieved when a random subset of relays is selected. We derive threshold values for the received signal-to-noise ratios (SNRs) at the relays based on outage probabilities. These thresholds essentially determine the active subset of relays in each time frame for our parallel-relay network; due to the random nature of wireless channels, this active subset is itself random. Two established forwarding protocols for the relays, Amplify-and-Forward and Decode-and-Forward, are combined to create a hybrid relaying protocol, which is analyzed in conjunction with both regenerative coding and distributed space-time coding at the relays. Finally, the allocation of power resources to minimize the end-to-end probability of outage is considered.

Item A Matter of Perspective: Reliable Communication and Coping with Interference with Only Local Views (2012-09-05) Kao, David; Sabharwal, Ashutosh; Aazhang, Behnaam; Knightly, Edward W.; Tapia, Richard A.; Chiang, Mung
This dissertation studies interference in wireless networks. Interference results from multiple simultaneous attempts to communicate, often between unassociated sources and receivers, preventing extensive coordination. Moreover, in practical wireless networks, learning network state is inherently expensive, and nodes often have incomplete and mismatched views of the network. The fundamental communication limits of a network with such views are unknown. To address this, we present a local view model which captures asymmetries in node knowledge. Our local view model does not rely on accurate knowledge of an underlying probability distribution governing network state.
Therefore, we can make robust statements about the fundamental limits of communication when the channel is quasi-static or the actual distribution of state is unknown: commonly faced scenarios in modern commercial networks. For each local view, channel state parameters are either perfectly known or completely unknown. While we propose no mechanism for network learning, a local view represents the result of some such mechanism. We apply the local view model to study the two-user Gaussian interference channel: the smallest building block of any interference network. All seven possible local views are studied, and we find that for five of the seven, there exists no policy or protocol that universally outperforms time-division multiplexing (TDM), justifying the orthogonalized approach of many deployed systems. For two of the seven views, TDM-beating performance is possible with use of opportunistic schemes where opportunities are revealed by the local view. We then study how message cooperation --- either at transmitters or receivers --- increases capacity in the local view two-user Gaussian interference channel. The cooperative setup is particularly appropriate for modeling next-generation cellular networks, where the cost to share message data among base stations is low relative to the cost to learn channel coefficients. For the cooperative setting, we find: (1) opportunistic approaches are still needed to outperform TDM, but (2) opportunities are more abundant and revealed by more local views.
For all cases studied, we characterize the capacity region to within some known gap, enabling computation of the generalized degrees of freedom region, a visualization of spatial channel resource usage efficiency.

Item A Resource-Aware Streaming-based Framework for Big Data Analysis (2015-12-02) Darvish Rouhani, Bita; Koushanfar, Farinaz; Aazhang, Behnaam; Baraniuk, Richard
The ever-growing body of digital data is challenging conventional analytical techniques in machine learning, computer vision, and signal processing. Traditional analytical methods have been mainly developed based on the assumption that designers can work with data within the confines of their own computing environment. The growth of big data, however, is changing that paradigm, especially in scenarios where severe memory and computational resource constraints exist. This thesis aims at addressing major challenges in the big data learning problem by devising a new customizable computing framework that holistically takes into account the data structure and underlying platform constraints. It targets a widely used class of analytical algorithms that model data dependencies by iteratively updating a set of matrix parameters, including but not limited to most regression methods, expectation maximization, and stochastic optimizations, as well as emerging deep learning techniques. The key to our approach is a customizable, streaming-based data projection methodology that adaptively transforms data into a new lower-dimensional embedding by simultaneously considering both data and hardware characteristics. It enables scalable data analysis and rapid prototyping of an arbitrary matrix-based learning task using a sparse approximation of the collection that is constantly updated in line with the data arrival.
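The streaming-projection idea (embed arriving data in a lower dimension without ever holding the full collection in memory) can be caricatured with a plain random projection applied chunk by chunk. This is a generic sketch, not the author's adaptive framework, and all names are ours:

```python
import numpy as np

def stream_sketch(chunks, k, seed=0):
    """Maintain a k-dimensional random projection of streaming columns.
    Each arriving chunk is projected and appended; the full (d x N) data
    matrix is never materialized."""
    rng = np.random.default_rng(seed)
    proj = None
    sketch_cols = []
    for chunk in chunks:                # chunk: a (d, m) block of new columns
        if proj is None:                # lazily build the d -> k projection
            proj = rng.standard_normal((k, chunk.shape[0])) / np.sqrt(k)
        sketch_cols.append(proj @ chunk)
    return np.hstack(sketch_cols)

rng = np.random.default_rng(1)
d, k = 1000, 64
chunks = [rng.standard_normal((d, 32)) for _ in range(10)]
S = stream_sketch(chunks, k)
print(S.shape)                          # (64, 320)
```

A Johnson-Lindenstrauss-style projection like this preserves pairwise geometry approximately; the thesis's contribution is making the embedding adaptive to both the data and the hardware, which a fixed random matrix is not.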
Our work is supported by a set of user-friendly Application Programming Interfaces (APIs) that ensure automated adaptation of the proposed framework to various datasets and System on Chip (SoC) platforms including CPUs, GPUs, and FPGAs. Proof-of-concept evaluations using a variety of large contemporary datasets corroborate the practicability and scalability of our approach in resource-limited settings. For instance, our results demonstrate 50-fold improvement over the best known prior art in terms of memory, energy, power, and runtime for training and execution of deep learning models in deployment of different sensing applications, including indoor localization and speech recognition, on constrained embedded platforms used in today's IoT-enabled devices such as autonomous vehicles, robots, and smartphones.

Item A sample realization approach for optimization of code division multiple access systems (1994) Mandayam, Narayan B.T.; Aazhang, Behnaam
Efforts in performance analysis of Code Division Multiple Access (CDMA) systems have concentrated on obtaining asymptotic approximations and bounds for system error probabilities. As such, these cannot capture the sensitivities of the system performance to any class of parameters, and the optimization of such systems (with respect to any class of parameters) admits no analytical solution. A discrete event dynamic systems (DEDS) formulation is developed for CDMA systems whereby the sensitivity of the average probability of error can be evaluated with respect to a wide class of system parameters via sample-path-based gradient estimation techniques like infinitesimal perturbation analysis (IPA) and the likelihood ratio (LR) method. Appropriate choice of the sample path and the corresponding sample performance function leads to analyzing the sensitivity of the average probability of error to near-far effects, power control, and code parameters.
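The likelihood ratio (LR) method named above can be shown on a toy problem: estimating the gradient of an error probability from samples alone, with no analytic differentiation of the performance measure. This is a generic score-function sketch, not the thesis's CDMA model (names and setup are ours):

```python
import numpy as np

def lr_gradient(f, theta, n=200_000, seed=0):
    """Score-function (likelihood-ratio) estimate of d/dtheta E[f(X)]
    for X ~ N(theta, 1). The score of the Gaussian is (x - theta), so
    grad = E[f(X) * (X - theta)], estimated by a sample mean."""
    x = np.random.default_rng(seed).normal(theta, 1.0, n)
    return np.mean(f(x) * (x - theta))

# Toy "detection error" event: the statistic X falls below zero
theta = 1.0
est = lr_gradient(lambda x: (x < 0).astype(float), theta)
# Analytic gradient of P(X < 0) = Phi(-theta) is -phi(theta) ~ -0.242
print(abs(est + 0.242) < 0.01)   # True
```

The appeal, as in the abstract, is that the same simulation run that measures the error probability also yields its gradient, which can then drive a stochastic gradient update of the detector parameters.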
Further, these sensitivity analysis methods are incorporated in gradient algorithms for optimizing system performance in terms of the minimum probability of detection error. Specifically, for direct-sequence CDMA systems, IPA-based stochastic gradient algorithms are used to develop a class of adaptive linear detectors that are optimum in that they minimize the average probability of bit error. These detectors outperform both the matched filter and MMSE detectors, and also alleviate the disadvantage of multiuser detection schemes that require implicit information on the multiple access interference. For CDMA systems in the optical domain, IPA-based stochastic algorithms are used to develop a class of adaptive threshold detectors that minimize the average probability of bit error. These detectors outperform the correlation detector and also preclude the need for assumptions on the interference statistics required by existing optimum one-shot detectors. All adaptive detection schemes developed here are easily implementable owing to the simple recursive structures that arise out of our sample realization based approach. The sequential versions of the adaptive detectors developed here require no preamble, which makes them a viable choice for CDMA channels subject to temporal variations due to dispersion effects and a variable number of users in the channel.

Item Addressing Indirect Functional Connectivity in Neuroscience via Graphical Information Theory: Causality and Coherence (2020-12-03) Young, Joseph; Aazhang, Behnaam
Accurate inference of functional connectivity, i.e. statistical relationships between brain regions, is critical for understanding brain function. Distinguishing between direct and indirect relationships is particularly important because this corresponds to identifying whether brain regions are directly connected.
Although solutions exist for linear Gaussian cases, we introduce a general framework that can address nonlinear and non-Gaussian cases, which are more relevant for neural data. Previous model-free methods have limited ability to identify indirect connections because of inadequate scaling with dimensionality. This poor scaling performance reduces the number of nodes, e.g. brain regions, that can be included in conditioning. By contrast, we develop model-free techniques quantifying (1) causality in the time domain and (2) coherence in the frequency domain that scale markedly better and thereby enable minimization of indirect functional connectivity. Our first model-free framework, graphical directed information (GDI), enables pairwise directed functional connections to be conditioned on substantially more processes, producing a more accurate graph of direct causal functional connectivity in the time domain. GDI correctly inferred the circuitry of simulated arbitrary Gaussian, nonlinear, and conductance-based networks. Furthermore, GDI inferred many connections of a model of a central pattern generator (CPG) circuit in Aplysia, while also reducing many indirect connections. GDI can be used on a variety of scales and data types to provide accurate direct causal connectivity graphs. Our second model-free framework, partial generalized coherence (PGC), expands prior work by allowing pairwise frequency coupling analyses to be conditioned on other processes, enabling model-free partial frequency coupling results. Our technique scales well with dimensionality, making it possible to condition on many processes and to even produce a partial frequency coupling graph. We analyzed both linear Gaussian and nonlinear simulated networks containing indirect frequency coupling which was correctly eliminated by PGC. 
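For reference, the classical linear partial coherence that PGC generalizes can be computed directly from the inverse cross-spectral density matrix. A generic NumPy sketch with Welch-style segment averaging (helper names and the simulated network are ours, not the thesis's model-free estimator):

```python
import numpy as np

def csd_matrix(data, fs, nperseg=256):
    """Averaged cross-spectral density matrices S[f] for the rows of `data`
    (simple Welch-style averaging: Hann window, 50% overlap)."""
    n_ch, n = data.shape
    win = np.hanning(nperseg)
    segs = [data[:, i:i + nperseg] * win
            for i in range(0, n - nperseg + 1, nperseg // 2)]
    F = np.array([np.fft.rfft(s, axis=1) for s in segs])   # (seg, ch, freq)
    S = np.einsum('sif,sjf->fij', F, F.conj()) / len(segs)
    return np.fft.rfftfreq(nperseg, 1 / fs), S

def partial_coherence(S):
    """Classical linear partial coherence from the inverse CSD matrix:
    P_ij(f) = |G_ij|^2 / (G_ii * G_jj), with G = S(f)^-1. Conditioning on
    all other channels is implicit in the matrix inverse."""
    G = np.linalg.inv(S)
    d = np.abs(np.einsum('fii->fi', G))
    return np.abs(G) ** 2 / (d[:, :, None] * d[:, None, :])

# Indirect coupling: z drives both x and y at 10 Hz; x-y have no direct link
fs, t = 200, np.arange(20000) / 200
rng = np.random.default_rng(0)
z = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
x = z + 0.5 * rng.standard_normal(t.size)
y = z + 0.5 * rng.standard_normal(t.size)
freqs, S = csd_matrix(np.vstack([x, y, z]), fs)
P = partial_coherence(S)
f10 = np.argmin(np.abs(freqs - 10))
coh = np.abs(S[f10, 0, 1]) ** 2 / (S[f10, 0, 0].real * S[f10, 1, 1].real)
print(coh > 0.8, P[f10, 0, 1] < 0.2)   # True True
```

Ordinary coherence between x and y is high at 10 Hz, but conditioning on z (via the inverse CSD) removes the indirect link, which is exactly the kind of elimination the abstract reports for its nonlinear, model-free generalization.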
We then performed PGC analysis of calcium recordings from the rodent olfactory bulb and quantified the dominant influence of breathing-related activity on the pairwise relationships between glomeruli for low frequencies. Overall, we introduce a technique capable of eliminating indirect frequency coupling in a model-free way.

Item Advanced techniques for next-generation wireless systems (1999) Sendonaris, Andrew; Aazhang, Behnaam
In order to meet the demands of next-generation wireless systems, which will be required to support multirate multimedia at high data rates, it is necessary to employ advanced algorithms and techniques that enable the system to guarantee the quality of service desired by the various media classes. In this work, we present a few novel methods for improving wireless system performance and achieving next-generation goals. Our proposed methods include finding signal sets that are designed for fading channels and support multirate, exploiting knowledge of the fading statistics during the data detection process, exploiting the existence of Doppler in the received signal, and allowing mobile users to cooperate in order to send their information to the base station. We evaluate the performance of our proposed ideas and show that they provide gains with respect to conventional systems. The benefits include multirate support, higher data rates, and more stable data rates. It should be mentioned that, while we focus mainly on a CDMA framework for analyzing our ideas, many of these ideas may also be applied to other wireless system environments.

Item Advancing life science with adaptive intelligent microscopy (2022-04-22) Safaei, Seyed Mojtaba; St-Pierre, Francois; Aazhang, Behnaam
Biology is dynamic: the location, shape, and function of molecules and cells change with time. When studying biological systems, it is critical to adapt data collection and analysis to fit their current states.
However, the lack of real-time interaction in traditional microscopy makes it impossible to guide experiments and adapt to biological events. In this thesis, we introduce closed-loop microscopy (CLM) approaches that address these shortcomings by providing real-time interaction between acquisition and analysis. CLM is implemented in an event-driven way; acquisition events notify the downstream analysis, resulting in feedback that triggers real-time actions. CLM is particularly suited for long experiments that study rare biological events; experiments in which adapting to real-time changes increases the probability of success. We demonstrated examples in which CLM reduced sample size variation across trials, achieved five times higher throughput in fluorescent protein characterization, and enabled the study of rotavirus at low multiplicities of infection.

Item Algorithms Toward a Next Generation Pacemaker (2021-04-30) Banta, Anton Reza; Aazhang, Behnaam
This work describes the development and implementation of machine learning algorithms to enhance the functionality of pacemaker technology. This is done by approaching two limitations of current pacemakers: the inability to remotely monitor the 12-lead surface electrocardiogram (ECG) and the need to hand-pick the pacing parameters in the device over time. First, we propose a method to facilitate the remote follow-up of patients suffering from cardiac pathologies and treated with an implantable device, by reconstructing a 12-lead ECG from the intracardiac electrograms (EGM) recorded by the device. The reconstruction is performed with a convolutional neural network. These methods are evaluated on a dataset retroactively collected from 14 patients. Correlation coefficients calculated between the reconstructed and the actual ECG show that the proposed convolutional neural network represents an efficient and accurate way to synthesize a 12-lead ECG.
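The per-lead correlation-coefficient evaluation reported above can be sketched on synthetic arrays; the data, shapes, and function name here are hypothetical, not the study's:

```python
import numpy as np

def per_lead_correlation(ecg_true, ecg_rec):
    """Pearson correlation per lead between a reference 12-lead ECG and a
    reconstruction, both shaped (12, n_samples)."""
    a = ecg_true - ecg_true.mean(axis=1, keepdims=True)
    b = ecg_rec - ecg_rec.mean(axis=1, keepdims=True)
    return (a * b).sum(axis=1) / np.sqrt(
        (a ** 2).sum(axis=1) * (b ** 2).sum(axis=1))

rng = np.random.default_rng(0)
truth = rng.standard_normal((12, 2000))                  # stand-in 12-lead ECG
recon = truth + 0.3 * rng.standard_normal((12, 2000))    # noisy "reconstruction"
r = per_lead_correlation(truth, recon)
print(r.shape, bool((r > 0.9).all()))   # (12,) True
```

One correlation per lead (rather than a single pooled number) is the natural metric here, since reconstruction quality can differ markedly across leads.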
Second, we propose a framework for automatically choosing the optimal parameters for a pacemaker. In a typical pacemaker implantation procedure, a cardiologist must determine optimal pacing parameters for the patient that result in healthy blood flow and heart conductance. Thirty percent of patients do not achieve a healthy vascular condition from standard pacemakers, which is believed to be due to the choice of parameters being suboptimal. Thus, the objective of this work is to develop an algorithm that finds an optimal choice of pacing parameters for a given patient. To do this, a set of 48 different pacing parameters was used on a porcine model and the corresponding 12-lead ECG, pressure-volume data, and intracardiac signals were measured. Metrics were calculated from this data to provide a framework for choosing the best pacing parameters based on a multinomial classification model. These two bodies of work demonstrate the application of machine learning algorithms to pacemaker technology to improve the diagnostic and therapeutic abilities of the device.

Item Allusions, Illusions & Delusions (2013-09-16) Bachicha, Stephen; Gottschalk, Arthur W.; Aazhang, Behnaam; Bailey, Walter B.; Lavenda, Richard
Allusions, Illusions and Delusions (2013) is an eight-minute work for full orchestra that blends elements of lyricism with fast kinetic music, orchestral tutti with smaller groupings and solos, and familiar harmonic language with more exotic combinations. The piece begins with a bang, employing a figure that blurs the distinction between major and minor triads. After the ensuing short introduction, the flugelhorn’s lyrical theme becomes the main focus; indeed, elements of this solo line help to shape the entire piece. Following an expansive orchestral tutti built on this theme, the line and the ensemble are broken down and small groups of instruments begin a climb to the fast section of the piece.
The longest portion of the score, this fast section takes the listener on a roller coaster ride with sharp turns and many ups and downs. The ride continues building more and more intensity and energy until the climax, marked in the score “huge and bombastic.” As this cacophonous “wall of sound” dies down, four solo strings and a clarinet emerge, recalling moments of the flugelhorn solo. A solo bucket muted trumpet presents a final paraphrase of the theme, bringing the piece to a calm and soothing resolution. Allusions, Illusions and Delusions takes its title from elements of the piece itself and from a number of external influences. The lyrical flugelhorn solo beginning at measure 27, the rapidly changing harmonies of the fast section, polychordal segments (such as the Eb major/d minor simultaneous sonority found in measures 87 through 89), and the climax at J, allude to the sounds of triadic harmonies from common practice tonal music. Aspects of these harmonies also create a sense of illusion: The main melodic and harmonic sounds used in the piece are intervals of seconds and thirds, and their inversions. By using minor seconds simultaneously as melodic and harmonic intervals, the quality of a triad or chord is often blurred, fooling the listener into thinking that they are hearing a triad, when five or more notes might actually be present. Delusion refers to the way a listener might react to the music. Often listeners invent a story to go along with a piece of music as a way for them to organize and understand the musical journey that they are experiencing. When there is no extra-musical idea tied to the piece at all, as in this instance, listeners might well be deluding themselves.

Item Analyzing brain networks in language and social tasks using data-driven approaches (2021-08-13) Yellapantula, Sudha; Aazhang, Behnaam
Humans are innately social, and we express ourselves primarily through language.
We effortlessly articulate 2-3 words per second in fluent speech, yet this deceptively simple task is a highly complex multistage process in our brains. Unfortunately, millions are affected by disease or brain disorders leading to language and social dysfunction, with devastating consequences for their quality of life. The goal of this work was to improve our understanding of higher-order cognitive processes in the domain of language and social behavior, using a multitude of recording modalities and data-driven approaches. Three main questions were studied in this work: (1) Are language-specific cognitive functions discretely computed within well-localized brain regions or rather by distributed networks? (2) When the brain receives ambiguous stimuli from the outside world, do more brain regions need to be involved to resolve this ambiguity than when the stimuli are unambiguous? (3) What is the effect of visual features on social cooperation, and its dependence on social context? These questions were studied using a variety of cognitive tasks and recording modalities: ECoG, EEG, and spike recordings. To study the first question, we used intracranial electrocorticogram (ECoG) recordings from a picture naming task to analyze the network phenomena of distributed cortical substrates supporting language. We estimated causality among brain regions with Directed Information, followed by a graph theoretic framework to extract task-related dynamics from the causal estimates. Finally, we validated these functionally defined networks against the gold standard for causal inference: behavioral disruption with direct cortical stimulation. We demonstrate that the network measures combined with power have greater predictive capability for identifying critical language regions than discrete, regional power analyses alone. For the second question, we quantified the ambiguity in speech perception using network phenomena of distributed cortical substrates supporting language.
We estimated statistical dependence among brain regions with Mutual Information. Using innovative baseline normalization, time-varying graphs were derived from the EEG data. These measures were tested in healthy subjects by comparing network measures derived from both ambiguous and clear stimuli. Finally, to further validate the hypothesis, we also evaluated the brain networks of an aphasic subject, who perceived all stimuli as ambiguous. We demonstrate that the network measures could clearly distinguish the aphasic patient's processing from that of the healthy subjects, providing evidence of increased brain activity when processing more ambiguous stimuli. This allows for a better understanding of the cognitive processes needed to measure patient impairment. For the third question, multi-unit recordings from freely moving non-human primates were obtained while they performed a social cooperative task. They were equipped with a wireless recording system, and scene and eye cameras to capture the field of view. We identified fixations, and the objects within receptive fields during the fixations. We tested the classification accuracy of the neural data in distinguishing visual features identified during fixations, and find that social learning is captured by the improvement over time in distinguishing socially relevant objects. We examine these effects in two brain regions, the dorsolateral prefrontal cortex and the higher visual area V4, and test the effects of attention on the task, lack of attention, and the social behavior of the partner monkey.
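A simple histogram plug-in estimate of mutual information, the pairwise dependence measure mentioned above, can be sketched as follows (a generic textbook estimator on synthetic data, not the study's pipeline):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram plug-in estimate of I(X; Y) in bits: bin the samples,
    normalize to a joint distribution, and sum p * log2(p / (px * py))."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                        # 0 * log 0 is treated as 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.standard_normal(50_000)
b = a + rng.standard_normal(50_000)     # dependent "channel"
c = rng.standard_normal(50_000)         # independent "channel"
print(mutual_information(a, b) > mutual_information(a, c))   # True
```

Unlike correlation, mutual information captures nonlinear dependence as well, which is why information-theoretic measures recur throughout these connectivity analyses; in practice the baseline normalization mentioned in the abstract matters because plug-in estimates are biased upward.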
Understanding the dynamics underlying these higher-order language and social tasks can advance our understanding of cognitive disorders that can occur due to traumatic injury or disease, and could yield better remedial treatments or therapies in the future.

Item Antenna arrays for wireless CDMA communication systems (1997) Madyastha, Raghavendra K.; Aazhang, Behnaam
The estimation of code delays along with amplitudes and phases of different users constitutes the first stage of the demodulation process in a CDMA communication system. The delay estimation stage is termed the acquisition stage and forms the bottleneck for the detection of users' bitstreams; accurate detection necessitates accurate acquisition. Most existing schemes incorporate a single sensor at the receiver, which leads to an inherent limit on the acquisition-based capacity, the number of users that can be simultaneously acquired. In this thesis we combine the benefits of spatial processing, in the form of an antenna array at the receiver, with code diversity to gain an increase in the capacity of the system. An additional parameter to be estimated now is the direction of arrival (DOA) of each user. We demonstrate the gains in parameter estimation achieved by incorporating spatial diversity. We propose two classes of delay-DOA estimation algorithms: a maximum likelihood algorithm and a subspace-based algorithm (MUSIC). With reasonable assumptions on the system we are able to derive computationally efficient estimation algorithms and demonstrate the gains achieved in exploiting multiple sensors at the receiver. In addition, we also investigate the benefits of spatial diversity in linear multiuser detection. We consider two linear multiuser detectors, the decorrelating detector and the linear MMSE detector (chosen for their near-far properties), and characterize the performance increase in the multisensor case.
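The MUSIC algorithm named above, shown here in its textbook DOA-only form for a uniform linear array with half-wavelength spacing (the thesis estimates delay and DOA jointly; this sketch and its names are generic):

```python
import numpy as np

def music_spectrum(X, n_sources, grid):
    """MUSIC pseudo-spectrum for a uniform linear array.
    X: (sensors, snapshots) complex data; grid: candidate angles (rad).
    Peaks occur where the steering vector is orthogonal to the noise
    subspace of the sample covariance."""
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, vecs = np.linalg.eigh(R)                # eigenvalues ascending
    En = vecs[:, :-n_sources]                  # noise-subspace eigenvectors
    m = np.arange(X.shape[0])
    out = []
    for theta in grid:
        a = np.exp(1j * np.pi * m * np.sin(theta))   # steering vector
        out.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(out)

# One source at 20 degrees, 8 sensors, 500 snapshots, light noise
rng = np.random.default_rng(0)
sensors, snaps, theta0 = 8, 500, np.deg2rad(20)
m = np.arange(sensors)
a0 = np.exp(1j * np.pi * m * np.sin(theta0))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(a0, s) + 0.1 * (rng.standard_normal((sensors, snaps))
                             + 1j * rng.standard_normal((sensors, snaps)))
grid = np.deg2rad(np.linspace(-90, 90, 361))
est = np.rad2deg(grid[np.argmax(music_spectrum(X, 1, grid))])
print(int(round(est)))   # 20
```

Adding sensors enlarges the noise subspace, which sharpens the pseudo-spectrum peaks; this is one concrete way the multisensor gains discussed in the abstract show up.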
We observe that in many cases the gain can be captured directly in terms of the number of sensors in the array.

Item Application of Embedded Dynamic Mode Decomposition on Epileptic Data for Seizure Prediction (2020-05-27) Erfanian Taghvayi, Negar; Aazhang, Behnaam

The spatiotemporal mechanism underlying the formation of seizures in the brain has been a topic of interest for decades. Different techniques have been proposed to extract the dynamics of epileptic recordings involved in seizure formation, and methods have been used to measure the synchrony between two or more epileptic recordings. These techniques are often model-based or suffer from poor time-frequency resolution. In this project, we introduce a data-driven toolbox, Dynamic Mode Decomposition (DMD) with time-delay embedding, to extract the underlying spatiotemporal dynamics of seizure formation. These techniques enable us to focus on similarities among seizures in our attempt to better understand, detect, and predict seizures. The inferred information on the underlying dynamics of an epileptic system is essential for improving stimulation-based treatments of epileptic patients.

Item Automated Detection and Differential Diagnosis of Non-small Cell Lung Carcinoma Cell Types Using Label-free Molecular Vibrational Imaging (2012-09-05) Hammoudi, Ahmad; Varman, Peter J.; Massoud, Yehia; Wong, Stephen T. C.; Clark, John W., Jr.; Aazhang, Behnaam

Lung carcinoma is the most prevalent type of cancer in the world, a relentlessly progressive disease with dismal mortality rates. Recent advances in targeted therapy hold the promise of delivering better, more effective treatments to lung cancer patients that could significantly enhance their survival rates.
Optimizing care delivery through targeted therapies requires the ability to effectively detect and diagnose lung cancer and to identify the lung cancer cell type specific to each patient: small cell carcinoma, adenocarcinoma, or squamous cell carcinoma. Label-free optical imaging techniques such as coherent anti-Stokes Raman scattering (CARS) microscopy have the potential to provide physicians with minimally invasive access to lung tumor sites, and thus allow for better cancer diagnosis and subtyping. To maximize the benefits of such novel imaging techniques for cancer treatment, it is essential to develop new data analysis methods that can rapidly and accurately analyze the data they provide. Recent studies have gone a long way toward these goals but still face significant bottlenecks that prevent the diagnostic potential of CARS images from being fully exploited: the diagnosis process could not be streamlined because cancer cells could not be detected automatically, and detected cells could not be reliably classified into their respective types. More specifically, data analysis methods have thus far been incapable of correctly identifying and differentiating the non-small cell lung carcinoma cell types, a stringent requirement for optimal therapy delivery. In this study we address these two bottlenecks by designing an image processing framework capable of automatically and accurately detecting cancer cells in two- and three-dimensional CARS images.
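At its simplest, automatic cell detection in such images is a segmentation problem. The following is a generic threshold-and-label sketch, not the thesis's actual CARS framework; the function name, threshold, and synthetic image are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_cells(image, threshold=0.5, min_size=20):
    """Label bright connected regions and keep those above a minimum area.
    A generic segmentation sketch, not the thesis's CARS pipeline."""
    mask = image > threshold                 # binarize by intensity
    labels, n = ndimage.label(mask)          # connected-component labeling
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_size) + 1   # label ids large enough
    return labels, keep

# Synthetic frame: two bright blobs on a dark background.
img = np.zeros((64, 64))
img[10:20, 10:20] = 1.0
img[40:52, 40:52] = 1.0
labels, keep = detect_cells(img)
assert len(keep) == 2
```

A real pipeline would replace the fixed threshold with adaptive preprocessing and extend the labeling to 3-D stacks, but the detect-then-classify structure is the same.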
Moreover, we built upon this capability with a new approach to analyzing the segmented data, which provided significant information about the cancerous tissue and ultimately allowed for the automatic differential classification of non-small cell lung carcinoma cell types with high accuracy.

Item Beyond Interference Avoidance: Distributed Sub-network Scheduling in Wireless Networks with Local Views (2013-09-16) Santacruz, Pedro; Sabharwal, Ashutosh; Aazhang, Behnaam; Knightly, Edward W.; Hicks, Illya V.

In most wireless networks, nodes have only limited local information about the state of the network, including connectivity and channel state information. With limited local information, each node's knowledge is mismatched; therefore, nodes must make distributed decisions. In this thesis, we pose the following question: if every node has network state information only about a small neighborhood, how and when should nodes choose to transmit? While link scheduling answers this question for point-to-point physical layers designed for an interference-avoidance paradigm, we look for answers in cases where interference can be embraced by advanced code design, as suggested by results in network information theory. To make progress on this challenging problem, we propose two constructive distributed algorithms, one conservative and one aggressive, which achieve rates higher than link scheduling based on interference avoidance, especially when each node knows more than one hop of network state information. Both algorithms schedule sub-networks such that each sub-network can employ advanced interference-embracing coding schemes to achieve higher rates. Our innovation is in the identification, selection, and scheduling of sub-networks, especially when sub-networks are larger than a single link.
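Link scheduling based on interference avoidance, the baseline these sub-network schedulers improve upon, amounts to coloring a conflict graph of links so that conflicting links get different time slots. A minimal greedy sketch, with a toy conflict graph and names that are purely illustrative:

```python
def greedy_link_coloring(links, conflicts):
    """Assign each link the smallest time slot (color) not used by any
    conflicting link: the classic interference-avoidance baseline."""
    slot = {}
    for link in links:                   # fixed visiting order; a heuristic
        used = {slot[v] for v in conflicts[link] if v in slot}
        c = 0
        while c in used:                 # smallest free slot
            c += 1
        slot[link] = c
    return slot

# Toy conflict graph: three links in a path; neighbors conflict.
conflicts = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
schedule = greedy_link_coloring(["A", "B", "C"], conflicts)
assert schedule == {"A": 0, "B": 1, "C": 0}
```

The thesis's algorithms generalize the colored units from single links to whole sub-networks, so that links inside one unit may interfere and handle it via coding rather than avoidance.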
Using normalized sum-rate as the metric of network performance, we prove that the proposed conservative sub-network scheduling algorithm is guaranteed to perform at least as well as pure coloring-based link scheduling. In addition, the proposed aggressive sub-network scheduling algorithm is shown, through simulations, to achieve better normalized sum-rate than the conservative algorithm for several network classes. Our results highlight the advantages of extending the design space of scheduling strategies to include those that leverage local network information.

Item Capacity of low power multiuser systems with antenna arrays (2005) Muharemovic, Tarik; Aazhang, Behnaam; Sabharwal, Ashutosh

In this thesis, we study wireless multiuser communication systems in the regime of low spectral efficiencies, where the users and the multiple access point are equipped with antenna arrays. Our first contribution is a generic mathematical framework that captures the tradeoffs between the fundamental parameters of a low power multiuser system: the spectral efficiency and the energy per information bit of each user. Using this framework, we next consider the variable-data-rate multiple access problem in low power systems, where we remove the usual assumption of tight user coordination and allow users to select their own data rates and transmit powers without coordinating or negotiating with the access point. Here, every user has a set of low power codebooks, which we name the policy, accommodating a range of small spectral efficiencies, while the particular data rates of other users are treated as an unknown compound parameter at each mobile. For antenna-array transmission and reception, we demonstrate an elegant interpretation of users' policies, where each policy is represented by partitioning the spatial dimensions into blocks, with each block dedicated to a different user.
Finally, we address the paradigm of statistically correlated antenna arrays, where we derive the effective number of uncorrelated receive spatial dimensions, which we partition to represent users' policies. As more correlated antennas are packed into a limited area, we show that the effective receive dimensionality converges to a finite limit, which we evaluate for some simple geometries.

Item Channel estimation for code division multiple access communication systems (1994) Bensley, Stephen Edward; Aazhang, Behnaam

We consider the estimation of channel parameters for code division multiple access (CDMA) communication systems operating over channels with either single or multiple propagation paths. We present two approaches for decomposing this multiuser channel estimation problem into a series of single-user problems. In the first method, the interfering users are treated as colored, non-Gaussian noise, and the maximum likelihood estimate is formed from the sample mean and sample covariance matrix of the received signal. In the second method, we exploit the eigenstructure of the sample correlation matrix to partition the observation space into a signal subspace and a noise subspace. The channel estimate is formed by projecting a given user's spreading waveform onto the estimated noise subspace and then either maximizing the likelihood or minimizing the Euclidean norm of this projection. Both approaches yield algorithms that are near-far resistant and capable of tracking slowly varying channels.
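The noise-subspace projection idea in the second method can be sketched in a few lines: eigendecompose the correlation matrix, take the eigenvectors of the small eigenvalues as the noise subspace, and pick the code delay whose shifted spreading waveform projects least onto it. This is a minimal single-user illustration with an idealized correlation matrix and invented names, not the thesis's algorithm.

```python
import numpy as np

def estimate_delay(R, code, max_delay):
    """Subspace delay estimate: the delayed spreading waveform that is most
    orthogonal to the noise subspace of R gives the delay estimate."""
    w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
    # Heuristic split: eigenvalues well above the smallest are "signal".
    n_sig = int(np.sum(w > 2 * w[0]))
    noise = V[:, : R.shape[0] - n_sig]       # noise-subspace basis
    costs = []
    for d in range(max_delay + 1):
        s = np.roll(code, d)                 # cyclically delayed waveform
        s = s / np.linalg.norm(s)
        costs.append(np.linalg.norm(noise.T @ s))
    return int(np.argmin(costs))             # smallest projection wins

# Toy example: one user, known delay of 3 chips, ideal correlation matrix
# (signal rank-one term plus a white-noise floor).
rng = np.random.default_rng(1)
N, true_delay = 16, 3
code = rng.choice([-1.0, 1.0], size=N)      # illustrative spreading code
s0 = np.roll(code, true_delay)
R = np.outer(s0, s0) + 0.01 * np.eye(N)
assert estimate_delay(R, code, max_delay=8) == true_delay
```

In practice R would be the sample correlation matrix of received vectors, and the eigenvalue split would track the number of active users and paths rather than a fixed threshold.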