Browsing by Author "Johnson, Don H."
Now showing 1 - 20 of 54
Item: A communications and interaction model for intelligent cooperating robots (1993)
Ciscon, Lawrence Albert; Johnson, Don H.
In complex robotic operating environments in which robots must cooperate in a flexible and event-driven manner, a cooperative distributed environment for intelligent control is required. We develop a realistic technique for going beyond the model of a central controller for a multi-robot environment, replacing it with a schema of interacting, reconfigurable, cooperating robots. This schema provides the following main features: an open model of cooperation capable of supporting a wide variety of representations and algorithms for planning and executing tasks; a dynamic environment in which physical and reasoning capabilities can be added, removed, and reconfigured during task execution to best utilize limited resources; the capability of detecting and correcting errors and failures; a rich interaction model capable of handling the complexity and variety of communication and cooperation necessary between intelligent agents; and a realistic method of achieving global goals from localized actions. We formulate this model of interacting robots as a social system, defined by specifying the members of the society, the interactions of these members, and the fundamental guidelines the society uses to judge the actions of its members. We implement a prototype system that incorporates these concepts and demonstrate it in example situations involving multiple cooperating robots. Using the results of these examples, we also develop a qualitative analysis of this model against two other common models of intelligent control for multi-robot systems.

Item: A geometry for detection theory (1993)
Dabak, Anand Ganesh; Johnson, Don H.
The optimal detector for a binary detection problem under a variety of criteria is the likelihood ratio test.
Despite this simple characterization of the detector, analytic performance analysis is difficult in most cases because of the complexity of the integrals involved in its computation. When the two hypotheses are signals in additive Gaussian noise, performance analysis leads to a geometric notion, whereby the signals are considered elements of a Euclidean space, with the distance between them serving as a measure of performance. We extend this notion to non-Gaussian problems, assuming only that the nominal densities are mutually absolutely continuous. We adopt a completely non-parametric approach, considering the two hypotheses as points in the space of all probability measures over the observation space. Employing the tools of differential geometry, we induce a manifold structure on this space of probability measures and enforce a detection-theoretic geometry on it. The only Riemannian metric on this manifold is the Fisher information, whenever it exists. Because the detection-theoretic covariant derivative is incompatible with this metric, the manifold for the space of probability measures is non-Riemannian. We show that geodesics on this manifold are the exponential mixture densities. Although a distance metric cannot be defined on our non-Riemannian manifold, we show that the Kullback information can play the role of "squared" distance. Because the Kullback information is asymptotically related to the performance of the optimal detector, geometry and performance are linked. We apply this geometry to solve some classic problems in detection theory. Using the Kullback information to define contamination neighborhoods around the nominals, the likelihood ratio of the nominals is shown to yield the robust detector. We obtain a density "halfway" between the nominals to employ as the importance-sampling biasing density. Using this density, we demonstrate that the importance-sampling gain is inversely related to the probability of error.
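As a concrete illustration of the "halfway" density just described, the geodesic between two nominal distributions is an exponential mixture, proportional to p^(1-t) q^t, with t = 1/2 giving the halfway point. The sketch below (pure Python; the four-point discrete distributions are hypothetical, chosen only for illustration, not taken from the thesis) shows the idea:

```python
def exponential_mixture(p, q, t):
    """Normalized exponential mixture p^(1-t) * q^t between two discrete pmfs.

    t = 0.5 gives the "halfway" density used as the importance-sampling
    biasing density in the abstract above.
    """
    unnormalized = [pi ** (1 - t) * qi ** t for pi, qi in zip(p, q)]
    z = sum(unnormalized)
    return [u / z for u in unnormalized]

# Two hypothetical nominal pmfs on a 4-point alphabet.
p = [0.7, 0.1, 0.1, 0.1]
q = [0.1, 0.1, 0.1, 0.7]
half = exponential_mixture(p, q, 0.5)
# Since q is p reversed, the halfway density is symmetric:
assert abs(half[0] - half[3]) < 1e-12
```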
In an M-ary hypothesis-testing problem in communications, the M signals constitute a "signal constellation" in the space of all probability measures, and the underlying geometry can be employed in signal set design.

Item: A mathematical model of the vagally driven primary pacemaker cell membrane of the SA node of the heart (1982)
Bristow, David Graham; Clark, John W.; Glantz, Raymon M.; Johnson, Don H.
The study of the electrophysiological activity of the SA node of the heart is currently a subject of considerable interest in the research community. Information regarding the electrical behavior of the SA node is by no means complete, yet sufficiently detailed information is available in the literature to allow the formulation of a reasonably quantitative model of the primary pacemaker cell membrane and its vagal innervation. In this study, the well-known McAllister-Noble-Tsien model of the cardiac Purkinje fiber is modified to account for the electrical activity of the SA node. To model the effects of vagal activity on the sinus rhythm, a muscarinic channel with dynamics as suggested by Purves and by Noma et al. has been added to the basic membrane model. The model mimics the published data quite well and is capable of characterizing the free-running behavior of the SA node, as well as its response to electrotonic and vagal stimulation.

Item: A mathematical reconstruction of the frog atrial action potential based on voltage clamp data (1983)
Robinson, Keith; Clark, John W.; Glantz, Raymon M.; Johnson, Don H.
Recent advances in cardiac electrophysiology have allowed for the preparation of viable single frog atrial cells. These cells have properties which make them ideal for voltage clamp studies. The ionic currents observed under voltage clamp conditions from single frog atrial cells are analyzed with the use of automatic methods programmed on a computer.
From the analyzed data, mathematical models are formed which describe the time and voltage dependence of the various ionic currents observed under voltage clamp conditions. These models are then combined, and a membrane action potential is reconstructed based on the analyzed ionic current data. This membrane action potential is then used as input to a nonlinear least-squares fitting routine in an attempt to accurately fit the model to experimental action potential data.

Item: A nonlinear adaptive equalizer (1980)
Wendt, Richard Ernest; Figueiredo, Rui J. P. de; Clark, John W.; Johnson, Don H.
The problem of removing distortion caused by nonlinearities is investigated in light of a known technique of equalization. This inline, open-loop procedure, involving a contraction mapping, is well suited to compensating systems whose inputs are inaccessible. The present contribution is to add an adaptive capability to the scheme, extending its applicability to systems whose nonlinear parameters are unknown or drifting. Results of computer simulations are presented.

Item: A ray model for head waves in a fluid-filled borehole (1982)
Scheibner, David James; Parks, Thomas W.; Johnson, Don H.; Figueiredo, Rui J. P. de
A model, suggested by the ray expansion of Roever et al., is constructed to rapidly generate the compressional and shear refracted arrivals, known as head waves, received from a point source on the axis of an ideal fluid-filled borehole. An impulse response is derived, and its frequency characteristics are investigated. The waves are compared to those obtained by the real-axis integration method of Tsang and Rader, which yields a complete waveform, including the modes as well as the refracted arrivals. The ray model gives accurate results for the compressional head wave. The shear region of the complete waveform contains strong modal interference, making it difficult to evaluate the quality of the ray model's shear wave.
A useful filter results from insight provided by the basic structure of the model. This filter can be used to estimate the borehole diameter or the formation's compressional velocity. It can also remove the second and later compressional arrivals, thus providing an accurate estimate of the source pulse and a relatively uncorrupted view of the initial arrival in the shear region.

Item: A wavelet-based approach to three-dimensional confocal microscopy image reconstruction (2004)
Graf, Ben David; Johnson, Don H.
An algorithm based on the Haar wavelet basis, implementing an expectation-maximization, maximum penalized likelihood estimator in 3-D, is shown to provide dramatic improvement over traditional stopped-EM algorithms in terms of mean-squared error on simulated data for confocal microscopy systems. Confocal microscopy is one of many modern medical imaging systems changing the landscape of medical research and practice, and the blurred and grainy images produced are much more useful when suitable, accurate reconstruction algorithms are applied. The industry standard, the stopped expectation-maximization algorithm, proves unreliable and inadequate when compared to penalized likelihood estimators based on spatially adaptive bases such as wavelets. In addition, processing confocal microscopy images in 3-D, rather than slice-wise in 2-D, takes into account the blurring that occurs between slices as a result of the microscope's point spread function.

Item: All optical nanoscale sensor (2011-10-25)
Halas, Nancy J.; Johnson, Don H.; Bishnoi, Sandra Whaley; Levin, Carly S.; Rozell, Christopher John; Johnson, Bruce R.; Rice University; United States Patent and Trademark Office
A composition comprising a nanoparticle and at least one adsorbate associated with the nanoparticle, wherein the adsorbate displays at least one chemically responsive optical property.
A method comprising associating an adsorbate with a nanoparticle, wherein the nanoparticle comprises a shell surrounding a core material with a lower conductivity than the shell material and the adsorbate displays at least one chemically responsive optical property, and engineering the nanoparticle to enhance the optical property of the adsorbate. A method comprising determining an optical response of an adsorbate associated with a nanoparticle as a function of a chemical parameter, and parameterizing the optical response to produce a one-dimensional representation of at least a portion of a spectral window of the optical response in a high-dimensional vector space.

Item: Analog system for computing sparse codes (2010-08-24)
Rozell, Christopher John; Johnson, Don H.; Baraniuk, Richard G.; Olshausen, Bruno A.; Ortman, Robert Lowell; Rice University; United States Patent and Trademark Office
A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition, and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units, using (usually one-way) lateral inhibition, to calculate coefficients representing an input in an overcomplete dictionary.

Item: Analysis of long-range dependence in auditory-nerve fiber recordings (1994)
Kelly, Owen E.; Johnson, Don H.
The pattern of occurrence of isolated action potentials recorded from the cat's auditory nerve fiber is modeled over short time scales as a renewal process. For counting times greater than one second, the count variance-to-mean ratio grows as a power of the counting time.
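The count variance-to-mean ratio (Fano factor) measurement just described can be sketched as follows. The spike train here is a simulated homogeneous Poisson process (the rate and duration are hypothetical, chosen only for illustration), for which the ratio stays near one at all counting times; fractal, 1/f-driven firing of the kind the abstract reports would instead grow with the counting time:

```python
import random

def fano_factor(event_times, T, duration):
    """Count variance-to-mean ratio for counting windows of length T."""
    n_windows = int(duration // T)
    counts = [0] * n_windows
    for t in event_times:
        w = int(t // T)
        if w < n_windows:
            counts[w] += 1
    mean = sum(counts) / n_windows
    var = sum((c - mean) ** 2 for c in counts) / n_windows
    return var / mean

# Hypothetical Poisson spike train: rate 50 spikes/s for 600 s.
random.seed(0)
t, spikes = 0.0, []
while t < 600.0:
    t += random.expovariate(50.0)  # exponential inter-spike intervals
    spikes.append(t)

# Fano factor at several counting times; near 1 for a Poisson process.
fanos = {T: fano_factor(spikes, T, 600.0) for T in (0.1, 1.0, 10.0)}
```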
Such behavior is consistent with a renewal process driven by a fractal random waveform process (1/f-type spectrum). Based on 108 recordings, each 600 seconds long, we conclude that the presence of the fractal noise is independent of characteristic frequency and stimulus level. This noise appears to originate in the cochlear inner hair cells. We measured the low-frequency power of the fractal noise, finding its coefficient of variation to decrease as firing rate increases. Such behavior is consistent with multiplicative random variations in the permeability of the hair cell membrane to neurotransmitter, and also with increased level-discrimination acuity at high stimulus levels.

Item: Analyzing dynamics and stimulus feature dependence in the information processing of crayfish sustaining fibers (2002)
Rozell, Christopher John; Johnson, Don H.
The sustaining fiber (SF) stage of the crayfish visual system converts analog stimulus representations to spike train signals. A recent theory quantifies a system's information processing capabilities and relates to statistical signal processing. To analyze SF responses to light stimuli, we extend a wavelet-based algorithm for separating analog input signals and spike output waveforms in composite intracellular recordings. We also present a time-varying RC circuit model to capture nonstationary membrane noise spectral characteristics. In our SF analysis, information transfer ratios are generally on the order of 10^-4. The SF information processing dynamics show transient peaks followed by decay to steady-state values. A simple theoretical spike generator is analyzed analytically and shows general dynamic and steady-state properties similar to those of SFs.
The information transfer ratios increase with spike rate, and the dynamic properties are due to the spike generator's direct dependence on input changes.

Item: Analyzing statistical dependencies in neural populations (2005)
Goodman, Ilan N.; Johnson, Don H.
Neurobiologists recently developed tools to record from large populations of neurons, and early results suggest that neurons interact to encode information jointly. However, traditional statistical analysis techniques are inadequate to elucidate these interactions. This thesis develops two multivariate statistical dependence measures that, unlike traditional measures, encompass all high-order and non-linear interactions. These measures decompose the contributions of distinct subpopulations to the total dependence. Applying the dependence analysis to recordings from the crayfish visual system, I show that neural populations exhibit complex dependencies that vary with the stimulus. Using Fisher information to analyze the effectiveness of population codes, I show that optimal rate coding requires negatively dependent responses. Since positive dependence through overlapping stimulus attributes is an inherent characteristic of many neural systems, such neurons can only achieve the optimal code by cooperating.

Item: Application of distributed arithmetic to digital signal processing (1979)
Chu, Shuni; Burrus, C. Sidney; Glantz, Raymon M.; Johnson, Don H.
Distributed arithmetic trades memory for logic circuits and speed; it is suitable for fixed computations such as the DFT and filtering with fixed coefficients. A prime-length-N DFT computation can be converted to two length-(N-1)/2 real convolutions, and distributed arithmetic can be applied to these convolution computations. Since all the computations of a prime factor FFT reside in a few short-length DFT computations, we can do all the prime factor FFT computations by distributed arithmetic.
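A minimal sketch of the distributed-arithmetic idea referred to above, assuming unsigned fixed-point inputs: all 2^K partial sums of the fixed coefficients are precomputed once, and the inner product is then formed purely by bit-serial table lookups and shift-accumulates, with no multiplies. The coefficients and inputs below are hypothetical:

```python
def build_da_table(coeffs):
    """Precompute all 2^K partial sums of the K fixed coefficients.

    Table index j selects which coefficients are summed: bit k of j set
    means coeffs[k] is included. This is the memory-for-logic trade.
    """
    K = len(coeffs)
    return [sum(c for k, c in enumerate(coeffs) if (j >> k) & 1)
            for j in range(1 << K)]

def da_dot(table, xs, bits):
    """Inner product of the fixed coefficients with unsigned `bits`-bit
    inputs, using only table lookups and shift-accumulates."""
    acc = 0
    for b in range(bits):
        # Gather bit b of every input sample into a table address.
        index = 0
        for k, x in enumerate(xs):
            index |= ((x >> b) & 1) << k
        acc += table[index] << b
    return acc

h = [3, -1, 4, 2]   # hypothetical fixed filter taps
x = [5, 7, 0, 9]    # 4-bit unsigned input samples
table = build_da_table(h)
assert da_dot(table, x, 4) == sum(a * b for a, b in zip(h, x))
```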
When the input to a DFT is real, we can save half of the computations of a prime factor FFT algorithm by computing only half of the output and obtaining the other half from the symmetry relation. Using an input index table and an output index table in a prime factor FFT algorithm, we avoid any index calculations for a transform of any dimension. The transpose form of the filter structure using distributed arithmetic has a different arrangement of memory and accumulators from that of the direct structure. In a software implementation, the transpose structure requires less processing of the input and output data to form the addresses into the table in memory, but requires more accumulations than the direct structure. Altogether, an IIR filter with the transpose structure will run slightly faster than one with the direct structure when implemented on a microprocessor. Distributed arithmetic reduces DFT and filter computations to simple, repeated addressing and accumulating operations which can be done by simple logic. A general, external logic can be designed to do both the DFT and filter calculations with a microprocessor.

Item: Binaural localization using interaural cues (1990)
Dabak, Anand Ganesh; Johnson, Don H.
Major nuclei of the superior olivary complex, the lateral superior olive (LSO) and the medial superior olive (MSO), are presumed to play a major role in the localization of sound signals using interaural level and interaural phase differences between the signals arriving at the two ears. The present work develops a novel approach, function-based modeling, for assessing the role of these nuclei in binaural localization. The interaural level difference is shown to be the sufficient statistic at high frequencies when only level cues are available. This level difference is processed optimally when the inputs are excitatory from one ear and inhibitory from the other ear.
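A toy sketch of the interaural-level-difference statistic just described, assuming the excitatory-inhibitory structure amounts to subtracting the log-energies of the two ears' signals (the signals and decision rule below are hypothetical illustrations, not the thesis model):

```python
import math

def interaural_level_difference(left, right):
    """Level difference (in dB) between the two ears' signals.

    The EI (excitatory-inhibitory) structure described above: the
    log-energy of one ear minus the log-energy of the other.
    """
    e_left = sum(x * x for x in left)
    e_right = sum(x * x for x in right)
    return 10.0 * math.log10(e_left / e_right)

def lateralize(left, right, threshold_db=0.0):
    """Toy decision rule: positive ILD means the source is on the left."""
    ild = interaural_level_difference(left, right)
    return "left" if ild > threshold_db else "right"

# Hypothetical high-frequency tone, attenuated at the far (right) ear.
left = [math.sin(0.3 * n) for n in range(200)]
right = [0.5 * x for x in left]
assert lateralize(left, right) == "left"
```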
Response characteristics of LSO single units are remarkably similar to the optimal processor's, strongly supporting the notion that LSO units are intimately involved in high-frequency binaural hearing. For low frequencies, the optimal processor makes use of the interaural phase difference cue by correlating the inputs to the two ears, thus requiring that both inputs be excitatory. Hence, the high- and low-frequency localization systems are shown to differ greatly, suggesting separate pathways for each.

Item: Building a map for robot path planning by fusing video images and laser rangefinder data (1993)
Reynolds, Steven Lamar; Johnson, Don H.
This thesis describes algorithms that fuse the data from a single video camera and a laser rangefinder. By merging information from these sensors, the algorithms build a digital map of the local environment for a robot to use in navigation. The video image is segmented, and these segments are used to construct a vector scan for the laser rangefinder. The vector scan is a sequence of straight line segments created by a greedy algorithm; the range measurements along this vector scan are used to build the map. To build or augment the map, the range measurements are projected to the floor plane and then accumulated in a histogram. This histogram is then summed in an uncertainty region around each point, and points with probability mass greater than one half are marked as obstacle points. The algorithms are tested on an example scenario incorporating actual video images and simulated laser rangefinder data. The map produced by the algorithms shows a good representation of the obstacles without any false alarms.

Item: Change detection using types for non-stationary processes (1999)
Walker, Raymond Leroy, Jr; Johnson, Don H.
Space shuttle operation requires the monitoring of a large number of stationary signals in the search for "anomalies."
This problem amounts to determining whether a change has occurred in a signal having a partially known structure. We employ empirical methods based on type theory to tackle this problem. We show that through pre-processing of the signal and through modifications of the type method itself, certain kinds of changes can be detected. The probability of detection and the probability of false alarm are determined by the amount of training data and a threshold, both of which are under user control.

Item: Digital filters with thinned numerators (1980)
Boudreaux-Bartels, Gloria Faye; Parks, Thomas W.; Johnson, Don H.; Burrus, C. S.
An algorithm is described for designing digital filters that require few multiplies to produce a good frequency response. The process of reducing the number of multiplies needed to implement a digital filter is called thinning. The thinning algorithm uses dynamic programming techniques to optimally approximate a desired finite impulse response (FIR) filter with another FIR filter that requires significantly fewer non-zero coefficients to produce similar frequency response characteristics. The effects of coefficient quantization and finite-precision computer arithmetic on the thinned filter structure are also described. Examples of thinned narrowband, broadband, lowpass, and bandpass filters are given. Several of these thinned filters require fewer than one-third the number of multiplies required for the corresponding unthinned filter while still retaining desirable frequency response characteristics.

Item: Distributed redundant representations in man-made and biological sensing systems (2007)
Rozell, Christopher John; Johnson, Don H.
The ability of a man-made or biological system to understand its environment is limited by the methods used to process sensory information. In particular, the data representation is often a critical component of such systems.
Neural systems represent sensory information using distributed populations of neurons that are highly redundant. Understanding the role of redundancy in distributed systems is important both for understanding neural systems and for efficiently solving many modern signal processing problems. This thesis contributes to the understanding of redundant representations in distributed processing systems in three specific areas. First, we explore the robustness of redundant representations by generalizing existing noise-reduction results to Poisson process modulation. Additionally, we characterize how the noise-reduction ability of redundant representations is weakened when we enforce a distributed processing constraint on the system. Second, we explore the task of managing redundancy in distributed settings through the specific example of wireless sensor and actuator networks (WSANs). Using a crayfish reflex behavior as a guide, we develop an analytic WSAN model that implements control laws in a completely distributed manner. We also develop an algorithm to optimize the system's resource allocation by adjusting the number of bits used to quantize messages on each sensor-actuator communication link. This optimal power scheduling yields several orders of magnitude in power savings over uniform allocation strategies that use a fixed number of bits on each communication link. Finally, we explore the flexibility of redundant representations for sparse approximation. Neuroscience and signal processing both need a sparse approximation algorithm (i.e., one representing a signal with few non-zero coefficients) that is physically implementable in a parallel system and produces smooth coefficient time series for time-varying signals (e.g., video). We present a class of locally competitive algorithms (LCAs) that minimize a weighted combination of mean-squared error and a coefficient cost function.
LCAs produce coefficients with sparsity levels comparable to those of centralized algorithms while being more realistic for physical implementation. The resulting LCA coefficients for video sequences are more regular (i.e., smoother and more predictable) than the coefficients produced by existing algorithms.

Item: Dual-frequency modulation and range disambiguation in laser rangefinding systems (1994)
Melton, Darren; Johnson, Don H.
Laser rangefinders, which measure distances by detecting the phase shift of intensity-modulated laser light reflected by objects, are very useful for obtaining 3-D models of the surrounding environment. A drawback of single-frequency laser rangefinders is that the phase measurement is always ambiguous. The consequence of this phase ambiguity is a range ambiguity interval equal to one-half the wavelength of the modulation signal. Objects outside this interval are not ranged correctly. This thesis describes the theory and implementation of a dual-frequency laser rangefinding system in which the sum of two sinusoidal signals modulates the laser intensity. The resulting two ambiguous distance measurements are used to calculate the true distance to the target. This method allows a greater ambiguity interval without sacrificing measurement accuracy. The system built at Rice University has performed as expected, verified the algorithms and theories presented, and demonstrated the practicality of dual-frequency laser rangefinders.

Item: Empirical detection for spread spectrum and code division multiple access (CDMA) communications (1998)
Lee, Yuan Kang; Johnson, Don H.
In this thesis, the method of "classification with empirically observed statistics", also known as empirical classification, empirical detection, universal classification, and type-based detection, is configured and applied to the despreading/detection receiver operation of a spread-spectrum (SS), code division multiple access (CDMA) communications system.
In static and Rayleigh-fading environments, the empirical detector adapts to unknown noise environments better than the linear matched-filter despreader/detector, and does so with reasonable amounts of training. Compared to the optimum detector, when it is known, the empirical detector always approaches optimal performance, again with reasonable amounts of training. In an interference-limited channel, we show that the single-user likelihood-ratio detector, which is the optimum single-user detector, can greatly outperform the matched filter in certain imperfect power-control situations. The near-optimality of the empirical detector implies that it, too, will outperform the matched filter in these situations. Although the empirical detector has the added cost of requiring chip-based phase synchronization, its consistent and superior performance in all environments strongly suggests its application in lieu of the linear detector for SS/CDMA systems employing long, pseudo-random spreading. In order to apply empirical classification to digital communications, we derive the empirical forced-decision detector and show that it is asymptotically optimal over a large class of empirical classifiers.
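The dual-frequency range disambiguation described in the Melton entry above can be sketched as follows. Each modulation frequency alone yields a range modulo half its modulation wavelength; combining the two ambiguous measurements recovers the true distance over a much larger interval. The frequency pair, the noise-free phase model, and the brute-force consistency search below are all illustrative assumptions, not the thesis design:

```python
C = 3.0e8  # speed of light, m/s

def ambiguous_range(true_range, f_mod):
    """What a single-frequency rangefinder reports: the range modulo
    half the modulation wavelength (its ambiguity interval)."""
    interval = C / (2.0 * f_mod)
    return true_range % interval

def disambiguate(r1, r2, f1, f2, max_range):
    """Search the candidates consistent with measurement r1 for the one
    closest (circularly) to measurement r2. Noise-free sketch."""
    i1 = C / (2.0 * f1)
    i2 = C / (2.0 * f2)
    best, best_err = None, float("inf")
    n = 0
    while n * i1 + r1 <= max_range:
        candidate = n * i1 + r1
        # Circular distance between candidate and r2, modulo i2.
        err = abs((candidate - r2 + i2 / 2) % i2 - i2 / 2)
        if err < best_err:
            best, best_err = candidate, err
        n += 1
    return best

# Hypothetical 10 MHz / 9 MHz pair: alone they wrap every 15 m and
# ~16.7 m, but together they are unambiguous out to ~150 m.
true_d = 123.4
r1 = ambiguous_range(true_d, 10e6)
r2 = ambiguous_range(true_d, 9e6)
assert abs(disambiguate(r1, r2, 10e6, 9e6, 150.0) - true_d) < 1e-6
```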