Browsing by Author "Boominathan, Vivek"
Now showing 1 - 9 of 9
Item: Bioluminescent flashes drive nighttime schooling behavior and synchronized swimming dynamics in flashlight fish (Public Library of Science, 2019)
Authors: Gruber, David F.; Phillips, Brennan T.; O’Brien, Rory; Boominathan, Vivek; Veeraraghavan, Ashok; Vasan, Ganesh; O’Brien, Peter; Pieribone, Vincent A.; Sparks, John S.
Abstract: Schooling fishes, like flocking birds and swarming insects, display remarkable behavioral coordination. While over 25% of fish species exhibit schooling behavior, nighttime schooling has rarely been observed or reported, because vision is the primary modality for schooling; indeed, most fish schools disperse at critically low light levels. Here we report on a large aggregation of the bioluminescent flashlight fish Anomalops katoptron that exhibited nighttime schooling behavior during multiple moon phases, including the new moon. Data were recorded with a suite of low-light imaging devices, including a high-speed, high-resolution scientific complementary metal-oxide-semiconductor (sCMOS) camera. Image analysis revealed nighttime schooling using synchronized bioluminescent flashing displays and demonstrated that school motion synchrony is correlated with relative swim speed. A computer model of flashlight fish schooling behavior shows that only a small percentage of individuals need to exhibit bioluminescence for school cohesion to be maintained. Flashlight fish schooling is unique among fishes in that bioluminescence enables schooling in conditions of no ambient light; in addition, some members can still partake in the school while not actively exhibiting their bioluminescence. Image analysis of our field data and the model demonstrate that if a small percentage of fish become motivated to change direction, the rest of the school follows. The use of bioluminescence by flashlight fish to enable schooling in shallow water adds an additional ecological application of bioluminescence and suggests that schooling behavior in mesopelagic bioluminescent fishes may also be mediated by luminescent displays.
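The record above only summarizes the schooling model. As a purely illustrative toy, not the authors' model, the following boids-style sketch (every name, parameter, and value here is a hypothetical placeholder) captures the core idea that cohesion can persist when only a small fraction of agents flash, because dark agents align with whatever flashing neighbors they can see:

```python
import numpy as np

def simulate_school(n=100, frac_flashing=0.1, steps=500, box=50.0,
                    r_see=8.0, speed=0.5, seed=0):
    """Toy boids-style school in darkness: an agent can only align with
    neighbors that are currently flashing (bioluminescent)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, box, size=(n, 2))
    ang = rng.uniform(0, 2 * np.pi, size=n)
    flashing = rng.random(n) < frac_flashing        # fixed luminous subset
    for _ in range(steps):
        vel = np.stack([np.cos(ang), np.sin(ang)], axis=1)
        dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
        visible = (dist < r_see) & flashing[None, :]    # see flashers only
        np.fill_diagonal(visible, False)
        for i in range(n):
            if visible[i].any():
                mean_v = vel[visible[i]].mean(axis=0)   # neighbors' heading
                target = np.arctan2(mean_v[1], mean_v[0])
                ang[i] += 0.2 * np.angle(np.exp(1j * (target - ang[i])))
        ang += rng.normal(0.0, 0.05, n)                 # heading noise
        step = speed * np.stack([np.cos(ang), np.sin(ang)], axis=1)
        pos = (pos + step) % box                        # periodic arena
    return np.abs(np.exp(1j * ang).mean())  # polarization: 1 = aligned school

# even with only 10% of agents flashing, alignment can emerge in this toy
print(simulate_school(frac_flashing=0.10))
```

In toys like this, polarization typically stays high well below 100% flashers, which is the qualitative behavior the abstract reports; the paper's actual model and analysis are more detailed.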
Item: Designing miniature computational cameras for photography, microscopy, and artificial intelligence (2019-07-25)
Authors: Boominathan, Vivek; Veeraraghavan, Ashok
Abstract: The fields of robotics, Internet of Things, health monitoring, neuroscience, and many others have consistently trended toward miniaturization of sensing devices over the past few decades. This miniaturization has enabled new applications in areas such as connected devices, wearables, implantable medical devices, in vivo microscopy, and micro-robotics. By integrating visual sensing capabilities into such devices, we can enable improved spatial, temporal, and contextual information collection for artificial intelligence and data processing. However, cameras are reaching their limits in size reduction due to the restrictions of traditional lensed optics. By combining the design of camera optics and computational algorithms, we can potentially achieve miniaturization beyond traditional optics. In this dissertation, we explore designing unconventional optics coupled with computational algorithms to achieve miniaturization. Recent works have shown the use of flat diffractive optics, placed at a focus distance from the imaging sensor, as a substitute for lensing, with computational algorithms used to correct and sharpen the images. We take this a step further and place a thin diffractive mask at a very close distance (hundreds of microns to a millimeter) from the imaging sensor, thereby achieving an even smaller form factor. Such a flat camera geometry calls for new ways of modeling the system, methods to optimize the mask design, and computational algorithms to recover high-resolution images. Moreover, retaining the thin geometry, we develop a framework for designing optical masks that off-load some of the computational processing to inherently zero-power optical processing. With the developed methods, we demonstrate (1) ultraminiature microscopy, (2) thickness-constrained high-resolution imaging, (3) optical Gabor feature extraction, and (4) an example of a hybrid optical-electronic computer vision system.

Item: EDoF-ToF: extended depth of field time-of-flight imaging (Optical Society of America, 2021)
Authors: Tan, Jasper; Boominathan, Vivek; Baraniuk, Richard; Veeraraghavan, Ashok
Abstract: Conventional continuous-wave amplitude-modulated time-of-flight (CWAM ToF) cameras suffer from a fundamental trade-off between light throughput and depth of field (DoF): a larger lens aperture collects more light but yields a significantly shallower DoF. However, both high light throughput, which increases signal-to-noise ratio, and a wide DoF, which enlarges the system’s applicable depth range, are valuable for CWAM ToF applications. In this work, we propose EDoF-ToF, an algorithmic method to extend the DoF of large-aperture CWAM ToF cameras by using a neural network to deblur objects outside of the lens’s narrow focal region and thus produce an all-in-focus measurement. A key component of our work is the proposed large-aperture ToF training-data simulator, which models the depth-dependent blurs and partial occlusions caused by such apertures. Contrary to conventional image deblurring, where the blur model is typically linear, ToF depth maps are nonlinear functions of scene intensities, resulting in a nonlinear blur model that we also derive for our simulator. Unlike extended DoF for conventional photography, where depth information needs to be encoded (or made depth-invariant) using additional hardware (phase masks, focal sweeping, etc.), ToF sensor measurements naturally encode depth information, allowing a completely software-based solution to extended DoF. We experimentally demonstrate EDoF-ToF increasing the DoF of a conventional ToF system by 3.6×, effectively achieving the DoF of a smaller lens aperture that allows 22.1× less light. Ultimately, EDoF-ToF enables CWAM ToF cameras to enjoy the benefits of both high light throughput and a wide DoF.
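The EDoF-ToF abstract points out that ToF blur is nonlinear: defocus mixes complex sensor phasors, and depth is decoded from the mixed phasor, not from a mixture of depths. Below is a minimal numerical illustration of that point; the 30 MHz modulation frequency, box blur, and scene values are placeholder assumptions, not taken from the paper, and phase wrapping is ignored:

```python
import numpy as np
from scipy.ndimage import uniform_filter

C = 3e8        # speed of light (m/s)
F_MOD = 30e6   # hypothetical modulation frequency (Hz)

def tof_blurred_depth(intensity, depth, blur_size=7):
    """Toy CWAM ToF blur: defocus averages per-pixel complex phasors, so
    the decoded depth map is NOT simply a blurred copy of the true one."""
    phase = 4 * np.pi * F_MOD * depth / C               # depth -> phase
    phasor = intensity * np.exp(1j * phase)             # complex ToF signal
    blurred = (uniform_filter(phasor.real, blur_size)   # box-blur stand-in
               + 1j * uniform_filter(phasor.imag, blur_size))
    return np.angle(blurred) * C / (4 * np.pi * F_MOD)  # decode depth

# bright near object (0.5 m) beside a dim far wall (2.0 m)
inten = np.ones((64, 64)); inten[:, :32] = 5.0
depth = np.full((64, 64), 2.0); depth[:, :32] = 0.5
print(tof_blurred_depth(inten, depth)[32, 28:36])
# edge pixels skew toward the brighter surface's depth: an
# intensity-weighted, hence nonlinear, mixing of depths
```

Because the decoded depth at a defocused edge is an intensity-weighted phasor average rather than a linear blur of depths, a linear deblurring model cannot undo it, which is what motivates the paper's derived nonlinear blur model and learned deblurring.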
Item: Improving Light Field Capture Using Hybrid Imaging (2016-12-01)
Authors: Boominathan, Vivek; Veeraraghavan, Ashok
Abstract: Light field imaging gives us the ability to perform post-capture focus manipulation, such as refocusing and varying the depth of field (DOF). Although this ability is sought after in photography, an inherent problem arises when mapping the 4D light field function onto a 2D sensor: a reduction in resolution. Current light field cameras face this problem, producing low spatial resolution and hence offering limited DOF control. In contrast, a traditional high-resolution camera, such as a DSLR, provides high spatial resolution and narrow DOF control at capture, but no post-capture control. In this work, I propose a hybrid imaging system consisting of two complementary imaging modalities, a light field camera and a standard high-resolution camera, and show that the combined system enables (a) high-resolution digital refocusing, (b) better DOF control than light field cameras, and (c) graceful high-resolution viewpoint variations. All of the above abilities were previously unachievable. To combine the output of the two modalities, I propose a simple patch-based algorithm that super-resolves the low-resolution views of the light field using high-resolution patches captured by the DSLR. The algorithm does not require the light field camera and the DSLR to be co-located, nor any calibration information about the two imaging systems. To demonstrate the abilities of the hybrid imaging system, I built a prototype using a Lytro camera (380×380-pixel spatial resolution) and an 18-megapixel (MP) Canon DSLR camera. Via the prototype, I show a 9× improvement in the spatial resolution of the final light field (11.7 MP spatial resolution) and the ability to achieve 1/9th of the DOF of the Lytro camera. I show several experimental results on challenging scenes containing occlusions, specularities, and complex non-Lambertian materials, demonstrating the effectiveness of our approach.

Item: Lensless imaging device for microscopy and fingerprint biometric (2020-08-25)
Authors: Veeraraghavan, Ashok; Baraniuk, Richard; Robinson, Jacob; Boominathan, Vivek; Adams, Jesse; Avants, Benjamin; Rice University; United States Patent and Trademark Office
Abstract: In one aspect, embodiments disclosed herein relate to a lens-free imaging system. The lens-free imaging system includes an image sampler, a radiation source, a mask disposed between the image sampler and a scene, and an image sampler processor. The image sampler processor obtains signals from the image sampler, which is exposed, through the mask, to radiation scattered by the scene, which is illuminated by the radiation source. The image sampler processor then estimates an image of the scene based on the signals from the image sampler, processed using a transfer function that relates the signals to the scene.
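The patent abstract says only that a transfer function relates the sensor signals to the scene. In the same group's published FlatCam work, one common choice is a separable model Y = P X Q^T with calibrated matrices P and Q; purely as a sketch under that assumption (the matrices below are random stand-ins, not calibrated ones), the scene estimate has a closed-form Tikhonov-regularized solution:

```python
import numpy as np

def tikhonov_separable(Y, P, Q, lam=1e-3):
    """Closed-form solution of min ||P X Q^T - Y||_F^2 + lam * ||X||_F^2
    for a separable mask model (a common lensless-camera approximation).
    SVDs diagonalize the problem, leaving an elementwise shrinkage."""
    U1, s1, V1t = np.linalg.svd(P, full_matrices=False)
    U2, s2, V2t = np.linalg.svd(Q, full_matrices=False)
    Yp = U1.T @ Y @ U2                 # rotate into the SVD bases
    S = np.outer(s1, s2)               # per-element singular-value products
    Xp = S * Yp / (S ** 2 + lam)       # ridge-regularized inversion
    return V1t.T @ Xp @ V2t            # rotate back to the scene basis

# toy usage with random (hypothetical) transfer matrices
rng = np.random.default_rng(0)
P, Q = rng.standard_normal((256, 64)), rng.standard_normal((256, 64))
X_true = rng.random((64, 64))
Y = P @ X_true @ Q.T + 0.01 * rng.standard_normal((256, 256))
X_est = tikhonov_separable(Y, P, Q)
print(np.linalg.norm(X_est - X_true) / np.linalg.norm(X_true))
```

The SVDs need only be computed once per calibration; each subsequent frame is then recovered with a few small matrix multiplies, which is what makes mask-based lensless reconstruction cheap at this scale.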
Item: Passive and single-viewpoint 3D imaging system (2023-06-13)
Authors: Wu, Yicheng; Boominathan, Vivek; Chen, Huaijin; Sankaranarayanan, Aswin C.; Veeraraghavan, Ashok; William Marsh Rice University; Carnegie Mellon University; United States Patent and Trademark Office
Abstract: A method for a passive, single-viewpoint 3D imaging system comprises capturing an image from a camera having one or more phase masks. The method further includes using a reconstruction algorithm to estimate a 3D or depth image.

Item: Passive and single-viewpoint 3D imaging system (2024-08-27)
Authors: Wu, Yicheng; Boominathan, Vivek; Chen, Huaijin; Sankaranarayanan, Aswin C.; Veeraraghavan, Ashok; Rice University; United States Patent and Trademark Office
Abstract: A method for a passive, single-viewpoint 3D imaging system comprises capturing an image from a camera having one or more phase masks. The method further includes using a reconstruction algorithm to estimate a 3D or depth image.

Item: Real-time, deep-learning aided lensless microscope (Optica Publishing Group, 2023)
Authors: Wu, Jimin; Boominathan, Vivek; Veeraraghavan, Ashok; Robinson, Jacob T.; Bioengineering; Electrical and Computer Engineering; Computer Science
Abstract: Traditional miniaturized fluorescence microscopes are critical tools for modern biology, but they invariably struggle to image simultaneously with high spatial resolution and a large field of view (FOV). Lensless microscopes offer a solution to this limitation. However, real-time visualization of samples has not been possible with lensless imaging, as image reconstruction can take minutes to complete. This poses a challenge for usability, since real-time visualization is a crucial feature that assists users in identifying and locating the imaging target. The issue is particularly pronounced in lensless microscopes that operate at close imaging distances, where imaging requires shift-varying deconvolution to account for the variation of the point spread function (PSF) across the FOV. Here, we present a lensless microscope that achieves real-time image reconstruction by eliminating the iterative reconstruction algorithm. The neural-network-based reconstruction method we show here achieves a more than 10,000× increase in reconstruction speed compared to iterative reconstruction. This increased speed allows us to visualize the results of our lensless microscope at more than 25 frames per second (fps) while achieving better than 7 µm resolution over a FOV of 10 mm². The ability to reconstruct and visualize samples in real time makes interaction with lensless microscopes far more user-friendly: users can operate them much as they currently do conventional microscopes.

Item: Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope (AAAS, 2017)
Authors: Adams, Jesse K.; Boominathan, Vivek; Avants, Benjamin W.; Vercosa, Daniel G.; Ye, Fan; Baraniuk, Richard G.; Robinson, Jacob T.; Veeraraghavan, Ashok; Nanophotonic Computational Imaging and Sensing Laboratory
Abstract: Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: as lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world's tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick), yet it produces micrometer-resolution, high-frame-rate 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies.
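FlatScope's actual algorithm uses a calibrated depth-dependent model and a regularized solver described in the paper. As a rough sketch of the single-frame-to-volume idea only, the toy below assumes one shift-invariant PSF per depth plane (the psf_stack argument is a hypothetical calibrated input, each PSF the same shape as the frame) and deconvolves the same captured frame once per plane:

```python
import numpy as np

def reconstruct_volume(frame, psf_stack, eps=1e-2):
    """Sketch: Wiener-style deconvolution of the SAME single frame with
    each depth plane's PSF, stacking results into a (num_depths, H, W)
    volume. FlatScope's real model and solver are more sophisticated."""
    F = np.fft.fft2(frame)
    planes = []
    for psf in psf_stack:                   # one calibrated PSF per depth
        H = np.fft.fft2(np.fft.ifftshift(psf))   # centered PSF -> OTF
        planes.append(np.real(np.fft.ifft2(
            F * np.conj(H) / (np.abs(H) ** 2 + eps))))
    return np.stack(planes)

# toy usage: two hypothetical Gaussian-blob PSFs standing in for calibration
y, x = np.mgrid[-64:64, -64:64]
psfs = [np.exp(-(x**2 + y**2) / (2 * s**2)) for s in (2.0, 5.0)]
psfs = [p / p.sum() for p in psfs]
frame = np.fft.irfft2(np.fft.rfft2(np.random.rand(128, 128) < 0.01) *
                      np.fft.rfft2(np.fft.ifftshift(psfs[0])), s=(128, 128))
print(reconstruct_volume(frame, psfs).shape)  # (2, 128, 128)
```

Sources in focus at a given depth deconvolve sharply in that plane and smear in the others, which is how a single 2D frame can carry 3D information; capture is one exposure rather than a confocal scan, consistent with the speedup the abstract describes.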