Browsing by Author "Tan, Shiyu"
Now showing 1 - 2 of 2
Item: Computational Imaging System for 3D Sensing and Reconstruction (2024-12-06)
Tan, Shiyu; Veeraraghavan, Ashok
This thesis explores three challenges in 3D imaging across different applications: 3D stereo imaging with a large depth of field, 3D sensing with a compact device, and fast-scanning 3D microscopy of thick scattering samples. The first part of the thesis focuses on a stereo imaging system that achieves a large depth of field and high-quality 3D reconstruction in light-limited environments. To overcome the fundamental trade-off between imaging volume and signal-to-noise ratio (SNR) in conventional stereo, a novel end-to-end learning-based technique is proposed that introduces a phase mask at the aperture plane of each camera in the stereo system. The phase mask creates a depth-dependent yet numerically invertible point spread function, allowing sharp image texture and stereo correspondence to be recovered over a significantly extended depth of field (EDOF) compared to conventional stereo. The second part of the thesis exploits the strongly dispersive property of metasurfaces to propose a compact, single-shot, passive 3D imaging camera. The proposed device consists of a metalens engineered to focus different wavelengths at different depths, together with two deep networks that recover depth and RGB texture from the chromatic, defocused images acquired by the system. The third part of the thesis explores a learning-based method that rapidly captures 3D volumetric images of thick scattering samples using a traditional wide-field microscope. The key idea is to use a 3D generative adversarial network (GAN) to learn the mapping between the blurry, low-contrast image stacks obtained with a wide-field microscope and the sharp, high-contrast image stacks obtained with a confocal microscope.
After training the network with widefield-confocal stack pairs, the network can reliably and accurately reconstruct 3D volumetric images that rival confocal images in terms of lateral resolution, z-sectioning, and image contrast.

Item: Deep-3D microscope: 3D volumetric microscopy of thick scattering samples using a wide-field microscope and machine learning (Optica Publishing Group, 2022)
Li, Bowen; Tan, Shiyu; Dong, Jiuyang; Lian, Xiaocong; Zhang, Yongbing; Ji, Xiangyang; Veeraraghavan, Ashok
Confocal microscopy is a standard approach for obtaining volumetric images of a sample with high axial and lateral resolution, especially when dealing with scattering samples. Unfortunately, a confocal microscope is quite expensive compared to traditional microscopes. In addition, the point scanning in confocal microscopy leads to slow imaging speed and to photobleaching caused by the high dose of laser energy. In this paper, we demonstrate how advances in machine learning can be exploited to "teach" a traditional wide-field microscope, one that is available in every lab, to produce 3D volumetric images like a confocal microscope. The key idea is to obtain multiple images with different focus settings using a wide-field microscope and to use a 3D generative adversarial network (GAN) to learn the mapping between the blurry, low-contrast image stacks obtained with a wide-field microscope and the sharp, high-contrast image stacks obtained with a confocal microscope. After training the network with widefield-confocal stack pairs, the network can reliably and accurately reconstruct 3D volumetric images that rival confocal images in terms of lateral resolution, z-sectioning, and image contrast. Our experimental results demonstrate generalization to unseen data, stability of the reconstructions, and high spatial resolution even when imaging thick (∼40 micron) highly scattering samples.
We believe that such learning-based microscopes have the potential to bring confocal imaging quality to every lab that has a wide-field microscope.
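The widefield-to-confocal mapping described above relies on paired image stacks of the same sample: a blurry, low-contrast widefield stack as input and a sharp, high-contrast confocal stack as target. The following is a minimal NumPy sketch of what such a pair looks like, using a simple Gaussian blur along z as a stand-in for the real widefield point spread function; the function names and the blur model are illustrative assumptions, not the authors' actual data pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel used to mimic out-of-focus z-blur."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def simulate_widefield(confocal_stack, sigma_z=2.0):
    """Blur a sharp confocal z-stack along z to mimic widefield haze.

    This is the degradation that the 3D GAN in the paper learns to invert.
    """
    k = gaussian_kernel1d(sigma_z, radius=4)
    return np.apply_along_axis(
        lambda col: np.convolve(col, k, mode="same"), 0, confocal_stack
    )

# A toy "confocal" ground-truth volume: sparse bright puncta in a
# 16-plane, 32x32-pixel stack (fluorescent beads, say).
confocal = np.zeros((16, 32, 32))
zs, ys, xs = rng.integers(0, [16, 32, 32], size=(30, 3)).T
confocal[zs, ys, xs] = 1.0

widefield = simulate_widefield(confocal)

# The widefield stack spreads each punctum's energy across neighboring
# z-planes, lowering contrast -- exactly the gap the network closes.
print(confocal.std() > widefield.std())
```

In the actual system, both stacks are acquired optically from the same sample rather than simulated, and the network is trained so that its output on the widefield stack matches the confocal stack under reconstruction and adversarial losses.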