
Browsing by Author "Heckel, Reinhard"

Now showing 1 - 7 of 7
    A Characterization of the DNA Data Storage Channel
    (Springer Nature, 2019) Heckel, Reinhard; Mikutis, Gediminas; Grass, Robert N.
    Owing to its longevity and enormous information density, DNA, the molecule encoding biological information, has emerged as a promising archival storage medium. However, due to technological constraints, data can only be written onto many short DNA molecules that are stored in an unordered way, and can only be read by sampling from this DNA pool. Moreover, imperfections in writing (synthesis), reading (sequencing), storage, and handling of the DNA, in particular amplification via PCR, lead to a loss of DNA molecules and induce errors within the molecules. In order to design DNA storage systems, a qualitative and quantitative understanding of the errors and the loss of molecules is crucial. In this paper, we characterize those error probabilities by analyzing data from our own experiments as well as from experiments of two different groups. We find that errors within molecules are mainly due to synthesis and sequencing, while imperfections in handling and storage lead to a significant loss of sequences. The aim of our study is to help guide the design of future DNA data storage systems by providing a quantitative and qualitative understanding of the DNA data storage channel.
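To give a concrete picture of the channel the paper characterizes, the following is a minimal, hypothetical Python sketch of such a channel: an unordered pool of short sequences from which reads are sampled at random, with some sequences lost entirely (storage, handling, PCR) and per-base substitution errors inside the surviving reads (synthesis, sequencing). The loss and error rates below are illustrative placeholders, not the values estimated in the paper.

import random

def dna_storage_channel(sequences, dropout_rate=0.005, sub_rate=0.004,
                        coverage=10, seed=0):
    """Toy model of the DNA data storage channel sketched above.

    sequences    -- list of short DNA strings written to the pool
    dropout_rate -- probability that a sequence is lost entirely (illustrative)
    sub_rate     -- per-base substitution probability (illustrative)
    coverage     -- average number of reads drawn per stored sequence
    """
    rng = random.Random(seed)
    bases = "ACGT"

    # Some molecules are lost before reading; the pool is unordered, so order carries no information.
    pool = [s for s in sequences if rng.random() > dropout_rate]

    reads = []
    for _ in range(coverage * len(sequences)):
        if not pool:
            break
        template = rng.choice(pool)  # reads sample the pool at random
        read = [
            rng.choice(bases.replace(b, "")) if rng.random() < sub_rate else b
            for b in template
        ]
        reads.append("".join(read))
    return reads

# Example: store 1000 random length-100 sequences and observe noisy, unordered reads.
stored = ["".join(random.choice("ACGT") for _ in range(100)) for _ in range(1000)]
print(len(dna_storage_channel(stored)), "reads sampled from the pool")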
    A Robust Algorithm for Identification of Motion Artifacts in Photoplethysmography Signals
    (2018-12-03) Maity, Akash Kumar; Sabharwal, Ashutosh; Veeraraghavan, Ashok; Heckel, Reinhard
Photoplethysmography (PPG) is commonly used as a means of continuous health monitoring. Many clinically relevant parameters, such as heart rate (HR) and blood oxygenation level (SpO2), are derived from PPG sensor measurements. The presence of motion artifacts in the signal decreases the accuracy of estimating these parameters and therefore reduces the reliability of these sensor devices. Motion artifacts can be either periodic or aperiodic. Existing state-of-the-art methods for motion detection rely on the semi-periodic structure of PPG to distinguish it from aperiodic motion artifacts. Periodic motion artifacts, which can be introduced by periodic movements such as hand tapping or jogging, cannot be detected reliably by current methods. In this thesis, we propose a novel technique, PPGMotion, for identifying all motion artifacts in PPG signals. PPGMotion relies on the morphological structure of an artifact-free PPG signal, which has a fast systolic phase and a slowly decaying diastolic phase. We note that in the presence of motion artifacts, the recorded PPG signals do not exhibit this characteristic PPG shape. Our approach uses this prior information about the PPG morphology to reliably detect periodic motion artifacts, without the need for any additional hardware components such as an accelerometer. To evaluate the proposed method, we use both simulation and real data collection. For the simulation-based analysis, we use a generative model for motion artifacts to simulate different cases of motion artifacts. For real data, we compare our approach against recent works on motion identification using 3 datasets: in datasets (1) and (2), we record PPG from a pulse oximeter attached to a finger while subjects make (1) random finger movements and (2) periodic movements such as periodic finger tapping; in dataset (3), we record PPG from a Maxim smartwatch while subjects run on a treadmill. Datasets (2) and (3) are expected to introduce periodic motion artifacts in the measured PPG signals. We demonstrate that while our approach is similar in performance to previous methods when random motion artifacts are introduced, its performance is significantly better in the presence of periodic motion artifacts. We show that for the simulated dataset, the performance of PPGMotion is significantly better than existing work as the contaminated PPG tends to become periodic, with an increase in sensitivity of at least 10% over the state-of-the-art method. For real data, PPGMotion is successful in identifying the periodic motion artifacts, with a mean sensitivity of 95% and accuracy of 95.8%, compared to the state-of-the-art method with a mean sensitivity of 66% and accuracy of 89% for dataset (2). For dataset (1), PPGMotion achieves an accuracy of 96.35% with a sensitivity of 95.29%, and for dataset (3), PPGMotion achieves an accuracy of 91.89% and a sensitivity of 93.03%, compared to the second-best method with an accuracy of 81.23% and a sensitivity of 74.99%.
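The abstract does not spell out the detector, so the sketch below only illustrates the general idea of a morphology-based check: a clean PPG pulse rises quickly (systolic phase) and decays slowly (diastolic phase), so a pulse whose rise time is not clearly shorter than its decay time can be flagged as a possible artifact. The peak detector and threshold are illustrative assumptions, not the PPGMotion algorithm itself.

import numpy as np
from scipy.signal import find_peaks

def flag_motion_artifacts(ppg, fs, ratio_threshold=0.9):
    """Flag PPG pulses that violate the fast-systolic / slow-diastolic shape.

    ppg             -- 1-D PPG signal (numpy array)
    fs              -- sampling rate in Hz
    ratio_threshold -- pulses with rise_time/decay_time above this are flagged (assumed value)
    """
    # Troughs delimit individual pulses; assume heart rate below roughly 180 bpm.
    troughs, _ = find_peaks(-ppg, distance=int(0.33 * fs))
    flags = []
    for start, end in zip(troughs[:-1], troughs[1:]):
        pulse = ppg[start:end]
        peak = int(np.argmax(pulse))
        rise_time = peak                 # samples from pulse foot to systolic peak
        decay_time = len(pulse) - peak   # samples from peak back to the next foot
        if rise_time / decay_time > ratio_threshold:
            flags.append((start, end))   # shape is not PPG-like: likely a motion artifact
    return flags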
    Accelerated MRI with Un-trained Neural Networks
    (2021-03-05) Zalbagi Darestani, Mohammad; Heckel, Reinhard; Veeraraghavan, Ashok
Convolutional Neural Networks (CNNs) are highly effective for image reconstruction problems. Typically, CNNs are trained on large amounts of training images. Recently, however, un-trained CNNs such as the Deep Image Prior and the Deep Decoder have achieved excellent performance for image reconstruction problems such as denoising and inpainting, without using any training data. In this work, we study the performance of un-trained neural networks for the reconstruction problem arising in accelerated Magnetic Resonance Imaging (MRI). For this purpose, we view the performance from two perspectives: reconstruction accuracy and reconstruction reliability. Reconstruction accuracy: One of the main goals in solving image reconstruction tasks is obtaining a high-quality image. In order to measure the quality of the images reconstructed by an un-trained network, we first propose a highly-optimized un-trained recovery approach based on a variation of the Deep Decoder. Afterward, through extensive experiments, we show that the resulting method significantly outperforms conventional un-trained methods such as total-variation norm minimization, as well as naive applications of un-trained networks. Most importantly, the proposed approach achieves on-par performance with a standard trained baseline, the U-net, on the fastMRI dataset, a dataset for benchmarking deep-learning-based reconstruction methods. This demonstrates that there is less benefit than commonly assumed in learning a prior for solving the accelerated MRI reconstruction problem. This conclusion is drawn from a comparison with a baseline trained neural network; state-of-the-art methods still slightly outperform the U-net (and hence the approach proposed in this work). Reconstruction reliability: Recent works on accelerated MRI reconstruction suggest that trained neural networks are not reliable for image reconstruction tasks, despite achieving excellent accuracy. For example, at inference time, small changes (also referred to as adversarial perturbations) in the input of the network can result in significant reconstruction artifacts. In this regard, we analyze the robustness of trained and un-trained methods. Specifically, we consider three notions of robustness: (i) robustness against small changes in the input, (ii) robustness in recovering small details in the image, and (iii) robustness to distribution shifts. Our main findings from this analysis are the following: (i) contrary to current belief, neither trained nor un-trained methods are robust to small changes in the input, and (ii) in contrast to trained neural networks, un-trained methods are naturally robust to data distribution shifts, and, interestingly, an un-trained neural network outperforms a trained one after a distribution shift. This work promotes the use of un-trained neural networks for accelerated MRI reconstruction through the following conclusions. First, in terms of accuracy, un-trained neural networks yield high-quality reconstructions, significantly better than conventional un-trained methods and similar to baseline trained methods. Second, a key advantage of un-trained networks over trained ones is better generalization to unseen data distributions.
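As a rough illustration of the un-trained approach, the sketch below fits a small convolutional network to the undersampled k-space measurements of a single scan, with no training data; the loss is enforced only on the sampled frequencies. The architecture and hyperparameters are simplified placeholders, not the highly-optimized Deep Decoder variant proposed in the thesis.

import torch
import torch.nn as nn

def reconstruct_untrained(kspace, mask, steps=2000, lr=1e-2):
    """Fit an un-trained CNN to one set of undersampled k-space measurements.

    kspace -- complex-valued measurements, shape (H, W)
    mask   -- boolean sampling mask, shape (H, W)
    """
    H, W = kspace.shape
    net = nn.Sequential(                        # tiny decoder-style CNN (illustrative only)
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 1, 3, padding=1),
    )
    z = torch.randn(1, 32, H, W)                # fixed random input, never updated
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        image = net(z)[0, 0]                    # candidate image
        pred_kspace = torch.fft.fft2(image)
        # Data-consistency loss only on the k-space locations that were actually sampled.
        loss = (pred_kspace[mask] - kspace[mask]).abs().pow(2).mean()
        loss.backward()
        opt.step()
    return net(z)[0, 0].detach()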
    Bias-variance Trade-off and Uncertainty Quantification: Effects of Data Distribution in Image Classification
    (2022-11-18) Yilmaz, Fatih Furkan; Heckel, Reinhard; Segarra, Santiago
Understanding the training and generalization dynamics of deep neural networks, as well as the actual accuracy of a network's predictions when deployed in the wild, are important open problems in machine learning. In this thesis, we study these two topics in the context of image classification. In the first part, we study the generalization properties of deep neural networks with respect to the regularization of the network training for standard image classification tasks. In the second part, we study the performance of conformal-prediction-based uncertainty estimation methods. Conformal prediction methods quantify the uncertainty of the predictions of a neural network in practical applications. We study the setup where the test distribution may induce a drop in the accuracy of the predictions due to distribution shift. The training of deep neural networks is often regularized either implicitly, for example by early stopping the gradient descent, or explicitly, by adding an $\ell_2$-penalty to the loss function, in order to prevent overfitting to spurious patterns or noise. Even though these regularization methods are well established in the literature, it was recently uncovered that the test error of the network can exhibit novel phenomena, such as a double descent shape with respect to the amount of regularization. In the first part of this thesis, we develop a theoretical understanding of the double descent phenomenon with respect to model regularization. For this, we study regression tasks, in both the underparameterized and overparameterized regimes, for linear and non-linear models. We find that for linear regression, a double-descent-shaped risk is caused by a superposition of bias-variance tradeoffs corresponding to different parts of the data/model and can be mitigated by properly scaling the step sizes or regularization strengths while improving the best-case performance. We next study a non-linear two-layer neural network and characterize the early-stopped gradient descent risk as a superposition of bias-variance tradeoffs, and we also show that double descent as a function of the $\ell_2$-regularization coefficient occurs outside of the regime where the risk can be characterized using existing tools in the literature. We empirically study deep networks trained on standard image classification datasets and show that our results explain the dynamics of the network training well. In the second part of this thesis, we consider the effects of data distribution shift at test time for standard deep neural network classifiers. While recent uncertainty quantification methods like conformal prediction can generate provably valid confidence measures for any pre-trained black-box image classifier, these guarantees fail when there is a distribution shift. We propose a simple test-time recalibration method based on only unlabeled examples that provides excellent uncertainty estimates under natural distribution shifts. We show that our method provably succeeds on a theoretical toy distribution-shift problem. Empirically, we demonstrate the success of our method for various natural distribution shifts of the popular ImageNet dataset.
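For context on the second part, the following sketch shows standard split conformal prediction for a classifier: a labeled calibration set is used to pick a softmax-score threshold so that prediction sets cover the true label with probability roughly 1 - alpha. This is the generic textbook construction that such methods build on, not the test-time recalibration method proposed in the thesis (which relies only on unlabeled examples under distribution shift).

import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal calibration: score = 1 - softmax probability of the true class.

    cal_probs  -- array of shape (n, K), softmax outputs on a calibration set
    cal_labels -- array of shape (n,), true labels for the calibration set
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level)

def prediction_set(probs, qhat):
    """Return the classes whose score 1 - p is below the calibrated threshold."""
    return np.where(1.0 - probs <= qhat)[0]

# Usage with hypothetical softmax outputs cal_probs (n x K), labels cal_labels,
# and a single test output test_probs (length K):
#   qhat = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
#   labels_in_set = prediction_set(test_probs, qhat)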
    Camera-based Vital Signs: Towards Driver Monitoring and Face Liveness Verification
    (2018-08-20) Nowara, Ewa Magdalena; Veeraraghavan, Ashok; Heckel, Reinhard
I show how remote photoplethysmography (rPPG) signals, which are blood-flow-induced intensity variations in the skin observed with a camera, can improve driver monitoring and face liveness verification. A leading cause of car accidents is driver distraction. These accidents could be prevented by monitoring drivers' rPPG signals while driving. However, it is challenging to measure rPPG signals in a moving vehicle due to drastic illumination variations and large motion. I built a narrow-band near-infrared setup to reduce outside illumination variations, and I developed an algorithm called SparsePPG that exploits the spatial low-rankness and frequency sparsity of rPPG signals. Face recognition algorithms can provide highly secure user authentication due to their high accuracy; however, they cannot distinguish between authentic faces and face attacks, such as photographs. I developed an algorithm called PPGSecure that uses rPPG signals from a face video recording and machine learning to detect these face attacks.
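The abstract summarizes two algorithms without implementation detail; the snippet below only illustrates the basic rPPG measurement they build on: average the green channel over a skin region in each frame, band-pass to the heart-rate band, and read off the dominant frequency. SparsePPG and PPGSecure add substantially more (low-rank/sparse structure, learned spoof detection) that is not shown here, and the filter settings are illustrative assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

def rppg_heart_rate(frames, fs):
    """Estimate heart rate from a stack of skin-region video frames.

    frames -- array of shape (T, H, W, 3), RGB frames of a skin region
    fs     -- frame rate in Hz
    """
    # Spatially average the green channel in each frame to get a 1-D trace.
    green = frames[..., 1].reshape(frames.shape[0], -1).mean(axis=1)
    green = green - green.mean()
    # Band-pass to plausible heart rates (0.7-4 Hz, i.e. 42-240 bpm).
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fs)
    pulse = filtfilt(b, a, green)
    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    return 60.0 * freqs[np.argmax(spectrum)]   # dominant frequency, in beats per minute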
    Improving the Robustness of Deep Learning Based Image Reconstruction Models Against Natural Distribution Shifts
    (2023-04-14) Zalbagi Darestani, Mohammad; Heckel, Reinhard; Baraniuk, Richard G.
Deep neural networks give state-of-the-art accuracy for reconstructing images from few and noisy measurements, a problem arising for example in accelerated magnetic resonance imaging (MRI). However, recent works have raised concerns that deep-learning-based image reconstruction methods are sensitive to perturbations and are less robust than traditional methods: neural networks (i) may be sensitive to small, yet adversarially selected perturbations, (ii) may perform poorly under distribution shifts, and (iii) may fail to recover small but important features in an image. In order to understand the sensitivity to such perturbations, in this work we measure the robustness of different approaches for image reconstruction, including trained and un-trained neural networks as well as traditional sparsity-based methods. We find, contrary to prior works, that both trained and un-trained methods are vulnerable to adversarial perturbations. Moreover, both trained and un-trained methods tuned for a particular dataset suffer very similarly from distribution shifts. Finally, we demonstrate that an image reconstruction method that achieves higher reconstruction quality also performs better in terms of accurately recovering fine details. Our results indicate that state-of-the-art deep-learning-based image reconstruction methods provide improved performance over traditional methods without compromising robustness. Based on the aforementioned conclusions, we target deep-learning-based models in order to improve their robustness to natural distribution shifts. This is a natural choice, since deep-learning-based models are equally vulnerable to distribution shifts as other families of reconstruction methods, yet they give superior performance. We also selected natural distribution shifts because they occur frequently in practice. For example, we train a network on data from one hospital and apply the network to data from a different hospital, or we train on data acquired with one scanner type and acquisition mode and apply it to a different scanner type or acquisition mode. In this work, we propose a domain adaptation method for deep-learning-based compressive sensing that relies on self-supervision during training paired with test-time training at inference. We show that for four natural distribution shifts, this method essentially closes the distribution-shift performance gap for state-of-the-art architectures for accelerated MRI.
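The abstract pairs self-supervision during training with test-time training at inference; the sketch below shows only a generic test-time-training step: the trained reconstruction network is briefly fine-tuned on the single test measurement using a self-supervised data-consistency loss. The model interface, forward operator, and loss here are illustrative assumptions, not the exact method of the thesis.

import copy
import torch

def test_time_train(model, kspace, mask, forward_op, steps=50, lr=1e-5):
    """Adapt a trained reconstruction model to one out-of-distribution test measurement.

    model      -- trained network mapping undersampled measurements to an image (assumed interface)
    kspace     -- complex undersampled k-space of the test scan
    mask       -- boolean sampling mask
    forward_op -- function image -> k-space (e.g. FFT, possibly with coil sensitivities)
    """
    adapted = copy.deepcopy(model)              # keep the original weights untouched
    opt = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        image = adapted(kspace, mask)           # current reconstruction
        # Self-supervised loss: the reconstruction must agree with the
        # frequencies that were actually measured for this scan.
        loss = (forward_op(image)[mask] - kspace[mask]).abs().pow(2).mean()
        loss.backward()
        opt.step()
    return adapted(kspace, mask).detach()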
    Learning to classify images without explicit human annotations
    (2020-04-22) Yilmaz, Fatih Furkan; Heckel, Reinhard; Veeraraghavan, Ashok
Image classification problems today are often solved by first collecting examples along with candidate labels, second obtaining clean labels from workers, and third training a large, overparameterized deep neural network on the cleanly labeled examples. The second, manual labeling step is often the most expensive one, as it requires manually going through all examples. In this thesis we propose to i) skip the manual labeling step entirely, ii) directly train the deep neural network on the noisy candidate labels, and iii) early stop the training to avoid overfitting. With this procedure we exploit an intriguing property of overparameterized neural networks: while they are capable of perfectly fitting the noisy data, gradient descent fits clean labels faster than noisy ones. Thus, training with early stopping on noisy labels resembles training on clean labels only. Our results show that early stopping the training of standard deep networks (such as ResNet-18) on a subset of the Tiny Images dataset (which is obtained without any explicit human labels, and in which only about half of the labels are correct) gives significantly higher test performance than training on the clean CIFAR-10 training set (which is obtained by labeling a subset of the Tiny Images dataset). We also demonstrate that the performance gains are consistent across all classes and are not a result of trivial or non-trivial overlaps between the datasets. In addition, our results show that the noise generated through the label collection process is not nearly as adversarial for learning as the noise generated by randomly flipping labels, which is the noise model most prevalent in works demonstrating the noise robustness of neural networks. We also confirm that our results continue to hold for other datasets by considering the large-scale problem of classifying a subset of ImageNet with images we obtain from Flickr solely through keyword searches and without any manual labeling.
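To make the proposed recipe concrete, here is a minimal PyTorch-style training loop that trains directly on noisy candidate labels and early-stops when accuracy on a held-out split stops improving. Using a held-out split of the same noisy labels and a fixed patience value are illustrative choices of this sketch, not the exact setup used in the thesis experiments on Tiny Images and CIFAR-10.

import copy
import torch

def train_with_early_stopping(model, noisy_train_loader, val_loader,
                              epochs=100, patience=5, lr=0.1):
    """Train directly on noisy candidate labels; early-stop to avoid fitting the label noise."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    best_acc, epochs_without_gain = 0.0, 0
    best_state = copy.deepcopy(model.state_dict())

    for epoch in range(epochs):
        model.train()
        for images, labels in noisy_train_loader:   # labels come from keyword search, not annotators
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

        # Accuracy on a held-out split (here: the same noisy labels, an illustrative choice).
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in val_loader:
                correct += (model(images).argmax(dim=1) == labels).sum().item()
                total += labels.numel()
        acc = correct / total

        if acc > best_acc:
            best_acc, epochs_without_gain = acc, 0
            best_state = copy.deepcopy(model.state_dict())
        else:
            epochs_without_gain += 1
            if epochs_without_gain >= patience:      # stop before the network memorizes the noisy labels
                break

    model.load_state_dict(best_state)
    return model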