On the Momentum-based Methods for Training and Designing Deep Neural Networks

dc.contributor.advisor: Baraniuk, Richard G
dc.creator: Nguyen, Minh Tan
dc.date.accessioned: 2020-09-14T20:25:18Z
dc.date.available: 2020-09-14T20:25:18Z
dc.date.created: 2020-08
dc.date.issued: 2020-09-14
dc.date.submitted: August 2020
dc.date.updated: 2020-09-14T20:25:19Z
dc.description.abstract: Training and designing deep neural networks (DNNs) is an art that often involves an expensive search over candidate architectures and optimization algorithms. In this thesis, we develop novel momentum-based methods to speed up the training of DNNs and to facilitate the process of designing them. For training DNNs, stochastic gradient descent (SGD) with constant momentum and its variants, such as Adam, are the optimization methods of choice, and there is great interest in speeding up their convergence because of their high computational expense. Nesterov accelerated gradient (NAG) improves the convergence rate of gradient descent (GD) for convex optimization using a specially designed momentum; however, it accumulates error when an inexact gradient is used (as in SGD), slowing convergence at best and causing divergence at worst. We propose scheduled restart SGD (SRSGD), a new NAG-style scheme for training DNNs. SRSGD replaces the constant momentum in SGD with the increasing momentum in NAG but stabilizes the iterations by resetting the momentum to zero according to a schedule. Using a variety of models and benchmarks for image classification, we demonstrate that SRSGD significantly improves both convergence and generalization in training DNNs; for instance, in training ResNet-200 for ImageNet classification, SRSGD achieves an error rate of 20.93% versus the 22.13% baseline. These improvements become more significant as the network grows deeper. Furthermore, on both CIFAR and ImageNet, SRSGD reaches similar or better error rates with significantly fewer training epochs than the SGD baseline. For designing DNNs, we focus on recurrent neural networks (RNNs) and establish a connection between the hidden-state dynamics of an RNN and gradient descent. We then integrate momentum into this framework and propose a new family of RNNs, called MomentumRNNs. We theoretically prove and numerically demonstrate that MomentumRNNs alleviate the vanishing-gradient issue in training RNNs. We also show that MomentumRNN is applicable to many types of recurrent cells, including those in state-of-the-art orthogonal RNNs. Finally, we show that other advanced momentum-based optimization methods, such as Adam and NAG with restart, can be incorporated into the MomentumRNN framework to design new recurrent cells with even better performance.
dc.format.mimetype: application/pdf
dc.identifier.citation: Nguyen, Minh Tan. "On the Momentum-based Methods for Training and Designing Deep Neural Networks." (2020) Diss., Rice University. https://hdl.handle.net/1911/109343.
dc.identifier.uri: https://hdl.handle.net/1911/109343
dc.language.iso: eng
dc.rights: Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder.
dc.subject: momentum methods
dc.subject: scheduled restart SGD
dc.subject: recurrent neural networks
dc.title: On the Momentum-based Methods for Training and Designing Deep Neural Networks
dc.type: Thesis
dc.type.material: Text
thesis.degree.department: Electrical and Computer Engineering
thesis.degree.discipline: Engineering
thesis.degree.grantor: Rice University
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy
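
The SRSGD scheme summarized in the abstract above amounts to a NAG-style update whose momentum coefficient grows over the iterations and is reset to zero on a schedule. The lines below are a minimal sketch of that idea on a toy one-dimensional quadratic, not the thesis's implementation; the learning rate, the restart interval, and the k/(k+3) momentum schedule are illustrative assumptions.

    # Sketch of a scheduled-restart NAG-style update on a toy objective.
    # Hyperparameter values here are illustrative assumptions only.

    def grad(w):
        # Gradient of f(w) = 0.5 * (w - 3.0) ** 2, standing in for a
        # stochastic mini-batch gradient when training a DNN.
        return w - 3.0

    def srsgd_sketch(w0=0.0, lr=0.1, restart_every=20, steps=100):
        w = v_prev = w0
        for step in range(steps):
            k = step % restart_every    # momentum counter, reset on a schedule
            mu = k / (k + 3.0)          # increasing momentum, zero right after a restart
            v = w - lr * grad(w)        # gradient step from the current iterate
            w = v + mu * (v - v_prev)   # NAG-style momentum extrapolation
            v_prev = v
        return w

    print(srsgd_sketch())               # approaches the minimizer w* = 3.0

In actual DNN training the same scheduled reset of the momentum counter would be applied on top of mini-batch gradients, with the restart interval treated as a tunable hyperparameter.
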
Files
Original bundle (1 of 1):
  NGUYEN-DOCUMENT-2020.pdf (7.79 MB, Adobe Portable Document Format)
License bundle (2 of 2):
  PROQUEST_LICENSE.txt (5.84 KB, Plain Text)
  LICENSE.txt (2.6 KB, Plain Text)