Learning to classify images without explicit human annotations

dc.contributor.advisor: Heckel, Reinhard
dc.contributor.advisor: Veeraraghavan, Ashok
dc.creator: Yilmaz, Fatih Furkan
dc.date.accessioned: 2020-04-23T16:12:24Z
dc.date.available: 2020-11-01T05:01:09Z
dc.date.created: 2020-05
dc.date.issued: 2020-04-22
dc.date.submitted: May 2020
dc.date.updated: 2020-04-23T16:12:25Z
dc.description.abstract: Image classification problems today are often solved by first collecting examples along with candidate labels, second obtaining clean labels from workers, and third training a large, overparameterized deep neural network on the cleanly labeled examples. The second, manual labeling step is often the most expensive, as it requires going through every example by hand. In this thesis we propose to i) skip the manual labeling step entirely, ii) train the deep neural network directly on the noisy candidate labels, and iii) early stop the training to avoid overfitting. This procedure exploits an intriguing property of overparameterized neural networks: while they are capable of perfectly fitting the noisy data, gradient descent fits clean labels faster than noisy ones, so early-stopped training on noisy labels resembles training on clean labels only. Our results show that early stopping the training of standard deep networks (such as ResNet-18) on a subset of the Tiny Images dataset (which is collected without any explicit human labels, and in which only about half of the labels are correct) gives significantly higher test performance than training on the clean CIFAR-10 training set (which was obtained by labeling a subset of the Tiny Images dataset). We also demonstrate that the performance gains are consistent across all classes and are not a result of trivial or non-trivial overlaps between the datasets. In addition, our results show that the noise generated by the label collection process is not nearly as adversarial for learning as the noise generated by randomly flipping labels, which is the noise model most prevalent in works demonstrating the noise robustness of neural networks. Finally, we confirm that our results hold for other datasets by considering the large-scale problem of classifying a subset of the ImageNet classes with images obtained from Flickr through keyword searches alone, without any manual labeling.
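The recipe the abstract describes can be summarized in a few lines of training code. Below is a minimal PyTorch sketch of that recipe, not the thesis's actual implementation: it assumes torchvision's ResNet-18 and uses a small clean validation split to pick the stopping epoch (one plausible early-stopping criterion; the thesis may use a different one). The function name `train_with_early_stopping` and its hyperparameters are illustrative.

```python
# Sketch: train on noisy candidate labels, then early stop before the network
# begins memorizing the mislabeled examples. Assumes PyTorch + torchvision;
# names and hyperparameters are hypothetical, not taken from the thesis.
import torch
import torch.nn as nn
from torchvision.models import resnet18


def train_with_early_stopping(train_loader, val_loader, num_classes,
                              max_epochs=100, patience=5, device="cuda"):
    """Train on noisy labels; stop when clean-validation accuracy plateaus.

    Early in training, gradient descent fits the correctly labeled examples
    faster than the mislabeled ones, so validation accuracy peaks before the
    overparameterized network starts fitting the label noise.
    """
    model = resnet18(num_classes=num_classes).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                                momentum=0.9, weight_decay=5e-4)
    criterion = nn.CrossEntropyLoss()

    best_acc, best_state, epochs_since_best = 0.0, None, 0
    for epoch in range(max_epochs):
        model.train()
        for images, noisy_labels in train_loader:
            images, noisy_labels = images.to(device), noisy_labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), noisy_labels)
            loss.backward()
            optimizer.step()

        # Evaluate on a small, clean validation set.
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in val_loader:
                preds = model(images.to(device)).argmax(dim=1).cpu()
                correct += (preds == labels).sum().item()
                total += labels.size(0)
        acc = correct / total

        if acc > best_acc:
            best_acc, epochs_since_best = acc, 0
            best_state = {k: v.cpu().clone()
                          for k, v in model.state_dict().items()}
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:  # stop before noise is memorized
                break

    model.load_state_dict(best_state)
    return model, best_acc
```

The key design point is that no label cleaning happens anywhere: the training loader serves the raw candidate labels (e.g., from keyword searches), and the only supervision-quality signal is the stopping epoch.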
dc.embargo.terms: 2020-11-01
dc.format.mimetype: application/pdf
dc.identifier.citation: Yilmaz, Fatih Furkan. "Learning to classify images without explicit human annotations." (2020) Master's Thesis, Rice University. https://hdl.handle.net/1911/108339.
dc.identifier.uri: https://hdl.handle.net/1911/108339
dc.language.iso: eng
dc.rights: Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder.
dc.subject: Image classification
dc.subject: Deep Neural Networks
dc.subject: Early stopping
dc.subject: Inductive bias
dc.subject: Overparameterized models
dc.subject: Learning from noisy labels
dc.title: Learning to classify images without explicit human annotations
dc.type: Thesis
dc.type.material: Text
thesis.degree.department: Electrical and Computer Engineering
thesis.degree.discipline: Engineering
thesis.degree.grantor: Rice University
thesis.degree.level: Masters
thesis.degree.major: Machine Learning
thesis.degree.name: Master of Science
Files

Original bundle
Name: YILMAZ-DOCUMENT-2020.pdf
Size: 13.37 MB
Format: Adobe Portable Document Format

License bundle
Name: PROQUEST_LICENSE.txt
Size: 5.84 KB
Format: Plain Text

Name: LICENSE.txt
Size: 2.61 KB
Format: Plain Text