Skewers, the Carnegie Classification, and the Hybrid Bootstrap
Date
2017
Authors
Kosar, Robert
Abstract
Principal component analysis is an important statistical technique for dimension reduction and exploratory data analysis. However, it is not robust to outliers, and it can obscure important data structure such as clustering. We propose a version of principal component analysis based on the robust L2E method. The technique seeks the principal components of the individual, potentially highly non-spherical, components of a Gaussian mixture model. The algorithm requires neither a prespecified number of clusters nor estimation of a full covariance matrix.
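For context, here is a minimal sketch of standard (non-robust) PCA via the sample covariance matrix; this is the classical baseline whose sensitivity to outliers motivates the robust L2E variant above, which is not reproduced here. All function and variable names are illustrative.

```python
import numpy as np

def pca(X, k):
    """Standard PCA: top-k principal components of an n x p data matrix X.

    This classical version uses the full sample covariance and is
    sensitive to outliers -- the weakness the robust L2E-based
    method described above is designed to address.
    """
    Xc = X - X.mean(axis=0)                  # center the data
    cov = Xc.T @ Xc / (len(X) - 1)           # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]    # top k by variance explained
    return eigvecs[:, order], eigvals[order]

# Example: two well-separated clusters; note that a single gross outlier
# added to X can tilt the leading component found by this classical method.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(5, 1, (100, 3))])
components, variances = pca(X, k=2)
```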
The Carnegie classification is a decades-old taxonomy of research universities, updated approximately every five years. However, it rests on questionable statistical methodology and suffers from a number of issues. We present a critique of the Carnegie methodology and offer two alternatives designed to be consistent with Carnegie's goals while being more statistically sound. We also present a visualization application in which users can explore both the Carnegie system and our proposed alternatives.
Preventing overfitting is an important problem in machine learning, where it is now routine to fit models with millions of parameters. One of the most popular algorithms for preventing overfitting is dropout. We present a drop-in replacement for dropout, the hybrid bootstrap, that offers superior performance on standard benchmark datasets and is relatively insensitive to hyperparameter choice.
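For reference, a minimal sketch of standard (inverted) dropout, the baseline the hybrid bootstrap is positioned against; the hybrid bootstrap itself is not reproduced here, and the names below are illustrative.

```python
import numpy as np

def dropout(activations, p=0.5, training=True, rng=None):
    """Standard (inverted) dropout: zero each unit with probability p.

    At training time, surviving units are scaled by 1/(1-p) so the
    expected activation matches test time, when the layer is an identity.
    """
    if not training or p == 0.0:
        return activations
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(activations.shape) >= p   # keep each unit with prob 1 - p
    return activations * mask / (1.0 - p)

# Example: with p = 0.5, roughly half the units are zeroed and the
# survivors are doubled, preserving the expected activation.
h = np.ones((4, 8))
print(dropout(h, p=0.5, rng=np.random.default_rng(1)))
```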
Citation
Kosar, Robert. "Skewers, the Carnegie Classification, and the Hybrid Bootstrap." (2017) Diss., Rice University. https://hdl.handle.net/1911/105553.