
Browsing by Author "Kosar, Robert"

Now showing 1 - 1 of 1
    Skewers, the Carnegie Classification, and the Hybrid Bootstrap
    (2017-11-30) Kosar, Robert; Scott, David W
Principal component analysis is an important statistical technique for dimension reduction and exploratory data analysis. However, it is not robust to outliers and may obfuscate important data structure such as clustering. We propose a version of principal component analysis based on the robust L2E method. The technique seeks to find the principal components of potentially highly non-spherical distribution components of a Gaussian mixture model. The algorithm requires neither specification of the number of clusters nor estimation of a full covariance matrix in order to run.

The Carnegie classification is a decades-old taxonomy for research universities, updated approximately every five years. However, it is based on questionable statistical methodology and suffers from a number of issues. We present a criticism of the Carnegie methodology and offer two alternatives that are designed to be consistent with Carnegie's goals but also more statistically sound. We also present a visualization application where users can explore both the Carnegie system and our proposed systems.

Preventing overfitting is an important topic in the field of machine learning, where it is common or even mundane to fit models with millions of parameters. One of the most popular algorithms for preventing overfitting is dropout. We present a drop-in replacement for dropout that offers superior performance on standard benchmark datasets and is relatively insensitive to hyperparameter choice.
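The abstract's claim that classical PCA is not robust to outliers can be illustrated with a minimal sketch. This is plain SVD-based PCA on synthetic data, not the authors' L2E-based method: a single extreme point is enough to swing the leading principal component away from the true data direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-D data tightly concentrated along the line y = x.
x = rng.normal(size=200)
clean = np.column_stack([x, x + 0.1 * rng.normal(size=200)])

def leading_pc(X):
    """Leading principal component via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    v = vt[0]
    return v if v[0] >= 0 else -v  # fix the sign for comparability

pc_clean = leading_pc(clean)                      # roughly [0.71, 0.71]
contaminated = np.vstack([clean, [[0.0, 50.0]]])  # add a single outlier
pc_dirty = leading_pc(contaminated)               # pulled toward the y-axis
```

One point out of 201 rotates the leading component by tens of degrees, which is the failure mode a robust formulation is designed to avoid.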
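As a point of reference for the dropout discussion, here is a minimal "inverted dropout" sketch — the standard algorithm the thesis proposes to replace, not the hybrid bootstrap itself. At training time each activation is zeroed with probability p and the survivors are rescaled by 1/(1-p), so the expected activation is unchanged and no rescaling is needed at inference.

```python
import numpy as np

def dropout(x, p, rng, train=True):
    """Inverted dropout: zero each entry with probability p, rescale survivors."""
    if not train:
        return x  # inference: identity, thanks to the training-time rescaling
    mask = rng.random(x.shape) >= p
    return np.where(mask, x / (1.0 - p), 0.0)

rng = np.random.default_rng(0)
acts = np.ones(10_000)
out = dropout(acts, p=0.5, rng=rng)
drop_rate = float(np.mean(out == 0.0))  # close to 0.5
mean_out = float(out.mean())            # close to 1.0 (expectation preserved)
```

The hyperparameter p must be tuned per model; the abstract's point is that the proposed replacement is comparatively insensitive to such choices.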