Influence based bit-quantization for machine learning: Cost Quality Tradeoffs

Date
2020-10-30
Abstract

Due to the significant computational cost associated with machine learning architectures such as neural networks (networks for short), there has been significant interest in quantization, i.e., reducing the number of bits used. Current quantization approaches treat all network parameters equally by allocating the same bit-width budget to each of them. In this work we propose a quantization approach that allocates bit budgets to parameters preferentially based on their influence. Here, our notion of influence is inspired by the traditional definition of this concept in the Fourier analysis of Boolean functions. We show that guiding the investment of bit budgets using influence achieves acceptable accuracy with lower overall bit budgets when compared to approaches that do not use quantization. We show that by trading 4.5% in accuracy, we can gain a factor of 28 in bit budget. To better understand our approach, we also considered allocating bit budgets randomly and found that our influence-based approach outperforms random allocation most of the time, by noticeable margins. All of these results are based on the MNIST data set, and our algorithm for computing influence uses a simple, easy-to-implement greedy approach.
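To illustrate the idea of influence-guided bit allocation described above, the following is a minimal sketch (not the thesis algorithm itself) of a greedy scheme that hands out a fixed total bit budget one bit at a time, always to the parameter group with the largest remaining influence. The function name `allocate_bits`, the per-group `influences` scores, and the assumption that a group's unserved influence halves with each additional bit are all hypothetical simplifications for illustration.

```python
def allocate_bits(influences, total_budget, min_bits=1, max_bits=8):
    """Greedily allocate a total bit budget across parameter groups.

    Each group starts at min_bits; each remaining bit goes to the group
    whose unserved influence (assumed to halve per extra bit) is largest.
    """
    n = len(influences)
    bits = [min_bits] * n
    remaining = total_budget - min_bits * n
    assert remaining >= 0, "budget too small for the minimum allocation"
    while remaining > 0:
        # Marginal benefit of one more bit for each group (0 once saturated).
        gains = [inf / (2 ** bits[i]) if bits[i] < max_bits else 0.0
                 for i, inf in enumerate(influences)]
        best = max(range(n), key=lambda i: gains[i])
        if gains[best] == 0.0:
            break  # every group has reached max_bits
        bits[best] += 1
        remaining -= 1
    return bits

# Example: three groups with influences 0.6, 0.3, 0.1 sharing 10 bits.
print(allocate_bits([0.6, 0.3, 0.1], total_budget=10))  # -> [5, 3, 2]
```

The more influential a group is, the more bits it receives, while low-influence groups are quantized aggressively; this is the qualitative behavior the abstract contrasts with uniform and random allocation.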

Degree
Master of Science
Type
Thesis
Keywords
Neural Networks, Quantization, Bit Influence
Citation

Jiang, Mingchao. "Influence based bit-quantization for machine learning: Cost Quality Tradeoffs." (2020) Master’s Thesis, Rice University. https://hdl.handle.net/1911/109494.

Rights
Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder.