Differentiable Program Learning with an Admissible Neural Heuristic

Date
2020-08-11
Abstract

We study the problem of learning differentiable functions expressed as programs in a domain-specific language. Such programmatic models can offer benefits such as composability and interpretability; however, learning them requires optimizing over a combinatorial space of program “architectures”. We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax. Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs, which can then be used to complete any partial program. This relaxed program is differentiable and can be trained end-to-end, and the resulting training loss is an approximately admissible heuristic that can guide the combinatorial search. We instantiate our approach on top of the A* algorithm and an iteratively deepened branch-and-bound search, and use these algorithms to learn programmatic classifiers in three sequence classification tasks. Our experiments show that the algorithms outperform state-of-the-art methods for program learning, and that they discover programmatic classifiers that yield natural interpretations and achieve competitive accuracy.
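The abstract's core idea can be illustrated with a toy sketch (not the thesis's implementation): A* over a small weighted graph whose nodes stand in for partial programs. The heuristic `h` plays the role of the trained neural relaxation's loss; the node names, graph, and cost values below are purely hypothetical.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Return the least-cost path from start to goal.

    neighbors(n) -> iterable of (successor, edge_cost)
    h(n)         -> admissible estimate of the remaining cost-to-go
    """
    # Frontier entries: (f = g + h, g, node, path so far).
    frontier = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None, float("inf")

# Hypothetical search space: partial "programs" A..D, with D a complete program.
graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("D", 5.0)], "C": [("D", 1.0)]}
true_cost_to_go = {"A": 5.0, "B": 5.0, "C": 1.0, "D": 0.0}

# Stand-in for the neural relaxation's training loss: admissible because it
# never overestimates the true cost-to-go, so A* returns an optimal derivation.
h = lambda n: 0.9 * true_cost_to_go[n]

path, cost = a_star("A", "D", lambda n: graph.get(n, []), h)
print(path, cost)  # -> ['A', 'C', 'D'] 5.0
```

The heuristic steers the search toward `C` even though the first edge to `B` is cheaper, mirroring how the relaxation's loss is meant to rank partial programs by their completable quality.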

Degree
Master of Science
Type
Thesis
Keywords
Machine Learning, Program Synthesis, Functional Programming, Differentiable Programming
Citation

Shah, Ameesh. "Differentiable Program Learning with an Admissible Neural Heuristic." (2020) Master’s Thesis, Rice University. https://hdl.handle.net/1911/109184.

Rights
Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder.