Iterative Data-flow Analysis, Revisited
Abstract
The iterative algorithm is widely used to solve instances of data-flow analysis problems. The algorithm is attractive because it is easy to implement and robust in its behavior. The theory behind the algorithm shows that, for a broad class of problems, it terminates and produces correct results. The theory also establishes a set of conditions under which the algorithm runs in at most d(G) + 3 passes over the graph: a round-robin algorithm, running a "rapid" framework, on a reducible graph. Fortunately, these restrictions encompass many practical analyses used in code optimization. In practice, compilers encounter situations that lie outside this carefully described region. Compilers encounter irreducible graphs, probably more often than the early studies suggest. They use variations of the algorithm other than the round-robin form. They run on problems that are not rapid. This paper explores both the theory and practice of iterative data-flow analysis. It explains the role of reducibility in the classic Kam-Ullman time bound. It presents experimental data to show that different versions of the iterative algorithm have distinctly different behavior. It gives practical advice that can improve the performance of iterative solvers on both reducible and irreducible graphs.
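As a concrete illustration (a minimal sketch, not code from the paper), the example below implements the round-robin form of the iterative algorithm for the dominators problem, a classic rapid framework with an intersection meet. The function names and the example graph are assumptions chosen for this sketch; processing the nodes in reverse postorder is the traversal order that underlies the small pass counts the abstract discusses.

```python
def postorder(succ, entry):
    """Postorder over the nodes of the CFG reachable from entry."""
    seen, order = set(), []
    def dfs(n):
        seen.add(n)
        for s in succ[n]:
            if s not in seen:
                dfs(s)
        order.append(n)
    dfs(entry)
    return order

def round_robin_dominators(succ, entry):
    """Sketch of a round-robin iterative solver for dominators.

    succ maps each node to its list of successors. Returns the
    dominator sets and the number of passes, counting the final
    pass that detects no change.
    """
    rpo = list(reversed(postorder(succ, entry)))   # reverse postorder
    preds = {n: [] for n in rpo}
    for n in rpo:
        for s in succ[n]:
            preds[s].append(n)
    # Initialize: entry dominates only itself; every other node starts
    # at the full set (the top element for an intersection meet).
    dom = {n: set(rpo) for n in rpo}
    dom[entry] = {entry}
    passes, changed = 0, True
    while changed:            # each while-iteration is one pass
        changed, passes = False, passes + 1
        for n in rpo:         # round-robin: fixed order, every node
            if n == entry:
                continue
            new = set.intersection(*(dom[p] for p in preds[n])) | {n}
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom, passes

# A diamond-shaped, reducible CFG: entry -> a, b; a, b -> exit.
cfg = {"entry": ["a", "b"], "a": ["exit"], "b": ["exit"], "exit": []}
doms, passes = round_robin_dominators(cfg, "entry")
assert doms["exit"] == {"entry", "exit"}   # converges in 2 passes
```

On this acyclic graph d(G) = 0 and the solver settles in two passes (one to propagate, one to detect no change); introducing back edges raises d(G) and, with it, the number of round-robin passes required.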
Citation
Cooper, Keith D., Harvey, Timothy J. and Kennedy, Ken. "Iterative Data-flow Analysis, Revisited." (2004) https://hdl.handle.net/1911/96324.