Improving Effective Bandwidth through Compiler Enhancement of Global and Dynamic Cache Reuse

dc.contributor.author: Ding, Chen
dc.date.accessioned: 2017-08-02T22:02:46Z
dc.date.available: 2017-08-02T22:02:46Z
dc.date.issued: 2000-01-21
dc.date.note: January 21, 2000
dc.description: This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/19488
dc.description.abstract: While CPU speed has improved by a factor of 6400 over the past twenty years, memory bandwidth has increased by a factor of only 139 during the same period. Consequently, on modern machines the limited data supply simply cannot keep a CPU busy, and applications often utilize only a few percent of peak CPU performance. The hardware solution, which provides layers of high-bandwidth data cache, is not effective for large and complex applications, primarily for two reasons: far-separated data reuse and large-stride data access. The first repeats unnecessary transfers and the second communicates useless data. Both waste memory bandwidth. This dissertation pursues a software remedy. It investigates the potential for compiler optimizations to alter program behavior and reduce its memory bandwidth consumption. To this end, this research has studied a two-step transformation strategy: first fuse computations on the same data, then group data used by the same computation. Existing techniques such as loop blocking can be viewed as an application of this strategy within a single loop nest. To carry out this strategy to its full extent, this research has developed a set of compiler transformations that perform computation fusion and data grouping over the whole program and during the entire execution. The major new techniques and their unique contributions are: maximal loop fusion, an algorithm that achieves maximal fusion among all program statements and bounded reuse distance within a fused loop; inter-array data regrouping, the first technique to selectively group global data structures and to do so with guaranteed profitability and compile-time optimality; and locality grouping and dynamic packing, the first set of compiler-inserted and compiler-optimized computation and data transformations applied at run time. These optimizations have been implemented in a research compiler and evaluated on real-world applications on an SGI Origin2000. The results show that, on average, the new strategy eliminates 41% of memory loads in regular applications and 63% in irregular and dynamic programs. As a result, overall execution time is shortened by 12% to 77%. In addition to compiler optimizations, this research has developed a performance model and designed a performance tool. The former allows precise measurement of the memory bandwidth bottleneck; the latter enables effective user tuning and accurate performance prediction for large applications. Neither goal was achieved before this thesis.
dc.format.extent: 133 pp
dc.identifier.citation: Ding, Chen. "Improving Effective Bandwidth through Compiler Enhancement of Global and Dynamic Cache Reuse." (2000) https://hdl.handle.net/1911/96271
dc.identifier.digital: TR00-352
dc.identifier.uri: https://hdl.handle.net/1911/96271
dc.language.iso: eng
dc.rights: You are granted permission for the noncommercial reproduction, distribution, display, and performance of this technical report in any format, but this permission is only for a period of forty-five (45) days from the most recent time that you verified that this technical report is still available from the Computer Science Department of Rice University under terms that include this permission. All other rights are reserved by the author(s).
dc.title: Improving Effective Bandwidth through Compiler Enhancement of Global and Dynamic Cache Reuse
dc.type: Technical report
dc.type.dcmi: Text
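
To make the two-step strategy described in the abstract concrete, the sketch below shows a hand-written before/after toy kernel in C. It is illustrative only and is not taken from the dissertation: the array names, sizes, and the struct-of-pairs layout are hypothetical, and the actual techniques (maximal loop fusion, inter-array data regrouping, locality grouping and dynamic packing) are whole-program compiler and run-time transformations, not manual edits.

    /* Illustrative sketch (not from the thesis): computation fusion
     * followed by data grouping, shown on a toy kernel. */
    #include <stddef.h>

    #define N 1000000

    /* Before: a[] is traversed twice (far-separated reuse), and b[] and
     * c[] are used by the same computation but stored as separate arrays,
     * so each traversal pulls in distinct cache lines. */
    void before(double *a, const double *b, double *c) {
        for (size_t i = 0; i < N; i++) a[i] = b[i] + 1.0;
        for (size_t i = 0; i < N; i++) c[i] = a[i] * 2.0;
    }

    /* Step 1, computation fusion: both uses of a[i] now occur in the same
     * iteration, so each element is brought into cache only once.
     * Step 2, data grouping: b and c, which the fused loop uses together,
     * are interleaved so a single cache line carries both fields. */
    typedef struct { double b, c; } bc_t;

    void after(double *a, bc_t *bc) {
        for (size_t i = 0; i < N; i++) {
            a[i]    = bc[i].b + 1.0;   /* reuse of a[i] ...           */
            bc[i].c = a[i] * 2.0;      /* ... within one iteration    */
        }
    }

In this toy form, fusion halves the traffic on a[] and regrouping lets b and c share cache lines; the abstract's 41% and 63% reductions in memory loads come from applying such transformations automatically across whole programs.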
Files
TR00-352.pdf (1.02 MB, Adobe Portable Document Format)