Browsing by Author "Torczon, Linda"
Now showing 1 - 6 of 6
Item ACME: Adaptive Compilation Made Efficient/Easy (2005-06-17)
Cooper, Keith D.; Grosul, Alexander; Harvey, Timothy J.; Reeves, Steven W.; Subramanian, Devika; Torczon, Linda
Research over the past five years has shown that significant performance improvements are possible using adaptive compilation. An adaptive compiler uses a compile-execute-analyze feedback loop to guide a series of compilations towards some performance goal, such as minimizing execution time. Despite its ability to improve performance, adaptive compilation has not seen widespread use because of two obstacles: the complexity inherent in a feedback-driven adaptive system makes it difficult to build and hard to use, and the large amount of time that the system needs to perform the many compilations and executions prohibits most users from adopting these techniques. We have developed a technique called virtual execution to decrease the time requirements for adaptive compilation. Virtual execution runs the program a single time and preserves information that allows us to accurately predict performance with different optimization sequences. This technology significantly reduces the time required by our adaptive compiler. In conjunction with this performance boost, we have developed a graphical user interface (GUI) that provides a controlled view of the compilation process. It limits the amount of information that the user must provide to get started by supplying appropriate defaults. At the same time, it lets the user exert fine-grained control over the parameters that govern the system. In particular, the user has direct and obvious control over the maximum amount of time the compiler can spend, as well as the ability to choose the number of routines to be examined. (The tool uses profiling to identify the most-executed procedures.) The GUI provides an output screen so that the user can monitor the progress of the compilation.

Item Building Adaptive Compilers (2005-01-29)
Almagor, L.; Cooper, Keith D.; Grosul, Alexander; Harvey, Timothy J.; Reeves, Steven W.; Subramanian, Devika; Torczon, Linda; Waterman, Todd
Traditional compilers treat all programs equally; that is, they apply the same set of techniques to every program that they compile. Compilers that adapt their behavior to fit specific input programs can produce better results. This paper describes our experience building and using adaptive compilers. It presents experimental evidence from two problems for which adaptive behavior can lead to better results: choosing compilation orders and choosing block sizes. It presents data from experimental characterizations of the search spaces in which these adaptive systems operate and describes search algorithms that successfully operate in these spaces. Building these systems has taught us a number of lessons about the construction of modular and reconfigurable compilers. The paper describes some of the problems that we encountered and the solutions that we adopted. It also outlines a number of fertile areas for future research in adaptive compilation.

Item Compilation Order Matters: Exploring the Structure of the Space of Compilation Sequences Using Randomized Search Algorithms (2004-06-18)
Almagor, L.; Cooper, Keith D.; Grosul, Alexander; Harvey, Timothy J.; Reeves, Steven W.; Subramanian, Devika; Torczon, Linda; Waterman, Todd
Most modern compilers operate by applying a fixed sequence of code optimizations, called a compilation sequence, to all programs. Compiler writers determine a small set of good, general-purpose compilation sequences by extensive hand-tuning over particular benchmarks. The compilation sequence makes a significant difference in the quality of the generated code; in particular, we know that a single universal compilation sequence does not produce the best results over all programs. Three questions arise in customizing compilation sequences: (1) What is the incremental benefit of using a customized sequence instead of a universal sequence? (2) What is the average computational cost of constructing a customized sequence? (3) When does the benefit exceed the cost? We present one of the first empirically derived cost-benefit tradeoff curves for custom compilation sequences. These curves are for two randomized sampling algorithms: descent with randomized restarts and genetic algorithms. They demonstrate the dominance of these two methods over simple random sampling in sequence spaces where the probability of finding a good sequence is very low. Further, these curves allow compilers to decide whether custom sequence generation is worthwhile, by explicitly relating the computational effort required to obtain a program-specific sequence to the incremental improvement in quality of code generated by that sequence.
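The compile-execute-analyze loop and the "descent with randomized restarts" search mentioned in the abstracts above can be illustrated with a small sketch. This is not the ACME system or the authors' code: the pass names, the sequence length, and the `compile_and_run` cost function below are hypothetical placeholders for whatever the real adaptive compiler would supply.

```python
import random

# Hypothetical pool of optimization passes; a real adaptive compiler would
# draw from its own transformation catalog (these names are placeholders).
PASSES = ["constant-propagation", "dead-code-elimination", "loop-unrolling",
          "inlining", "code-motion", "register-coalescing"]

def descent_with_restarts(compile_and_run, seq_len=10, restarts=5, steps=50):
    """Randomized descent with restarts over fixed-length pass sequences.
    `compile_and_run(seq)` is the user-supplied compile-execute-analyze step:
    it compiles with the given sequence, runs (or virtually executes) the
    program, and returns a cost such as execution time."""
    best_seq, best_cost = None, float("inf")
    for _ in range(restarts):
        # Each restart begins from a fresh random sequence.
        seq = [random.choice(PASSES) for _ in range(seq_len)]
        cost = compile_and_run(seq)
        for _ in range(steps):
            # Propose a neighbor: change one pass at a random position.
            neighbor = list(seq)
            neighbor[random.randrange(seq_len)] = random.choice(PASSES)
            n_cost = compile_and_run(neighbor)
            if n_cost < cost:          # descent keeps only improving moves
                seq, cost = neighbor, n_cost
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost
```

The restarts matter because, as the "Compilation Order Matters" abstract notes, good sequences can be rare in the search space; pure descent from a single starting point is easily trapped, while repeated random restarts and genetic algorithms were the methods the authors found to dominate simple random sampling.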
Item Interprocedural constant propagation: A study of jump function implementations (1993)
Grove, Daniel Dwight; Torczon, Linda
Procedure calls have long been recognized as an impediment to performance in compiled code. This happens because procedure calls hide information from the compiler. Interprocedural constant propagation attempts to discover the formal parameters and global variables that are constant on every invocation of a procedure. An implementation of interprocedural constant propagation must model the transmission of values through each procedure in the program. In the framework proposed by Callahan, Cooper, Kennedy, and Torczon, this transmission is modeled with jump functions. While Callahan et al. propose several jump functions, they give no data to help choose among them. This thesis describes the results obtained by employing several jump functions. Our study examined scientific FORTRAN codes. It shows that different jump functions find different numbers of constants, and it suggests a particular function, the pass-through parameter jump function, as the most cost-effective in practice. The importance of interprocedural MOD information is also discussed.
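The pass-through parameter jump function singled out above models a call-site argument only when it is a literal constant or a caller's formal parameter forwarded unchanged; anything more complex is treated as non-constant. The sketch below is a minimal illustration of that idea under simplified assumptions (a three-level lattice and a naive fixed-point loop); it is not the implementation studied in the thesis.

```python
# Lattice values: TOP (no information yet), a concrete constant, or BOTTOM.
TOP, BOTTOM = "TOP", "BOTTOM"

def meet(a, b):
    """Standard constant-propagation meet over the three-level lattice."""
    if a == TOP: return b
    if b == TOP: return a
    return a if a == b else BOTTOM

class CallSite:
    """One call: caller, callee, and a pass-through jump function per argument.
    Each argument is ('const', value) or ('param', index_of_caller_formal);
    anything more complex would be modeled as BOTTOM."""
    def __init__(self, caller, callee, args):
        self.caller, self.callee, self.args = caller, callee, args

def propagate(procs, call_sites):
    """procs maps procedure name -> number of formals. Returns, per procedure,
    the lattice value of each formal over all invocations; a constant means
    the formal has that value on every call."""
    value = {p: [TOP] * n for p, n in procs.items()}
    changed = True
    while changed:                      # simple fixed-point iteration
        changed = False
        for cs in call_sites:
            for i, (kind, x) in enumerate(cs.args):
                v = x if kind == "const" else value[cs.caller][x]
                new = meet(value[cs.callee][i], v)
                if new != value[cs.callee][i]:
                    value[cs.callee][i] = new
                    changed = True
    return value

# Hypothetical example: main calls f(3, 7); f passes its first formal to g.
procs = {"main": 0, "f": 2, "g": 1}
sites = [CallSite("main", "f", [("const", 3), ("const", 7)]),
         CallSite("f", "g", [("param", 0)])]
print(propagate(procs, sites))   # f's formals are 3 and 7; g's formal is 3
```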
Item The Platform-Aware Compilation Environment: Preliminary Design Document (2010-09-15)
Cooper, Keith D.; Mellor-Crummey, John; Merényi, Erzsébet; Sadayappan, P.; Sarkar, Vivek; Torczon, Linda; Burke, Michael G.
The Platform-Aware Compilation Environment (PACE) is an ambitious attempt to construct a portable compiler that produces code capable of achieving high levels of performance on new architectures. The key strategies in PACE are the design and development of an optimizer and runtime system that are parameterized by system characteristics, the automatic measurement of those characteristics, the extensive use of measured performance data to help drive optimization, and the use of machine learning to improve the long-term effectiveness of the compiler and runtime system.

Item The Platform-Aware Compilation Environment: Status and Future Directions (2012-06-13)
Cooper, Keith D.; Khan, Rishi; Lele, Sanjiva; Mellor-Crummey, John; Merényi, Erzsébet; Palem, Krishna; Sadayappan, P.; Sarkar, Vivek; Tatge, Reid; Torczon, Linda
The Platform-Aware Compilation Environment (PACE) is an ambitious attempt to construct a portable compiler that produces code capable of achieving high levels of performance on new architectures. The key strategies in PACE are the design and development of an optimizer and runtime system that are parameterized by system characteristics, the automatic measurement of those characteristics, the extensive use of measured performance data to help drive optimization, and the use of machine learning to improve the long-term effectiveness of the compiler and runtime system.
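The central PACE idea, an optimizer whose decisions are parameterized by measured system characteristics, can be illustrated with a toy example: deriving a cache-blocking tile size from a measured cache size, the kind of block-size choice the "Building Adaptive Compilers" abstract also mentions. The `measured_l1_bytes` value and the tile-size heuristic below are placeholders for illustration only; PACE's actual characterization tools and optimization decisions are far more involved.

```python
import math

def tile_size_for(measured_l1_bytes, elem_bytes=8):
    """Pick a square tile so that three tiles (blocks of A, B, and C in a
    matrix multiply) roughly fit in the measured level-1 cache.
    Illustrative heuristic only."""
    elems = measured_l1_bytes // elem_bytes
    return max(8, int(math.sqrt(elems / 3)))

def blocked_matmul(A, B, n, tile):
    """Naive cache-blocked multiply of n x n matrices (lists of lists),
    using the tile size derived from the measured cache characteristic."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        aik = A[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += aik * B[k][j]
    return C

# A measurement tool would supply this value; 32 KB is only a placeholder.
tile = tile_size_for(measured_l1_bytes=32 * 1024)
```

The point of the sketch is the division of labor: the measurement step characterizes the target machine once, and the optimizer consumes that characterization instead of hard-wiring platform-specific constants.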