Browsing by Author "Grosul, Alexander"
Item: A Related-Key Cryptanalysis of RC4 (2000-06-08)
Grosul, Alexander; Wallach, Dan S.
In this paper we present an analysis of the RC4 stream cipher and show that for each 2048-bit key there exists a family of related keys, differing in one of the byte positions. The keystreams generated by RC4 for a key and its related keys are substantially similar in the initial hundred bytes before diverging. RC4 is most commonly used with a 128-bit key repeated 16 times; this variant does not suffer from the weaknesses we describe. We recommend that applications of RC4 with keys longer than 128 bits (and particularly those using the full 2048-bit keys) discard the initial 256 bytes of the keystream output.
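The recommended countermeasure is simple to implement. The sketch below (plain Python; the function name and interface are illustrative, not from the paper) generates RC4 keystream and discards the first 256 output bytes before emitting anything:

```python
def rc4_drop(key: bytes, n: int, drop: int = 256) -> bytes:
    """Generate n bytes of RC4 keystream, discarding the first
    `drop` output bytes as the paper recommends for long keys.
    Illustrative sketch; rc4_drop is not a name from the paper."""
    # Key-scheduling algorithm (KSA): the key, up to 256 bytes
    # (2048 bits), is used cyclically to permute the state S.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]

    # Pseudo-random generation algorithm (PRGA).
    i = j = 0
    out = bytearray()
    for k in range(drop + n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        byte = S[(S[i] + S[j]) % 256]
        if k >= drop:          # discard the first `drop` bytes
            out.append(byte)
    return bytes(out)
```

Discarding the early output lets the internal permutation mix away from its key-dependent initial arrangement before any keystream is exposed, which is where the related-key similarity appears.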
Item: ACME: Adaptive Compilation Made Efficient/Easy (2005-06-17)
Cooper, Keith D.; Grosul, Alexander; Harvey, Timothy J.; Reeves, Steven W.; Subramanian, Devika; Torczon, Linda
Research over the past five years has shown that significant performance improvements are possible using adaptive compilation. An adaptive compiler uses a compile-execute-analyze feedback loop to guide a series of compilations towards some performance goal, such as minimizing execution time. Despite its ability to improve performance, adaptive compilation has not seen widespread use because of two obstacles: the complexity inherent in a feedback-driven adaptive system makes it difficult to build and hard to use, and the large amount of time that the system needs to perform the many compilations and executions prevents most users from adopting these techniques. We have developed a technique called "virtual execution" to decrease the time requirements for adaptive compilation. Virtual execution runs the program a single time and preserves information that allows us to accurately predict performance with different optimization sequences. This technology significantly reduces the time required by our adaptive compiler. In conjunction with this performance boost, we have developed a graphical user interface (GUI) that provides a controlled view of the compilation process. It limits the amount of information that the user must provide to get started by supplying appropriate defaults. At the same time, it lets the user exert fine-grained control over the parameters that govern the system. In particular, the user has direct and obvious control over the maximum amount of time the compiler can spend, as well as the ability to choose the number of routines to be examined. (The tool uses profiling to identify the most frequently executed procedures.) The GUI provides an output screen so that the user can monitor the progress of the compilation.

Item: Adaptive ordering of code transformations in an optimizing compiler (2005)
Grosul, Alexander; Cooper, Keith D.
It has long been known that the quality of the code produced by an optimizing compiler depends upon the ordering of the transformations applied to the code. In this dissertation, we show that the best orderings vary in unpredictable ways with the properties of the input code and the performance objectives, making adaptation a necessity for obtaining the best results. We further demonstrate practical techniques for searching the spaces of transformation orderings. Our analysis of six exhaustively enumerated subspaces of limited size determines the choice and parameters of the search algorithms described and implemented in this work: random sampling, greedy methods, variations of the stochastic hillclimber, and genetic algorithms.
We then apply the search algorithms to the full spaces of all available transformations, which are too large to enumerate. We evaluate the performance and cost of running these algorithms and discuss the tradeoff between the quality of the discovered orderings and the effort required to find them. Stochastic hillclimbers discover effective orderings within approximately 500 evaluations. Compared to a fixed ordering of transformations, they yield 5–40% improvements across a variety of input programs and performance objectives. To reduce the computational overhead of finding these orderings, we introduce and analyze a novel approach to precise static estimation of the runtime frequencies of basic blocks. Termed "Estimated Virtual Execution", this approach reduces the search time by 40–60%.
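Of the search algorithms named above, the stochastic hillclimber is the easiest to make concrete. The sketch below (Python; the pass names and the cost function are placeholders of mine, not the dissertation's implementation) mutates one position of a pass ordering at a time, keeps any change that does not hurt, and stops after the roughly 500 evaluations the dissertation reports as sufficient:

```python
import random

# Illustrative pass names; a real adaptive compiler would use
# its own transformation list.
PASSES = ["dead-code-elim", "const-prop", "value-numbering",
          "loop-unroll", "inline", "peephole"]
LENGTH = 10

# Stand-in for the compile-execute-analyze step: a real system
# compiles with the ordering, runs (or virtually executes) the
# program, and returns a cost such as execution time.  Here a
# synthetic cost (distance to a hidden ordering) keeps the
# sketch runnable.
_TARGET = [random.choice(PASSES) for _ in range(LENGTH)]
def evaluate(seq):
    return sum(a != b for a, b in zip(seq, _TARGET))

def hillclimb(budget=500):
    """Stochastic hillclimber: mutate one position of the current
    ordering and keep the mutation whenever it does not worsen
    the measured cost."""
    best = [random.choice(PASSES) for _ in range(LENGTH)]
    best_cost = evaluate(best)
    for _ in range(budget - 1):
        trial = list(best)
        trial[random.randrange(LENGTH)] = random.choice(PASSES)
        cost = evaluate(trial)
        if cost <= best_cost:      # accept sideways moves too
            best, best_cost = trial, cost
    return best, best_cost

print(hillclimb())
```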
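The abstract does not spell out how Estimated Virtual Execution works, so the sketch below shows only the generic idea behind static frequency estimation (propagating branch probabilities through the control-flow graph to a fixed point), under assumptions of my own rather than the dissertation's algorithm:

```python
def estimate_frequencies(cfg, entry, iterations=100):
    """Estimate relative execution frequencies of basic blocks by
    propagating flow along edges annotated with branch
    probabilities.  `cfg` maps each block to a list of
    (successor, probability) pairs.  Generic static-profile
    estimation, not the dissertation's technique."""
    freq = {b: 0.0 for b in cfg}
    freq[entry] = 1.0
    for _ in range(iterations):
        nxt = {b: 0.0 for b in cfg}
        nxt[entry] = 1.0               # one unit of flow enters
        for block, edges in cfg.items():
            for succ, prob in edges:
                nxt[succ] += freq[block] * prob
        freq = nxt                     # converges when every loop's
                                       # back-edge probability is < 1
    return freq

# A loop whose back edge is taken with probability 0.9: the
# body's frequency converges to 1 / (1 - 0.9) = 10.
cfg = {"entry": [("body", 1.0)],
       "body":  [("body", 0.9), ("exit", 0.1)],
       "exit":  []}
print(estimate_frequencies(cfg, "entry"))
```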
Item: Building Adaptive Compilers (2005-01-29)
Almagor, L.; Cooper, Keith D.; Grosul, Alexander; Harvey, Timothy J.; Reeves, Steven W.; Subramanian, Devika; Torczon, Linda; Waterman, Todd
Traditional compilers treat all programs equally; that is, they apply the same set of techniques to every program that they compile. Compilers that adapt their behavior to fit specific input programs can produce better results. This paper describes our experience building and using adaptive compilers. It presents experimental evidence from two problems for which adaptive behavior can lead to better results: choosing compilation orders and choosing block sizes. It presents data from experimental characterizations of the search spaces in which these adaptive systems operate and describes search algorithms that successfully operate in these spaces. Building these systems has taught us a number of lessons about the construction of modular and reconfigurable compilers. The paper describes some of the problems that we encountered and the solutions that we adopted. It also outlines a number of fertile areas for future research in adaptive compilation.

Item: Compilation Order Matters: Exploring the Structure of the Space of Compilation Sequences Using Randomized Search Algorithms (2004-06-18)
Almagor, L.; Cooper, Keith D.; Grosul, Alexander; Harvey, Timothy J.; Reeves, Steven W.; Subramanian, Devika; Torczon, Linda; Waterman, Todd
Most modern compilers operate by applying a fixed sequence of code optimizations, called a compilation sequence, to all programs. Compiler writers determine a small set of good, general-purpose compilation sequences by extensive hand-tuning over particular benchmarks. The compilation sequence makes a significant difference in the quality of the generated code; in particular, we know that a single universal compilation sequence does not produce the best results over all programs. Three questions arise in customizing compilation sequences: (1) What is the incremental benefit of using a customized sequence instead of a universal sequence? (2) What is the average computational cost of constructing a customized sequence? (3) When does the benefit exceed the cost? We present one of the first empirically derived cost-benefit tradeoff curves for custom compilation sequences. These curves are for two randomized sampling algorithms: descent with randomized restarts and genetic algorithms. They demonstrate the dominance of these two methods over simple random sampling in sequence spaces where the probability of finding a good sequence is very low. Further, these curves allow compilers to decide whether custom sequence generation is worthwhile, by explicitly relating the computational effort required to obtain a program-specific sequence to the incremental improvement in the quality of the code generated by that sequence.
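Descent with randomized restarts, one of the two sampling algorithms compared above, is easy to state concretely. The sketch below (Python; the pass list and placeholder cost function are my own, reusing nothing from the paper) walks to a local minimum through single-position changes and restarts from a fresh random sequence when it gets stuck:

```python
import random

PASSES = ["dead-code-elim", "const-prop", "value-numbering",
          "loop-unroll", "inline", "peephole"]   # illustrative names
LENGTH = 10

# Placeholder cost (distance to a hidden ordering) so the sketch
# runs; a real system would compile and measure the program.
_TARGET = [random.choice(PASSES) for _ in range(LENGTH)]
def evaluate(seq):
    return sum(a != b for a, b in zip(seq, _TARGET))

def descent_with_restarts(restarts=10):
    """Greedy descent over compilation sequences: repeatedly move
    to the best single-position replacement until no neighbor
    improves, then restart from a new random sequence."""
    best, best_cost = None, float("inf")
    for _ in range(restarts):
        seq = [random.choice(PASSES) for _ in range(LENGTH)]
        cost = evaluate(seq)
        while True:
            # All neighbors under single-position replacement.
            neighbors = [seq[:i] + [p] + seq[i + 1:]
                         for i in range(LENGTH) for p in PASSES
                         if p != seq[i]]
            cand = min(neighbors, key=evaluate)
            cand_cost = evaluate(cand)
            if cand_cost >= cost:      # local minimum reached
                break
            seq, cost = cand, cand_cost
        if cost < best_cost:
            best, best_cost = seq, cost
    return best, best_cost

print(descent_with_restarts())
```

Random sampling would simply call evaluate on independent random sequences; the paper's point is that in spaces where good sequences are rare, structured methods like this descent (and genetic algorithms) dominate that baseline.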