Browsing by Author "Schielke, Philip"
Now showing 1 - 3 of 3
Item: An Experimental Evaluation of List Scheduling (1998-09-30)
Cooper, Keith D.; Schielke, Philip; Subramanian, Devika
While altering the scope of instruction scheduling has a rich heritage in the compiler literature, instruction scheduling algorithms have received little coverage in recent times. The widely held belief is that greedy heuristic techniques such as list scheduling are "good" enough for most practical purposes. The evidence supporting this belief is largely anecdotal, with a few exceptions. In this paper we examine some hard evidence in support of list scheduling. To this end we present two alternative algorithms to list scheduling that use randomization: randomized backward and forward list scheduling, and iterative repair. Using these alternative algorithms we are better able to examine the conditions under which list scheduling performs well and poorly. Specifically, we explore the efficacy of list scheduling in light of available parallelism, the list scheduling priority heuristic, and the number of functional units. While the generic list scheduling algorithm does indeed perform quite well overall, there are important situations that may warrant the use of alternative algorithms.

Item: Issues in Instruction Scheduling (1998-09-15)
Schielke, Philip
Instruction scheduling is a code reordering transformation that attempts to hide latencies present in modern-day microprocessors. Current applications of these microprocessors, and the microprocessors themselves, present new parameters under which the scheduler must operate. For example, some multiple-functional-unit processors have partitioned register sets. In some applications, increasing the static size of a program may not be an acceptable tradeoff for gaining improved running time. The interaction between the scheduler and the register allocator can also dramatically affect the performance of the compiled code.
In this work we will look at global scheduling techniques that do not replicate code, including scheduling over extended basic blocks. We also look at a replacement for the traditional list scheduler based on the techniques of iterative repair. Finally, we explore the interaction between instruction scheduling and register allocation, and look at ways of combining the two problems.

Item: Stochastic Instruction Scheduling (2000-11-13)
Schielke, Philip
Instruction scheduling is a code reordering transformation used to hide latencies present in modern-day microprocessors. Scheduling is often critical in achieving peak performance from these processors. The designer of a compiler's instruction scheduler has many choices to make, including the scope of scheduling, the underlying scheduling algorithm, and the handling of interactions between scheduling and other transformations. List scheduling algorithms, and variants thereof, have been the dominant algorithms used by instruction schedulers for years. In this work we explore the strengths and weaknesses of this algorithm, aided by the use of stochastic scheduling techniques. We call these new techniques RBF (randomized backward and forward scheduling) and iterative repair (IR). We examine how the algorithms perform in a variety of contexts, including different scheduling scopes, different scheduling problem instances, different architectural features, and scheduling in the presence of register allocation. IR is a search framework enjoying a lot of attention in the artificial intelligence community. IR scheduling techniques have shown promise on other scheduling problems, such as shuttle mission scheduling. In this work we describe how to target the framework for compiler instruction scheduling. We describe the evolution of our algorithm, how to integrate register pressure concerns, and the technique's performance. We evaluate our alternative algorithms on a set of real applications and random instruction scheduling problems.
Not surprisingly, list scheduling performs very well when scheduling basic blocks of machine instructions. However, there is some opportunity for alternative techniques when scheduling over larger scopes and targeting more complicated architectures. We describe an interesting link between list scheduling efficacy and the amount of parallelism available in a random problem instance.

Increasingly we see complex microprocessors being used in embedded systems applications. Often the price of such systems is affected by the amount of on-board memory needed to store executable code for the microprocessor. Much work on instruction scheduling has improved the running time of scheduled code while sacrificing code size. Thus, finding scheduling techniques that do not increase code size is another focus of this work. We also develop techniques to decrease code size without increasing running time using genetic algorithms, another stochastic search method.
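None of the works listed above include code here, but the greedy list scheduling algorithm they all evaluate can be sketched briefly. The following is a minimal illustrative sketch, not the authors' implementation: it assumes a basic block modeled as a dependency DAG with per-instruction latencies, a critical-path-distance priority heuristic, and a machine that issues at most `num_units` instructions per cycle. All names and the specific heuristic are assumptions for illustration.

```python
# Illustrative greedy list scheduler for a basic block (not from the papers).
# Instructions form a DAG: deps[op] lists the ops that op depends on.
from collections import defaultdict

def list_schedule(latency, deps, num_units=1):
    """Greedily schedule ops by critical-path priority.

    latency: dict op -> cycles the op takes to produce its result
    deps:    dict op -> list of ops it depends on (absent key = no deps)
    Returns: dict op -> cycle at which the op is issued
    """
    # Build successor lists from the dependence edges.
    succs = defaultdict(list)
    for op, preds in deps.items():
        for p in preds:
            succs[p].append(op)

    # Priority heuristic: latency-weighted longest path to any leaf.
    prio = {}
    def critical_path(op):
        if op not in prio:
            prio[op] = latency[op] + max(
                (critical_path(s) for s in succs[op]), default=0)
        return prio[op]
    for op in latency:
        critical_path(op)

    unscheduled = set(latency)
    issue = {}
    cycle = 0
    while unscheduled:
        # An op is ready once every predecessor's result is available.
        ready = [op for op in unscheduled
                 if all(p in issue and issue[p] + latency[p] <= cycle
                        for p in deps.get(op, []))]
        ready.sort(key=lambda op: -prio[op])
        for op in ready[:num_units]:  # at most num_units issues per cycle
            issue[op] = cycle
            unscheduled.remove(op)
        cycle += 1
    return issue
```

The randomized variants studied in these works perturb exactly this loop: RBF randomizes tie-breaking in the priority order (scheduling both forward and backward), while iterative repair starts from a complete schedule and stochastically unschedules and reschedules conflicting regions.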