Performance analysis for parallel programs from multicore to petascale

dc.contributor.advisor  Mellor-Crummey, John
dc.creator  Tallent, Nathan Russell
dc.date.accessioned  2011-07-25T02:06:22Z
dc.date.available  2011-07-25T02:06:22Z
dc.date.issued  2010
dc.description.abstract  Cutting-edge science and engineering applications require petascale computing. Petascale computing platforms are characterized by both extreme parallelism (systems of hundreds of thousands to millions of cores) and hybrid parallelism (nodes with multicore chips). Consequently, to effectively use petascale resources, applications must exploit concurrency at both the node and system level --- a difficult problem. The challenge of developing scalable petascale applications is only partially aided by existing languages and compilers. As a result, manual performance tuning is often necessary to identify and resolve poor parallel and serial efficiency. Our thesis is that it is possible to achieve unique, accurate, and actionable insight into the performance of fully optimized parallel programs by measuring them with asynchronous-sampling-based call path profiles; attributing the resulting binary-level measurements to source code structure; analyzing measurements on-the-fly and postmortem to highlight performance inefficiencies; and presenting the resulting context-sensitive metrics in three complementary views. To support this thesis, we have developed several techniques for identifying performance problems in fully optimized serial, multithreaded and petascale programs. First, we describe how to attribute very precise (instruction-level) measurements to source-level static and dynamic contexts in fully optimized applications --- all for an average run-time overhead of a few percent. We then generalize this work with the development of logical call path profiling and apply it to work-stealing-based applications. Second, we describe techniques for pinpointing and quantifying parallel inefficiencies such as parallel idleness, parallel overhead and lock contention in multithreaded executions. Third, we show how to diagnose scalability bottlenecks in petascale applications by scaling our measurement, analysis and presentation tools to support large-scale executions. Finally, we provide a coherent framework for these techniques by sketching a unique and comprehensive performance analysis methodology. This work forms the basis of Rice University's HPCTOOLKIT performance tools.
dc.format.mimetype  application/pdf
dc.identifier.callno  THESIS E.E. 2010 TALLENT
dc.identifier.citation  Tallent, Nathan Russell. "Performance analysis for parallel programs from multicore to petascale." (2010) Diss., Rice University. https://hdl.handle.net/1911/62106.
dc.identifier.uri  https://hdl.handle.net/1911/62106
dc.language.iso  eng
dc.rights  Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder.
dc.subject  Computer science
dc.subject  Applied sciences
dc.title  Performance analysis for parallel programs from multicore to petascale
dc.type  Thesis
dc.type.material  Text
thesis.degree.department  Electrical Engineering
thesis.degree.discipline  Engineering
thesis.degree.grantor  Rice University
thesis.degree.level  Doctoral
thesis.degree.name  Doctor of Philosophy
Files
Original bundle
Name: 3421196.PDF
Size: 12.42 MB
Format: Adobe Portable Document Format
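
The abstract above refers to measurement with asynchronous-sampling-based call path profiles. The sketch below illustrates only that general technique in C, under simplifying assumptions: it uses a SIGPROF interval timer and glibc's backtrace() rather than HPCToolkit's binary-analysis-based unwinder, and a flat table of sampled paths rather than a calling context tree. The file name sampler.c and the names on_sample, work, MAX_DEPTH, and MAX_PATHS are illustrative, not taken from HPCToolkit.

/*
 * sampler.c: a minimal sketch of asynchronous-sampling call path profiling.
 * A SIGPROF interval timer interrupts the program; the handler unwinds the
 * call stack and counts how many samples land on each distinct call path.
 * This shows the general idea only; it is not HPCToolkit's implementation.
 */
#include <execinfo.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

#define MAX_DEPTH 64     /* maximum call-path depth recorded per sample */
#define MAX_PATHS 1024   /* maximum number of distinct paths tracked    */

typedef struct {
    void *pc[MAX_DEPTH]; /* return addresses of one sampled call path */
    int   depth;
    long  count;         /* samples attributed to this path */
} path_t;

static path_t paths[MAX_PATHS];
static int    npaths = 0;

/* SIGPROF handler: record the current call path.  (backtrace() is not
 * strictly async-signal-safe; production profilers use purpose-built,
 * signal-safe unwinders.) */
static void on_sample(int sig) {
    (void)sig;
    void *pc[MAX_DEPTH];
    int depth = backtrace(pc, MAX_DEPTH);

    for (int i = 0; i < npaths; i++) {
        if (paths[i].depth == depth &&
            memcmp(paths[i].pc, pc, depth * sizeof(void *)) == 0) {
            paths[i].count++;
            return;
        }
    }
    if (npaths < MAX_PATHS) {
        memcpy(paths[npaths].pc, pc, depth * sizeof(void *));
        paths[npaths].depth = depth;
        paths[npaths].count = 1;
        npaths++;
    }
}

/* An arbitrary workload to sample. */
static double work(long n) {
    double s = 0.0;
    for (long i = 1; i <= n; i++) s += 1.0 / (double)i;
    return s;
}

int main(void) {
    /* Deliver SIGPROF roughly every millisecond of CPU time. */
    signal(SIGPROF, on_sample);
    struct itimerval it = { { 0, 1000 }, { 0, 1000 } };
    setitimer(ITIMER_PROF, &it, NULL);

    double s = work(200000000L);

    /* Stop sampling, then report each observed call path and its sample count. */
    struct itimerval off = { { 0, 0 }, { 0, 0 } };
    setitimer(ITIMER_PROF, &off, NULL);

    printf("result = %f; distinct call paths = %d\n", s, npaths);
    for (int i = 0; i < npaths; i++) {
        printf("path %d: %ld samples\n", i, paths[i].count);
        backtrace_symbols_fd(paths[i].pc, paths[i].depth, 1);
    }
    return 0;
}

Compile with something like gcc -O2 -g -rdynamic sampler.c -o sampler so backtrace_symbols_fd() can resolve function names. The output is a flat analogue of the calling-context profiles the abstract describes, without the attribution to source-level static and dynamic contexts that the thesis adds on top of such measurements.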