Invited talk 1
Title: Compiler Optimization, Specialization and Autotuning: Achieving Productivity and High Performance for Diverse Architectures
Presenter: Mary Hall (University of Utah, USA)
Abstract: As current and future architectures become increasingly diverse, the challenges of developing high-performance applications grow increasingly onerous. The goal of compiler optimization in high-performance computing is to take as input a computation that is architecture-independent and maintainable, and produce as output efficient implementations of the computation that are specialized for the target architecture. A compiler that is specialized for an application domain can tailor its optimization strategy to increase effectiveness. Autotuning empirically evaluates a search space of possible implementations of a computation to identify the implementation that best meets its optimization criteria (e.g., performance, power, or both). Combining these three concepts, autotuning compilers generate this search space of highly tuned implementations either automatically or with programmer guidance. This talk will explore the role of compiler-based specialization and autotuning in achieving very high levels of performance, comparable to what is obtained manually by experts. We will consider three case studies using this approach to highlight some of the aggressive optimizations required to achieve this goal: geometric multigrid and the stencil computations within it, tensor contractions, and sparse matrix computations.
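The core autotuning loop described in the abstract, empirically evaluating a search space of implementation variants and selecting the one that best meets the optimization criterion, can be sketched as follows. The variants and timing harness below are purely illustrative stand-ins for compiler-generated code, not material from the talk:

```python
import time

# Two hypothetical variants of the same computation (sum of squares),
# standing in for compiler-generated implementations in a search space.
def variant_python_loop(data):
    total = 0
    for x in data:
        total += x * x
    return total

def variant_generator(data):
    return sum(x * x for x in data)

def autotune(variants, data, trials=5):
    """Empirically time each variant; return the fastest (name, seconds)."""
    results = {}
    for v in variants:
        best = float("inf")
        for _ in range(trials):
            start = time.perf_counter()
            v(data)
            best = min(best, time.perf_counter() - start)
        results[v.__name__] = best
    winner = min(results, key=results.get)
    return winner, results[winner]

data = list(range(10_000))
winner, seconds = autotune([variant_python_loop, variant_generator], data)
```

In a real autotuning compiler the search space would consist of transformed versions of the same source (tiling sizes, unroll factors, data layouts), and the objective could equally be power or energy rather than time.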
Biography: Mary Hall is a professor in the School of Computing at the University of Utah. She received a PhD in Computer Science in 1991 from Rice University. Her research focuses on compiler technology for exploiting performance-enhancing features of a variety of computer architectures: automatic parallelization for multi-cores and GPUs, superword-level parallelism, processing-in-memory architectures, and FPGAs. Professor Hall is an ACM Distinguished Scientist and ACM’s representative on the Computing Research Association Board of Directors. She is deeply interested in computing history, having served on the ACM History Committee since 2005, and as its chair from 2009 to 2014. She also actively participates in outreach programs to encourage the participation of women and underrepresented minorities in computer science.
Invited talk 2
Title: Models, Math, and Multiple Objectives in Automatic Performance Tuning
Presenter: Paul Hovland (Argonne National Laboratory, USA)
Abstract: Automatic performance tuning (autotuning) can benefit from the use of models to guide or restrict the search process. We discuss a variety of methods for constructing such models, ranging from fully empirical models constructed using active learning to fully analytic models constructed using source code analysis. Mathematics can aid autotuning both by providing a high-level description of the computation to be tuned and by providing a framework for developing search algorithms. We describe autotuning for tensor contractions and the adaptation of mathematical optimization algorithms to autotuning search. Historically, automatic performance tuning has focused on minimizing execution time. However, on emerging architectures, power, energy, and memory footprint are also important considerations. We examine strategies for dealing with multiple, possibly competing objectives.
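With multiple, possibly competing objectives such as execution time and energy, no single variant need be best at everything; a common strategy is to report the Pareto-optimal set of variants. A minimal sketch, with hypothetical (seconds, joules) measurements for illustrative variant names:

```python
def pareto_front(candidates):
    """Return the names of candidates not dominated in (time, energy).
    A point dominates another if it is no worse in both objectives
    and strictly better in at least one."""
    front = []
    for name, (t, e) in candidates.items():
        dominated = any(
            (t2 <= t and e2 <= e) and (t2 < t or e2 < e)
            for n2, (t2, e2) in candidates.items()
            if n2 != name
        )
        if not dominated:
            front.append(name)
    return sorted(front)

# Hypothetical measurements: (seconds, joules) per implementation variant.
measurements = {
    "unrolled": (1.0, 8.0),   # fastest, but energy-hungry
    "tiled":    (1.2, 5.0),   # slower, but most energy-efficient
    "baseline": (1.5, 9.0),   # dominated: worse in both objectives
}
```

Here "unrolled" and "tiled" each win on one objective and survive, while "baseline" is dominated and discarded; the final choice among the front is left to the user's priorities.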
Biography: Paul Hovland's research focuses on program analysis and transformation tools for high performance scientific computing applications. He holds a B.S. in computer engineering and an M.S. in computer science from Michigan State University. He received his Ph.D. in computer science with a computational science and engineering option from the University of Illinois at Urbana-Champaign, advised by Michael T. Heath. He is a Senior Computer Scientist, Deputy Division Director, and the Strategic Lead for Applied Mathematics in the Mathematics and Computer Science Division at Argonne National Laboratory.