Invited talk 1
Title: On Optimizing Communication for Tensor Contractions: Theory and Practice
Presenter: Ponnuswamy Sadayappan (Ohio State University)
Abstract: Tensor contractions are higher-dimensional analogs of matrix-matrix multiplication. They constitute the compute-intensive core of many high-accuracy methods in computational chemistry, such as the coupled cluster method. We discuss a framework for systematic enumeration of a family of distributed algorithms for tensor contractions. Depending on the tensor shapes/sizes, the topology/size of the target computer system, and the per-node physical memory, a communication-optimal member of the family of algorithms can be identified. Several previously developed communication-efficient distributed matrix multiplication algorithms, such as the SUMMA algorithm and the so-called 3D and 2.5D algorithms, arise as special cases in this framework. The practical benefits of the distributed algorithms are demonstrated via experimental measurements.
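To make the opening statement concrete, the following sketch (illustrative only, not from the talk; all tensor shapes are hypothetical) shows a tensor contraction written with NumPy's einsum and verifies that it reduces to an ordinary matrix multiplication after flattening index groups:

```python
import numpy as np

# Contraction C[a,b,i,j] = sum_k A[a,b,k] * B[k,i,j], a higher-dimensional
# analog of matrix-matrix multiplication (shapes chosen arbitrarily).
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 4))   # indices a, b, k
B = rng.standard_normal((4, 5, 6))   # indices k, i, j

# Direct contraction over the shared index k.
C = np.einsum('abk,kij->abij', A, B)

# The same contraction as a plain matrix multiplication, after flattening
# (a,b) into one row index and (i,j) into one column index.
C_mm = (A.reshape(6, 4) @ B.reshape(4, 30)).reshape(2, 3, 5, 6)

assert np.allclose(C, C_mm)
```

This index-flattening view is also why matrix multiplication algorithms such as SUMMA generalize naturally to contractions.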
A second theme of the talk is the modeling of lower bounds on the data movement complexity of algorithms. Such lower bounds characterize the minimal data movement requirements of an algorithm, which must be satisfied by any possible implementation of it. Lower bounds on the data movement complexity of tensor contractions and other algorithms will be discussed.
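A classical instance of such a bound, stated here only for context (the talk treats more general contractions), is the Hong-Kung I/O lower bound for n-by-n matrix multiplication on a machine with fast memory of size M: the number of words Q moved between slow and fast memory satisfies

```latex
Q(n, M) \;=\; \Omega\!\left(\frac{n^3}{\sqrt{M}}\right)
```

Any implementation, regardless of loop order or blocking, must incur at least this much data movement.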
Biography: P. (Saday) Sadayappan is a Professor of Computer Science and Engineering at The Ohio State University. His current research centers on domain-specific compiler optimization and high-performance scientific computing. Recent projects include the PolyOpt polyhedral optimizer in the ROSE compiler and the Tensor Contraction Engine, a domain-specific compiler for automated synthesis of high-performance codes for tensor expressions arising in coupled cluster and other ab initio methods in quantum chemistry. Sadayappan is a Fellow of the IEEE.
Invited talk 2
Title: Performance Tuning of Next-Generation Sequencing Assembly via Gaussian Process Model with Branching and Nested Factors
Presenter: Ray-Bing Chen (National Cheng Kung University)
Abstract: For next-generation sequencing data, the choice of assembly tool and of its parameter settings has a great impact on the quality of de novo assembly. In this work, three assembly tools are considered: Velvet, SOAPdenovo, and ABySS. Some parameters are shared among the three tools, while others are specific to a particular tool, and this complex model structure makes choosing the proper assembly tool a challenging problem. We recast assembly tool selection as an optimization problem in computer experiments by treating the assembly tool as a branching factor and the parameters within the different tools as nested factors. Based on a Gaussian process model with branching and nested factors, a surrogate tuning procedure is proposed that selects the proper assembly tool and tunes the corresponding parameters simultaneously. The performance of the proposed procedure is demonstrated via several numerical experiments. Finally, a real example is used to illustrate the usefulness of the proposed procedure.
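A minimal sketch of the surrogate idea, under strong simplifications: an ordinary Gaussian process over a single continuous parameter (the branching/nested-factor kernel of the talk is not reproduced here, and the objective is a hypothetical stand-in for an assembly-quality score):

```python
import numpy as np

def rbf(X1, X2, length_scale=0.5):
    # Squared-exponential (RBF) covariance between two point sets.
    d = X1[:, None, :] - X2[None, :, :]
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1) / length_scale ** 2)

def gp_posterior_mean(X, y, X_new, noise=1e-8):
    # Posterior mean of a zero-mean GP conditioned on observations (X, y).
    K = rbf(X, X) + noise * np.eye(len(X))
    return rbf(X_new, X) @ np.linalg.solve(K, y)

def quality(x):
    # Illustrative objective, standing in for an assembly-quality metric
    # as a function of one tuning parameter (e.g. a k-mer size).
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 2.0, size=(6, 1))   # initial design points
y = quality(X).ravel()

# Surrogate step: predict quality over a candidate grid and propose the
# candidate with the best predicted value for the next real evaluation.
grid = np.linspace(0.0, 2.0, 201).reshape(-1, 1)
mean = gp_posterior_mean(X, y, grid)
proposal = grid[np.argmax(mean)]
```

In the talk's setting, the kernel would additionally encode the categorical branching factor (the tool) and activate only the nested parameters belonging to the selected tool.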
Biography: Ray-Bing Chen is currently a Professor in the Department of Statistics at National Cheng Kung University. He received his B.S. in Applied Mathematics from National Sun Yat-sen University in 1994 and his Ph.D. in Statistics from the University of California, Los Angeles in 2003. His research interests include statistical and machine learning, statistical modeling, computer experiments, and experimental design.