Reinforcement Learning for Automated Performance Tuning: Initial Evaluation
for Sparse Matrix Format Selection
Warren Armstrong
(Australian National University)
Alistair P. Rendell
(Australian National University)
The field of reinforcement learning has developed techniques for
choosing beneficial actions within a dynamic environment. Such
techniques learn from experience and do not require an explicit teacher.
This paper explores how reinforcement learning techniques might
be used to determine the optimal storage format for sparse
matrices. Three different storage formats are considered:
coordinate format, compressed sparse row, and blocked compressed
sparse row. Which format performs best depends heavily on the
nature of the matrix and the computer system being used.
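To make the cost trade-offs between formats concrete, the following is a minimal sketch (not taken from the paper) of a sparse matrix-vector product using compressed sparse row storage, where `data`, `indices`, and `indptr` follow the usual CSR convention:

```python
# Sketch: y = A*x with A stored in compressed sparse row (CSR) format.
# data    - non-zero values, row by row
# indices - column index of each non-zero
# indptr  - indptr[i]..indptr[i+1] delimits the non-zeros of row i
def csr_spmv(data, indices, indptr, x):
    n_rows = len(indptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# 2x2 example: A = [[1, 0], [2, 3]], x = [1, 1]
print(csr_spmv([1.0, 2.0, 3.0], [0, 0, 1], [0, 1, 3], [1.0, 1.0]))
# -> [1.0, 5.0]
```

Coordinate format instead stores an explicit (row, column, value) triple per non-zero, and blocked CSR stores small dense blocks; which layout runs fastest depends on the matrix structure and the machine.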
To test this approach, a program has been written to generate a
series of sparse matrices, each of which performs optimally with
one of the three storage formats. For each matrix a number of
sparse matrix-vector products are performed, and the goal of the
learning agent is to predict the optimal sparse matrix storage
format for that matrix.
The proposed agent uses five attributes of the sparse matrix
(the numbers of rows and columns, the number of non-zero
elements, the standard deviation of the number of non-zeroes per
row, and the mean number of neighbours), and is characterized by
two parameters (an exploration rate and a parameter that
determines how the state space is partitioned). The ability of
the agent to successfully predict the optimal storage format is
analyzed for a series of 1000 automatically generated test matrices.
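One common way to realize an agent with an exploration rate and a partitioned state space is epsilon-greedy action selection over discretized attributes. The sketch below is purely illustrative; the class name, the bin-width partitioning scheme, and the update rule are assumptions, not the paper's actual design:

```python
import random

# Hypothetical epsilon-greedy agent choosing among three sparse
# storage formats. Matrix attributes are discretized into a state
# by a bin-width parameter, which stands in for the paper's
# state-space partitioning parameter.
FORMATS = ["COO", "CSR", "BCSR"]

class FormatAgent:
    def __init__(self, epsilon=0.1, bin_width=1000):
        self.epsilon = epsilon      # exploration rate
        self.bin_width = bin_width  # controls state-space partitioning
        self.q = {}                 # (state, format) -> estimated reward

    def state(self, attrs):
        # attrs: tuple of matrix attributes (rows, columns, non-zeros, ...)
        return tuple(int(a // self.bin_width) for a in attrs)

    def choose(self, attrs):
        s = self.state(attrs)
        if random.random() < self.epsilon:
            return random.choice(FORMATS)          # explore
        return max(FORMATS,                        # exploit best estimate
                   key=lambda f: self.q.get((s, f), 0.0))

    def update(self, attrs, fmt, reward, alpha=0.5):
        # Move the estimate toward the observed reward (e.g. measured
        # SpMV performance for this matrix under format fmt).
        key = (self.state(attrs), fmt)
        old = self.q.get(key, 0.0)
        self.q[key] = old + alpha * (reward - old)
```

With the exploration rate set to zero the agent always exploits, so after observing that CSR rewards exceed COO rewards for a given state it predicts CSR for matrices falling in that state.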