Using tiling to improve performance in a sparse symmetric...

Electrical computers: arithmetic processing and calculating – Electrical digital calculating computer – Particular function performed

Reexamination Certificate


Details

U.S. Classification: C708S520000
Type: Reexamination Certificate
Status: active
Patent number: 06470368

ABSTRACT:

BACKGROUND
1. Field of the Invention
The present invention relates to computer systems for performing sparse matrix computations. More particularly, the present invention relates to a method and an apparatus that use a hybrid approach to efficiently perform a cmod operation in solving a system of linear algebraic equations involving a sparse coefficient matrix.
2. Related Art
The solution of large sparse symmetric linear systems of equations constitutes the primary computational cost in numerous applications, such as finite-element design, linear programming, circuit simulation and semiconductor device modeling. Efficient solution of such systems has long been the subject of research, and considerable progress has been made in developing efficient algorithms to this end. A direct solution technique known as “Cholesky Factorization” is the most widely used approach to solve such a system. Under Cholesky factorization, the complete solution sequence requires many stages, including matrix reordering, symbolic factorization, numerical factorization and triangular solution. Of these stages, numerical factorization is typically the most computationally expensive.
One method of performing numerical factorization is based on a right-looking supernode-supernode method described in “Parallel Algorithms for Sparse Linear Systems” by Michael T. Heath, Esmond Ng and Barry W. Peyton, in “Parallel Algorithms for Matrix Computations” by Gallivan, et al. (Editors), SIAM (1994) (referred to as HNP). In a sparse matrix, a supernode is a set of contiguous columns that have essentially the same sparsity structure. Supernodes can be used to organize the numerical factorization stage around matrix-vector (supernode-column) and matrix-matrix (supernode-supernode) primitive operations leading to a substantial performance improvement arising from more efficient use of the processor caches and pipelining units.
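For concreteness, a supernode in such a solver is commonly stored as a dense, column-major block of nonzero values together with a single shared row-index pattern. The following C sketch shows this common layout; it is illustrative only and is not a data structure taken from the patent:

    /* Illustrative supernode layout (not the patent's own structure):
     * a set of contiguous columns sharing one row-index pattern, with
     * the nonzero values packed into a dense column-major block. */
    typedef struct {
        int  first_col;   /* index of the first column in the supernode      */
        int  ncols;       /* number of contiguous columns                    */
        int  nrows;       /* nonzero rows per column (length of row pattern) */
        int *row_index;   /* shared row indices, size nrows                  */
        double *values;   /* dense nrows-by-ncols block, column-major        */
    } Supernode;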
The numerical factorization step involves two fundamental sub-tasks:
(1) cdiv(j): division of column j of the factor by a scalar; and
(2) cmod(j,k): modification of column j by column k, k<j.
These sub-tasks can be organized around supernodes. For example, cdiv(j) can be organized as an internal factorization/update of supernode j, and cmod(j,k) can be organized as a modification of supernode j by supernode k, k<j.
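At column granularity, the two kernels can be sketched as follows. This is an illustrative dense-column version of standard column Cholesky (each column j stored in a length-n array), not the patent's supernodal code:

    #include <math.h>

    /* cdiv(j): divide column j by the square root of its diagonal entry. */
    void cdiv(double *colj, int n, int j) {
        double d = sqrt(colj[j]);
        for (int i = j; i < n; i++)
            colj[i] /= d;
    }

    /* cmod(j,k): modify column j by an earlier column k (k < j),
     * subtracting the multiple L(j,k) of column k from column j. */
    void cmod(double *colj, const double *colk, int n, int j) {
        double m = colk[j];          /* multiplier L(j,k) */
        for (int i = j; i < n; i++)
            colj[i] -= m * colk[i];
    }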
Typically, the second sub-task is where the bulk of the computational cost is incurred. In order to increase computational efficiency, the cmod(j,k) operation can be divided into three steps:
(a) computation of the update and accumulation into a temporary array;
(b) carrying out the non-zero index matching between the first columns of the source and destination supernodes and computing relative indices; and
(c) scattering updates from the temporary vector into the target destination supernode.
By dividing the cmod(j,k) operation in this way, it is possible to apply techniques used in dense matrix operations in step (a). Note that step (a) is where the dominant amount of time is spent. In the discussion that follows, we refer to step (a) as the “local dense cmod operation”. The local dense cmod operation involves computing a trapezoidal-shaped dense update that can be represented as a combination of a dense rank-k update and a matrix multiplication.
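As a sketch of step (a): if B (m1-by-k) holds the rows of the source supernode that align with the destination's leading columns and C (m2-by-k) holds the rows below them, the trapezoidal update consists of the lower triangle of the rank-k product B*B^T stacked on top of the matrix product C*B^T. The following unblocked C illustration is ours (matrix names and the accumulation layout are assumptions, and loop ordering/blocking are omitted for clarity):

    /* Accumulate the trapezoidal update into the temporary array t,
     * which is (m1+m2)-by-m1, column-major.  B is m1-by-k, C is m2-by-k. */
    void local_dense_cmod(const double *B, const double *C,
                          int m1, int m2, int k, double *t) {
        int m = m1 + m2;
        for (int j = 0; j < m1; j++)
            for (int p = 0; p < k; p++) {
                double bjp = B[j + p * m1];
                /* rank-k part: lower triangle of B * B^T */
                for (int i = j; i < m1; i++)
                    t[i + j * m] += B[i + p * m1] * bjp;
                /* matrix-multiply part: C * B^T */
                for (int i = 0; i < m2; i++)
                    t[m1 + i + j * m] += C[i + p * m2] * bjp;
            }
    }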
Library routines can be used to speed up the cmod computation. These library routines are typically written in assembly language and are hand-tuned for a specific machine architecture. For example, on the current generation of UltraSparc-II-based machines, the Sun Performance Library (see http://www.sun.com/workshop/performance/wp-perflib/) provides SPARC assembly-language implementations of BLAS1, BLAS2 and BLAS3 routines. These hand-tuned assembly-language implementations can yield performance close to the theoretical peak of the underlying processor.
For example, portions of the cmod operation can be efficiently performed by invoking the BLAS3 “dgemm” matrix multiplication code from the Sun Performance Library. Unfortunately, invoking the BLAS3 dgemm matrix multiplication code requires supernodes to be copied into and out of temporary storage because of incompatibility between the data structures used by a typical sparse solver and those expected by the dgemm API. This copying can add significant overhead. Consequently, using the BLAS3 “dgemm” matrix multiplication code only makes sense for supernodes above a certain size. Otherwise, the performance gains from using the BLAS3 “dgemm” library code are cancelled out by the additional overhead involved in copying.
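A minimal sketch of the resulting size-based dispatch appears below. The cutoff value and the small-case kernel are hypothetical, and the standard CBLAS interface to dgemm is used here for illustration in place of the Sun Performance Library's own API:

    #include <cblas.h>
    #include <stdlib.h>
    #include <string.h>

    #define DGEMM_THRESHOLD 32   /* hypothetical cutoff; the right value is
                                    machine-specific and found by tuning */

    /* Small case: direct triple loop over the supernode data, no copying.
     * (For simplicity this computes the full m-by-m1 rectangle rather than
     * only the trapezoid.) */
    static void small_cmod_kernel(const double *B, const double *C,
                                  int m1, int m2, int k, double *t) {
        int m = m1 + m2;
        for (int j = 0; j < m1; j++)
            for (int p = 0; p < k; p++) {
                double bjp = B[j + p * m1];
                for (int i = 0; i < m1; i++) t[i + j * m]      += B[i + p * m1] * bjp;
                for (int i = 0; i < m2; i++) t[m1 + i + j * m] += C[i + p * m2] * bjp;
            }
    }

    void cmod_dispatch(const double *B, const double *C,
                       int m1, int m2, int k, double *t) {
        int m = m1 + m2;
        if (m1 < DGEMM_THRESHOLD || k < DGEMM_THRESHOLD) {
            small_cmod_kernel(B, C, m1, m2, k, t);   /* avoids copy overhead */
            return;
        }
        /* Large case: copy the supernode data into a dgemm-friendly
         * contiguous buffer (this is the overhead discussed above). */
        double *src = malloc((size_t)m * k * sizeof *src);
        if (src == NULL) { small_cmod_kernel(B, C, m1, m2, k, t); return; }
        for (int p = 0; p < k; p++) {
            memcpy(src + (size_t)p * m,      B + (size_t)p * m1, m1 * sizeof *src);
            memcpy(src + (size_t)p * m + m1, C + (size_t)p * m2, m2 * sizeof *src);
        }
        /* t += src * B^T, an (m x m1) update */
        cblas_dgemm(CblasColMajor, CblasNoTrans, CblasTrans,
                    m, m1, k, 1.0, src, m, B, m1, 1.0, t, m);
        free(src);
    }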
Hence, what is needed is a system that performs the cmod operation using library routines only in cases where the resulting performance gains exceed the computational overhead of invoking those routines.
Another difficulty in attaining high performance in numerical factorization is that supernodes can vary in shape, size, and sparsity pattern, and these variations can greatly influence computational performance. In order to optimize computational performance for the cmod operation, supernodes of varying shapes and sizes must be divided into smaller sub-units for computation, so as to balance computational operations against memory references in a way that is tuned for the particular machine architecture on which the computation is to be run.
SUMMARY
One embodiment of the present invention provides a system for efficiently performing a cmod operation in solving a system of linear algebraic equations involving a sparse coefficient matrix. Note that CMOD and CDIV are used interchangeably with cmod and cdiv; both are described below in conjunction with FIG. 3. The system operates by identifying supernodes in the sparse matrix, wherein each supernode comprises a set of contiguous columns having a substantially similar pattern of non-zero elements. In solving the equation, the system performs a CMOD operation between a source supernode and a destination supernode. As part of this CMOD operation, the system determines a subset of the source supernode that will be used in creating an update for the destination supernode. The system partitions the subset into a plurality of tiles, each tile being a rectangle of fixed dimensions chosen so as to substantially optimize the computational performance of the CMOD operation on a particular computer architecture. For each tile in the plurality of tiles, the system computes the corresponding portion of the update for the destination supernode using a CMOD function. This computation may involve computer code that is specifically tailored to each tile's dimensions.
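To illustrate the tiling idea: the update region can be covered with fixed-size rectangular tiles, each handled by a kernel whose trip counts are compile-time constants, so the compiler can fully unroll and schedule the inner loops. The tile dimensions and kernel names below are hypothetical, not values taken from the patent:

    #define TILE_ROWS 8
    #define TILE_COLS 4

    /* Kernel specialized for one 8x4 tile of the update t = S * B^T,
     * where S is m-by-k and B is m1-by-k (both column-major). */
    static void cmod_tile_8x4(const double *src, int lds,   /* tile rows of S */
                              const double *mult, int ldm,  /* tile rows of B */
                              double *dst, int ldd, int k) {
        for (int p = 0; p < k; p++)
            for (int j = 0; j < TILE_COLS; j++)
                for (int i = 0; i < TILE_ROWS; i++)
                    dst[i + j * ldd] += src[i + p * lds] * mult[j + p * ldm];
    }

    /* Driver: walk the update region tile by tile. */
    void cmod_tiled(const double *S, int m, const double *B, int m1,
                    int k, double *t /* m-by-m1, leading dimension m */) {
        for (int cj = 0; cj + TILE_COLS <= m1; cj += TILE_COLS)
            for (int ri = 0; ri + TILE_ROWS <= m; ri += TILE_ROWS)
                cmod_tile_8x4(S + ri, m, B + cj, m1, t + ri + cj * m, m, k);
        /* Any remainder rows/columns would be covered by kernels for the
         * next-best tile shapes, as described below. */
    }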
In one embodiment of the present invention, the system pre-computes the computational performance of a plurality of different tile sizes.
In one embodiment of the present invention, while partitioning the subset of the source supernode into the plurality of tiles, the system covers as much of the subset as possible using a first tile shape having the best computational performance. If a remainder is left after the first tile shape has been used, the system covers as much of the remainder as possible using a second tile shape with the second-best computational performance.
Thus, based on the size of the supernode being processed, the system selects a substantially optimal tiling for computing the update.
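A sketch of this greedy selection, driven by a pre-computed performance table, might look as follows; it is shown in one dimension (columns only) for brevity. The table entries are invented for illustration and would in practice come from benchmarking candidate tile kernels on the target machine:

    typedef struct { int rows, cols; double mflops; } TileShape;

    /* Hypothetical benchmark results for one machine, sorted best-first.
     * The final 1x1 shape guarantees that any remainder can be covered. */
    static const TileShape shapes[] = {
        { 8, 4, 512.0 }, { 8, 2, 430.0 }, { 4, 2, 350.0 }, { 1, 1, 120.0 },
    };
    enum { NSHAPES = sizeof shapes / sizeof shapes[0] };

    /* Decide how many columns of a region of the given width each tile
     * shape should cover: best shape first, next-best on the remainder. */
    void plan_column_tiling(int width, int cols_used[NSHAPES]) {
        for (int s = 0; s < NSHAPES; s++) {
            cols_used[s] = (width / shapes[s].cols) * shapes[s].cols;
            width -= cols_used[s];   /* leave the rest for the next shape */
        }
    }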


REFERENCES:
patent: 5200915 (1993-04-01), Hayami et al.
patent: 5206822 (1993-04-01), Taylor
patent: 5392429 (1995-02-01), Agrawal et al.
patent: 5548798 (1996-08-01), King
patent: 5717621 (1998-02-01), Gupta et al.
patent: 5864786 (1999-01-01), Jericevic
Heath, et al.; “Parallel Algorithms for Sparse Linear Systems”; in Gallivan, et al. (Editors), Parallel Algorithms for Matrix Computations; Society for Industrial and Applied Mathematics; Copyright 1990; pp. 83-124.
