
Sparse Matrix Methods

Day 1: Overview
- Matlab and examples
- Data structures
- Ax = b
- Sparse matrices and graphs
- Fill-reducing matrix permutations
- Matching and block triangular form

Day 2: Direct methods
Day 3: Iterative methods

(Demo in Matlab)
- sprand, spy, sparse, full
- matrix arithmetic and indexing
- examples of sparse matrices from different applications (from the UF Sparse Matrix Collection)
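A minimal sketch of the kind of demo this slide refers to (the matrix size and density here are arbitrary choices):

    % Build a random 1000-by-1000 sparse matrix with about 1% nonzeros.
    A = sprand(1000, 1000, 0.01);
    spy(A);                 % plot the nonzero structure
    nnz(A)                  % number of stored nonzeros
    B = full(A);            % convert to dense storage
    S = sparse(B);          % and back to sparse
    C = A + A';             % arithmetic on sparse operands stays sparse
    row = A(10, :);         % indexing works as for full matrices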

Matlab sparse matrices: Design principles


- All operations should give the same results for sparse and full matrices (almost all)
- Sparse matrices are never created automatically, but once created they propagate
- Performance is important, but usability, simplicity, completeness, and robustness are more important
- Storage for a sparse matrix should be O(nonzeros)
- Time for a sparse operation should be O(flops) (as nearly as possible)
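A quick illustration of the storage and propagation principles (sizes chosen arbitrarily):

    % Sparse storage is proportional to the nonzeros, not to n^2.
    n = 10000;
    S = speye(n);           % sparse identity: O(n) storage
    F = eye(1000);          % full identity: O(n^2) storage
    whos S F                % compare the bytes used

    % Sparsity is never introduced silently, but it propagates:
    T = 2*S + S*S;          % result of sparse operands is sparse
    issparse(T)             % true
    G = full(S(1:5, 1:5));  % density only on explicit request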

Data structures
Example: the 3-by-3 matrix

    31   0  53
     0  59   0
    41  26   0

- Full: 2-dimensional array of real or complex numbers; (nrows*ncols) memory. The example is stored column-major as 31 0 41 0 59 26 53 0 0.
- Sparse: compressed column storage; about (1.5*nzs + .5*ncols) memory. Only the nonzeros 31 41 59 26 53 are stored, along with their row indices and a pointer to the first entry of each column.
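The same example in MATLAB (a sketch; find reports the stored triplets, while the internal layout is the compressed column structure):

    % The example matrix from this slide, stored sparsely.
    A = sparse([31 0 53; 0 59 0; 41 26 0]);

    [i, j, v] = find(A);    % row indices, column indices, values
    % v = [31 41 59 26 53]' : the nonzeros in column-major order
    nnz(A)                  % 5 stored entries
    nzmax(A)                % amount of storage allocated for nonzeros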

(Demos in Matlab)
- A\b
- Cholesky factorization and orderings

Solving linear equations: x = A \ b;

- Is A square? no => use QR to solve the least squares problem
- Is A triangular or permuted triangular? yes => sparse triangular solve
- Is A symmetric with positive diagonal elements? yes => attempt Cholesky after symmetric minimum degree ordering
- Otherwise => use LU on A(:, colamd(A))
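A sketch of exercising these paths directly (the test matrices are arbitrary choices):

    % Backslash inspects the structure of A and picks a method.
    A = gallery('poisson', 30);    % sparse symmetric positive definite
    b = rand(size(A, 1), 1);
    x = A \ b;                     % dispatches to sparse Cholesky here
    norm(A*x - b)                  % residual check

    % The nonsymmetric path, made explicit:
    B = sprand(500, 500, 0.01) + speye(500);
    p = colamd(B);
    [L, U, P] = lu(B(:, p));       % LU on the column-permuted matrix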

Matrix factorizations in Matlab


- Cholesky: R = chol(A); simple left-looking column algorithm
- Nonsymmetric LU: [L,U,P] = lu(A); left-looking GPMOD, depth-first search, symmetric pruning
- Orthogonal: [Q,R] = qr(A); George-Heath algorithm: row-wise Givens rotations
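Checking the three factorizations on a small example (a sketch; the test matrix is arbitrary, and qr is run on a dense copy for simplicity):

    A = gallery('poisson', 10);    % sparse symmetric positive definite

    R = chol(A);                   % upper triangular, R'*R = A
    norm(R'*R - A, 'fro')

    [L, U, P] = lu(A);             % P*A = L*U with pivoting
    norm(P*A - L*U, 'fro')

    [Q, Rq] = qr(full(A));         % A = Q*Rq
    norm(Q*Rq - full(A), 'fro')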

Graphs and Sparse Matrices: Cholesky factorization

Fill: new nonzeros in factor

Symmetric Gaussian elimination:

    for j = 1 to n
        add edges between j's higher-numbered neighbors

[Figure: the graph G(A) and the filled graph G+(A); G+(A) is chordal]

Elimination Tree
[Figure: the Cholesky factor, the filled graph G+(A), and the elimination tree T(A)]

T(A) : parent(j) = min { i > j : (i,j) in G+(A) }


- T describes dependencies among columns of the factor
- Can compute T from G(A) in almost linear time
- Can compute G+(A) easily from T
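In MATLAB, the elimination tree and the fill counts are available directly (a sketch; the test matrix is an arbitrary choice):

    A = gallery('poisson', 4);     % small SPD test matrix
    parent = etree(A);             % parent(j) of each column in T(A)
    etreeplot(A);                  % draw the elimination tree

    count = symbfact(A);           % row counts of the Cholesky factor
    fill = sum(count) - nnz(triu(A))   % nonzeros added beyond those of A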

(Demos in Matlab)
- matrix and graph
- elimination tree
- orderings in detail

Fill-reducing matrix permutations


- Minimum degree: eliminate the row/col with the fewest nonzeros, add fill, repeat
  Theory: can be suboptimal even on the 2D model problem
  Practice: often wins for medium-sized problems
- Nested dissection: find a separator, number it last, proceed recursively
  Theory: approximately optimal separators => approximately optimal fill and flop count
  Practice: often wins for very large problems
- Banded orderings (Reverse Cuthill-McKee, Sloan, ...): try to keep all nonzeros close to the diagonal
  Theory, practice: often wins for long, thin problems

The best modern general-purpose orderings are ND/MD hybrids.

Fill-reducing permutations in Matlab


- Nonsymmetric approximate minimum degree: p = colamd(A);
  column permutation: lu(A(:,p)) is often sparser than lu(A); also useful for QR factorization
- Symmetric approximate minimum degree: p = symamd(A);
  symmetric permutation: chol(A(p,p)) is often sparser than chol(A)
- Reverse Cuthill-McKee: p = symrcm(A);
  A(p,p) often has smaller bandwidth than A; similar to Sparspak RCM
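Measuring the effect of an ordering on fill and bandwidth (a sketch on an arbitrary test matrix):

    A = gallery('poisson', 60);    % 2D model problem, SPD

    nnz(chol(A))                   % fill under the natural ordering
    p = symamd(A);
    nnz(chol(A(p, p)))             % typically far fewer nonzeros

    r = symrcm(A);
    [lo1, up1] = bandwidth(A);
    [lo2, up2] = bandwidth(A(r, r));   % RCM shrinks the bandwidth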

(Demos in Matlab)
- nonsymmetric LU
- dmperm, dmspy, components

Matching and block triangular form


- Dulmage-Mendelsohn decomposition: bipartite matching followed by strongly connected components
- Square, full-rank A: [p, q, r] = dmperm(A);
  A(p,q) has a nonzero diagonal and is in block upper triangular form
  also gives: strongly connected components of a directed graph, connected components of an undirected graph
- Arbitrary A: [p, q, r, s] = dmperm(A);
  maximum-size matching in a bipartite graph
  minimum-size vertex cover in a bipartite graph
  decomposition into strong Hall blocks
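A sketch of the square, full-rank case (the test matrix is an arbitrary choice, nudged onto a nonzero diagonal):

    A = sprand(100, 100, 0.02) + speye(100);

    [p, q, r] = dmperm(A);     % row perm, column perm, block boundaries
    B = A(p, q);               % block upper triangular, nonzero diagonal
    spy(B);
    numblocks = length(r) - 1  % rows r(k):r(k+1)-1 form the k-th block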

Complexity of direct methods

Model problems: a 2D mesh with n^(1/2) vertices on a side, and a 3D mesh with n^(1/3) vertices on a side.

                    2D            3D
    Space (fill):   O(n log n)    O(n^(4/3))
    Time (flops):   O(n^(3/2))    O(n^2)
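One way to see the 2D scaling empirically, using a fill-reducing ordering (a sketch; the grid sizes are arbitrary):

    % Fill of ordered sparse Cholesky on k-by-k model problems (n = k^2).
    for k = [32 64 128]
        A = gallery('poisson', k);
        n = size(A, 1);
        p = symamd(A);                     % fill-reducing ordering
        c = sum(symbfact(A(p, p)));        % nnz of the Cholesky factor
        fprintf('n = %6d  nnz(R) = %8d  ratio to n*log2(n) = %.2f\n', ...
                n, c, c/(n*log2(n)));
    end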

The Landscape of Sparse Ax=b Solvers


                          Nonsymmetric           Symmetric positive definite
    Direct (A = LU)       Pivoting LU            Cholesky
    Iterative (y' = Ay)   GMRES, QMR, ...        Conjugate gradient

Moving right: more robust. Moving left: more general. Moving down: less storage.

(Demos in Matlab)
- conjugate gradients
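A minimal conjugate-gradient sketch using the built-in pcg (the matrix, tolerance, and iteration cap are arbitrary choices):

    A = gallery('poisson', 50);    % symmetric positive definite
    b = ones(size(A, 1), 1);

    [x, flag, relres, iter] = pcg(A, b, 1e-8, 500);

    % An incomplete-Cholesky preconditioner usually cuts the iteration count:
    L = ichol(A);
    [x2, ~, ~, iter2] = pcg(A, b, 1e-8, 500, L, L');
    [iter iter2]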
