71. Basic Iterative Methods
In this chapter we learn:

- The Richardson iteration
- The gradient method
- The Chebyshev method
- The conjugate gradients method
- Using preconditioners
We are given a linear system of equations

\[
A x = b.
\]
The matrix \(A \in {\mathbb R}^{n\times n}\) is so large that direct elimination is not a good option. Although this section applies to linear systems in general, we think of equations arising from a finite element discretization. Other numerical methods for partial differential equations lead to similar systems. Matrices from graph networks are another example.
The situation is advantageous if the matrix is

- symmetric:
  \[
  A = A^T
  \]
- positive definite:
  \[
  x^T A x > 0 \qquad \forall \, 0 \neq x \in {\mathbb R}^n
  \]
We call symmetric and positive definite matrices SPD.
SPD matrices lead to the energy norm and the energy inner product defined as

\[
\| x \|_A := \sqrt{x^T A x}, \qquad \left< x, y \right>_A := x^T A y.
\]
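As a small illustration, the following numpy sketch evaluates the energy inner product and energy norm for a hypothetical \(2 \times 2\) SPD matrix; a Cholesky factorization serves as a practical test for positive definiteness. The matrix and vectors are examples only, not taken from the text above.

```python
import numpy as np

# hypothetical small SPD example matrix (illustration only)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# symmetry check, and Cholesky as a practical positive-definiteness test:
# np.linalg.cholesky raises LinAlgError if A is not positive definite
assert np.allclose(A, A.T)
np.linalg.cholesky(A)

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

inner_A = x @ A @ y            # energy inner product  <x, y>_A = x^T A y
norm_A = np.sqrt(x @ A @ x)    # energy norm           ||x||_A = sqrt(x^T A x)
print(inner_A, norm_A)
```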
We do not assume that the matrix is stored as a dense matrix, i.e. that all \(n^2\) entries are stored in a two-dimensional array.
We only assume that the matrix-vector product

\[
x \mapsto A x
\]

is computationally available.
Typically, the algorithmic complexity of the matrix-vector product is \(O(n)\).
Often one stores the matrix in a sparse matrix format, such as the CSR (Compressed Sparse Row) format; see
https://de.wikipedia.org/wiki/Compressed_Row_Storage and
https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html.
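As a minimal sketch of sparse storage in practice, the following example assembles a small tridiagonal matrix (an illustrative stand-in for a finite element matrix, with size and entries chosen only for demonstration) in CSR format with scipy and applies it to a vector; the matrix-vector product then costs \(O(\mathrm{nnz}) = O(n)\) operations.

```python
import numpy as np
import scipy.sparse as sp

n = 5
# tridiagonal model matrix (illustrative stand-in for a FEM stiffness matrix),
# stored in CSR format: only the nonzero entries are kept
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

x = np.ones(n)
y = A @ x          # sparse matrix-vector product, O(nnz) = O(n) operations

print(A.nnz)       # number of stored nonzeros (3n - 2 here)
print(y)
```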
More generally, the matrix-vector product can be any linear operator. We can think of the sum or product of linear operators, of multiplying with the inverse of the diagonal, or of solving a triangular (sparse) system by forward/backward substitution.
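A minimal sketch of this operator point of view, using scipy's `LinearOperator` interface (the names `A_op`, `Dinv`, and `Linv` are illustrative, not from the text): each object only provides a matrix-vector product, whether it is the matrix itself, multiplication with the inverse of the diagonal, or a lower triangular solve by forward substitution.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 5
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

# the matrix itself, wrapped as an abstract operator providing only a matvec
A_op = spla.LinearOperator((n, n), matvec=lambda x: A @ x)

# multiplication with the inverse of the diagonal (a Jacobi-type operator)
d = A.diagonal()
Dinv = spla.LinearOperator((n, n), matvec=lambda x: x / d)

# solving a lower triangular sparse system by forward substitution
L = sp.tril(A, format="csr")
Linv = spla.LinearOperator((n, n),
                           matvec=lambda x: spla.spsolve_triangular(L, x, lower=True))

x = np.ones(n)
print(A_op @ x)          # same result as A @ x
print(Dinv @ x)          # D^{-1} x
print(Linv @ x)          # L^{-1} x via forward substitution
print((Dinv * A_op) @ x) # the product of two operators is again an operator
```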