77. Additive Schwarz Methods

In this lecture

  • we define additive and multiplicative sub-space correction methods

  • we prove the Additive Schwarz Lemma as a tool for the analysis of additive Schwarz methods

The method is named after Hermann Schwarz, who used this technique for proving the existence of solutions of PDEs.

Literature: Xu, Jinchao: Iterative methods by space decomposition and subspace correction. SIAM Rev. 34 (1992), no. 4, 581–613 https://epubs.siam.org/doi/10.1137/1034116

77.1. Block-Jacobi and general additive Schwarz preconditioners#

Let $e_i$ be the $i$-th unit vector. The diagonal entry of the matrix $A$ is $a_{ii} = e_i^T A e_i$. We can represent the Jacobi preconditioner as

$$
C_{Jac}^{-1} = (\operatorname{diag} A)^{-1} = \sum_{i=1}^N e_i \, (e_i^T A e_i)^{-1} \, e_i^T
$$
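This rank-one sum representation can be checked numerically; a minimal NumPy sketch, where the SPD test matrix is an illustrative assumption:

```python
import numpy as np

# an arbitrary symmetric positive definite test matrix (illustrative)
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)
N = A.shape[0]

# C_Jac^{-1} = sum_i e_i (e_i^T A e_i)^{-1} e_i^T
Cinv = np.zeros_like(A)
for i in range(N):
    e = np.zeros(N)
    e[i] = 1.0
    Cinv += np.outer(e, e) / (e @ A @ e)

# this agrees with the inverse of diag(A)
assert np.allclose(Cinv, np.diag(1.0 / np.diag(A)))
```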

A block-Jacobi preconditioner is

$$
C_{bJac} = \begin{pmatrix}
A_{11} & 0 & \cdots & 0 \\
0 & A_{22} & & \vdots \\
\vdots & & \ddots & 0 \\
0 & \cdots & 0 & A_{mm}
\end{pmatrix},
$$

where we have $m$ diagonal blocks of sizes $N_i$.

For $i = 1, \ldots, m$ we define embedding matrices $E_i$ consisting of the unit vectors corresponding to the $i$-th block:

$$
E_i = \begin{pmatrix} 0 \\ I_{N_i} \\ 0 \end{pmatrix}
= (e_{i,1}, \ldots, e_{i,N_i}) \in \mathbb{R}^{N \times N_i}
$$

We can write the diagonal blocks as

$$
A_{ii} = E_i^T A E_i \in \mathbb{R}^{N_i \times N_i}
$$

and the whole block-Jacobi preconditioner as

$$
C_{bJac} = \sum_{i=1}^m E_i A_{ii} E_i^T,
$$

and the preconditioning action as

$$
C_{bJac}^{-1} = \sum_{i=1}^m E_i A_{ii}^{-1} E_i^T
$$
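A small NumPy sketch of this construction; the test matrix and the block sizes are illustrative assumptions:

```python
import numpy as np

# illustrative SPD test matrix
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)
N = A.shape[0]

# two blocks of sizes N_1 = 2 and N_2 = 4 (illustrative choice)
blocks = [range(0, 2), range(2, 6)]

C = np.zeros_like(A)     # C_bJac   = sum_i E_i A_ii E_i^T
Cinv = np.zeros_like(A)  # C_bJac^{-1} = sum_i E_i A_ii^{-1} E_i^T
for idx in blocks:
    E = np.eye(N)[:, list(idx)]  # embedding matrix E_i
    Aii = E.T @ A @ E            # diagonal block A_ii = E_i^T A E_i
    C += E @ Aii @ E.T
    Cinv += E @ np.linalg.inv(Aii) @ E.T

# the two sums are inverse to each other
assert np.allclose(C @ Cinv, np.eye(N))
```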

Next, we let the $E_i$ be more general full-rank, typically tall rectangular matrices. Let $A_i = E_i^T A E_i \in \mathbb{R}^{N_i \times N_i}$. Then we call the preconditioner $C_{ASM}$ defined by the action

$$
C_{ASM}^{-1} = \sum_{i=1}^m E_i A_i^{-1} E_i^T
$$

an additive Schwarz preconditioner. In general we do not have a simple representation of $C_{ASM}$ itself, as we had for the block-Jacobi preconditioner.

The action

$$
C_{ASM}^{-1} : \mathbb{R}^N \to \mathbb{R}^N : r \mapsto w
$$

can be split into the following steps:

$$
\begin{array}{ll}
r_i = E_i^T r & \text{extract residual on sub-space} \\
w_i = A_i^{-1} r_i & \text{solve problem on sub-space} \\
w = \sum_{i=1}^m E_i w_i & \text{embed and combine sub-space solutions}
\end{array}
$$
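The three steps can be sketched in NumPy as follows; the test matrix and the overlapping index sets defining the $E_i$ are illustrative assumptions:

```python
import numpy as np

# illustrative SPD test matrix
rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)
N = A.shape[0]

# overlapping sub-spaces, each E_i consisting of columns of the identity
subsets = [range(0, 3), range(2, 5), range(4, 6)]
Es = [np.eye(N)[:, list(s)] for s in subsets]

def asm_apply(r):
    """Action w = C_ASM^{-1} r, following the three steps above."""
    w = np.zeros(N)
    for E in Es:
        ri = E.T @ r                            # extract residual on sub-space
        wi = np.linalg.solve(E.T @ A @ E, ri)   # solve sub-space problem A_i w_i = r_i
        w += E @ wi                             # embed and combine
    return w

r = rng.standard_normal(N)
w = asm_apply(r)

# agrees with the assembled sum C_ASM^{-1} = sum_i E_i A_i^{-1} E_i^T
Cinv = sum(E @ np.linalg.inv(E.T @ A @ E) @ E.T for E in Es)
assert np.allclose(w, Cinv @ r)
```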

77.2. Sub-spaces of $\mathbb{R}^N$ and of Hilbert spaces

The definition and implementation of the ASM preconditioner happens in linear algebra, i.e. in $\mathbb{R}^N$. But for the analysis of the preconditioner we reformulate it in the finite element space $V_h$ and its sub-spaces.

We define the so-called Galerkin isomorphism

$$
G : \mathbb{R}^N \to V_h : u \mapsto u_h = \sum_{i=1}^N u_i \varphi_i.
$$

Every finite element function $u_h$ can be identified with its coefficient vector $u$ with respect to the basis functions $\{\varphi_i\}$.

For $u, v \in \mathbb{R}^N$ there holds

$$
v^T A u = A(G u, G v),
$$

where $A$ is the Galerkin discretization matrix with entries $A_{ji} = A(\varphi_i, \varphi_j)$.
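This identity can be checked in a minimal 1D setting. The sketch below assumes the bilinear form $A(u,v) = \int_0^1 u' v' \, dx$ with P1 hat functions on a uniform mesh; this concrete choice is an illustrative assumption, not fixed by the text:

```python
import numpy as np

# 1D mesh on [0,1] with n elements; P1 hat basis, all N = n+1 nodal dofs kept
n = 8
h = 1.0 / n
N = n + 1

# assemble A_ji = A(phi_i, phi_j) for A(u,v) = int u' v' dx, elementwise
A = np.zeros((N, N))
for e in range(n):
    A[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h

rng = np.random.default_rng(3)
u, v = rng.standard_normal(N), rng.standard_normal(N)

# A(Gu, Gv): Gu is piecewise linear, its derivative is constant per element
du, dv = np.diff(u) / h, np.diff(v) / h
bilinear = np.sum(du * dv * h)

# v^T A u = A(Gu, Gv)
assert np.isclose(v @ A @ u, bilinear)
```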

The adjoint of the Galerkin isomorphism is a mapping from the dual of $V_h$, i.e. from the linear forms on $V_h$, into $\mathbb{R}^N$:

$$
G^\ast : V_h^\ast \to \mathbb{R}^N : r(\cdot) \mapsto \big( r(\varphi_i) \big)_{i=1}^N
$$

The embedding matrices $E_i \in \mathbb{R}^{N \times N_i}$ define sub-spaces

$$
V_i = G \, \operatorname{range}\{E_i\} = \{ G E_i v : v \in \mathbb{R}^{N_i} \} \subset V_h
$$

The ASM preconditioner can be reformulated by means of variational problems on sub-spaces:

Lemma: Let $r(\cdot) \in V_h^\ast$. Then

$$
w = G \, C_{ASM}^{-1} \, G^\ast r \in V_h
$$

is given as

$$
w = \sum_{i=1}^m w_i \qquad \text{with} \qquad w_i \in V_i : \; A(w_i, v_i) = r(v_i) \quad \forall \, v_i \in V_i
$$

Proof: Express $w_i = G E_i \underline{w}_i$ and $v_i = G E_i \underline{v}_i$ with coefficient vectors $\underline{w}_i, \underline{v}_i \in \mathbb{R}^{N_i}$. Then the defining equation of $w_i$ reads as

$$
A(G E_i \underline{w}_i, G E_i \underline{v}_i) = r(G E_i \underline{v}_i) \qquad \forall \, \underline{v}_i \in \mathbb{R}^{N_i}
$$

and thus as

$$
\underline{v}_i^T E_i^T A E_i \underline{w}_i = \underline{v}_i^T E_i^T G^\ast r,
$$

i.e. $\underline{w}_i = A_i^{-1} E_i^T (G^\ast r)$.

Finally,

$$
w = \sum w_i = \sum G E_i \underline{w}_i = G \sum E_i \underline{w}_i = G \sum E_i A_i^{-1} E_i^T (G^\ast r) = G \, C_{ASM}^{-1} G^\ast r.
$$

Lemma: Let $\underline{w}_i = E_i A_i^{-1} E_i^T A \underline{v}$, and set $w_i = G \underline{w}_i$ and $v = G \underline{v}$. Then

$$
w_i = P_i v
$$

with $P_i$ the $A(\cdot,\cdot)$-orthogonal projector onto $V_i$:

$$
P_i : V_h \to V_i : \quad A(P_i v, z) = A(v, z) \qquad \forall \, v \in V_h, \; z \in V_i
$$

Proof: Follows from the proof of the lemma above using $r(\cdot) = A(v, \cdot)$.

Lemma: There holds

$$
G \, C_{ASM}^{-1} A = \Big( \sum_{i=1}^m P_i \Big) G
$$

This implies that the condition number $\kappa_A\{ C_{ASM}^{-1} A \}$ equals $\kappa_A\{ \sum P_i \}$. The ideal case is that all sub-spaces are mutually $A$-orthogonal; then $\sum P_i = \operatorname{Id}$.
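In coefficient vectors, the projector $P_i$ is represented by the matrix $E_i A_i^{-1} E_i^T A$. That this really is an $A$-orthogonal projection can be checked numerically; the test matrix and sub-space are illustrative assumptions:

```python
import numpy as np

# illustrative SPD test matrix
rng = np.random.default_rng(4)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)
N = A.shape[0]

E = np.eye(N)[:, 0:3]  # one illustrative embedding matrix E_i

# coefficient matrix of P_i: E_i A_i^{-1} E_i^T A
P = E @ np.linalg.solve(E.T @ A @ E, E.T @ A)

assert np.allclose(P @ P, P)           # projection: P_i^2 = P_i
assert np.allclose(A @ P, (A @ P).T)   # A-self-adjoint: A(P_i u, v) = A(u, P_i v)
```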

Lemma: For the error propagation matrix there holds

$$
G \, (I - \tau C_{ASM}^{-1} A) = \Big( I - \tau \sum_{i=1}^m P_i \Big) G
$$

A block-Gauss-Seidel iteration leads to the so-called multiplicative Schwarz preconditioner, with the error propagation operator

$$
M_{MSM} = (I - P_m) \cdots (I - P_2)(I - P_1)
$$

As a product of $A$-orthogonal projectors it satisfies $\| M_{MSM} \|_A \le 1$. Its $\langle \cdot, \cdot \rangle_A$-adjoint is the product in reverse order.
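The bound $\| M_{MSM} \|_A \le 1$ can be checked numerically via $\| M \|_A = \| A^{1/2} M A^{-1/2} \|_2$; the test problem and sub-spaces below are illustrative assumptions:

```python
import numpy as np

# illustrative SPD test matrix and sub-spaces that span R^N
rng = np.random.default_rng(5)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)
N = A.shape[0]
subsets = [range(0, 3), range(2, 5), range(4, 6)]

# build M_MSM = (I - P_m) ... (I - P_2)(I - P_1)
M = np.eye(N)
for s in subsets:
    E = np.eye(N)[:, list(s)]
    P = E @ np.linalg.solve(E.T @ A @ E, E.T @ A)  # projector P_i
    M = (np.eye(N) - P) @ M

# ||M||_A = ||A^{1/2} M A^{-1/2}||_2, using the eigen-decomposition of A
w, V = np.linalg.eigh(A)
Ahalf = V @ np.diag(np.sqrt(w)) @ V.T
Ainvhalf = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
normA = np.linalg.norm(Ahalf @ M @ Ainvhalf, 2)

assert normA <= 1 + 1e-12
```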

77.3. The Additive Schwarz Lemma

We have defined the action of the ASM preconditioner. To analyze the condition number via bounds of the Rayleigh quotient,

$$
\gamma_1 \le \frac{u^T A u}{u^T C_{ASM} u} \le \gamma_2,
$$

it is useful to have a representation of the quadratic form generated by $C_{ASM}$ itself. This is given by the Additive Schwarz Lemma. Since it was reformulated and reproven by many authors, it is also called the Lemma of many parents. In some variants it is also known as Lions' Lemma, or as the fictitious space lemma by Nepomnyashchikh.

We assume that the $E_i$ are embedding matrices as above such that

$$
\sum_{i=1}^m \operatorname{range}\{E_i\} = \mathbb{R}^N
$$

This means that every $u \in \mathbb{R}^N$ can be decomposed into vectors in the ranges of the $E_i$, not necessarily uniquely.

Theorem (ASM lemma, linear algebra version): For all $u \in \mathbb{R}^N$ there holds

$$
u^T C_{ASM} u = \inf_{\substack{u_i \in \mathbb{R}^{N_i} \\ u = \sum E_i u_i}} \; \sum_{i=1}^m u_i^T A_i u_i
$$

Theorem (ASM lemma, sub-space version): For all $u \in \mathbb{R}^N$ there holds

$$
u^T C_{ASM} u = \inf_{\substack{u_i \in V_i \\ G u = \sum u_i}} \; \sum_{i=1}^m \| u_i \|_A^2
$$

Proof: We prove the version in linear algebra notation; the other follows using the Galerkin isomorphism as above.

Fix $u \in \mathbb{R}^N$. We claim that $u^T C_{ASM} u$ is the minimum of the constrained minimization problem

$$
\inf_{\substack{u_i \in \mathbb{R}^{N_i} \\ u = \sum E_i u_i}} \; \sum_{i=1}^m u_i^T A_i u_i.
$$

The feasible set is non-empty and convex, and the objective function is convex. Thus, the unique minimizer can be found by the method of Lagrange multipliers. The Lagrange function is

$$
L((u_i), \lambda) = \sum u_i^T A_i u_i + \lambda^T \Big( u - \sum E_i u_i \Big)
$$

There exists a unique Lagrange multiplier $\lambda$ such that $L$ has a stationary point $((u_i), \lambda)$.

Differentiating $L$ with respect to an individual $u_i$ we get

$$
2 A_i u_i - E_i^T \lambda = 0
$$

and thus

$$
u_i = \tfrac{1}{2} A_i^{-1} E_i^T \lambda.
$$

Differentiating with respect to $\lambda$ we recover the constraint

$$
u = \sum_i E_i u_i.
$$

Inserting leads to

$$
u = \tfrac{1}{2} \sum_i E_i A_i^{-1} E_i^T \lambda,
$$

in which we recognize the preconditioning action:

$$
u = \tfrac{1}{2} C_{ASM}^{-1} \lambda
$$

Since for every $u$ there exists a unique $\lambda$, $C_{ASM}^{-1}$ is indeed the inverse of a regular matrix $C_{ASM}$, and we can write

$$
\lambda = 2 \, C_{ASM} u
$$

The value of the minimum is

$$
\sum u_i^T A_i u_i
= \sum \Big( \tfrac{1}{2} A_i^{-1} E_i^T \lambda \Big)^T A_i \Big( \tfrac{1}{2} A_i^{-1} E_i^T \lambda \Big)
= \tfrac{1}{4} \lambda^T \sum \big( E_i A_i^{-1} E_i^T \big) \lambda
= \tfrac{1}{4} \lambda^T C_{ASM}^{-1} \lambda
= u^T C_{ASM} C_{ASM}^{-1} C_{ASM} u
= u^T C_{ASM} u
$$
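The formulas from the proof give a direct numerical check of the ASM lemma: with $\lambda = 2 C_{ASM} u$, the minimizer is $u_i = A_i^{-1} E_i^T C_{ASM} u$, these $u_i$ decompose $u$, and the attained value is $u^T C_{ASM} u$. The test problem below is an illustrative assumption:

```python
import numpy as np

# illustrative SPD test matrix and overlapping sub-spaces
rng = np.random.default_rng(6)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)
N = A.shape[0]
subsets = [range(0, 3), range(2, 5), range(4, 6)]
Es = [np.eye(N)[:, list(s)] for s in subsets]

Cinv = sum(E @ np.linalg.inv(E.T @ A @ E) @ E.T for E in Es)
C = np.linalg.inv(Cinv)  # C_ASM

u = rng.standard_normal(N)
# minimizer from the proof: u_i = 1/2 A_i^{-1} E_i^T lambda with lambda = 2 C u
uis = [np.linalg.solve(E.T @ A @ E, E.T @ (C @ u)) for E in Es]

# the u_i decompose u ...
assert np.allclose(sum(E @ ui for E, ui in zip(Es, uis)), u)
# ... and the decomposition attains the value u^T C_ASM u
assert np.isclose(sum(ui @ (E.T @ A @ E) @ ui for E, ui in zip(Es, uis)),
                  u @ C @ u)
```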

77.4. The upper bound by the overlap

Typically, proving the estimate

$$
\| u \|_{C_{ASM}}^2 = \inf_{\substack{u_i \in V_i \\ u = \sum u_i}} \sum \| u_i \|_A^2 \le \frac{1}{\gamma_1} \, \| u \|_A^2
$$

is the hard part of the analysis; it has to be shown by explicitly constructing a decomposition. The other estimate often follows immediately from the finite overlap of the sub-spaces.

We define the overlap matrix $O \in \mathbb{R}^{m \times m}$ with entries

$$
O_{ij} := \sup_{u_i \in V_i, \, v_j \in V_j} \frac{A(u_i, v_j)}{\| u_i \|_A \, \| v_j \|_A}
$$

By the Cauchy-Schwarz inequality there holds $0 \le O_{ij} \le 1$. The value $O_{ij}$ leads to the so-called strengthened Cauchy-Schwarz inequality

$$
| A(u_i, v_j) | \le O_{ij} \, \| u_i \|_A \, \| v_j \|_A
$$

If two sub-spaces are $A$-orthogonal, then $O_{ij} = 0$.

Lemma: There holds

$$
\| u \|_A^2 \le \rho(O) \, \| u \|_{C_{ASM}}^2,
$$

where $\rho(O)$ is the spectral radius of $O$.

Proof: Let $u \in V_h$ be given, and let $u = \sum u_i$ be an arbitrary sub-space decomposition. Then

$$
\| u \|_A^2 = A\Big( \sum_i u_i, \sum_j u_j \Big) = \sum_{i,j} A(u_i, u_j) \le \sum_{i,j} O_{ij} \, \| u_i \|_A \, \| u_j \|_A
$$

Next, we define $z \in \mathbb{R}^m$ by $z_i = \| u_i \|_A$. We use

$$
z^T O z \le \rho(O) \, \| z \|_2^2
$$

to bound

$$
\| u \|_A^2 \le \rho(O) \sum_i \| u_i \|_A^2.
$$

Since the estimate holds for any decomposition, it also holds for the minimizing one, for which $\sum_i \| u_i \|_A^2 = \| u \|_{C_{ASM}}^2$.
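A numerical sanity check of the lemma: since $\| u \|_A^2 \le \rho(O) \, \| u \|_{C_{ASM}}^2$ for all $u$, the largest eigenvalue of $C_{ASM}^{-1} A$ is bounded by $\rho(O)$. In coefficients, $O_{ij}$ is the largest singular value of $A_j^{-1/2} E_j^T A E_i A_i^{-1/2}$. The test problem below is an illustrative assumption:

```python
import numpy as np

def inv_sqrt(M):
    """Inverse square root of an SPD matrix via its eigen-decomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# illustrative SPD test matrix and overlapping sub-spaces
rng = np.random.default_rng(7)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)
N = A.shape[0]
subsets = [range(0, 3), range(2, 5), range(4, 6)]
Es = [np.eye(N)[:, list(s)] for s in subsets]
Ais = [E.T @ A @ E for E in Es]
m = len(Es)

# O_ij = largest singular value of A_j^{-1/2} E_j^T A E_i A_i^{-1/2}
O = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        T = inv_sqrt(Ais[j]) @ Es[j].T @ A @ Es[i] @ inv_sqrt(Ais[i])
        O[i, j] = np.linalg.norm(T, 2)

Cinv = sum(E @ np.linalg.inv(Ai) @ E.T for E, Ai in zip(Es, Ais))
lam_max = np.max(np.linalg.eigvals(Cinv @ A).real)
rho = np.max(np.abs(np.linalg.eigvals(O)))

# the overlap bound: lambda_max(C_ASM^{-1} A) <= rho(O)
assert lam_max <= rho + 1e-10
```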