84. Analysis of the multi-level preconditioner#

We apply the additive Schwarz theory to prove an optimal condition number bound for the multi-level preconditioner.

The outline is as follows. Provided that the \(A\)-norm is equivalent to the \(H^1\)-norm, it is easy to show that

\[ \| \underline u \|_{C_{ML}}^2 \approx \inf_{u = u_0 + \sum_{l=1}^L w_l \atop u_0 \in V_0, w_l \in V_l} \| u_0 \|_{H^1}^2 + \sum_{l=1}^L h_l^{-2} \| w_l \|_{L_2}^2 \]

We call the norm on the right hand side the multi-level norm. Then we prove that the multi-level norm is equivalent to the \(H^1\)-norm.

The idea is similar to Fourier analysis. We decompose the function \(u\) into components \(w_l\) of different frequencies. For a frequency component \(w_l\) within the finite element space on the level-\(l\) mesh there holds

\[ \| w_l \|_{H^1}^2 \approx h_l^{-2} \| w_l \|_{L_2}^2. \]

Taking one derivative on the left hand side corresponds to scaling with the inverse mesh-size \(h_l^{-1}\).
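As a simple one-dimensional illustration of this scaling (not part of the proof), take an oscillating function with wave-length proportional to the mesh-size, \(w_l(x) = \sin(\pi x / h_l)\) on \((0,1)\) with \(1/h_l \in \mathbb{N}\). Then

\[ \| w_l' \|_{L_2}^2 = \int_0^1 \frac{\pi^2}{h_l^2} \cos^2 (\pi x / h_l) \, dx = \frac{\pi^2}{h_l^2} \, \frac{1}{2} = \frac{\pi^2}{h_l^2} \, \| w_l \|_{L_2}^2, \]

i.e. one derivative contributes exactly one factor \(h_l^{-1}\).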

Theorem: The multi-level preconditioner norm can be expressed as claimed above.

Proof: The multi-level preconditioner is recursively defined as

\[ C_{ML,0}^{-1} = A_0^{-1} \qquad C_{ML,l}^{-1} = P_l C_{ML,{l-1}}^{-1} P_l^T + D_l^{-1} \]

By combining all prolongations from level \(l\) to the finest level \(L\) as

\[ E_l := P_L \, P_{L-1} \cdots P_{l+1} \]

we can replace the recursive definition of \(C_{ML}^{-1}\) by the sum

\[ C_{ML,L}^{-1} = E_0 A_0^{-1} E_0^T + \sum_{l=1}^L E_l D_l^{-1} E_l^T \]
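This unrolling is a purely algebraic identity and can be checked numerically. The following is a minimal sketch on a hypothetical 1D model problem (uniformly refined meshes on \((0,1)\), P1 elements, homogeneous Dirichlet conditions); the helper functions and the mesh hierarchy are illustrative assumptions, not the lecture's code.

```python
# Minimal 1D sketch: nested meshes with 2^(l+1) elements on level l,
# P1 stiffness matrices A_l, linear-interpolation prolongations,
# Jacobi blocks D_l = diag(A_l).  We verify that the recursive definition
# of C_ML^{-1} coincides with the unrolled sum.
import numpy as np

def stiffness(n):
    # 1D P1 stiffness matrix on n elements, interior dofs only
    h = 1.0 / n
    return (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h

def prolongation(nc):
    # linear interpolation from nc to 2*nc elements (interior dofs)
    P = np.zeros((2 * nc - 1, nc - 1))
    for j in range(nc - 1):                    # coarse dof j sits at fine dof 2j+1
        P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
    return P

L = 4
ns = [2 ** (l + 1) for l in range(L + 1)]      # elements per level
As = [stiffness(n) for n in ns]
Ps = [prolongation(ns[l]) for l in range(L)]   # Ps[l-1] plays the role of P_l
Ds = [np.diag(np.diag(A)) for A in As]

# recursive definition:  C_0^{-1} = A_0^{-1},  C_l^{-1} = P_l C_{l-1}^{-1} P_l^T + D_l^{-1}
Cinv = np.linalg.inv(As[0])
for l in range(1, L + 1):
    Cinv = Ps[l - 1] @ Cinv @ Ps[l - 1].T + np.linalg.inv(Ds[l])

# unrolled additive form with E_l = P_L ... P_{l+1}
E = [np.eye(As[L].shape[0])]                   # E_L = identity
for l in range(L - 1, -1, -1):
    E.insert(0, E[0] @ Ps[l])
Cinv_sum = E[0] @ np.linalg.inv(As[0]) @ E[0].T
for l in range(1, L + 1):
    Cinv_sum += E[l] @ np.linalg.inv(Ds[l]) @ E[l].T

print(np.allclose(Cinv, Cinv_sum))             # True
```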

By a generalization of the ASM lemma (replacing the sub-space matrices \(A_l\) by the preconditioners \(D_l\)) we can represent the norm of the additive preconditioner by a minimal decomposition:

\[ \| \underline u \|_{C_{ML,L}}^2 = \min_{\underline u = E_0 \underline u_0 + \sum_{l=1}^L E_l \underline w_l} \| \underline u_0 \|_{A_0}^2 + \sum_{l=1}^L \| \underline w_l \|_{D_l}^2 \]

We have already shown that the Jacobi-preconditioner \(D_l\) satisfies

\[ \| \underline u \|_{D_l}^2 \approx h_l^{-2} \| u \|_{L_2}^2, \]

which proves the result. \(\Box\)
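As a quick numerical sanity check of this Jacobi estimate, one may compare \(D_l\) with the scaled mass matrix \(h_l^{-2} M_l\) on the hypothetical 1D model problem from the sketch above (the P1 mass matrix is again an illustrative assumption):

```python
# Check that  u^T D u  is equivalent to  h^{-2} u^T M u  uniformly in h (1D P1 model problem)
import numpy as np

def stiffness(n):
    h = 1.0 / n
    return (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h

def mass(n):
    h = 1.0 / n
    return h / 6 * (4 * np.eye(n - 1) + np.eye(n - 1, k=1) + np.eye(n - 1, k=-1))

for n in [8, 16, 32, 64, 128]:
    h = 1.0 / n
    D, M = np.diag(np.diag(stiffness(n))), mass(n)
    # generalized eigenvalues of (D, h^{-2} M) stay within a fixed interval
    lam = np.linalg.eigvals(np.linalg.solve(M / h ** 2, D)).real
    print(n, lam.min(), lam.max())
```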

84.1. Nearly optimal analysis of the ML-preconditioner#

Define the multi-level norm

\[ \| u \|_{ML}^2 := \inf_{u = u_0 + \sum w_l} \Big\{ \| u_0 \|_{H^1}^2 + \sum_{l=1}^L h_l^{-2} \| w_l \|_{L_2}^2 \Big\} \]

Theorem: There holds

\[ \frac{1}{L} \| u \|_{ML}^2 \preceq \| u \|_{H^1}^2 \preceq L \, \| u \|_{ML}^2. \]

Thus, the condition number of the ML-preconditioner is bounded as

\[ \kappa \{ C_{ML}^{-1} A \} \preceq L^2 \approx (\log N)^2 \]

Proof: We start with the estimate \(\| u \|_{ML}^2 \preceq L \| u \|_{H^1}^2\). Let \(\pi_l\) be a Clément-type quasi-interpolation operator onto \(V_l\) with the properties

\[ \| \pi_l u \|_{H^1}^2 + h_l^{-2} \| u - \pi_l u \|_{L_2}^2 \preceq \| u \|_{H^1}^2 \]

We define

\[\begin{eqnarray*} u_0 & := & \pi_0 u \\ w_l & := & \pi_l u - \pi_{l-1} u \end{eqnarray*}\]

There holds (telescoping sum):

\[ u_0 + \sum_{l=1}^L w_l = \pi_0 u + \sum_{l=1}^L (\pi_l - \pi_{l-1}) u = \pi_L u = u, \]

i.e. we have a decomposition into sub-spaces. Furthermore there holds

\[ \| u_0 \|_{H^1}^2 \preceq \| u \|_{H^1}^2 \]

and

\[\begin{eqnarray*} \| w_l \|_{L_2}^2 & = & \| \pi_l u - \pi_{l-1} u \|_{L_2}^2 \\ & \preceq & \| \pi_l u - u \|_{L_2}^2 + \| u - \pi_{l-1} u \|_{L_2}^2 \\ & \preceq & h_l^2 \| u \|_{H^1}^2, \end{eqnarray*}\]

where we also used \(h_{l-1} \approx h_l\).

Thus, each of the \(L+1\) terms of the \(ML\)-norm can be bounded by \(\| u \|_{H^1}^2\), and thus the estimate \(\| u \|_{ML}^2 \preceq (L+1) \, \| u \|_{H^1}^2\) is proven.

For the other estimate, let \(u = \sum_{l=0}^L u_l\), with \(u_l \in V_l\) be an arbitrary decomposition. There holds

\[\begin{eqnarray*} \| u \|_A^2 & = & A( \sum_l u_l, \sum_k u_k) \\ & = & \sum_{l,k} A(u_l, u_k) \\ & \leq & \sum_{l,k = 0}^L \| u_l \|_A \| u_k \|_A \\ & \leq & \sum_{l,k = 0}^L \frac{1}{2} ( \| u_l \|_A^2 + \| u_k \|_A^2 ) \\ & = & (L+1) \sum_{l=0}^L \| u_l \|_A^2 \\ & \preceq & (L+1) \Big\{ \| u_0 \|_A^2 + \sum_{l=1}^L h_l^{-2} \| u_l \|_{L_2}^2 \Big\} \end{eqnarray*}\]

In the last step we used the inverse inequality \(\| u_l \|_A^2 \preceq h_l^{-2} \| u_l \|_{L_2}^2\) for \(u_l \in V_l\). Since this holds for an arbitrary decomposition \(u = \sum u_l\), it holds also for the minimal one.
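The growth of the condition number with the number of levels can also be checked numerically. The sketch below continues the hypothetical 1D model problem from above (all helper functions are illustrative assumptions, not the lecture's code) and reports \(\kappa \{ C_{ML}^{-1} A \}\) for an increasing number of levels; the theorem predicts at most \(O(L^2)\) growth.

```python
# Condition number of C_ML^{-1} A_L for the 1D model problem, L = 1,...,7
import numpy as np

def stiffness(n):
    h = 1.0 / n
    return (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h

def prolongation(nc):
    P = np.zeros((2 * nc - 1, nc - 1))
    for j in range(nc - 1):
        P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
    return P

for L in range(1, 8):
    ns = [2 ** (l + 1) for l in range(L + 1)]
    A = stiffness(ns[0])
    Cinv = np.linalg.inv(A)
    for l in range(1, L + 1):
        P = prolongation(ns[l - 1])
        A = stiffness(ns[l])
        Cinv = P @ Cinv @ P.T + np.linalg.inv(np.diag(np.diag(A)))
    lam = np.linalg.eigvals(Cinv @ A).real     # spectrum of C^{-1} A is real and positive
    print("L =", L, " ndof =", ns[L] - 1, " kappa =", lam.max() / lam.min())
```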

84.2. Optimal analysis of the multi-level preconditioner#

By a refined analysis, both \(L\)-dependent estimates can be improved to uniform ones. This requires either additional regularity assumptions on the problem, or more advanced tools from the theory of interpolation spaces.

Theorem: There holds \(\| u \|_{H^1} \preceq \| u \|_{ML}\).

We first show that the estimate of \(A\)-inner products of functions from very different levels can be improved substantially:

Lemma: For \(u_l \in V_l\) and \(v_k \in V_k\) there holds

\[ A(u_l, v_k) \preceq 2^{-|l-k|/2} h_l^{-1} \| u_l \|_{L_2} h_k^{-1} \| v_k \|_{L_2} \]

Proof: Assume \(l \leq k\), i.e., \(u_l\) is a function on a coarser level than \(v_k\). The idea is to apply integration by parts to move all derivatives onto \(u_l\). Integration by parts can only be done element-by-element on the coarser mesh:

\[\begin{eqnarray*} A(u_l, v_k) & = & \int_\Omega \nabla u_l \nabla v_k \, dx \\ & = & \sum_{T_l \in {\mathcal T_l}} \int_{T_l} \nabla u_l \nabla v_k \, dx \\ & = & \sum_{T_l} -\int_{T_l} \Delta u_l v_k \, dx + \int_{\partial T_l} \frac{\partial u_l}{\partial n} v_k ds \\ & \leq & \sum_{T_l} \| \Delta u_l \|_{L_2(T_l)} \| v_k \|_{L_2(T_l)} + \| \partial_n u_l \|_{L_2(\partial T_l)} \| v_k \|_{L_2(\partial T_l)} \\ & \preceq & \sum_{T_l} h_l^{-2} \| u_l \|_{L_2(T_l)} \| v_k \|_{L_2(T_l)} + h_l^{-3/2} \| u_l \|_{L_2(T_l)} h_k^{-1/2} \| v_k \|_{L_2(T_l)} \\ & \leq & \frac{h_k}{h_l} h_l^{-1} \| u_l \|_{L_2} h_k^{-1} \| v_k \|_{L_2} + \sqrt{ \frac{h_k}{h_l}} h_l^{-1} \| u_l \|_{L_2} h_k^{-1} \| v_k \|_{L_2} \\ & \preceq & 2^{-|l-k|/2} h_l^{-1} \| u_l \|_{L_2} h_k^{-1} \| v_k \|_{L_2} \end{eqnarray*}\]
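The decay in \(|l-k|\) can be observed numerically. The sketch below (again on the hypothetical 1D model problem; the mass matrices and helper functions are illustrative assumptions) computes the sharp constant in the lemma for a fixed fine level \(k\) and varying coarse level \(l\), and compares it with the predicted rate \(2^{-|l-k|/2}\).

```python
# Sharp constant  max A(u_l, v_k) / (h_l^{-1} ||u_l||_{L2} h_k^{-1} ||v_k||_{L2})
# for the 1D P1 model problem, compared with the rate 2^{-|l-k|/2}
import numpy as np

def stiffness(n):
    h = 1.0 / n
    return (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h

def mass(n):
    h = 1.0 / n
    return h / 6 * (4 * np.eye(n - 1) + np.eye(n - 1, k=1) + np.eye(n - 1, k=-1))

def prolongation(nc):
    P = np.zeros((2 * nc - 1, nc - 1))
    for j in range(nc - 1):
        P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
    return P

def sqrt_inv(M):
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

k = 8                                          # fine level with 2^k elements
Ak, Mk = stiffness(2 ** k), mass(2 ** k)
for l in range(2, k):
    Ml = mass(2 ** l)
    E = np.eye(2 ** k - 1)                     # compose prolongations: level l -> level k
    for m in range(k - 1, l - 1, -1):
        E = E @ prolongation(2 ** m)
    # A(u_l, v_k) = (E u_l)^T A_k v_k ;  sharp constant = h_l h_k ||M_l^{-1/2} E^T A_k M_k^{-1/2}||_2
    B = sqrt_inv(Ml) @ E.T @ Ak @ sqrt_inv(Mk)
    c = 2.0 ** (-(l + k)) * np.linalg.norm(B, 2)
    print(l, k, c, 2.0 ** (-(k - l) / 2))
```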

Proof of the theorem:

We start as above with an arbitrary decomposition \(u = \sum u_l\), and use the improved estimate for the inner product:

\[\begin{eqnarray*} \| u \|_A^2 & = & \sum_{l,k = 0}^L A(u_l, u_k) \\ & \preceq & \sum_{l,k} 2^{-|l-k|/2} \; h_l^{-1} \| u_l \|_{L_2} h_k^{-1} \| u_k \|_{L_2} \\ & \preceq & \sum_{l,k} 2^{-|l-k|/2} \left( h_l^{-2} \| u_l \|^2_{L_2} + h_k^{-2} \| u_k \|_{L_2}^2 \right) \\ & = & 2 \sum_{l,k} 2^{-|l-k|/2} h_l^{-2} \| u_l \|^2_{L_2} \\ & \preceq & \sum_l h_l^{-2} \| u_l \|_{L_2}^2 \end{eqnarray*}\]

We used that \(\sum_k 2^{-|l-k|/2}\) is bounded by a convergent geometric series. The term from the coarsest level was skipped for convenience of notation.
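Explicitly, for every \(l\) there holds

\[ \sum_{k=0}^L 2^{-|l-k|/2} \leq 2 \sum_{j=0}^\infty 2^{-j/2} = \frac{2}{1 - 2^{-1/2}} < 7, \]

uniformly in \(l\) and \(L\).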

Definition: Full regularity assumption: The underlying differential equation satisfies full elliptic regularity if for arbitrary right hand side \(f \in L_2\) the solution \(u\) is \(H^2\)-regular with the estimate

\[ \| u \|_{H^2} \preceq \| f \|_{L_2}. \]

This assumption is satisfied if the domain \(\Omega\) is convex, or if its boundary is sufficiently smooth (e.g. \(C^{1,1}\)).

Aubin-Nitsche lemma: Assume that the differential equation satisfies full elliptic regularity. Then there holds

\[ \| u - u_h \|_{L_2} \preceq h \| u - u_h \|_{H^1}, \]

where \(u_h\) is the Galerkin approximation of \(u\), i.e. \(u_h \in V_h\) such that

\[ A(u_h, v_h) = A(u, v_h) \qquad \forall \, v_h \in V_h \]

Proof: finite element class.

Theorem: Assume full elliptic regularity. Then there holds

\[ \| u \|_{ML} \preceq \| u \|_{H^1} \]

Proof: Let \(u \in V_L\) be arbitrary. We decompose \(u\) by \(A\)-orthogonal (Galerkin) projections.

Define \(u_l = \Pi_l u\) as \(u_l \in V_l\) such that

\[ A(u_l, v_l) = A(u, v_l) \qquad \forall \, v_l \in V_l \]

By the Aubin-Nitsche lemma we obtain

\[ \| u - u_l \|_{L_2} \preceq h_l \| u - u_l \|_A \]

Since the spaces are nested we get \(u_{l-1} = \Pi_{l-1} u = \Pi_{l-1} \Pi_l u = \Pi_{l-1} u_l\), and by Aubin-Nitsche

\[ \| u_{l-1} - u_l \|_{L_2} \preceq h_l \| u_{l-1} - u_l \|_A \]

Now define the decomposition \(u = u_0 + \sum w_l\) via orthogonal projections:

\[ u_0 = \Pi_0 u \qquad \text{and} \qquad w_l := \Pi_l u - \Pi_{l-1} u. \]
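Since \(w_l = (\Pi_l - \Pi_{l-1}) u\) is \(A\)-orthogonal to \(V_{l-1}\), and thus to \(u_0\) and to all \(w_k\) with \(k < l\), the decomposition is \(A\)-orthogonal and telescoping gives

\[ \| \Pi_0 u \|_A^2 + \sum_{l=1}^L \| (\Pi_l - \Pi_{l-1}) u \|_A^2 = \| \Pi_L u \|_A^2 = \| u \|_A^2. \]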

Thus, using the Pythagorean theorem

\[\begin{eqnarray*} \| u_0 \|_A^2 + \sum_l h_l^{-2} \| w_l \|_{L_2}^2 & \preceq & \| \Pi_0 u \|_A^2 + \sum_l \| (\Pi_l - \Pi_{l-1}) u \|_A^2 \\ & = & \| u \|_A^2, \end{eqnarray*}\]

and the proof is complete.

84.2.1. Regularity-free estimate#

Now we show \(\| u \|_{ML} \preceq \| u \|_{H^1}\) without any regularity assumption. The technique is based on Hilbert-space interpolation theory via the K-functional; see the numpde lecture notes, p. 43.

We proceed similarly to above. Let \(\Pi_l : L_2 \rightarrow V_l\) be a Clément-type operator such that

\[\begin{eqnarray*} \| \Pi_l u \|_{L_2} & \leq & \| u \|_{L_2} \qquad \forall \, u \in L_2 \\ \| u - \Pi_l u \|_{L_2} & \preceq & h_l^2 \| u \|_{H^2} \quad \forall \, u \in H^2. \end{eqnarray*}\]

We define \(u_0 = \Pi_0 u\) and \(u_l = \Pi_l u - \Pi_{l-1} u\). We obtain the two estimates

\[\begin{eqnarray*} h_l^{-2} \| u_l \|_{L_2}^2 & \preceq & h_l^{-2} \| u \|_{L_2}^2, \\ h_l^{-2} \| u_l \|_{L_2}^2 & \preceq & h_l^2 \| u \|_{H^2}^2. \end{eqnarray*}\]

The idea of the proof is that \(H^1\) is the interpolation space \([L_2, H^2]_{1/2}\). We define the K-functional

\[ K(t,u)^2 = \inf_{u = v_0 + v_2 \atop v_0 \in L_2, v_2 \in H^2} \big\{ \| v_0 \|_{L_2}^2 + t^2 \| v_2 \|_{H^2}^2 \big\}. \]

Combining the two estimates above we get

\[ h_l^{-2} \| u_l \|_{L_2}^2 \preceq h_l^{-2} K^2 (h_l^2, u) \]
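In detail: for any splitting \(u = v_0 + v_2\) with \(v_0 \in L_2\) and \(v_2 \in H^2\), applying the two estimates to the two parts gives

\[ h_l^{-2} \| u_l \|_{L_2}^2 \preceq h_l^{-2} \| v_0 \|_{L_2}^2 + h_l^2 \| v_2 \|_{H^2}^2 = h_l^{-2} \big( \| v_0 \|_{L_2}^2 + (h_l^2)^2 \| v_2 \|_{H^2}^2 \big), \]

and taking the infimum over all splittings yields the claimed bound.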

Thus, the sum over \(L\) levels is

\[ \sum_{l=1}^L h_l^{-2} \| u_l \|_{L_2}^2 \leq \sum_{l=1}^L h_l^{-2} K^2 (h_l^2, u) \simeq \sum_{l=1}^L 2^l K^2(2^{-l}, u) \]

Next we use that \(K(s,.) \simeq K(t,.)\) for \(t \leq s \leq 2t\) and replace the sum by an integral, and substitute \(t := 2^{-l}, dt \simeq -2^{-l} dl = -t dl\):

\[\begin{eqnarray*} \sum_{l=1}^L h_l^{-2} \| u_l \|_{L_2}^2 & \leq & \int_{l=1}^{L+1} 2^l K^2(2^{-l}, u) dl \\ & \simeq & \int_{2^{-L-1}}^1 t^{-1} K^2(t,u) \frac{dt}{t} \\ & \preceq & \int_0^\infty t^{-1} K^2(t,u) \frac{dt}{t} \\ & = & \| u \|_{[L_2,H^2]_{1/2}}^2 \simeq \| u \|_{H^1}^2 \end{eqnarray*}\]

\(\Box\)

An intuitive explanation of the proof is that different terms of the sum \(\sum_{l=1}^L h_l^{-2} \| \Pi_l u - \Pi_{l-1} u \|_{L_2}^2\) are dominated by different frequency components of \(u\). The squared \(H^1\)-norm is the sum of the squared \(H^1\)-norms of the individual frequency components.

Literature: Xu, Jinchao: Iterative methods by space decomposition and subspace correction. SIAM Rev. 34 (1992), no. 4, 581–613 https://epubs.siam.org/doi/10.1137/1034116