90. Structure of Saddle-point Problems#

90.1. Constrained minimization problems#

Let \(A \in {\mathbf R}^{n\times n}\) be spd, and \(f \in {\mathbf R}^n\). Then the minimization problem

\[ \min_u \frac{1}{2} \, u^T A u - f^T u \]

is equivalent to the linear system

\[ A u = f. \]

Since \(A\) is spd, the function to be minimized is convex.
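For a quick numerical sanity check, here is a minimal numpy sketch (the small matrix and right-hand side are made up for illustration): the solution of \(A u = f\) is indeed the minimizer, since any perturbation increases the quadratic functional.

import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # a small, made-up spd matrix
f = np.array([1.0, 2.0])

J = lambda u: 0.5 * u @ A @ u - f @ u    # the quadratic functional to be minimized
ustar = np.linalg.solve(A, f)            # solution of A u = f

for _ in range(5):                       # random perturbations never decrease J
    du = np.random.randn(2)
    assert J(ustar + du) >= J(ustar)
print("minimizer:", ustar, " J(u*) =", J(ustar))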

Now let \(B \in {\mathbf R}^{m\times n}\) be of full rank, with \(m \leq n\), and let \(g \in {\mathbf R}^m\). We consider the constrained minimization problem

\[ \min_{u \atop B u = g} \frac{1}{2} \, u^T A u - f^T u. \]

The minimizers are found among the critical points of the Lagrange function

\[ L(u, p) = \frac{1}{2} \, u^T A u - f^T u + p^T (B u - g), \]

where \(p \in {\mathbf R}^m\) is the Lagrange parameter. Differentiating with respect to \(u\) and \(p\) and setting the derivatives to zero leads to the Karush-Kuhn-Tucker (KKT) equations

\[\begin{split} \begin{array}{ccccl} A u & + & B ^T p & = & f, \\ B u & & & = & g. \end{array} \end{split}\]

The solution is not a minimizer, but a saddle-point of the Lagrange function \(L(.,.)\).
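As a small worked example (a numpy sketch with made-up data), we can solve the KKT system for a tiny problem, check the constraint, and compare with the minimizer obtained by a null-space parametrization of the constraint set:

import numpy as np

A = np.diag([2.0, 3.0, 4.0])             # made-up spd matrix, n = 3
B = np.array([[1.0, 1.0, 1.0]])          # one constraint, m = 1
f = np.array([1.0, 0.0, -1.0])
g = np.array([1.0])

K = np.block([[A, B.T], [B, np.zeros((1,1))]])          # KKT block matrix
sol = np.linalg.solve(K, np.concatenate([f, g]))
u, p = sol[:3], sol[3:]
print("constraint residual:", B @ u - g)

u0 = np.linalg.lstsq(B, g, rcond=None)[0]               # particular solution of B u = g
N = np.linalg.svd(B)[2][1:].T                           # basis of the null space of B
y = np.linalg.solve(N.T @ A @ N, N.T @ (f - A @ u0))    # unconstrained minimization in the null space
print("difference to null-space minimizer:", np.linalg.norm(u - (u0 + N @ y)))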

Theorem: The Hessian matrix of \(L(.,.)\),

\[\begin{split} \left( \begin{array}{cc} A & B^T \\ B & 0 \end{array} \right) \in {\mathbf R}^{(m+n) \times (m+n)} \end{split}\]

has \(n\) positive and \(m\) negative eigenvalues.

Proof: We define the sub-spaces

\[\begin{eqnarray*} V_1 & := & \{ (u, 0) : u \in {\mathbf R}^n \}, \\ V_2 & := & \{ (-A^{-1} B^T p, p) : p \in {\mathbf R}^m \}, \end{eqnarray*}\]

and evaluate for \(u \neq 0\)

\[\begin{split} \left( \begin{array}{c} u \\ 0 \end{array} \right)^T \left( \begin{array}{cc} A & B^T \\ B & 0 \end{array} \right) \left( \begin{array}{c} u \\ 0 \end{array} \right) = u^T A u > 0 \end{split}\]

and for \(p \neq 0\)

\[\begin{split} \left( \begin{array}{c} -A^{-1}B^T p \\ p \end{array} \right)^T \left( \begin{array}{cc} A & B^T \\ B & 0 \end{array} \right) \left( \begin{array}{c} -A^{-1}B^T p \\ p \end{array} \right) = - p^T B A^{-1} B^T p < 0. \end{split}\]

The matrix \(S := B A^{-1} B^T\) is called the (negative) Schur complement. For an spd matrix \(A\) and a rank-\(m\) matrix \(B\), it is positive definite.

Since we found an \(n\)-dimensional sub-space where the quadratic form is positive, and an \(m\)-dimensional sub-space where it is negative, the min-max theorem proves that there are \(n\) positive and \(m\) negative eigenvalues.
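The count of positive and negative eigenvalues is easy to confirm numerically; here is a small numpy sketch with made-up matrices:

import numpy as np

n, m = 5, 2
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)               # spd matrix
B = rng.standard_normal((m, n))           # full rank (with probability 1)

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
eigs = np.linalg.eigvalsh(K)
print("positive eigenvalues:", (eigs > 0).sum())   # expected: n = 5
print("negative eigenvalues:", (eigs < 0).sum())   # expected: m = 2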

90.2. Examples#

Stokes Equation

The vector-field \(u \in [H^1_{0,D}]^2\) is the velocity of an incompressible flow. It minimizes the Dirichlet energy under the incompressibility constraint \(\operatorname{div} u = 0\):

\[ \min_{u \atop \operatorname{div} u = 0} \frac{1}{2} \int | \nabla u |^2 - \int f u \]

The variational formulation of the KKT system is

\[\begin{split} \begin{array}{ccccll} \int \nabla u : \nabla v & + & \int \operatorname{div} v \, p & = & \int f v & \qquad \forall \, v \\ \int \operatorname{div} u \, q & & & = & 0 & \qquad \forall \, q \end{array} \end{split}\]

Discretization leads to the matrix equation

\[\begin{split} \begin{array}{ccccl} A u & + & B ^T p & = & f, \\ B u & & & = & 0, \end{array} \end{split}\]

where \(A\) is the discretization of the \(-\Delta\)-operator for both components of the velocity, and \(B\) is the discretization of the \(\operatorname{div}\)-operator. Its transpose \(B^T\) is the discretization of the negative \(\nabla\)-operator. The physical meaning of the Lagrange parameter \(p\) is the pressure.
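A minimal NGSolve sketch of this block structure, using Taylor-Hood elements (continuous \(P_2\) velocity and \(P_1\) pressure) for illustration; boundary data and the actual solve are omitted:

from ngsolve import *

mesh = Mesh(unit_square.GenerateMesh(maxh=0.1))
V = VectorH1(mesh, order=2, dirichlet=".*")    # velocity space
Q = H1(mesh, order=1)                          # pressure space
u,v = V.TnT()
p,q = Q.TnT()

a = BilinearForm(InnerProduct(grad(u), grad(v))*dx).Assemble()   # matrix A: -Delta for both velocity components
b = BilinearForm(div(u)*q*dx).Assemble()                         # matrix B: div-operator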

Dirichlet boundary conditions

We can interpret a Dirichlet problem as a constrained minimization problem

\[ \min_{u \in H^1 \atop u = u_D \text{ on } \Gamma_D} \frac{1}{2} \int |\nabla u |^2 - \int f u \]

The KKT system is to find \(u \in H^1(\Omega)\) and \(\lambda\) on the boundary such that

\[\begin{split} \begin{array}{ccccll} \int_\Omega \nabla u \nabla v & + & \int_{\Gamma_D} v \lambda & = & \int f v & \qquad \forall \, v \\ \int_{\Gamma_D} u \, \mu & & & = & \int_{\Gamma_D} u_D \, \mu & \qquad \forall \, \mu \end{array} \end{split}\]

By integration by parts in the first equation, we obtain

\[ -\int_\Omega \Delta u v + \int_{\partial \Omega} \frac{\partial u}{\partial n} v + \int_{\Gamma_D} \lambda v = \int_\Omega f v. \]

We see that the meaning of the Lagrange parameter \(\lambda\) is the negative normal derivative \(- \tfrac{\partial u}{\partial n}\).

90.3. Schur complement iteration#

We solve the block-system

\[\begin{split} \begin{array}{ccccl} A u & + & B ^T p & = & f, \\ B u & & & = & g. \end{array} \end{split}\]

Since \(A\) is spd and thus regular, we can solve for \(u\) in the first block-row

\[ u = A^{-1} (f - B^T p). \]

Inserting this \(u\) into the second block-row gives

\[ B A^{-1} (f - B^T p) = g, \]

or equivalently

\[ B A^{-1} B^T p = B A^{-1} f - g. \]

The Schur complement matrix \(S = B A^{-1} B^T\) is positive definite. The Uzawa algorithm (and there are many variants of it) is now to use a Richardson iteration, or conjugate gradients, to solve the Schur complement equation

\[ S p = \tilde g \]

with \(\tilde g = B A^{-1} f - g\). After computing the Lagrange parameter \(p\), the first component \(u\) is found from \(u = A^{-1} (f - B^T p)\).

Of course, the drawback of this method is that it requires the inverse \(A^{-1}\), which is usually not cheaply available. If the Uzawa iteration does not need many steps, one can also use an (inner) iterative solver to apply \(A^{-1}\) in every step.
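To make the procedure concrete, here is a minimal numpy sketch of the Richardson variant of the Uzawa iteration on made-up data; the damping parameter \(\tau\) must satisfy \(0 < \tau < 2/\lambda_{\max}(S)\):

import numpy as np

n, m = 6, 2
rng = np.random.default_rng(1)
M = rng.standard_normal((n, n)); A = M @ M.T + n*np.eye(n)   # made-up spd matrix
B = rng.standard_normal((m, n))
f = rng.standard_normal(n); g = rng.standard_normal(m)

Ainv = np.linalg.inv(A)                     # in practice: a direct or (inner) iterative solver for A
S = B @ Ainv @ B.T
tau = 1.0 / np.linalg.eigvalsh(S)[-1]       # safe damping parameter

p = np.zeros(m)
for k in range(200):
    u = Ainv @ (f - B.T @ p)                # solve the first block-row for u
    p = p + tau * (B @ u - g)               # Richardson step for the Schur complement equation
print("constraint residual:", np.linalg.norm(B @ u - g))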

To design preconditioners for the Uzawa iteration, it is important to understand the norm induced by the Schur-complement \(S\):

\[ \| p \|_S^2 = \| B^T p \|_{A^{-1}}^2 = \left( \sup_u \frac{ u^T B^T p}{ \| u \|_{A}} \right)^2 \]

If \(A\) and \(B\) come from finite-element discretization of bilinear-forms \(a(.,.)\) and \(b(.,.)\), then the last term can be recast as

\[ \sup_{u \in V_h} \frac{b(u,p)^2}{a(u,u)} \]

If we can prove continuity of \(b(.,.)\)

\[ b(u,p) \leq \overline \beta \, \| u \|_A \| p \|_Q, \]

and the famous LBB-condition

\[ \sup_u \frac{b(u,p)}{\| u \|_A} \geq \underline \beta \, \| p \|_Q, \]

we immediately get

\[ \underline \beta^2 \, \| p \|_Q^2 \leq \| p \|_S^2 \leq \overline \beta^2 \, \| p \|_Q^2 \]

If we have a preconditioner for the norm-matrix of the \(Q\)-norm, we have a preconditioner for the Schur complement \(S\).

from ngsolve import *
from ngsolve.webgui import Draw
mesh = Mesh(unit_square.GenerateMesh(maxh=0.1))
fesu = H1(mesh, order=1)                                      # space for u
feslam = H1(mesh, order=1, definedon=mesh.Boundaries(".*"))   # multiplier space, living on the boundary
u,v = fesu.TnT()
lam,mu = feslam.TnT()

a = BilinearForm(grad(u)*grad(v)*dx + u*v*dx).Assemble()   # spd matrix A (the mass term keeps it regular)
b = BilinearForm(u*mu*ds).Assemble()                       # constraint matrix B
f = LinearForm(10*v*dx).Assemble()
g = LinearForm((x+y)*mu*ds).Assemble()                     # Dirichlet data u_D = x+y
ainv = a.mat.Inverse()
gtilde = (b.mat @ ainv * f.vec - g.vec).Evaluate()         # right-hand side of the Schur complement equation

S = b.mat @ ainv @ b.mat.T                                 # Schur complement as operator product
preS = Projector(feslam.FreeDofs(), True)                  # trivial preconditioner: identity on the free dofs

from ngsolve.krylovspace import CGSolver
Sinv = CGSolver(S, preS, printrates=True, maxiter=200)

gflam = GridFunction(feslam)
gfu = GridFunction(fesu)

gflam.vec.data = Sinv * gtilde                             # solve the Schur complement equation
gfu.vec.data = ainv * (f.vec - b.mat.T * gflam.vec)        # recover u from the first block-row

Draw (gfu);
CG iteration 1, residual = 5.703743410155175     
CG iteration 2, residual = 0.32250970878595636     
CG iteration 3, residual = 0.08335855188353485     
CG iteration 4, residual = 0.011192460122905604     
CG iteration 5, residual = 0.004812013162339133     
CG iteration 6, residual = 0.004204370475404719     
CG iteration 7, residual = 0.0035184047718742643     
CG iteration 8, residual = 0.004623704351628802     
CG iteration 9, residual = 0.0027644465369766743     
CG iteration 10, residual = 0.001095452880817425     
CG iteration 11, residual = 0.00720533141359291     
CG iteration 12, residual = 0.0009022414011419461     
CG iteration 13, residual = 0.0010257501072204825     
CG iteration 14, residual = 0.0003641682638948373     
CG iteration 15, residual = 0.00019297576676091736     
CG iteration 16, residual = 0.00022281477996417238     
CG iteration 17, residual = 0.0002135880486764573     
CG iteration 18, residual = 8.720980706824187e-05     
CG iteration 19, residual = 0.0004407752179327705     
CG iteration 20, residual = 5.906366472377646e-05     
CG iteration 21, residual = 2.7113164141127007e-05     
CG iteration 22, residual = 1.6361360075956317e-05     
CG iteration 23, residual = 1.583408203975017e-05     
CG iteration 24, residual = 9.297607436855025e-06     
CG iteration 25, residual = 5.636716541085751e-06     
CG iteration 26, residual = 3.1121913244649083e-06     
CG iteration 27, residual = 6.791426428112145e-06     
CG iteration 28, residual = 1.763145703838146e-05     
CG iteration 29, residual = 1.8813631598073865e-06     
CG iteration 30, residual = 9.796780788441403e-07     
CG iteration 31, residual = 8.064853508518673e-07     
CG iteration 32, residual = 3.2985082498568915e-07     
CG iteration 33, residual = 1.5558353127250373e-07     
CG iteration 34, residual = 6.224582426593474e-08     
CG iteration 35, residual = 3.0956820207492046e-07     
CG iteration 36, residual = 3.784415947420096e-08     
CG iteration 37, residual = 1.9685811568329946e-08     
CG iteration 38, residual = 1.815059836395663e-08     
CG iteration 39, residual = 4.740471030598783e-09     
CG iteration 40, residual = 6.181384164542866e-09     
CG iteration 41, residual = 2.8399139114264356e-09     
CG iteration 42, residual = 9.600238226564242e-10     
CG iteration 43, residual = 4.231350929217185e-09     
CG iteration 44, residual = 2.5856683186987715e-09     
CG iteration 45, residual = 5.619150701221642e-10     
CG iteration 46, residual = 1.036006096856494e-09     
CG iteration 47, residual = 3.553474254749655e-10     
CG iteration 48, residual = 1.0401499370989996e-10     
CG iteration 49, residual = 1.8531733433027572e-11     
CG iteration 50, residual = 1.107279980834582e-11     
CG iteration 51, residual = 9.528784983278807e-12     
CG iteration 52, residual = 2.996273588831822e-12     

If we experiment with the mesh-size, we observe that the iteration count doubles when we refine the mesh by a factor of 4, which numerically indicates that

\[ \kappa \{ S \} = O(h^{-1}) \]

We will come back to this later.
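Such an experiment could look as follows (a sketch reusing the setup from above; for each mesh-size we compare the number of CG iterations printed by the solver):

from ngsolve import *
from ngsolve.krylovspace import CGSolver

for maxh in [0.2, 0.1, 0.05]:
    mesh = Mesh(unit_square.GenerateMesh(maxh=maxh))
    fesu = H1(mesh, order=1)
    feslam = H1(mesh, order=1, definedon=mesh.Boundaries(".*"))
    u,v = fesu.TnT()
    lam,mu = feslam.TnT()
    a = BilinearForm(grad(u)*grad(v)*dx + u*v*dx).Assemble()
    b = BilinearForm(u*mu*ds).Assemble()
    f = LinearForm(10*v*dx).Assemble()
    g = LinearForm((x+y)*mu*ds).Assemble()
    ainv = a.mat.Inverse()
    gtilde = (b.mat @ ainv * f.vec - g.vec).Evaluate()
    S = b.mat @ ainv @ b.mat.T
    preS = Projector(feslam.FreeDofs(), True)
    print("maxh =", maxh)
    Sinv = CGSolver(S, preS, printrates=True, maxiter=1000)
    gflam = GridFunction(feslam)
    gflam.vec.data = Sinv * gtilde    # the last printed iteration number is the count for this mesh-size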

90.4. Block-preconditioning#

Now we want to solve the saddle-point system directly, without forming the Schur complement explicitly. We want to use preconditioners for both fields separately.

First, we show that the spectrum of

\[\begin{split} \left( \begin{array} {cc} A & 0 \\ 0 & S \end{array} \right)^{-1} \left( \begin{array} {cc} A & B^T \\ B & 0 \end{array} \right) \end{split}\]

consists of at most three distinct eigenvalues of moderate size, and thus this matrix is well conditioned.

Theorem: The spectrum \(\sigma\) of the generalized eigenvalue problem

\[\begin{split} \left( \begin{array} {cc} A & B^T \\ B & 0 \end{array} \right) \left( \begin{array}{c} u \\ p \end{array} \right) = \lambda \left( \begin{array} {cc} A & 0 \\ 0 & S \end{array} \right) \left( \begin{array}{c} u \\ p \end{array} \right) \end{split}\]

satisfies \(\sigma \subset \{ 1, (1+\sqrt{5})/2, (1-\sqrt{5})/2 \}\).

Proof: The eigenvalue problem reads as

\[\begin{split} \left( \begin{array} {ccc} A u & + & B^T p \\ B u & & \end{array} \right) = \lambda \left( \begin{array} {c} A u \\ S p \end{array} \right), \end{split}\]

with \((u,p) \neq 0\). The first row reads as

\[ A u + B^T p = \lambda A u \]

or

\[ B^T p = (\lambda-1) A u. \]

This implies

\[ B A^{-1} B^T p = (\lambda-1) B u, \]

or

\[ S p = (\lambda-1) B u. \]

Inserting the second equation \(Bu = \lambda S p\), we obtain

\[ S p = (\lambda-1) \lambda S p \]

If \(p\) is non-zero, the eigenvalue \(\lambda\) must satisfy the quadratic equation

\[ 1 = (\lambda - 1) \lambda, \]

i.e. \(\lambda = \frac{1}{2} (1 \pm \sqrt{5})\). If \(p = 0\), but \(u \neq 0\), the equation \(B^T p = (\lambda-1) A u\) can be satisfied only if \(\lambda = 1\).
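The statement is easy to verify numerically; here is a numpy/scipy sketch with made-up matrices:

import numpy as np
from scipy.linalg import eigh

n, m = 6, 2
rng = np.random.default_rng(2)
M = rng.standard_normal((n, n)); A = M @ M.T + n*np.eye(n)   # spd matrix
B = rng.standard_normal((m, n))                              # full rank
S = B @ np.linalg.inv(A) @ B.T

K = np.block([[A, B.T], [B, np.zeros((m, m))]])              # saddle-point matrix
C = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]]) # block-diagonal preconditioner

lams = eigh(K, C, eigvals_only=True)                         # generalized eigenvalues K x = lambda C x
print(np.unique(np.round(lams, 6)))
print("expected:", (1-5**0.5)/2, 1.0, (1+5**0.5)/2)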

Since

\[\begin{split} \left(\begin{array}{cc} A&B^T \\ B&0\end{array}\right) \end{split}\]

is symmetric, and

\[\begin{split} \left(\begin{array}{cc} A&0 \\ 0&S\end{array}\right) \end{split}\]

provides a norm, the fraction

\[ \frac{\max | \sigma |}{\min | \sigma |} = \frac{1+\sqrt{5}}{| 1-\sqrt{5} |} = 2.618\ldots \]

bounds the condition number of the preconditioned matrix, which in turn bounds the number of iterations of a Krylov-space method. However, since there are positive and negative eigenvalues, we cannot use conjugate gradients. One can apply MinRes or GMRes instead.

from ngsolve import *
from ngsolve.webgui import Draw
mesh = Mesh(unit_square.GenerateMesh(maxh=0.1))

fesu = H1(mesh, order=1, autoupdate=True)
feslam = H1(mesh, order=1, definedon=mesh.Boundaries(".*"), autoupdate=True)
u,v = fesu.TnT()
lam,mu = feslam.TnT()

a = BilinearForm(grad(u)*grad(v)*dx + u*v*dx)
b = BilinearForm(u*mu*ds)
pre = Preconditioner(a, type="multigrid")    # multigrid preconditioner for the A-block
a.Assemble()
b.Assemble()

for i in range(2):                           # refine and reassemble; multigrid levels are updated
    mesh.Refine()
    a.Assemble()
    b.Assemble()

f = LinearForm(10*v*dx).Assemble()
g = LinearForm((x+y)*mu*ds).Assemble()
copy mesh complete
K = BlockMatrix( [[a.mat, b.mat.T], [b.mat, None]] )    # saddle-point operator
F = BlockVector( [f.vec, g.vec])
U = K.CreateRowVector()
prelam = Projector(feslam.FreeDofs(), True)             # trivial preconditioner for the multiplier block
PRE = BlockMatrix( [[pre.mat, None], [None, prelam]] )  # block-diagonal preconditioner
from ngsolve.krylovspace import MinRes

U.data = MinRes(K, pre=PRE, rhs=F, maxsteps=400)
LinearSolver iteration 1, residual = 10.006052884938201     
LinearSolver iteration 2, residual = 2.6924136155685763     
LinearSolver iteration 3, residual = 1.60282600161908     
LinearSolver iteration 4, residual = 0.17787146650364064     
LinearSolver iteration 5, residual = 0.16551145145630625     
LinearSolver iteration 6, residual = 0.12559322630402628     
LinearSolver iteration 7, residual = 0.04182651492449851     
LinearSolver iteration 8, residual = 0.04091961212851145     
LinearSolver iteration 9, residual = 0.03798027450044472     
LinearSolver iteration 10, residual = 0.00794492344759875     
LinearSolver iteration 11, residual = 0.005486673124053477     
LinearSolver iteration 12, residual = 0.005481794529587     
LinearSolver iteration 13, residual = 0.004823102886311118     
LinearSolver iteration 14, residual = 0.0019135138262121013     
LinearSolver iteration 15, residual = 0.0018231395803895414     
LinearSolver iteration 16, residual = 0.001815036053916696     
LinearSolver iteration 17, residual = 0.0012809618903024942     
LinearSolver iteration 18, residual = 0.0011334165967012847     
LinearSolver iteration 19, residual = 0.0011307102211392996     
LinearSolver iteration 20, residual = 0.0008634299729498884     
LinearSolver iteration 21, residual = 0.0007970718790778335     
LinearSolver iteration 22, residual = 0.0007967222328500014     
LinearSolver iteration 23, residual = 0.0006568584384695376     
LinearSolver iteration 24, residual = 0.0005669357974305203     
LinearSolver iteration 25, residual = 0.0005668987307889644     
LinearSolver iteration 26, residual = 0.0005002942167523764     
LinearSolver iteration 27, residual = 0.0004778431788232897     
LinearSolver iteration 28, residual = 0.0004761165147485691     
LinearSolver iteration 29, residual = 0.00044476274680689597     
LinearSolver iteration 30, residual = 0.000443926110546035     
LinearSolver iteration 31, residual = 0.0003026291735590164     
LinearSolver iteration 32, residual = 0.0002867283203485827     
LinearSolver iteration 33, residual = 0.00028670864286594716     
LinearSolver iteration 34, residual = 0.0002544824048447953     
LinearSolver iteration 35, residual = 0.00016602077647740788     
LinearSolver iteration 36, residual = 0.00016434676866042806     
LinearSolver iteration 37, residual = 0.00016421847717886735     
LinearSolver iteration 38, residual = 0.00014029678008718827     
LinearSolver iteration 39, residual = 0.00010396456155120892     
LinearSolver iteration 40, residual = 0.00010321518710120788     
LinearSolver iteration 41, residual = 0.00010317733972589769     
LinearSolver iteration 42, residual = 9.235558457392675e-05     
LinearSolver iteration 43, residual = 7.191412740756462e-05     
LinearSolver iteration 44, residual = 7.167788478028774e-05     
LinearSolver iteration 45, residual = 7.142460317688723e-05     
LinearSolver iteration 46, residual = 6.44287657201762e-05     
LinearSolver iteration 47, residual = 6.352206186975885e-05     
LinearSolver iteration 48, residual = 6.348678469928673e-05     
LinearSolver iteration 49, residual = 5.5582836045941635e-05     
LinearSolver iteration 50, residual = 4.9564189071730394e-05     
LinearSolver iteration 51, residual = 4.955252997009489e-05     
LinearSolver iteration 52, residual = 4.825107658370613e-05     
LinearSolver iteration 53, residual = 4.2021410480985224e-05     
LinearSolver iteration 54, residual = 4.193017889368154e-05     
LinearSolver iteration 55, residual = 4.181275397026587e-05     
LinearSolver iteration 56, residual = 4.159226802859874e-05     
LinearSolver iteration 57, residual = 4.1393692959144535e-05     
LinearSolver iteration 58, residual = 3.310666361752348e-05     
LinearSolver iteration 59, residual = 3.234563447979218e-05     
LinearSolver iteration 60, residual = 3.2336993139926075e-05     
LinearSolver iteration 61, residual = 3.0091636403030867e-05     
LinearSolver iteration 62, residual = 2.893809231662017e-05     
LinearSolver iteration 63, residual = 2.893754159196256e-05     
LinearSolver iteration 64, residual = 2.742066724764176e-05     
LinearSolver iteration 65, residual = 2.4707544746314215e-05     
LinearSolver iteration 66, residual = 2.469188331783693e-05     
LinearSolver iteration 67, residual = 2.4439636276027798e-05     
LinearSolver iteration 68, residual = 2.2741783209975205e-05     
LinearSolver iteration 69, residual = 2.2732527515891966e-05     
LinearSolver iteration 70, residual = 2.0971473988404078e-05     
LinearSolver iteration 71, residual = 1.910553350297205e-05     
LinearSolver iteration 72, residual = 1.9098392782407787e-05     
LinearSolver iteration 73, residual = 1.8969487973256553e-05     
LinearSolver iteration 74, residual = 1.5061795188774857e-05     
LinearSolver iteration 75, residual = 1.4767521894186756e-05     
LinearSolver iteration 76, residual = 1.4764670997617955e-05     
LinearSolver iteration 77, residual = 1.3980831180153135e-05     
LinearSolver iteration 78, residual = 1.3690705057364501e-05     
LinearSolver iteration 79, residual = 1.3680394619761025e-05     
LinearSolver iteration 80, residual = 1.3106857175187873e-05     
LinearSolver iteration 81, residual = 1.30491321882643e-05     
LinearSolver iteration 82, residual = 1.303008939522798e-05     
LinearSolver iteration 83, residual = 1.297011872199007e-05     
LinearSolver iteration 84, residual = 1.2968072136285665e-05     
LinearSolver iteration 85, residual = 1.1886623726682327e-05     
LinearSolver iteration 86, residual = 1.013811543530733e-05     
LinearSolver iteration 87, residual = 1.0128688782012293e-05     
LinearSolver iteration 88, residual = 1.0023928342549886e-05     
LinearSolver iteration 89, residual = 9.901962432716777e-06     
LinearSolver iteration 90, residual = 9.901698813932705e-06     
LinearSolver iteration 91, residual = 9.34923238484178e-06     
LinearSolver iteration 92, residual = 7.77344724236405e-06     
LinearSolver iteration 93, residual = 7.76156412171864e-06     
LinearSolver iteration 94, residual = 7.73992768129038e-06     
LinearSolver iteration 95, residual = 6.606634889215221e-06     
LinearSolver iteration 96, residual = 6.451200504182154e-06     
LinearSolver iteration 97, residual = 6.451059188051417e-06     
LinearSolver iteration 98, residual = 6.1282174616192325e-06     
LinearSolver iteration 99, residual = 5.295673124262948e-06     
LinearSolver iteration 100, residual = 5.290161202956915e-06     
LinearSolver iteration 101, residual = 5.2441091631346645e-06     
LinearSolver iteration 102, residual = 4.6969039222829026e-06     
LinearSolver iteration 103, residual = 4.681809447713418e-06     
LinearSolver iteration 104, residual = 4.675150622392839e-06     
LinearSolver iteration 105, residual = 4.095939553147835e-06     
LinearSolver iteration 106, residual = 3.956866708243204e-06     
LinearSolver iteration 107, residual = 3.956788505148804e-06     
LinearSolver iteration 108, residual = 3.912312105164127e-06     
LinearSolver iteration 109, residual = 3.9111102267874765e-06     
LinearSolver iteration 110, residual = 3.4582431551015187e-06     
LinearSolver iteration 111, residual = 3.0985340239060944e-06     
LinearSolver iteration 112, residual = 3.0972441079209224e-06     
LinearSolver iteration 113, residual = 3.080363248841906e-06     
LinearSolver iteration 114, residual = 2.7594147515889492e-06     
LinearSolver iteration 115, residual = 2.717901707233367e-06     
LinearSolver iteration 116, residual = 2.7178089277347426e-06     
LinearSolver iteration 117, residual = 2.591872379395178e-06     
LinearSolver iteration 118, residual = 2.012479782578389e-06     
LinearSolver iteration 119, residual = 1.9824564452862945e-06     
LinearSolver iteration 120, residual = 1.982454733891398e-06     
LinearSolver iteration 121, residual = 1.949180345347565e-06     
LinearSolver iteration 122, residual = 1.8936006388080886e-06     
LinearSolver iteration 123, residual = 1.8932471540524113e-06     
LinearSolver iteration 124, residual = 1.6906601592346228e-06     
LinearSolver iteration 125, residual = 1.4743845285220006e-06     
LinearSolver iteration 126, residual = 1.4733739358435825e-06     
LinearSolver iteration 127, residual = 1.465366340985294e-06     
LinearSolver iteration 128, residual = 1.3100766606552662e-06     
LinearSolver iteration 129, residual = 1.3019964774111864e-06     
LinearSolver iteration 130, residual = 1.29946292210663e-06     
LinearSolver iteration 131, residual = 1.2273752434466936e-06     
LinearSolver iteration 132, residual = 1.2200814982392898e-06     
LinearSolver iteration 133, residual = 1.219015968824155e-06     
LinearSolver iteration 134, residual = 1.1961228339113691e-06     
LinearSolver iteration 135, residual = 1.1960973838068986e-06     
LinearSolver iteration 136, residual = 1.1565545505886938e-06     
LinearSolver iteration 137, residual = 1.154591039558322e-06     
LinearSolver iteration 138, residual = 1.1407461783284897e-06     
LinearSolver iteration 139, residual = 9.844363215072037e-07     
gfu = GridFunction(fesu)
gfu.vec.data = U[0]
Draw (gfu);