Corless
In this lecture we look at several analytical techniques for solving linear one-dimensional integral equations; we look at the Maple share library package IntSolve, written by Honglin Ye [Ph.D. Western]; we look at well-posedness of linear integral equations with smooth kernels; and briefly at approximate and numerical methods. Clearly we're going to have to move at some speed!

You have now seen the general Neumann series, and some examples; you have also seen how to solve integral equations with degenerate kernels. This will be quite useful for us when we look at Marchenko's equation for soliton problems.

We also saw briefly this morning an introduction to eigenvalue techniques. We look briefly at some more examples here.

2 Solution Techniques

We now look at some very simple but powerful techniques for some important classes of problems.

2.1 Laplace transforms

The convolution integrals present suggest that we take Laplace transforms. Remember

    L(f)(s) = ∫_0^∞ e^{−st} f(t) dt        (3)

    L(f ∗ g) = L(f) L(g)        (4)

where the convolution product f ∗ g is

    (f ∗ g)(x) = ∫_0^x f(x − τ) g(τ) dτ = ∫_0^x f(τ) g(x − τ) dτ .

Hence, Poisson's integral equation becomes

    L(φ) = L(f) / (1 − λ L(K))

provided that λ L(K) ≠ 1. Therefore

    φ = L^{−1}[ L(f) / (1 − λ L(K)) ] .        (5)
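The convolution theorem (4) is easy to sanity-check numerically. The following small experiment is mine, not part of the original notes, and is in Python rather than Maple since it is just arithmetic; the choices f(t) = e^t, g(t) = sin t, the sample point s = 3, and the truncation of the Laplace integral at T = 30 are all arbitrary (we need s > 1 so that the transform of e^t converges).

```python
# Sanity check of the convolution theorem L(f*g) = L(f) L(g).
# All integrals use composite Simpson quadrature; the Laplace integral
# is truncated to [0, T].
import math

def simpson(vals, dx):
    # composite Simpson rule; len(vals) must be odd (even panel count)
    return (vals[0] + vals[-1] + 4*sum(vals[1:-1:2]) + 2*sum(vals[2:-1:2])) * dx / 3

def laplace(h, s, T=30.0, n=2000):
    dx = T / n
    return simpson([math.exp(-s*i*dx) * h(i*dx) for i in range(n + 1)], dx)

def conv(f, g, x, n=400):
    # (f*g)(x) = integral from 0 to x of f(x-t) g(t) dt
    dt = x / n
    return simpson([f(x - i*dt) * g(i*dt) for i in range(n + 1)], dt) if x > 0 else 0.0

f, g, s = math.exp, math.sin, 3.0
lhs = laplace(lambda t: conv(f, g, t), s)
rhs = laplace(f, s) * laplace(g, s)      # exactly 1/(s-1) * 1/(s^2+1) = 1/20
print(lhs, rhs)
```

The two printed numbers agree to quadrature accuracy with the exact value 1/((s − 1)(s² + 1)) = 1/20.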
For example, if our Poisson's equation is

    φ(x) = ∫_0^x e^{x−y} φ(y) dy + sin(x) ,        (6)

then since L(exp(x)) = 1/(s − 1) and L(sin(x)) = 1/(s² + 1),

    φ(x) = L^{−1}[ 1 / ((s² + 1)(1 − 1/(s − 1))) ]
         = L^{−1}[ (s − 1) / ((s² + 1)(s − 2)) ]
         = (1/5) e^{2x} − (1/5) cos x + (3/5) sin x .        (7)

Substituting this equation back into the original equation shows that this really is a solution.

2.2 Eigenfunction expansions

Suppose we are trying to solve

    φ(x) = λ ∫ K(x, ξ) φ(ξ) dξ + f(x) ,        (8)

where the kernel K has known eigenfunctions φ_k(x) and eigenvalues λ_k:

    ∫ K(x, ξ) φ_k(ξ) dξ = λ_k φ_k(x) .        (9)

We concern ourselves here only with the case where the set of eigenvalues is finite or at most countably infinite, but in the next two weeks we will see examples of integral equations with a continuous spectrum of eigenvalues. [This is of some importance for physics, and some of you will have seen this already.]

Here, when we expand everything in sight,

    f(x) = Σ_k b_k φ_k(x)        (10)

where the b_k are presumed known because f(x) and the eigenfunctions φ_k(x) are known, and

    φ(x) = Σ_k a_k φ_k(x)        (11)

where we want to find the coefficients a_k. Substituting this into the integral equation gives

    Σ_k (a_k − λ λ_k a_k − b_k) φ_k = 0 .

Thus, choosing

    a_k = b_k / (1 − λ λ_k)        (12)

solves the problem, unless one of the λ λ_k = 1.

For example, consider the Fredholm first-kind integral equation (the above analysis was for a second-kind integral equation but it all goes through mutatis mutandis):

    (1/2π) ∫_{−π}^{π} [(1 − ρ²) / (1 − 2ρ cos(x − y) + ρ²)] φ(y) dy = f(x) .        (13)

We take 0 < ρ < 1 and −π ≤ x ≤ π. The eigenfunctions are just cos(kx) and sin(kx). The solution can be shown to be

    φ(y) = (1/2) a₀ + Σ_{n≥1} ρ^{−n} (a_n cos ny + b_n sin ny) .        (14)

There are only two eigenfunctions. The eigenvalues are λ = 9/20 ± √889/60, with associated eigenfunctions A_i + B_i x². The solution is

    φ(y) = sin(πy) + 3(61π² + 20 − 6π²y²) / (5π³) .        (16)

Of course, I did that with Maple, not by hand!

3 Maple code for solution of linear integral equations

In [7] we find a description of IntSolve, which resides in the share library of Maple. Watch out for bugs. The code has "rotted", to use a technical term; that
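The Laplace-transform solution (7) can be verified numerically as well as by substitution. Here is a short sketch (again in Python rather than Maple, and not part of the original notes; the quadrature scheme and the test points are my own choices):

```python
# Verify that phi(x) = (1/5) e^{2x} - (1/5) cos x + (3/5) sin x satisfies
# phi(x) = integral from 0 to x of e^{x-y} phi(y) dy + sin x,
# using composite Simpson quadrature for the integral.
import math

def phi(x):
    return math.exp(2*x)/5 - math.cos(x)/5 + 3*math.sin(x)/5

def rhs(x, n=1000):
    dy = x / n
    vals = [math.exp(x - i*dy) * phi(i*dy) for i in range(n + 1)]
    integral = (vals[0] + vals[-1] + 4*sum(vals[1:-1:2]) + 2*sum(vals[2:-1:2])) * dy / 3
    return integral + math.sin(x)

max_err = max(abs(phi(x) - rhs(x)) for x in (0.5, 1.0, 2.0, 3.0))
print(max_err)    # quadrature-level error only: (7) really does solve (6)
```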
is, Maple has evolved since the package was written, and it is not clear that the package still works. In particular I have found some bugs in the eigenfunc method.

4 Solution of Linear Integral Equations in Maple

> with(share);

    See ?share and ?share,contents for information about the share library

    []

> with(IntSolve);

    Share Library: IntSolve
    Authors: Ye, Honglin and Corless, Robert.
    Description: Version 1 of an integral equation solver.

> p2 := IntSolve( example2, phi(y), eigenfunc );

    p2 := sin(πy) + 3(61π² + 20 − 6π²y²)/(5π³)

> P2 := unapply(p2,y);

    P2 := y → sin(πy) + 3(61π² + 20 − 6π²y²)/(5π³)

> value( eval( subs( phi=P2, example2 ) ) );

    sin(πy) + 3(61π² + 20 − 6π²y²)/(5π³) = −(3/5)(−61π² + 6π²y² − 20)/π³ + sin(πy)

> normal( lhs(%) - rhs(%) );

    0

> example3 := phi(x) = lambda*Int( exp(k*(x-y))*phi(y), y=0..1) + sin(x);

    example3 := φ(x) = λ ∫_0^1 e^{k(x−y)} φ(y) dy + sin(x)
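Before running IntSolve on example3, notice that its kernel is degenerate: e^{k(x−y)} = e^{kx} e^{−ky}, so the solution must have the form φ(x) = sin(x) + C e^{kx}, where C satisfies a single linear equation; that is essentially what the constant _C1 below is. A Python sketch, with sample values λ = 1/2 and k = 1 (these values are my choice, not from the notes):

```python
# The kernel e^{k(x-y)} = e^{kx} e^{-ky} is rank one, so the solution of
# example3 is phi(x) = sin(x) + C e^{kx}.  Substituting gives
# C = lam*(A + C), with A = integral from 0 to 1 of e^{-ky} sin(y) dy,
# hence C = lam*A/(1 - lam).
import math

lam, k = 0.5, 1.0
A = (1 - math.exp(-k) * (math.cos(1) + k * math.sin(1))) / (k*k + 1)
C = lam * A / (1 - lam)

def phi(x):
    return math.sin(x) + C * math.exp(k * x)

def rhs(x, n=1000):
    # lam * integral from 0 to 1 of e^{k(x-y)} phi(y) dy + sin(x), by Simpson
    dy = 1.0 / n
    vals = [math.exp(k * (x - i*dy)) * phi(i*dy) for i in range(n + 1)]
    integral = (vals[0] + vals[-1] + 4*sum(vals[1:-1:2]) + 2*sum(vals[2:-1:2])) * dy / 3
    return lam * integral + math.sin(x)

err = max(abs(phi(x) - rhs(x)) for x in (0.0, 0.3, 0.7, 1.0))
print(C, err)
```

For these parameter values C agrees with the value of _C1 found below, and err is at quadrature level.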
> P3 := unapply( rhs(P3sol), x );

    P3 := x → sin(x) + e^{kx} _C1

> eval(subs(phi=P3,p3[2]));

    _C1 − λ (−cos(1) e^{−k} − k e^{−k} sin(1) + _C1 k² + _C1 + 1)/(k² + 1) = 0

> map(normal, readlib(isolate)(%,_C1) );

    _C1 = λ (cos(1) e^{−k} + k e^{−k} sin(1) − 1)/(−k² − 1 + λ k² + λ)

That's very impressive, but is it right?

> assign(%);
> value( eval( subs(phi=P3, example3 ) ) );

    sin(x) + e^{kx} λ (cos(1) e^{−k} + k e^{−k} sin(1) − 1)/(−k² − 1 + λ k² + λ) =
        λ (k e^{kx−k} sin(1) − λ k sin(1) e^{kx−k} + λ k e^{−k} sin(1) e^{kx}
           + λ cos(1) e^{−k} e^{kx} + cos(1) e^{kx−k} − λ cos(1) e^{kx−k}
           − e^{kx}) / ((k² + 1)(−1 + λ)) + sin(x)

> simplify( normal( lhs(%) - rhs(%) ) );

    0

5 Well-posedness

There are many methods to approximate the solution of an integral equation. There are many methods to solve them numerically. But before we show how to solve problems approximately, we observe that linear integral equations with smooth kernels are well-posed. Suppose for example we have somehow computed an approximate solution φ̂(x) to the Fredholm 2nd kind equation (8). Define the residual

    r(x) := φ̂(x) − λ ∫ K(x, ξ) φ̂(ξ) dξ − f(x) .        (17)

Then we have found the exact solution to the related integral equation

    φ̂(x) = λ ∫ K(x, ξ) φ̂(ξ) dξ + f̂(x) ,        (18)

where f̂(x) = f(x) + r(x). This is trivial. But it's also profound. If r(x) is small compared to the approximations already made in deriving the integral equation, or to physically reasonable perturbations in f, then we are done.

Perhaps instead we have exactly computed a solution satisfying an equation with a modified kernel (one technique is to replace K(x, ξ) with a degenerate kernel K̂ that approximates K, for example). Is K̂ − K smaller than terms neglected in the model? Yes? Done.

This raises the question of how sensitive the solution of the integral equation is to changes in f or K. In general this is a deep subject (with singular kernels, for example). But for our case, we find in [2, p. 156] that there are computable constants N and C such that

    |φ̂ − φ| ≤ (ε N + δ (1 + C)) / (1 − ε (1 + C))        (19)

where ε is a bound on the distance ‖K̂ − K‖ and δ is a bound on ‖f̂ − f‖. This is as good a behaviour as can be expected, and shows that not only is the problem well posed, we have a linear dependence on the perturbations.

6 Collocation

This is a reasonable numerical method to solve many kinds of integral equations, not just the simple ones here. See however the many numerical papers, starting perhaps from those of Hermann Brunner (Memorial U.), e.g. [1], or the software available through http://www.gams.nist.gov, for real methods. The following is just a sketch.

Choose points ("collocation points") x_k on a ≤ x ≤ b. Then

    φ(x) = λ ∫_a^b K(x, ξ) φ(ξ) dξ + f(x)
implies

    φ(x_k) = λ ∫_a^b K(x_k, ξ) φ(ξ) dξ + f(x_k) .        (20)

Now if we approximate φ(y) by a linear combination …

> phi(x) = Int( exp(x*y)*phi(y), y=0..1) + 1/(1+x^2);

    φ(x) = ∫_0^1 e^{xy} φ(y) dy + 1/(1 + x²)

    piecewise( x < 1/5, −1.303125609, x < 7/30, −1.362122403,
               x < 4/15, −1.423937411, x < 3/10, … )

[Figure: Maple plot on 0 ≤ x ≤ 1, vertical scale reaching about −0.02.]
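The collocation recipe (20) takes only a few lines to implement. The following Python sketch (not the IntSolve code) treats the same example: it collocates at equally spaced points, replaces the integral by the trapezoid rule on those same points (strictly speaking a Nyström-type discretization), and solves the resulting linear system by Gaussian elimination. The grid size n = 40 is an arbitrary choice.

```python
# Collocation/Nystrom sketch for phi(x) = int_0^1 e^{xy} phi(y) dy + 1/(1+x^2):
# enforce (20) at equally spaced points x_j, with the integral replaced by
# the trapezoid rule on the same points, then solve (I - A) phi = f.
import math

n = 40
xs = [i / n for i in range(n + 1)]
w = [1/(2*n)] + [1/n] * (n - 1) + [1/(2*n)]      # trapezoid weights

A = [[(1.0 if j == l else 0.0) - w[l] * math.exp(xs[j] * xs[l])
      for l in range(n + 1)] for j in range(n + 1)]
b = [1 / (1 + x * x) for x in xs]

# Gaussian elimination with partial pivoting, then back substitution
for c in range(n + 1):
    p = max(range(c, n + 1), key=lambda r: abs(A[r][c]))
    A[c], A[p] = A[p], A[c]
    b[c], b[p] = b[p], b[c]
    for r in range(c + 1, n + 1):
        m = A[r][c] / A[c][c]
        for cc in range(c, n + 1):
            A[r][cc] -= m * A[c][cc]
        b[r] -= m * b[c]
phi = [0.0] * (n + 1)
for r in range(n, -1, -1):
    phi[r] = (b[r] - sum(A[r][c] * phi[c] for c in range(r + 1, n + 1))) / A[r][r]

# residual of the discrete equation at the collocation nodes
res = max(abs(phi[j] - sum(w[l] * math.exp(xs[j] * xs[l]) * phi[l]
                           for l in range(n + 1)) - 1/(1 + xs[j]**2))
          for j in range(n + 1))
print(phi[0], res)
```

By construction the discrete equation is satisfied essentially exactly at the nodes (res is at roundoff level); the accuracy of the computed phi between nodes is governed by the trapezoid rule error. Note that the computed values are negative, consistent with the Maple output above.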
References