Landweber iteration
The Landweber iteration or Landweber algorithm is an algorithm to solve ill-posed linear inverse problems, and it has been extended to solve non-linear problems that involve constraints. The method was first proposed in the 1950s,[1] and it can now be viewed as a special case of many other, more general methods.[2]
Basic algorithm
The original Landweber algorithm[1] attempts to recover a signal x from measurements y. The linear version assumes that y = Ax for a linear operator A. When the problem is in finite dimensions, A is just a matrix.
When A is nonsingular, an explicit solution is x = A⁻¹y. However, if A is ill-conditioned, the explicit solution is a poor choice, since it is sensitive to any errors in y. If A is singular, this explicit solution does not even exist. The Landweber algorithm is an attempt to regularize the problem, and it is one of the alternatives to Tikhonov regularization. We may view the Landweber algorithm as solving

    min_x ‖Ax − y‖₂² / 2
using an iterative method. For ill-posed problems, the iterative method may be purposefully stopped before convergence.
The algorithm is given by the update

    x_{k+1} = x_k − ω A*(A x_k − y),

where the relaxation factor ω satisfies 0 < ω < 2/σ₁², with σ₁ the largest singular value of A. If we write f(x) = ‖Ax − y‖₂²/2, then the update can be written in terms of the gradient

    x_{k+1} = x_k − ω ∇f(x_k),
and hence the algorithm is a special case of gradient descent.
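A minimal NumPy sketch of the linear iteration follows; the problem sizes, the synthetic data, and the stepsize choice ω = 1/σ₁² are illustrative assumptions, not prescribed by the method.

    import numpy as np

    # Sketch of the Landweber update x_{k+1} = x_k - ω A*(A x_k - y).
    # The data and stepsize below are illustrative assumptions.
    rng = np.random.default_rng(42)
    A = rng.standard_normal((50, 100))                # underdetermined, hence ill-posed
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    y = A @ x_true + 0.01 * rng.standard_normal(50)   # noisy measurements

    sigma1 = np.linalg.norm(A, 2)                     # largest singular value σ₁ of A
    omega = 1.0 / sigma1**2                           # inside the admissible range (0, 2/σ₁²)

    x = np.zeros(100)
    for k in range(200):                              # stopping early acts as regularization
        x = x - omega * A.T @ (A @ x - y)             # A is real here, so A* = Aᵀ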
A discussion of the Landweber iteration as a regularization algorithm can be found in the books by Louis[3] and by Vainikko and Veretennikov.[4]
Nonlinear extension
In general, the updates generated by

    x_{k+1} = x_k − ω ∇f(x_k)

will generate a sequence that converges to a minimizer of f whenever f is convex and the stepsize ω is chosen such that 0 < ω < 2/‖∇²f‖, where ‖∇²f‖ is the spectral norm of the Hessian of f.
Since this is a special type of gradient descent, there is currently not much benefit to analyzing it on its own as the nonlinear Landweber iteration; such analysis was performed historically by many communities not yet aware of unifying frameworks.
The nonlinear Landweber problem has been studied in many papers across many communities; see, for example, the convergence analysis of Hanke, Neubauer and Scherzer.[5]
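As a concrete illustration, here is a hedged NumPy sketch of the nonlinear iteration applied to one smooth convex choice of f, a logistic loss; the loss, data, and stepsize are assumptions, chosen so that ‖∇²f‖ ≤ ‖A‖²/4 and ω stays inside the admissible range.

    import numpy as np

    # Sketch: nonlinear Landweber as gradient descent x_{k+1} = x_k - ω ∇f(x_k)
    # on the convex logistic loss f(x) = Σ_i log(1 + exp(-b_i (A x)_i)).
    # The data and the choice of f are illustrative assumptions, not from the article.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 20))
    b = np.sign(rng.standard_normal(100))         # ±1 labels (synthetic)

    def grad_f(x):
        z = b * (A @ x)
        return A.T @ (-b / (1.0 + np.exp(z)))     # ∇f(x)

    # For this f, ‖∇²f(x)‖ ≤ ‖A‖²/4, so any 0 < ω < 8/‖A‖² satisfies the bound.
    omega = 4.0 / np.linalg.norm(A, 2) ** 2

    x = np.zeros(20)
    for _ in range(500):
        x = x - omega * grad_f(x)                 # nonlinear Landweber update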
Extension to constrained problems
If f is a convex function and C is a convex set, then the problem

    min_{x ∈ C} f(x)

can be solved by the constrained, nonlinear Landweber iteration, given by

    x_{k+1} = P_C( x_k − ω ∇f(x_k) ),

where P_C is the projection onto the set C. Convergence is guaranteed when 0 < ω < 2/‖∇²f‖.[6] This is again a special case of projected gradient descent (which is a special case of the forward–backward algorithm), as discussed by Combettes and Pesquet.[2]
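A short sketch of the constrained iteration, under the assumption that C = {x : x ≥ 0}, whose projection P_C is elementwise clipping, and f(x) = ‖Ax − y‖₂²/2:

    import numpy as np

    # Sketch: projected Landweber x_{k+1} = P_C(x_k - ω ∇f(x_k)) with
    # f(x) = ‖Ax - y‖²/2 and C the nonnegative orthant (an assumed example).
    rng = np.random.default_rng(1)
    A = rng.standard_normal((60, 30))
    x_true = np.clip(rng.standard_normal(30), 0, None)   # nonnegative ground truth
    y = A @ x_true

    sigma1 = np.linalg.norm(A, 2)
    omega = 1.0 / sigma1**2                              # inside (0, 2/σ₁²)

    x = np.zeros(30)
    for _ in range(2000):
        grad = A.T @ (A @ x - y)                         # ∇f(x)
        x = np.clip(x - omega * grad, 0, None)           # P_C(x - ω ∇f(x))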
Applications
Since the method has been around since the 1950s, it has been adopted by many scientific communities, especially those studying ill-posed problems, in particular the computer vision community[7] and the signal restoration community.[8] It is also used in image processing, since many image problems, such as deblurring, are ill-posed. Variants of this method have also been used in sparse approximation problems and compressed sensing settings.[9]
References
1. Landweber, L. (1951). "An iteration formula for Fredholm integral equations of the first kind". Amer. J. Math. 73, 615–624.
2. Combettes, P. L., Pesquet, J.-C. (2011). "Proximal splitting methods in signal processing". In: Fixed-Point Algorithms for Inverse Problems in Science and Engineering (H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, H. Wolkowicz, eds.), pp. 185–212. Springer, New York.
3. Louis, A. K. (1989). Inverse und schlecht gestellte Probleme. Stuttgart, Teubner.
4. Vainikko, G. M., Veretennikov, A. Y. (1986). Iteration Procedures in Ill-Posed Problems. Moscow, Nauka (in Russian).
5. Hanke, M., Neubauer, A., Scherzer, O. (1995). "A convergence analysis of the Landweber iteration for nonlinear ill-posed problems". Numerische Mathematik 72(1), 21–37. doi:10.1007/s002110050158.
6. Eicke, B. (1992). "Iteration methods for convexly constrained ill-posed problems in Hilbert space". Numer. Funct. Anal. Optim. 13, 413–429.
7. Johansson, B., Elfving, T., Kozlov, V., Censor, Y., Forssén, P.-E., Granlund, G. (2006). "The application of an oblique-projected Landweber method to a model of supervised learning". Math. Comput. Modelling 43, 892–909.
8. Trussell, H. J., Civanlar, M. R. (1985). "The Landweber iteration and projection onto convex sets". IEEE Trans. Acoust., Speech, Signal Process. 33, 1632–1634.
9. Kyrillidis, A., Cevher, V. "Recipes for hard thresholding methods".