Method of steepest descent: Difference between revisions

From Wikipedia, the free encyclopedia
=== The asymptotic expansion in the case of a single non-degenerate saddle point ===
Assume
# <math>f(z)</math> and <math>S(z)</math> are [[Holomorphic function|holomorphic]] functions in an [[Open set|open]], [[Bounded set (topological vector space)|bounded]], and [[Simply connected space|simply connected]] set <math>\Omega_x \subset \mathbb{C}^n</math> such that the set <math>I_x = \Omega_x \cap \mathbb{R}^n</math> is [[Connected space|connected]];
# <math>\Re[S(z)]</math> has a single maximum: <math>\max\limits_{z\in I_x} \Re[S(z)] = \Re[S(x^0)]</math> for exactly one point <math>x^0 \in I_x</math>;
# <math>x^0</math> is a non-degenerate saddle point (i.e., <math>\nabla S(x^0) = 0</math> and <math>\det S''_{xx}(x^0) \neq 0</math>).


Then, the following asymptotic holds
:<math>I(\lambda) \equiv \int_{I_x} f(x) e^{\lambda S(x)} dx = \left( \frac{2\pi}{\lambda}\right)^{\frac{n}{2}} e^{\lambda S(x^0)} \prod_{j=1}^n (-\mu_j)^{-\frac{1}{2}} \left[f(x^0)+ O\left(\lambda^{-1}\right) \right], \qquad \lambda \to + \infty,</math><div style="text-align: right;"> '''(8)''' </div>
where <math>\mu_j</math> are eigenvalues of the [[Hessian matrix|Hessian]] <math>S''_{xx}(x^0)</math> and <math>(-\mu_j)^{-\frac{1}{2}}</math> are defined with arguments
:<math>\left | \arg\sqrt{-\mu_j} \right| < \tfrac{\pi}{4}.</math><div style="text-align: right;"> '''(9)''' </div>
[[File:Illustration To Derivation Of Asymptotic For Saddle Point Integration.pdf|thumb|center|An illustration to the derivation of equation (8)]]


First, we deform the contour <math>I_x</math> into a new contour <math>I'_x \subset \Omega_x</math> passing through the saddle point <math>x^0</math> and sharing the boundary with <math>I_x</math>. This deformation does not change the value of the integral <math>I(\lambda)</math>. We employ the [[Method_of_steepest_descent#Complex_Morse_Lemma|Complex Morse Lemma]] to change the variables of integration. According to the lemma, the function <math>\boldsymbol{\varphi}(w)</math> maps a neighborhood <math>U \subset \Omega_x</math> (<math>x^0 \in U</math>) onto a neighborhood <math>\Omega_w</math> containing the origin. The integral <math>I(\lambda)</math> can be split into two: <math>I(\lambda) = I_0(\lambda) + I_1(\lambda)</math>, where <math>I_0(\lambda)</math> is the integral over <math>U\cap I'_x</math>, while <math>I_1(\lambda)</math> is over <math>I'_x \setminus (U\cap I'_x)</math> (i.e., the remaining part of the contour <math>I'_x</math>). Since the latter region does not contain the saddle point <math>x^0</math>, the value of <math>I_1(\lambda)</math> is exponentially smaller than <math>I_0(\lambda)</math> as <math>\lambda\to +\infty</math>;<ref> This conclusion follows from a comparison between the final asymptotic for <math>I_0(\lambda)</math>, given by equation (8), and [[Method_of_steepest_descent#A_simple_estimate_.5B1.5D|a simple estimate]] for the discarded integral <math>I_1(\lambda)</math>.</ref> thus, <math>I_1(\lambda)</math> is ignored. Introducing the contour <math>I_w</math> such that <math>U\cap I'_x = \boldsymbol{\varphi}(I_w)</math>, we have


:<math>I_0(\lambda) = e^{\lambda S(x^0)} \int_{I_w} f[\boldsymbol{\varphi}(w)] \exp\left( \lambda \sum_{j=1}^n \tfrac{\mu_j}{2} w_j^2 \right) \left |\det\boldsymbol{\varphi}_w'(w) \right | dw.</math><div style="text-align: right;"> '''(10)''' </div>
Recalling that <math>x^0 = \boldsymbol{\varphi}(0)</math> as well as <math>\det \boldsymbol{\varphi}_w'(0) = 1</math>, we expand the pre-exponential function into a Taylor series and keep just the leading zero-order term
:<math>I_0(\lambda) \approx f(x^0) e^{\lambda S(x^0)} \int_{\mathbb{R}^n} \exp\left( \lambda \sum_{j=1}^n \tfrac{\mu_j}{2} w_j^2 \right) dw = f(x^0)e^{\lambda S(x^0)} \prod_{j=1}^n \int_{-\infty}^{\infty} e^{\frac{1}{2}\lambda \mu_j y^2} dy.</math><div style="text-align: right;"> '''(11)''' </div>
Here, we have substituted the integration region <math>I_w</math> by <math>\mathbb{R}^n</math> because both contain the origin, which is a saddle point, hence they are equal up to an exponentially small term.<ref>This is justified by comparing the integral asymptotic over <math>\mathbb{R}^n</math> [see equation (8)] with [[Method_of_steepest_descent#A_simple_estimate_.5B1.5D|a simple estimate]] for the altered part.</ref> The integrals in the r.h.s. of equation (11) can be expressed as
:<math>\mathcal{I}_j = \int_{-\infty}^{\infty} e^{\frac{1}{2} \lambda \mu_j y^2} dy = 2\int_0^{\infty} e^{-\frac{1}{2} \lambda \left(\sqrt{-\mu_j} y\right)^2} dy = 2\int_0^{\infty} e^{-\frac{1}{2} \lambda \left |\sqrt{-\mu_j} \right|^2 y^2\exp\left(2i\arg\sqrt{-\mu_j}\right)} dy.</math><div style="text-align: right;"> '''(12)''' </div>
From this representation, we conclude that condition (9) must be satisfied in order for the r.h.s. and l.h.s. of equation (12) to coincide. According to assumption 2, <math>\Re \left( S_{xx}''(x^0) \right)</math> is a [[Definite bilinear form|negatively defined quadratic form]] (viz., <math>\Re(\mu_j)<0</math>) implying the existence of the integral <math>\mathcal{I}_j</math>, which is readily calculated
:<math>\mathcal{I}_j = \frac{2}{\sqrt{-\mu_j}\sqrt{\lambda}} \int_0^{\infty} e^{-\frac{\xi^2}{2}} d\xi = \sqrt{\frac{2\pi}{\lambda}} (-\mu_j)^{-\frac{1}{2}}.</math>
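The closed form of <math>\mathcal{I}_j</math> can be checked numerically in one dimension. The sketch below (an illustration added here, not part of the original derivation) compares a composite-Simpson approximation of the Gaussian integral against <math>\sqrt{2\pi/\lambda}\,(-\mu)^{-1/2}</math> for a complex <math>\mu</math> with <math>\Re(\mu) < 0</math>; the `simpson` helper is an ad hoc utility written for this example.

```python
import cmath

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += f(a + k * h) * (4 if k % 2 else 2)
    return s * h / 3

lam = 50.0
mu = -1.0 + 0.5j          # Re(mu) < 0, so the integrand decays on the real line
numeric = simpson(lambda y: cmath.exp(0.5 * lam * mu * y * y), -6.0, 6.0, 20_000)

# Closed form sqrt(2*pi/lam) * (-mu)**(-1/2); since Re(mu) < 0, the principal
# square root of -mu automatically satisfies |arg sqrt(-mu)| < pi/4, i.e. (9).
closed = cmath.sqrt(2 * cmath.pi / lam) * (-mu) ** -0.5
assert abs(numeric - closed) < 1e-6 * abs(closed)
```

The principal complex power in Python picks exactly the branch prescribed by condition (9) when <math>\Re(\mu) < 0</math>, which is why no explicit branch bookkeeping is needed here.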


</div>


Equation (8) can also be written as
:<math>I(\lambda) = \left( \frac{2\pi}{\lambda}\right)^{\frac{n}{2}} e^{\lambda S(x^0)} \left[ \det (-S_{xx}''(x^0)) \right]^{-\frac{1}{2}} \left[f(x^0) + O\left(\lambda^{-1}\right) \right],</math><div style="text-align: right;"> '''(13)''' </div>

where the branch of
:<math>\sqrt{\det \left (-S_{xx}''(x^0) \right)}</math>
is selected as follows
:<math>\begin{align}
\left( \det \left (-S_{xx}''(x^0) \right ) \right)^{-\frac{1}{2}} &= \prod_{j=1}^n \left| \mu_j \right|^{-\frac{1}{2}} \exp\left( -i \text{ Ind} \left (-S_{xx}''(x^0) \right ) \right), \\
\text{Ind} \left (-S_{xx}''(x^0) \right) &= \tfrac{1}{2} \sum_{j=1}^n \arg (-\mu_j), && |\arg(-\mu_j)| < \tfrac{\pi}{2}.
\end{align}</math>


Consider important special cases:


* If <math>S(x)</math> is real valued for real ''x'' and <math>x^0 \in \mathbb{R}^n</math> (aka, the '''multidimensional Laplace method'''), then<ref>See equation (4.4.9) on page 125 in {{harvtxt|Fedoryuk|1987}}</ref>
::<math>\text{Ind} \left(-S_{xx}''(x^0) \right ) = 0.</math>

* If <math>S(x)</math> is purely imaginary for real ''x'' (i.e., <math>\Re[S(x)] = 0</math> for all <math>x \in \mathbb{R}^n</math>) and <math>x^0 \in \mathbb{R}^n</math> (aka, the '''multidimensional stationary phase method'''),<ref>Rigorously speaking, this case cannot be inferred from equation (8) because [[Method_of_steepest_descent#The_asymptotic_expansion_in_the_case_of_a_single_non-degenerate_saddle_point|the second assumption]], utilized in the derivation, is violated. To include the discussed case of a purely imaginary phase function, condition (9) should be replaced by <math> \left | \arg\sqrt{-\mu_j} \right | \leqslant \tfrac{\pi}{4}.</math></ref> then<ref>See equation (2.2.6') on page 186 in {{harvtxt|Fedoryuk|1987}}</ref>
::<math>\text{Ind} \left (-S_{xx}''(x^0) \right ) = \frac{\pi}{4} \text{sign }S_{xx}''(x_0),</math>
:where <math>\text{sign }S_{xx}''(x_0)</math> denotes [[Sylvester's_law_of_inertia#Statement_of_the_theorem|the signature of the matrix]] <math>S_{xx}''(x_0)</math>, which equals the number of negative eigenvalues minus the number of positive ones. It is noteworthy that in applications of the stationary phase method to the multidimensional WKB approximation in quantum mechanics (as well as in optics), <math>\text{Ind}</math> is related to the [[Maslov index]]; see, e.g., {{harvtxt|Chaichian|Demichev|2001}} and {{harvtxt|Schulman|2005}}.


== The case of multiple non-degenerate saddle points ==

Revision as of 10:13, 18 April 2014

In mathematics, the method of steepest descent or stationary phase method or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point (saddle point), in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace’s method is used with real integrals.

The integral to be estimated is often of the form

:<math>\int_C f(z) e^{\lambda g(z)} \, dz,</math>

where ''C'' is a contour and ''λ'' is large. One version of the method of steepest descent deforms the contour of integration so that it passes through a zero of the derivative ''g''′(''z'') in such a way that on the contour ''g'' is (approximately) real and has a maximum at the zero.

The method of steepest descent was first published by Debye (1909), who used it to estimate Bessel functions and pointed out that it occurred in an unpublished note by Riemann (1863) about hypergeometric functions. The contour of steepest descent has a minimax property, see Fedoryuk (2001). Siegel (1932) described some other unpublished notes of Riemann, where he used this method to derive the Riemann–Siegel formula.

A simple estimate[1]

Let <math>f, S \colon \mathbb{C}^n \to \mathbb{C}</math> and <math>C \subset \mathbb{C}^n</math>. If

:<math>M = \sup_{x \in C} \Re(S(x)) < \infty,</math>

where <math>\Re(\cdot)</math> denotes the real part, and there exists a positive real number <math>\lambda_0</math> such that

:<math>\int_C \left| f(z) e^{\lambda_0 S(z)} \right| |dz| < \infty,</math>

then the following estimate holds:

:<math>\left| \int_C f(z) e^{\lambda S(z)} dz \right| \leqslant \text{const} \cdot e^{\lambda M}, \qquad \forall \lambda \in \mathbb{R}, \quad \lambda \geqslant \lambda_0.</math>
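The bound can be exercised numerically; the estimate says that <math>\left|\int_C f e^{\lambda S} dz\right|</math> is controlled by <math>\text{const} \cdot e^{\lambda M}</math> with <math>\text{const} = e^{-\lambda_0 M}\int_C |f e^{\lambda_0 S}||dz|</math>. The sketch below (an illustration, with a hand-picked phase function and an ad hoc `simpson` helper) checks this on the real segment <math>C = [-1, 1]</math>.

```python
import cmath, math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += f(a + k * h) * (4 if k % 2 else 2)
    return s * h / 3

S = lambda x: 1j * x - x * x   # Re S(x) = -x^2, so M = sup Re S on [-1, 1] is 0
M = 0.0
lam0 = 1.0
# const = e^{-lam0*M} * integral of |f e^{lam0*S}| over C, with f = 1
const = simpson(lambda x: abs(cmath.exp(lam0 * S(x))), -1.0, 1.0) * math.exp(-lam0 * M)

for lam in (1.0, 5.0, 20.0, 100.0):
    lhs = abs(simpson(lambda x: cmath.exp(lam * S(x)), -1.0, 1.0))
    assert lhs <= const * math.exp(lam * M) + 1e-9
```

Since <math>M = 0</math> here, the right-hand side is a constant while the oscillatory integral on the left actually decays like <math>\lambda^{-1/2}</math>, so the estimate holds with a wide margin.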

The case of a single non-degenerate saddle point

Basic notions and notation

Let <math>x</math> be a complex <math>n</math>-dimensional vector, and

:<math>S''_{xx}(x) = \left( \frac{\partial^2 S(x)}{\partial x_i \, \partial x_j} \right), \qquad 1 \leqslant i,\, j \leqslant n,</math>

denote the Hessian matrix for a function <math>S(x)</math>. If

:<math>\boldsymbol{\varphi}(x) = (\varphi_1(x), \varphi_2(x), \ldots, \varphi_k(x))</math>

is a vector function, then its Jacobian matrix is defined as

:<math>\boldsymbol{\varphi}_x'(x) = \left( \frac{\partial \varphi_i(x)}{\partial x_j} \right), \qquad 1 \leqslant i \leqslant k, \quad 1 \leqslant j \leqslant n.</math>

A non-degenerate saddle point, <math>z^0 \in \mathbb{C}^n</math>, of a holomorphic function <math>S(z)</math> is a point where the function reaches an extremum (i.e., <math>\nabla S(z^0) = 0</math>) and has a non-vanishing determinant of the Hessian (i.e., <math>\det S''_{zz}(z^0) \neq 0</math>).

The following is the main tool for constructing the asymptotics of integrals in the case of a non-degenerate saddle point:

Complex Morse Lemma

The Morse lemma for real-valued functions generalizes as follows[2] for holomorphic functions: near a non-degenerate saddle point <math>z^0</math> of a holomorphic function <math>S(z)</math>, there exist coordinates in terms of which <math>S(z) - S(z^0)</math> is exactly quadratic. Let <math>S</math> be a holomorphic function with domain <math>W \subset \mathbb{C}^n</math>, and let <math>z^0</math> in <math>W</math> be a non-degenerate saddle point of <math>S</math>, that is, <math>\nabla S(z^0) = 0</math> and <math>\det S''_{zz}(z^0) \neq 0</math>. Then there exist neighborhoods <math>U \subset W</math> of <math>z^0</math> and <math>V \subset \mathbb{C}^n</math> of <math>w = 0</math>, and a bijective holomorphic function <math>\boldsymbol{\varphi} \colon V \to U</math> with <math>\boldsymbol{\varphi}(0) = z^0</math> such that

:<math>S(\boldsymbol{\varphi}(w)) = S(z^0) + \frac{1}{2} \sum_{j=1}^n \mu_j w_j^2</math>

at all points <math>w \in V</math>. Here, the <math>\mu_j</math> are the eigenvalues of the matrix <math>S_{zz}''(z^0)</math>.

An illustration of Complex Morse Lemma
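A one-dimensional instance of the lemma can be written down explicitly (this example is added here for illustration and is not from the original article): for <math>S(z) = \cos z</math> the point <math>z^0 = 0</math> is a non-degenerate saddle with <math>\mu = S''(0) = -1</math>, and the half-angle identity <math>\cos z = 1 - 2\sin^2(z/2)</math> shows that the Morse coordinate is <math>w = \boldsymbol{\varphi}^{-1}(z) = 2\sin(z/2)</math>, which satisfies <math>w'(0) = 1</math>.

```python
import cmath

# Saddle point z0 = 0 of S(z) = cos z: S'(0) = -sin(0) = 0, mu = S''(0) = -1 != 0.
S = lambda z: cmath.cos(z)
mu = -1.0
w = lambda z: 2 * cmath.sin(z / 2)   # Morse coordinate w = phi^{-1}(z)

# Check S(z) = S(z0) + (mu/2) * w(z)^2 at several real and complex points near z0.
for z in (0.3, -0.7, 0.2 + 0.4j, -0.5 - 0.1j):
    lhs = S(z)
    rhs = 1.0 + 0.5 * mu * w(z) ** 2
    assert abs(lhs - rhs) < 1e-12
```

For this particular <math>S</math> the identity is exact on the whole neighborhood, not merely asymptotic, which is what the lemma guarantees in suitable coordinates.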

The asymptotic expansion in the case of a single non-degenerate saddle point

Assume

  1. <math>f(z)</math> and <math>S(z)</math> are holomorphic functions in an open, bounded, and simply connected set <math>\Omega_x \subset \mathbb{C}^n</math> such that the set <math>I_x = \Omega_x \cap \mathbb{R}^n</math> is connected;
  2. <math>\Re[S(z)]</math> has a single maximum: <math>\max_{z\in I_x} \Re[S(z)] = \Re[S(x^0)]</math> for exactly one point <math>x^0 \in I_x</math>;
  3. <math>x^0</math> is a non-degenerate saddle point (i.e., <math>\nabla S(x^0) = 0</math> and <math>\det S''_{xx}(x^0) \neq 0</math>).

Then, the following asymptotic holds

:<math>I(\lambda) \equiv \int_{I_x} f(x) e^{\lambda S(x)} dx = \left( \frac{2\pi}{\lambda}\right)^{\frac{n}{2}} e^{\lambda S(x^0)} \prod_{j=1}^n (-\mu_j)^{-\frac{1}{2}} \left[f(x^0)+ O\left(\lambda^{-1}\right) \right], \qquad \lambda \to + \infty,</math>

(8)

where <math>\mu_j</math> are eigenvalues of the Hessian <math>S''_{xx}(x^0)</math> and <math>(-\mu_j)^{-\frac{1}{2}}</math> are defined with arguments

:<math>\left | \arg\sqrt{-\mu_j} \right| < \tfrac{\pi}{4}.</math>

(9)

This statement is a special case of more general results presented in Fedoryuk (1987).[4]
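In one dimension (<math>n = 1</math>) the leading term of equation (8) reduces to <math>\sqrt{2\pi/\lambda}\,(-\mu_1)^{-1/2} e^{\lambda S(x^0)} f(x^0)</math>, which is easy to test against direct quadrature. The sketch below (an illustration with a hand-picked <math>S</math> and <math>f</math>, using an ad hoc `simpson` helper) does so for <math>S(x) = -x^2/2 - x^4/4</math>, whose single maximum sits at <math>x^0 = 0</math> with <math>\mu_1 = S''(0) = -1</math>.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += f(a + k * h) * (4 if k % 2 else 2)
    return s * h / 3

lam = 200.0
S = lambda x: -x * x / 2 - x ** 4 / 4   # single maximum at x0 = 0, S''(0) = -1
f = lambda x: math.cos(x)

numeric = simpson(lambda x: f(x) * math.exp(lam * S(x)), -2.0, 2.0, 4000)
# Leading term of (8) with n = 1, mu_1 = -1, S(x0) = 0:
leading = math.sqrt(2 * math.pi / lam) * f(0.0)
assert abs(numeric - leading) / leading < 0.02
```

The residual mismatch is of the promised relative size <math>O(\lambda^{-1})</math> (here roughly half a percent at <math>\lambda = 200</math>), coming from the quartic term of <math>S</math> and the curvature of <math>f</math>.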

Equation (8) can also be written as

:<math>I(\lambda) = \left( \frac{2\pi}{\lambda}\right)^{\frac{n}{2}} e^{\lambda S(x^0)} \left[ \det (-S_{xx}''(x^0)) \right]^{-\frac{1}{2}} \left[f(x^0) + O\left(\lambda^{-1}\right) \right],</math>

(13)

where the branch of

:<math>\sqrt{\det \left (-S_{xx}''(x^0) \right)}</math>

is selected as follows

:<math>\begin{align}
\left( \det \left (-S_{xx}''(x^0) \right ) \right)^{-\frac{1}{2}} &= \prod_{j=1}^n \left| \mu_j \right|^{-\frac{1}{2}} \exp\left( -i \text{ Ind} \left (-S_{xx}''(x^0) \right ) \right), \\
\text{Ind} \left (-S_{xx}''(x^0) \right) &= \tfrac{1}{2} \sum_{j=1}^n \arg (-\mu_j), && |\arg(-\mu_j)| < \tfrac{\pi}{2}.
\end{align}</math>

Consider important special cases:

  • If <math>S(x)</math> is real valued for real <math>x</math> and <math>x^0 \in \mathbb{R}^n</math> (aka, the multidimensional Laplace method), then[7]
  :<math>\text{Ind} \left(-S_{xx}''(x^0) \right ) = 0.</math>
  • If <math>S(x)</math> is purely imaginary for real <math>x</math> (i.e., <math>\Re[S(x)] = 0</math> for all <math>x \in \mathbb{R}^n</math>) and <math>x^0 \in \mathbb{R}^n</math> (aka, the multidimensional stationary phase method),[8] then[9]
  :<math>\text{Ind} \left (-S_{xx}''(x^0) \right ) = \frac{\pi}{4} \text{sign }S_{xx}''(x_0),</math>
where <math>\text{sign }S_{xx}''(x_0)</math> denotes the signature of the matrix <math>S_{xx}''(x_0)</math>, which equals the number of negative eigenvalues minus the number of positive ones. It is noteworthy that in applications of the stationary phase method to the multidimensional WKB approximation in quantum mechanics (as well as in optics), Ind is related to the Maslov index; see, e.g., Chaichian & Demichev (2001) and Schulman (2005).
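The stationary phase case can be probed numerically in one dimension: for <math>S(x) = i x^2/2</math> the signature contributes a phase <math>e^{i\pi/4}</math> to the leading term. The sketch below (an illustration with a hand-picked decaying amplitude, using an ad hoc `simpson` helper) compares brute-force quadrature of the oscillatory integral against this prediction.

```python
import cmath, math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += f(a + k * h) * (4 if k % 2 else 2)
    return s * h / 3

lam = 40.0
f = lambda x: 1.0 / (1.0 + x * x)          # decaying amplitude with f(0) = 1
integrand = lambda x: f(x) * cmath.exp(1j * lam * x * x / 2)

numeric = simpson(integrand, -20.0, 20.0, 80_000)
# Stationary point x0 = 0, purely imaginary phase, signature +1:
# I(lam) ~ sqrt(2*pi/lam) * f(0) * exp(i*pi/4)
predicted = math.sqrt(2 * math.pi / lam) * cmath.exp(1j * math.pi / 4)
assert abs(numeric - predicted) < 0.05 * abs(predicted)
```

Both the modulus and, crucially, the <math>\pi/4</math> phase shift of the numerical value match the prediction to within the expected <math>O(\lambda^{-1})</math> correction.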

The case of multiple non-degenerate saddle points

If the function <math>S(x)</math> has multiple isolated non-degenerate saddle points, i.e.,

:<math>\nabla S\left(x^{(k)}\right) = 0, \qquad \det S''_{xx}\left(x^{(k)}\right) \neq 0, \qquad x^{(k)} \in \Omega_x^{(k)},</math>

where <math>\left\{ \Omega_x^{(k)} \right\}_{k=1}^K</math> is an open cover of <math>\Omega_x</math>, calculation of the integral asymptotic is reduced to the case of a single saddle point by employing the partition of unity.

The partition of unity allows us to construct a set of continuous functions <math>\rho_k(x) \colon \Omega_x \to [0, 1]</math>, <math>1 \leqslant k \leqslant K</math>, such that <math>\sum_{k=1}^K \rho_k(x) = 1</math> and each function <math>\rho_k(x)</math> vanishes outside <math>\Omega_x^{(k)}</math>. Whence,

:<math>\int_{\Omega_x \cap \mathbb{R}^n} f(x) e^{\lambda S(x)} dx = \sum_{k=1}^K \int_{\Omega_x \cap \mathbb{R}^n} \rho_k(x) f(x) e^{\lambda S(x)} dx \approx \sum_{k=1}^K \left( \frac{2\pi}{\lambda}\right)^{\frac{n}{2}} e^{\lambda S\left(x^{(k)}\right)} \left[ \det \left(-S_{xx}''\left(x^{(k)}\right)\right) \right]^{-\frac{1}{2}} \left[ f\left(x^{(k)}\right) + O\left(\lambda^{-1}\right) \right],</math>

where equation (13) was utilized at the last stage, and the pre-exponential function <math>f(x)</math> at least must be continuous.
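Summing the single-saddle contributions can be illustrated with the double-well phase <math>S(x) = -(x^2 - 1)^2</math>, which has two non-degenerate maxima at <math>x = \pm 1</math> with <math>S'' = -8</math> there (this worked example and the `simpson` helper are additions for illustration, not from the article).

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += f(a + k * h) * (4 if k % 2 else 2)
    return s * h / 3

lam = 300.0
S = lambda x: -(x * x - 1.0) ** 2      # maxima at x = +1 and x = -1, S'' = -8
f = lambda x: math.exp(x)

numeric = simpson(lambda x: f(x) * math.exp(lam * S(x)), -2.5, 2.5, 8000)
# One term of the form sqrt(2*pi/(8*lam)) * f(x_k) per saddle, S(x_k) = 0:
leading = math.sqrt(math.pi / (4 * lam)) * (f(1.0) + f(-1.0))
assert abs(numeric - leading) / leading < 0.02
```

The interior critical point <math>x = 0</math> is a local minimum of <math>S</math> with <math>S(0) = -1</math>, so its contribution is exponentially suppressed by <math>e^{-\lambda}</math> and does not appear in the sum.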

The other cases

When <math>\nabla S(z^0) = 0</math> and <math>\det S''_{zz}(z^0) = 0</math>, the point <math>z^0</math> is called a degenerate saddle point of a function <math>S(z)</math>.

Calculating the asymptotic of

:<math>\int f(x) e^{\lambda S(x)} dx,</math>

when <math>\lambda \to \infty</math>, <math>f(x)</math> is continuous, and <math>S(z)</math> has a degenerate saddle point, is a very rich problem, whose solution heavily relies on catastrophe theory. Here, catastrophe theory replaces the Morse lemma, valid only in the non-degenerate case, to transform the function <math>S(z)</math> into one of a multitude of canonical representations. For further details see, e.g., Poston & Stewart (1978) and Fedoryuk (1987).

Integrals with degenerate saddle points naturally appear in many applications including optical caustics and the multidimensional WKB approximation in quantum mechanics.

Other cases, such as when <math>f(x)</math> and/or <math>S(x)</math> are discontinuous or when an extremum of <math>S(x)</math> lies at the boundary of the integration region, require special care (see, e.g., Fedoryuk (1987) and Wong (1989)).
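A minimal degenerate example (added here for illustration) is <math>S(x) = -x^4</math>, for which <math>S''(0) = 0</math>: the Gaussian scaling <math>\lambda^{-1/2}</math> of equation (8) is replaced by <math>\lambda^{-1/4}</math>, with an exact value <math>\int_{-\infty}^{\infty} e^{-\lambda x^4} dx = \tfrac{1}{2}\Gamma(1/4)\,\lambda^{-1/4}</math>.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += f(a + k * h) * (4 if k % 2 else 2)
    return s * h / 3

lam = 50.0
numeric = simpson(lambda x: math.exp(-lam * x ** 4), -3.0, 3.0, 4000)
# Degenerate saddle at x = 0 (S''(0) = 0): scale is lam**(-1/4), not lam**(-1/2)
exact = math.gamma(0.25) * lam ** -0.25 / 2
assert abs(numeric - exact) < 1e-6 * exact
```

The slower <math>\lambda^{-1/4}</math> decay is the signature of degeneracy; canonical forms from catastrophe theory classify which fractional power appears in general.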

Extensions and generalizations

An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems.

Given a contour C in the complex sphere, a function f defined on that contour and a special point, say infinity, one seeks a function M holomorphic away from the contour C, with prescribed jump across C, and with a given normalization at infinity. If f and hence M are matrices rather than scalars this is a problem that in general does not admit an explicit solution.

An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.

The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of the Russian mathematician Alexander Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, steepest descent contours solve a min-max problem.

The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics.

Notes

  1. ^ A modified version of Lemma 2.1.1 on page 56 in Fedoryuk (1987).
  2. ^ Lemma 3.3.2 on page 113 in Fedoryuk (1987)
  3. ^ Poston & Stewart (1978), page 54; see also the comment on page 479 in Wong (1989).
  4. ^ Fedoryuk (1987), pages 417-420.
  5. ^ This conclusion follows from a comparison between the final asymptotic for I0(λ), given by equation (8), and a simple estimate for the discarded integral I1(λ).
  6. ^ This is justified by comparing the integral asymptotic over Rn [see equation (8)] with a simple estimate for the altered part.
  7. ^ See equation (4.4.9) on page 125 in Fedoryuk (1987)
  8. ^ Rigorously speaking, this case cannot be inferred from equation (8) because the second assumption, utilized in the derivation, is violated. To include the discussed case of a purely imaginary phase function, condition (9) should be replaced by <math>\left| \arg\sqrt{-\mu_j} \right| \leqslant \tfrac{\pi}{4}.</math>
  9. ^ See equation (2.2.6') on page 186 in Fedoryuk (1987)

References

  • Chaichian, M.; Demichev, A. (2001), Path Integrals in Physics Volume 1: Stochastic Process and Quantum Mechanics, Taylor & Francis, p. 174, ISBN 075030801X
  • Debye, P. (1909), "Näherungsformeln für die Zylinderfunktionen für große Werte des Arguments und unbeschränkt veränderliche Werte des Index", Mathematische Annalen, 67 (4): 535–558, doi:10.1007/BF01450097 English translation in Debye, Peter J. W. (1954), The collected papers of Peter J. W. Debye, Interscience Publishers, Inc., New York, ISBN 978-0-918024-58-9, MR0063975
  • Deift, P.; Zhou, X. (1993), "A steepest descent method for oscillatory Riemann-Hilbert problems. Asymptotics for the MKdV equation", Annals of Mathematics, 137 (2): 295–368, doi:10.2307/2946540, JSTOR 2946540.
  • Erdelyi, A. (1956), Asymptotic Expansions, Dover.
  • Fedoryuk, M V (2001) [1994], "Saddle_point_method", Encyclopedia of Mathematics, EMS Press.
  • Fedoryuk, M. V. (1987), Asymptotics: Integrals and Series, Nauka, Moscow [in Russian].
  • Kamvissis, S.; McLaughlin, K. T.-R.; Miller, P. (2003), "Semiclassical Soliton Ensembles for the Focusing Nonlinear Schrödinger Equation", Annals of Mathematics Studies, vol. 154, Princeton University Press.
  • Riemann, B. (1863), Sullo svolgimento del quoziente di due serie ipergeometriche in frazione continua infinita (Unpublished note, reproduced in Riemann's collected papers.)
  • Siegel, C. L. (1932), "Über Riemanns Nachlaß zur analytischen Zahlentheorie", Quellen Studien zur Geschichte der Math. Astron. und Phys. Abt. B: Studien 2: 45–80 Reprinted in Gesammelte Abhandlungen, Vol. 1. Berlin: Springer-Verlag, 1966.
  • Poston, T.; Stewart, I. (1978), Catastrophe Theory and Its Applications, Pitman.
  • Schulman, L. S. (2005), "Ch. 17: The Phase of the Semiclassical Amplitude", Techniques and Applications of Path Integration, Dover, ISBN 0486445283
  • Wong, R. (1989), Asymptotic approximations of integrals, Academic Press.