Mathematical Inequalities
Volume 9, Number 1 (2015), 85–99
doi:10.7153/jmi-09-08

OPTIMAL INEQUALITIES OF ERROR FUNCTION

W. XIA AND Y. CHU

(Communicated by S. Koumandos)
Abstract. For λ ∈ (0, 1) and x, y > 0 we obtain the best possible constants p and r such that

erf(M_p(x, y; λ)) ≤ λ erf(x) + (1 − λ) erf(y) ≤ erf(M_r(x, y; λ)),

where erf(x) = (2/√π) ∫_0^x e^{−t²} dt and M_p(x, y; λ) = (λx^p + (1 − λ)y^p)^{1/p} (p ≠ 0), M_0(x, y; λ) = x^λ y^{1−λ} are the error function and the weighted power mean, respectively. Furthermore, using these results, we generalize and complement an inequality due to Alzer.
1. Introduction
The error function, also known as the probability integral, has numerous applications in statistics, probability theory, and partial differential equations. It is well known that the error function is odd, strictly increasing on (−∞, +∞), and strictly concave on [0, +∞) with lim_{x→+∞} erf(x) = 1. For the n-th derivative we have the representation
d^n/dx^n erf(x) = (−1)^{n−1} (2/√π) e^{−x²} H_{n−1}(x),

where H_n(x) = (−1)^n e^{x²} d^n/dx^n (e^{−x²}) is a Hermite polynomial.
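As a quick sanity check of this representation, the following sketch (assuming SymPy is available; sympy.hermite(k, x) denotes the physicists' Hermite polynomial H_k(x) given by the Rodrigues formula above) verifies the identity symbolically for small n.

# Symbolic check of d^n/dx^n erf(x) = (-1)^(n-1) * (2/sqrt(pi)) * e^(-x^2) * H_{n-1}(x).
import sympy as sp

x = sp.symbols('x')
for n in range(1, 6):
    lhs = sp.diff(sp.erf(x), x, n)
    rhs = (-1)**(n - 1) * 2 / sp.sqrt(sp.pi) * sp.exp(-x**2) * sp.hermite(n - 1, x)
    assert sp.simplify(lhs - rhs) == 0
print("derivative representation verified for n = 1, ..., 5")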
The error function can be expanded as a power series in the following two ways
[35]:
erf(x) = (2/√π) ∑_{n=0}^{+∞} ((−1)^n / (n!(2n+1))) x^{2n+1} = e^{−x²} ∑_{n=0}^{+∞} x^{2n+1} / Γ(n + 3/2).
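Both expansions are easy to evaluate numerically. The sketch below (a minimal illustration; the truncation at 40 terms is an arbitrary choice) compares the two truncated series with the standard-library value math.erf.

# Evaluate the two series expansions of erf(x) and compare with math.erf.
import math

def erf_series(x, terms=40):
    # erf(x) = (2/sqrt(pi)) * sum_{n>=0} (-1)^n x^(2n+1) / (n! (2n+1))
    s = sum((-1)**n * x**(2*n + 1) / (math.factorial(n) * (2*n + 1)) for n in range(terms))
    return 2.0 / math.sqrt(math.pi) * s

def erf_series_scaled(x, terms=40):
    # erf(x) = e^(-x^2) * sum_{n>=0} x^(2n+1) / Gamma(n + 3/2)
    s = sum(x**(2*n + 1) / math.gamma(n + 1.5) for n in range(terms))
    return math.exp(-x**2) * s

for x in (0.25, 1.0, 2.0):
    print(x, erf_series(x), erf_series_scaled(x), math.erf(x))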
It is natural to ask whether (1.1) holds for 0 < x, y < 1. Moreover, we ask: what are the best possible constants p and r such that the inequalities

erf(M_p(x, y; λ)) ≤ λ erf(x) + (1 − λ) erf(y) ≤ erf(M_r(x, y; λ))

hold for all x, y ≥ 1 (or 0 < x, y < 1)? In what follows, we answer these questions.
2. Lemmas
In this section we present several lemmas, which will be used in the proof of our main results.
LEMMA 2.1. Let r ≠ 0 and w(x) = erf(x^{1/r}); then one has:
(1) if r ≤ −1, then w(x) is strictly convex on [1, +∞);
(2) if −1 < r < 0, then w(x) is strictly concave on (0, 1];
(3) if 0 < r < 1, then w(x) is strictly concave on [1, +∞);
(4) if r ≥ 1, then w(x) is strictly concave on (0, +∞).
Proof. Simple computations lead to

w′(x) = (2/(√π r)) x^{1/r−1} e^{−x^{2/r}}   (2.1)

and

w″(x) = (2/(√π r²)) x^{1/r−2} e^{−x^{2/r}} [1 − r − 2x^{2/r}].   (2.2)

Therefore, Lemma 2.1 follows from (2.2).
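The sign pattern claimed in Lemma 2.1 can be spot-checked numerically. In the sketch below (a rough check on a few hand-picked points, not a proof) only the bracket 1 − r − 2x^{2/r} from (2.2) is evaluated, since the remaining factor in (2.2) is always positive.

# Spot-check of Lemma 2.1 via the sign of the bracket in (2.2); the prefactor
# (2/(sqrt(pi) r^2)) x^(1/r-2) e^(-x^(2/r)) is positive, so sign(w'') = sign(bracket).
def bracket(x, r):
    return 1 - r - 2 * x ** (2.0 / r)

cases = [(-2.0, [1.5, 3.0, 10.0], +1),   # r <= -1: convex on [1, +oo)
         (-0.5, [0.3, 0.6, 0.95], -1),   # -1 < r < 0: concave on (0, 1]
         ( 0.5, [1.5, 3.0, 10.0], -1),   # 0 < r < 1: concave on [1, +oo)
         ( 2.0, [0.2, 1.0, 5.0],  -1)]   # r >= 1: concave on (0, +oo)
for r, xs, sign in cases:
    assert all(sign * bracket(x, r) > 0 for x in xs), r
print("signs of w'' agree with Lemma 2.1 at all sampled points")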
LEMMA 2.2. Let u(x) = erf(e^x); then u(x) is strictly concave on [0, +∞).
Proof. Simple computations lead to

u′(x) = (2/√π) e^{x − e^{2x}} > 0   (2.3)

and

u″(x) = (2/√π)(1 − 2e^{2x}) e^{x − e^{2x}} < 0   (2.4)

for x ≥ 0. Therefore, (2.4) implies that u(x) is strictly concave on [0, +∞).
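A quick numerical illustration of Lemma 2.2 (a midpoint-concavity spot check on a few arbitrarily chosen pairs, not a proof):

# Midpoint concavity of u(x) = erf(e^x) on [0, +oo): u((a+b)/2) >= (u(a)+u(b))/2.
import math

u = lambda x: math.erf(math.exp(x))
for a, b in [(0.0, 0.5), (0.1, 2.0), (0.5, 1.5)]:
    assert u((a + b) / 2) >= (u(a) + u(b)) / 2, (a, b)
print("midpoint concavity of erf(e^x) holds at the sampled pairs")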
LEMMA 2.3. Let 0 < λ < 1, r ≥ −1 (r ≠ 0) and

ψ(x) = x^{r−1} (λx^r + 1 − λ)^{1/r−1} e^{x² − (λx^r + 1 − λ)^{2/r}};

then ψ(x) is strictly increasing on [1, +∞).
Proof. Simple computations lead to

ψ′(x)/ψ(x) = ψ₁(x) / (x(λx^r + 1 − λ)),   (2.5)

where ψ₁(x) = (r − 1)(1 − λ) + 2x²(λx^r + 1 − λ) − 2λx^r(λx^r + 1 − λ)^{2/r}.
Case 1. If −1 ≤ r < 0, let

ψ₁₁(x) = (r − 1)(1 − λ) + 2(1 − λ)x²(λx^r + 1 − λ)

and

ψ₁₂(x) = 2λx²(λx^r + 1 − λ) − 2λx^r(λx^r + 1 − λ)^{2/r};

then

ψ₁(x) = ψ₁₁(x) + ψ₁₂(x).   (2.6)

Since

ψ₁₁(1) = (1 − λ)(1 + r) ≥ 0,   (2.7)

ψ₁₁′(x) = 2(1 − λ)x[λ(2 + r)x^r + 2(1 − λ)] > 0   (2.8)

and

ψ₁₂(x) = 2λx²(λx^r + 1 − λ)[1 − (λ + (1 − λ)x^{−r})^{(2−r)/r}] > 0   (2.9)

for x > 1.
From (2.6)–(2.9) we clearly see that ψ₁(x) > 0 for x ∈ (1, +∞) and −1 ≤ r < 0. Therefore, ψ(x) is strictly increasing on [1, +∞) for −1 ≤ r < 0.
Case 2. If 0 < r < 2, then (2.7)–(2.9) hold again, so ψ(x) is strictly increasing on [1, +∞) for 0 < r < 2.
Case 3. If r ≥ 2, we let

ψ₂(x) = log[2x²(λx^r + 1 − λ)] − log[2λx^r(λx^r + 1 − λ)^{2/r}].

Then

lim_{x→+∞} ψ₂(x) = −(2/r) log λ > 0   (2.10)

and

ψ₂′(x) = (2 − r)(1 − λ) / (x(λx^r + 1 − λ)) ≤ 0.   (2.11)

It follows from (2.10) and (2.11) that ψ₂(x) > 0 for all x ∈ [1, +∞) and r ≥ 2, and hence ψ₁(x) > 0 there. Therefore, (2.5) leads to ψ(x) being strictly increasing on [1, +∞) for r ≥ 2.
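Lemma 2.3 is also easy to test numerically. The sketch below (an informal grid check for a few arbitrarily chosen parameter pairs, using the explicit formula for ψ) simply verifies that the sampled values increase along a grid on [1, 8].

# Grid check that psi(x) = x^(r-1) (lam x^r + 1 - lam)^(1/r - 1) e^(x^2 - (lam x^r + 1 - lam)^(2/r))
# increases on [1, +oo) for several (lam, r) with r >= -1, r != 0.
import math

def psi(x, lam, r):
    m = lam * x**r + 1 - lam
    return x**(r - 1) * m**(1.0 / r - 1) * math.exp(x**2 - m**(2.0 / r))

xs = [1 + 0.25 * k for k in range(29)]          # grid on [1, 8]
for lam, r in [(0.3, -1.0), (0.7, -0.5), (0.5, 0.5), (0.3, 2.0), (0.6, 3.0)]:
    vals = [psi(x, lam, r) for x in xs]
    assert all(b > a for a, b in zip(vals, vals[1:])), (lam, r)
print("psi is increasing on the sampled grid for each (lambda, r)")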
For 0 < λ < 1, r ≥ −1 (r ≠ 0) and x ≥ 1, one has

c₁(λ, r) ≤ [λ erf(x) + (1 − λ) erf(1)] / erf((λx^r + 1 − λ)^{1/r})   (2.12)

and

c₁(λ, r) ≤ [λ erf(1) + (1 − λ) erf(x)] / erf((λ + (1 − λ)x^r)^{1/r}).   (2.13)
Proof. It is not difficult to verify that 0 < c₁(λ, r) < 1 for 0 < λ < 1 and r ≥ −1. Since the proof of (2.13) is similar to that of (2.12), we only prove (2.12).
Firstly, we prove (2.12) for −1 ≤ r < 0. Simple computations lead to

G₁(x) = erf((1 − λ)^{1/r}) − [λ + (1 − λ) erf(1)] x^{r−1}(λx^r + 1 − λ)^{1/r−1} e^{x² − (λx^r + 1 − λ)^{2/r}},   (2.16)

G₁(1) = erf((1 − λ)^{1/r}) − [λ + (1 − λ) erf(1)] > 0   (2.17)

and

lim_{x→+∞} G₁(x) = −∞.   (2.18)

Therefore, Lemma 2.3 and (2.16) imply that G₁(x) is strictly decreasing on [1, +∞); thus from (2.17) and (2.18) we conclude that there exists x₁ ∈ (1, +∞) such that G₁(x) > 0 for x ∈ (1, x₁) and G₁(x) < 0 for x ∈ (x₁, +∞). So G(x) is strictly increasing on [1, x₁] and strictly decreasing on [x₁, +∞).
It follows from (2.14) and (2.15) together with the piecewise monotonicity of G(x) that G(x) > 0 for x ∈ [1, +∞) and −1 ≤ r < 0.
Next, we prove that

[λ erf(x) + (1 − λ) erf(1)] / erf((λx^r + 1 − λ)^{1/r}) ≥ λ + (1 − λ) erf(1)

for r > 0. Simple computations lead to

H₁(x) = 1 − [λ + (1 − λ) erf(1)] x^{r−1}(λx^r + 1 − λ)^{1/r−1} e^{x² − (λx^r + 1 − λ)^{2/r}},   (2.21)

H₁(1) = (1 − λ)(1 − erf(1)) > 0   (2.22)

and

lim_{x→+∞} H₁(x) = −∞.   (2.23)

Hence, Lemma 2.3 and (2.21) imply that H₁(x) is strictly decreasing on [1, +∞). It follows from the monotonicity of H₁(x) and (2.22) together with (2.23) that there exists x₂ ∈ (1, +∞) such that H₁(x) > 0 for x ∈ (1, x₂) and H₁(x) < 0 for x ∈ (x₂, +∞). Therefore, H(x) is strictly increasing on [1, x₂] and strictly decreasing on [x₂, +∞).
From the piecewise monotonicity of H(x) and (2.19) together with (2.20) we clearly see that H(x) > 0 for x ∈ [1, +∞) and r > 0.
and

[λ erf(1) + (1 − λ) erf(x)] / erf(x^{1−λ}) ≥ λ + (1 − λ) erf(1).   (2.25)

x ∈ (x₃, +∞). Thus, E(x) is strictly increasing on [1, x₃] and strictly decreasing on [x₃, +∞).
Hence, E(x) > 0 follows from the piecewise monotonicity of E(x) and (2.26) together with (2.27).
The proof of (2.25) is similar to that of (2.24), so we omit the details.
Proof. We only prove (2.32). For 0 < x < 1 and r ≥ 1, let J(x) = λ erf(λ^{1/r}) erf(x) − λ erf(1) erf(λ^{1/r} x); then simple computation leads to

and

J″(x) = −(4λ/√π) x e^{−x²} [erf(α) − α³ erf(1) e^{(1−α²)x²}],   (2.35)

where 0 < α = λ^{1/r} < 1.
Since

erf(α) − α³ erf(1) e^{(1−α²)x²} > erf(α) − α³ erf(1) e^{1−α²}   (2.36)

I₁(1) = −4/√π + 4 erf(1) e = 6.9060... > 0.   (2.41)
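The numerical value in (2.41) is easy to confirm; a one-line check using the standard library:

# Arithmetic check of (2.41): -4/sqrt(pi) + 4*erf(1)*e = 6.9060...
import math
print(-4 / math.sqrt(math.pi) + 4 * math.erf(1.0) * math.e)   # 6.90603...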
It is easy to see that the function φ(α) = 6 − 14α² + 4α⁴ is strictly decreasing on (0, 1); then (2.39) yields that I₁(α) is strictly increasing on (0, 1).
It follows from the monotonicity of I₁(α) and (2.40) together with (2.41) that there exists α₁ ∈ (0, 1) such that I₁(α) < 0 for α ∈ (0, α₁) and I₁(α) > 0 for α ∈ (α₁, 1). Therefore, I′(α) is strictly decreasing on [0, α₁] and strictly increasing on [α₁, 1].
From the piecewise monotonicity of I′(α) and (2.38) we conclude that there exists α₂ ∈ (0, 1) such that I′(α) > 0 for α ∈ (0, α₂) and I′(α) < 0 for α ∈ (α₂, 1). Hence, I(α) is strictly increasing on [0, α₂] and strictly decreasing on [α₂, 1].
It follows from the piecewise monotonicity of I(α) and (2.37) that I(α) > 0 for α ∈ (0, 1).
Therefore, (2.36) and (2.35) lead to the conclusion that J(x) is concave on (0, 1); hence from (2.34) we have J(x) ≥ min{J(0), J(1)} = 0.
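Under the reading of J(x) reconstructed above, the conclusion J(x) ≥ min{J(0), J(1)} = 0 on (0, 1) can be spot-checked numerically (an informal grid check, not a proof):

# Check J(x) = lam*erf(lam^(1/r))*erf(x) - lam*erf(1)*erf(lam^(1/r)*x) >= 0 on (0, 1),
# with J(0) = J(1) = 0, for a few choices of lam in (0, 1) and r >= 1.
import math

def J(x, lam, r):
    a = lam ** (1.0 / r)
    return lam * math.erf(a) * math.erf(x) - lam * math.erf(1.0) * math.erf(a * x)

for lam, r in [(0.2, 1.0), (0.5, 1.0), (0.5, 3.0), (0.9, 2.0)]:
    assert abs(J(0.0, lam, r)) < 1e-15 and abs(J(1.0, lam, r)) < 1e-15
    assert all(J(k / 20.0, lam, r) >= 0.0 for k in range(1, 20)), (lam, r)
print("J(x) >= min{J(0), J(1)} = 0 on the sampled grid")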
3. Main results
K(y, y) = (∂/∂x) K(x, y)|_{x=y} = 0

and

(∂²/∂x²) K(x, y)|_{x=y} = λ(1 − λ) (2/(√π y)) e^{−y²} [r − 1 + 2y²] ≥ 0,

which leads to r ≥ −1.
Thirdly, we suppose that there exists a real number p such that the left-hand side of (3.1) holds for all x ≥ 1 and y ≥ 1. We divide the proof into two cases.
and

lim_{x→+∞} [λ erf(x) + (1 − λ) erf(y)] = λ + (1 − λ) erf(y) < 1.

Hence we get

lim_{x→+∞} Q(x) = 0   (3.3)

and

Q′(x) = (2/√π) e^{−x²} [λ − β e^{(1−β²)x²}].   (3.4)

Since β > 1, (3.4) implies that there exists η₁ ∈ (1, +∞) such that Q′(x) > 0 for x ∈ (η₁, +∞); this implies that Q(x) is strictly increasing on [η₁, +∞).
It follows from (3.3) and the monotonicity of Q(x) that there exists η₂ ∈ (1, +∞) such that Q(x) < 0 for x ∈ (η₂, +∞), which contradicts (3.2).
This leads to

erf(M_μ(x, y; λ)) ≤ λ erf(x) + (1 − λ) erf(y)

for all 0 < x, y < 1.
For ν ≥ 1 and 0 < x, y < 1, we let s = x^ν and t = y^ν; then 0 < s, t < 1. From Lemma 2.1(4) we clearly see that

erf((λ s + (1 − λ)t)^{1/ν}) ≥ λ erf(s^{1/ν}) + (1 − λ) erf(t^{1/ν}).

This leads to

erf(M_ν(x, y; λ)) ≥ λ erf(x) + (1 − λ) erf(y).
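The two estimates just obtained are easy to probe numerically; the following sketch (an informal check at a few arbitrarily chosen points, not a proof) tests the ν-part for several ν ≥ 1 and λ, x, y ∈ (0, 1).

# Check erf(M_nu(x, y; lam)) >= lam*erf(x) + (1 - lam)*erf(y) for nu >= 1 and 0 < x, y < 1.
import math

def M(x, y, lam, p):
    return (lam * x**p + (1 - lam) * y**p) ** (1.0 / p)

pts = [(0.1, 0.9), (0.3, 0.7), (0.25, 0.5), (0.8, 0.95)]
for lam in (0.2, 0.5, 0.8):
    for nu in (1.0, 2.0, 5.0):
        for x, y in pts:
            lhs = math.erf(M(x, y, lam, nu))
            rhs = lam * math.erf(x) + (1 - lam) * math.erf(y)
            assert lhs >= rhs - 1e-12, (lam, nu, x, y)
print("erf(M_nu) dominates the erf-mean at all sampled points")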
Then

T(y, y) = (∂/∂x) T(x, y)|_{x=y} = 0

and

(∂²/∂x²) T(x, y)|_{x=y} = λ(1 − λ) (2/(√π y)) e^{−y²} [ν − 1 + 2y²] ≥ 0.   (3.6)

Therefore, (3.6) leads to ν ≥ 1 for all 0 < x, y < 1.
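The necessity of ν ≥ 1 can also be seen numerically: for an exponent below 1 the inequality already fails for small x, y. A minimal illustration (with arbitrarily chosen values):

# For nu < 1 the inequality erf(M_nu(x, y; lam)) >= lam*erf(x) + (1 - lam)*erf(y)
# can fail for small x, y in (0, 1), in line with the condition nu >= 1 from (3.6).
import math

lam, nu, x, y = 0.5, 0.5, 0.05, 0.15
m = (lam * x**nu + (1 - lam) * y**nu) ** (1.0 / nu)
print(math.erf(m), lam * math.erf(x) + (1 - lam) * math.erf(y))   # 0.1049... < 0.1121...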
Finally, we prove that the left-hand side of (3.5) implies μ ≤ −1.
Let y → 1; then from the left-hand side of (3.5) we obtain

L(1) = 0   (3.8)

and

L′(x) = (2λ/√π) e^{−x²} [1 − x^{μ−1}(λx^μ + 1 − λ)^{1/μ−1} e^{x² − (λx^μ + 1 − λ)^{2/μ}}].   (3.9)
Let

L₁(x) = log 1 − log[x^{μ−1}(λx^μ + 1 − λ)^{1/μ−1} e^{x² − (λx^μ + 1 − λ)^{2/μ}}].   (3.10)

Then

lim_{x→1⁻} L₁(x) = 0   (3.11)

and

L₁′(x) = L₂(x) / (x(λx^μ + 1 − λ)),   (3.12)

where

L₂(x) = (1 − μ)(1 − λ) + 2λ(λx^μ + 1 − λ)^{2/μ} x^μ − 2x²(λx^μ + 1 − λ)

and

lim_{x→1⁻} L₂(x) = (1 − λ)(−1 − μ).   (3.13)
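The identity (3.12) and the limit (3.13) can be checked against the reconstruction above; a rough finite-difference sketch (with arbitrarily chosen λ and μ):

# Finite-difference check of (3.12): x*(lam*x^mu + 1 - lam)*L1'(x) = L2(x), and of (3.13).
import math

def L1(x, lam, mu):
    m = lam * x**mu + 1 - lam
    return -math.log(x**(mu - 1) * m**(1.0 / mu - 1) * math.exp(x**2 - m**(2.0 / mu)))

def L2(x, lam, mu):
    m = lam * x**mu + 1 - lam
    return (1 - mu) * (1 - lam) + 2 * lam * m**(2.0 / mu) * x**mu - 2 * x**2 * m

lam, mu, h = 0.4, -0.5, 1e-6
for x in (0.7, 0.9, 0.99):
    d = (L1(x + h, lam, mu) - L1(x - h, lam, mu)) / (2 * h)
    m = lam * x**mu + 1 - lam
    assert abs(x * m * d - L2(x, lam, mu)) < 1e-6, x
print(L2(1.0 - 1e-9, lam, mu), (1 - lam) * (-1 - mu))   # both close to -0.3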
In fact, if μ > −1, then by the continuity of L₂(x) and (3.13) we know that there exists a small δ₁ > 0 such that L₂(x) < 0 for x ∈ (1 − δ₁, 1). Therefore, (3.12) implies that L₁(x) is strictly decreasing on [1 − δ₁, 1].
From (3.11) and the monotonicity of L₁(x) on [1 − δ₁, 1] we conclude that there exists a small δ₂ > 0 such that L₁(x) > 0 for x ∈ (1 − δ₂, 1). Hence, (3.9) and (3.10) imply that L(x) is increasing on [1 − δ₂, 1].
It follows from (3.8) and the monotonicity of L(x) on (1 − δ₂, 1) that there exists a small δ₃ > 0 such that L(x) < 0 for x ∈ (1 − δ₃, 1). This contradicts (3.7).
The following Theorem 3.3 generalizes Theorem 1.1.
THEOREM 3.3. Let 0 < λ < 1 and r ≥ −1. Then the double inequalities
Proof. The right-hand side of (3.14) follows from Theorem 3.1, so we only need to prove the left-hand side of (3.14). We divide the proof into three cases.
Case 1. −1 ≤ r < 0. For x ≥ 1 and y ≥ 1, we let w(z) = erf(z^{1/r}), s = x^r and t = y^r; then 0 < s, t ≤ 1. From (2.1) and (2.2) we clearly see that w′ < 0 and w″ < 0 on (0, 1) for −1 ≤ r < 0. Therefore, −w′ is positive and increasing on (0, 1]. Let
(1/λ) ∂B(s, t)/∂s = u′(s) − c₁(λ, r) u′(λ s + (1 − λ)t) > 0.

This implies that

(1/(1 − λ)) ∂B(s, t)/∂t = u′(t) − c₁(λ, r) u′(λ s + (1 − λ)t) > 0.

Thus, we have

(1/λ) ∂A(s, t)/∂s = w′(s) − c₁(λ, r) w′(λ s + (1 − λ)t) > 0.

Therefore,

(1/(1 − λ)) ∂A(s, t)/∂t = w′(t) − c₁(λ, r) w′(λ s + (1 − λ)t) > 0.

Thus
and

lim_{y→x} [λ erf(x) + (1 − λ) erf(y)] / erf(M_r(x, y; λ)) = 1   (r ≥ −1).   (3.24)

This completes the proof of Theorem 3.3.
The following Theorem 3.4 complements Theorem 1.1.
Proof. The right-hand side of (3.25) follows from Theorem 3.2, so we only need to prove the left-hand side of (3.25). We let w(z) = erf(z^{1/r}) and
REFERENCES
[1] H. Alzer, Functional inequalities for the error function, Aequationes Math. 66, 1–2 (2003), 119–127.
[2] H. Alzer, Functional inequalities for the error function. II, Aequationes Math. 78, 1–2 (2009), 113–121.
[3] H. Alzer, Error function inequalities, Adv. Comput. Math. 33, 3 (2010), 349–379.
[4] E. Árpád and L. Andrea, The zeros of the complementary error function, Numer. Algorithms 49, 1–4 (2008), 153–157.
[5] B. Bajić, On the power expansion of the inverse of the error function, Bull. Math. Soc. Sci. Math. R. S. Roumanie (N. S.) 16(64), 4 (1972), 371–379.
[6] B. Bajić, On the computation of the inverse of the error function by means of the power expansion, Bull. Math. Soc. Sci. Math. R. S. Roumanie (N. S.) 17(65) (1973), 115–121.
[7] O. S. Berljand and A. Ja. Pressman, Asymptotic representations and some estimates for integral error-functions of arbitrary order (Russian), Dokl. Akad. Nauk SSSR 140 (1961), 12–14.
[8] R. K. Bhaduri and B. K. Jennings, Note on the error function, Amer. J. Phys. 44, 6 (1976), 590–592.
[9] J. M. Blair, C. A. Edwards and J. H. Johnson, Rational Chebyshev approximations for the inverse of the error function, Math. Comp. 30, 136 (1976), 827–830.
[10] J. M. Blair, C. A. Edwards and J. H. Johnson, Rational Chebyshev approximations for the inverse of the error function, Math. Comp. 30, 136 (1976), 7–68.
[11] L. Carlitz, The inverse of the error function, Pacific J. Math. 13 (1963), 459–470.
[12] S. J. Chapman, On the non-universality of the error function in the smoothing of Stokes discontinuities, Proc. Roy. Soc. London Ser. A 452, 1953 (1996), 2225–2230.
[13] M. A. Chaudhry, A. Qadir and S. M. Zubair, Generalized error functions with applications to probability and heat conduction, Int. J. Appl. Math. 9, 3 (2002), 259–278.
[14] J. T. Chu, On bounds for the normal integral, Biometrika 42 (1955), 263–265.
[15] W. W. Clendenin, Rational approximations for the error function and for similar functions, Comm. ACM 4 (1961), 354–355.
[16] W. J. Cody, Performance evaluation of programs for the error and complementary error functions, ACM Trans. Math. Software 16, 1 (1990), 29–37.
[17] W. J. Cody, Rational Chebyshev approximations for the error function, Math. Comp. 23 (1969), 631–637.
[18] D. Coman, The radius of starlikeness for the error function, Studia Univ. Babes-Bolyai Math. 36, 2 (1991), 13–16.
[19] A. Deaño and N. M. Temme, Analytical and numerical aspects of a generalization of the complementary error function, Appl. Math. Comput. 216, 12 (2010), 3680–3693.
[20] D. Dominici, Some properties of the inverse error function, Contemp. Math. 457 (2008), 191–203.
[21] H. E. Fettis, J. C. Caslin and K. R. Cramer, Complex zeros of the error function and of the complementary error function, Math. Comp. 27 (1973), 401–407.
[22] B. Fisher, F. Al-Sirehy and M. Telci, Convolutions involving the error function, Int. J. Appl. Math. 13 (2003), 317–326.
[23] B. Fisher, M. Telci and E. Özcaḡ, Results on the error function and the neutrix convolution, Rad. Mat. 12, 1 (2003), 81–90.
[24] W. Gautschi, Efficient computation of the complex error function, SIAM J. Numer. Anal. 7 (1970), 187–198.
[25] W. Gawronski, J. Müller and M. Reinhard, Reduced cancellation in the evaluation of entire functions and applications to the error function, SIAM J. Numer. Anal. 45, 6 (2007), 2564–2576.
[26] R. G. Hart, A close approximation related to the error function, Math. Comp. 20 (1966), 600–602.
[27] D. B. Hunter and T. Regan, A note on the evaluation of the complementary error function, Math. Comp. 26 (1972), 539–541.
[28] J. Kestin and L. N. Persen, On the error function of a complex argument, Z. Angew. Math. Phys. 7 (1956), 33–40.
[29] S. N. Kharin, A generalization of the error function and its application in heat conduction problems (Russian), Differential equations and their applications, 176 (1981), 51–59.
[30] A. Laforgia and S. Sismondi, Monotonicity results and inequalities for the gamma and error functions, J. Comput. Appl. Math. 23, 1 (1988), 25–33.
[31] F. Matta and A. Reichel, Uniform computation of the error function and other related functions, Math. Comp. 25 (1971), 339–344.
[32] D. S. Mitrinović, Problem 5555, Amer. Math. Monthly 75 (1968), 1129–1130.
[33] S. Morosawa, The parameter space of error functions of the form a ∫_0^z e^{−w²} dw, Complex analysis and potential theory (2007), 174–177.
[34] H. S. Mukunda, Evaluation of some definite integrals involving repeated integrals of error functions, Bull. Calcutta Math. Soc. 66 (1974), 39–54.
[35] K. Oldham, J. Myland and J. Spanier, An atlas of functions. With Equator, the atlas function calculator, Second edition, Springer, New York, 2009.
[36] H. E. Salzer, Complex zeros of the error function, J. Franklin Inst. 260 (1955), 209–211.
[37] V. L. N. Sarma and H. D. Pandey, Hölder’s inequality and the error function, Vijnana Parishad Anusandhan Patrika 25, 4 (1982), 307–310.
[38] O. N. Strand, A method for the computation of the error function of a complex variable, Math. Comp. 19 (1965), 127–129.
[39] N. M. Temme, Error functions, Dawson’s and Fresnel integrals, NIST handbook of mathematical functions, 159–171, U.S. Dept. Commerce, Washington, DC, 2010.
[40] J. P. Vigneron and Ph. Lambin, Gaussian quadrature of integrands involving the error function, Math. Comp. 35, 152 (1980), 1299–1307.
[41] J. A. C. Weideman, Computation of the complex error function, SIAM J. Numer. Anal. 31, 5 (1994), 1497–1518.
[42] I. H. Zimmerman, Extending Menzel’s closed-form approximation for the error function, Amer. J. Phys. 44, 6 (1976), 592–593.