
Journal of Mathematical Inequalities
Volume 9, Number 1 (2015), 85–99
doi:10.7153/jmi-09-08

OPTIMAL INEQUALITIES FOR THE CONVEX
COMBINATION OF ERROR FUNCTION

WEIFENG XIA AND YUMING CHU

(Communicated by S. Koumandos)

Abstract. For λ ∈ (0, 1) and x, y > 0 we obtain the best possible constants p and r such that

    erf(M_p(x, y; λ)) ≤ λ erf(x) + (1 − λ) erf(y) ≤ erf(M_r(x, y; λ)),

where erf(x) = (2/√π) ∫₀^x e^{−t²} dt and M_p(x, y; λ) = (λx^p + (1 − λ)y^p)^{1/p} (p ≠ 0), M_0(x, y; λ) = x^λ y^{1−λ} are the error function and the weighted power mean, respectively. Furthermore, using these results, we generalize and complement an inequality due to Alzer.

1. Introduction

For x ∈ R, the error function erf(x) is defined as


    erf(x) = (2/√π) ∫₀^x e^{−t²} dt.

This function, also known as the probability integral, has numerous applications in statistics, probability theory, and partial differential equations. It is well known that the error function is odd, strictly increasing on (−∞, +∞), and strictly concave on [0, +∞) with lim_{x→+∞} erf(x) = 1. For the n-th derivative we have the representation

    d^n/dx^n erf(x) = (−1)^{n−1} (2/√π) e^{−x²} H_{n−1}(x),

where H_n(x) = (−1)^n e^{x²} d^n/dx^n (e^{−x²}) is the Hermite polynomial.
The error function can be expanded as a power series in the following two ways
[35]:
    erf(x) = (2/√π) ∑_{n=0}^{+∞} (−1)^n x^{2n+1} / (n!(2n + 1)) = e^{−x²} ∑_{n=0}^{+∞} x^{2n+1} / Γ(n + 3/2).

Mathematics subject classification (2010): 33B20, 26D15.


Keywords and phrases: Error function, power mean, functional inequalities.
This research is supported by the Natural Science Foundation of China (No. 11071069) and the Natural Science Foundation of Zhejiang Province (No. LY13A010004).


It can also be expressed in terms of the incomplete gamma function and a confluent hypergeometric function:

    erf(x) = (sgn(x)/√π) γ(1/2, x²) = (2x/√π) ₁F₁(1/2; 3/2; −x²).
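The representations above are easy to check numerically; the following sketch compares truncated versions of the two series with SciPy's erf (the truncation order N = 30 and the sample points are arbitrary choices, not part of the paper's argument).

```python
# Sketch: compare the two series representations of erf(x) with scipy.special.erf.
# The truncation order N = 30 is an arbitrary choice for illustration.
import math
from scipy.special import erf, gamma

def erf_series1(x, N=30):
    # erf(x) = (2/sqrt(pi)) * sum_{n>=0} (-1)^n x^(2n+1) / (n! (2n+1))
    return 2.0 / math.sqrt(math.pi) * sum(
        (-1) ** n * x ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1)) for n in range(N)
    )

def erf_series2(x, N=30):
    # erf(x) = exp(-x^2) * sum_{n>=0} x^(2n+1) / Gamma(n + 3/2)
    return math.exp(-x * x) * sum(x ** (2 * n + 1) / gamma(n + 1.5) for n in range(N))

for x in (0.3, 1.0, 2.0):
    print(x, erf(x), erf_series1(x), erf_series2(x))
```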
In the recent past, the error function has been the subject of intensive research. In particular, many properties and inequalities for the error function can be found in the literature [1, 7, 12, 18, 22, 23, 25, 30, 31, 33, 34, 37, 39, 40, 41]. In [4, 16, 19, 21, 27], the authors study the properties of the complementary error function. Series expansions, rational Chebyshev approximations and derivative properties of the inverse error function are given in [5, 6, 9, 10, 11, 20]. Rational approximations for the error function can be found in [8, 15, 17, 26, 42]. In [24, 28, 36, 38] the authors are concerned with the computation of the complex error function. It might be surprising that the error function also has applications in heat conduction [13, 29].
In [14], Chu obtained the following sharp inequalities:

    √(1 − e^{−ax²}) ≤ erf(x) ≤ √(1 − e^{−bx²})

hold for all x ≥ 0 with the best possible constants a = 1 and b = 4/π.
Mitrinović [32] proved the elegant inequality:

    erf(x) + erf(y) ≤ erf(x + y) + erf(x) erf(y)

holds for all x, y > 0.
The following two best possible inequalities were obtained by Alzer [2]:

    erf(1) < erf(x + erf(y)) / erf(y + erf(x)) < 2/√π

and

    0 < erf(x erf(y)) / erf(y erf(x)) ≤ 1.
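The Chu and Mitrinović inequalities quoted above can be spot-checked numerically; the sketch below uses SciPy's erf at a few arbitrarily chosen sample points (an illustration, not a proof).

```python
# Sketch: spot-check the quoted Chu bounds and the Mitrinovic inequality
# at a few arbitrary sample points (illustration only, not a proof).
import math
from scipy.special import erf

a, b = 1.0, 4.0 / math.pi
for x in (0.1, 0.5, 1.0, 2.0, 3.0):
    lower = math.sqrt(1.0 - math.exp(-a * x * x))
    upper = math.sqrt(1.0 - math.exp(-b * x * x))
    assert lower <= erf(x) <= upper, (x, lower, erf(x), upper)

for x, y in ((0.2, 0.7), (1.0, 1.5), (0.5, 3.0)):
    assert erf(x) + erf(y) <= erf(x + y) + erf(x) * erf(y)
print("spot checks passed")
```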
For λ ∈ (0, 1), we denote by A(x, y; λ) = λx + (1 − λ)y, G(x, y; λ) = x^λ y^{1−λ}, H(x, y; λ) = xy/(λy + (1 − λ)x) and M_r(x, y; λ) = (λx^r + (1 − λ)y^r)^{1/r} (r ≠ 0), M_0(x, y; λ) = x^λ y^{1−λ} the weighted arithmetic mean, weighted geometric mean, weighted harmonic mean and weighted power mean of two positive numbers x and y with x ≠ y. It is well known that

    H(x, y; λ) = M_{−1}(x, y; λ) < G(x, y; λ) = M_0(x, y; λ) < A(x, y; λ) = M_1(x, y; λ).
Very recently, Alzer proved the following Theorem 1.1 in [3].

THEOREM 1.1. Let λ ∈ (0, 1/2) be a real number, then

    c₁(λ) erf(H(x, y; λ)) ≤ λ erf(x) + (1 − λ) erf(y) ≤ c₂(λ) erf(H(x, y; λ))    (1.1)

hold for all x ≥ 1 and y ≥ 1 with the best possible factors

    c₁(λ) = (λ + (1 − λ) erf(1)) / erf(1/(1 − λ))  and  c₂(λ) = 1.
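Theorem 1.1 can likewise be spot-checked numerically; in the sketch below the value λ = 0.3 and the grid of points x, y ≥ 1 are arbitrary choices.

```python
# Sketch: numerical spot-check of Theorem 1.1 at a few points x, y >= 1
# (illustration only; lam < 1/2 and the grid are arbitrary choices).
from scipy.special import erf

lam = 0.3
c1 = (lam + (1 - lam) * erf(1.0)) / erf(1.0 / (1.0 - lam))
c2 = 1.0

def H(x, y, lam):
    # weighted harmonic mean
    return x * y / (lam * y + (1 - lam) * x)

for x in (1.0, 1.5, 2.0, 5.0):
    for y in (1.0, 1.2, 3.0, 10.0):
        mid = lam * erf(x) + (1 - lam) * erf(y)
        assert c1 * erf(H(x, y, lam)) <= mid <= c2 * erf(H(x, y, lam)) + 1e-12
print("Theorem 1.1 spot check passed, c1 =", c1)
```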

It is natural to ask whether (1.1) holds for 0 < x, y < 1. Moreover, we ask: what are the best possible constants p and r such that the inequalities

    erf(M_p(x, y; λ)) ≤ λ erf(x) + (1 − λ) erf(y) ≤ erf(M_r(x, y; λ))

hold for all x, y ≥ 1 (or 0 < x, y < 1)? In what follows, we answer these questions.

2. Lemmas

In this section we present some lemmas, which will be used in the proof of our main results.
LEMMA 2.1. Let r ≠ 0 and w(x) = erf(x^{1/r}), one has
(1) If r ≤ −1, then w(x) is strictly convex on [1, +∞);
(2) If −1 < r < 0, then w(x) is strictly concave on (0, 1];
(3) If 0 < r < 1, then w(x) is strictly concave on [1, +∞);
(4) If r ≥ 1, then w(x) is strictly concave on (0, +∞).

Proof. Elementary computation leads to

    w′(x) = (2/(√π r)) x^{1/r − 1} e^{−x^{2/r}}    (2.1)

and

    w″(x) = (2/(√π r²)) x^{1/r − 2} e^{−x^{2/r}} [1 − r − 2x^{2/r}].    (2.2)

Therefore, Lemma 2.1 follows from (2.2). □
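The four cases of Lemma 2.1 can be illustrated with a central second difference; in the following sketch the particular values of r, the sample points and the step size are arbitrary choices.

```python
# Sketch: finite-difference check of the sign of w''(x) for w(x) = erf(x**(1/r)),
# one test per case of Lemma 2.1 (sample points and step size are arbitrary).
from scipy.special import erf

def w2(x, r, h=1e-4):
    # central second difference of w(x) = erf(x^(1/r))
    w = lambda z: erf(z ** (1.0 / r))
    return (w(x + h) - 2.0 * w(x) + w(x - h)) / (h * h)

print(all(w2(x, -2.0) > 0 for x in (1.1, 2.0, 5.0)))   # (1) r <= -1: convex on [1, +inf)
print(all(w2(x, -0.5) < 0 for x in (0.6, 0.75, 0.9)))  # (2) -1 < r < 0: concave on (0, 1]
print(all(w2(x, 0.5) < 0 for x in (1.1, 1.3, 1.5)))    # (3) 0 < r < 1: concave on [1, +inf)
print(all(w2(x, 2.0) < 0 for x in (0.2, 1.0, 5.0)))    # (4) r >= 1: concave on (0, +inf)
```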

LEMMA 2.2. Let u(x) = erf(e^x), then u(x) is strictly concave on [0, +∞).

Proof. Simple computation yields

    u′(x) = (2/√π) e^{x − e^{2x}} > 0    (2.3)

and

    u″(x) = (2/√π)(1 − 2e^{2x}) e^{x − e^{2x}} < 0    (2.4)

for x ≥ 0.
Therefore, (2.4) implies that u(x) is strictly concave on [0, +∞). □

LEMMA 2.3. Let 0 < λ < 1, r ≥ −1 (r ≠ 0) and ψ(x) = x^{r−1}(λx^r + 1 − λ)^{1/r − 1} e^{x² − (λx^r + 1 − λ)^{2/r}}, then ψ(x) is strictly increasing in [1, +∞).

Proof. By logarithmic differentiation,

    ψ′(x)/ψ(x) = ψ₁(x) / (x(λx^r + 1 − λ)),    (2.5)

where ψ₁(x) = (r − 1)(1 − λ) + 2x²(λx^r + 1 − λ) − 2λx^r(λx^r + 1 − λ)^{2/r}.
Case 1. If −1 ≤ r < 0, let

    ψ₁₁(x) = (r − 1)(1 − λ) + 2(1 − λ)x²(λx^r + 1 − λ)

and

    ψ₁₂(x) = 2λx²(λx^r + 1 − λ) − 2λx^r(λx^r + 1 − λ)^{2/r},

then

    ψ₁(x) = ψ₁₁(x) + ψ₁₂(x).    (2.6)

Since

    ψ₁₁(1) = (1 − λ)(1 + r) ≥ 0,    (2.7)

    ψ₁₁′(x) = 2(1 − λ)x[λ(2 + r)x^r + 2(1 − λ)] > 0    (2.8)

and

    ψ₁₂(x) = 2λx²(λx^r + 1 − λ)[1 − (λ + (1 − λ)x^{−r})^{(2−r)/r}] ≥ 0    (2.9)

for x ≥ 1.
From (2.6)–(2.9) we clearly see that ψ₁(x) > 0 for x ∈ (1, +∞) and −1 ≤ r < 0. Therefore, ψ(x) is strictly increasing in [1, +∞) for −1 ≤ r < 0.
Case 2. If 0 < r < 2, then (2.7)–(2.9) hold again, so ψ(x) is strictly increasing in [1, +∞) for 0 < r < 2.
Case 3. If r ≥ 2, we let ψ₂(x) = log[2x²(λx^r + 1 − λ)] − log[2λx^r(λx^r + 1 − λ)^{2/r}]. Then

    lim_{x→+∞} ψ₂(x) = −(2/r) log λ > 0    (2.10)

and

    ψ₂′(x) = (2 − r)(1 − λ) / (x(λx^r + 1 − λ)) ≤ 0.    (2.11)

It follows from (2.10) and (2.11) that ψ₂(x) > 0 for all x ∈ [1, +∞) and r ≥ 2.
Hence, (2.5) leads to the conclusion that ψ(x) is strictly increasing in [1, +∞) for r ≥ 2. □

LEMMA 2.4. For 0 < λ < 1, r ≥ −1 (r ≠ 0) and x ≥ 1, we have

    c₁(λ, r) ≤ (λ erf(x) + (1 − λ) erf(1)) / erf((λx^r + 1 − λ)^{1/r})    (2.12)

and

    c₁(λ, r) ≤ (λ erf(1) + (1 − λ) erf(x)) / erf((λ + (1 − λ)x^r)^{1/r}),    (2.13)

where

    c₁(λ, r) = (λ + (1 − λ) erf(1)) / erf((1 − λ)^{1/r})  for −1 ≤ r < 0,
    c₁(λ, r) = λ + (1 − λ) erf(1)  for r > 0.

Proof. It is not difficult to verify that 0 < c₁(λ, r) < 1 for 0 < λ < 1 and r ≥ −1. Since the proof of (2.13) is similar to that of (2.12), we only prove (2.12).
Firstly, we prove that

    (λ + (1 − λ) erf(1)) / erf((1 − λ)^{1/r}) ≤ (λ erf(x) + (1 − λ) erf(1)) / erf((λx^r + 1 − λ)^{1/r})

holds for −1 ≤ r < 0 and x ≥ 1.
Let G(x) = erf((1 − λ)^{1/r})[λ erf(x) + (1 − λ) erf(1)] − [λ + (1 − λ) erf(1)] erf((λx^r + 1 − λ)^{1/r}) and G₁(x) = (√π/(2λ)) e^{x²} G′(x), then one has

    G(1) = [erf((1 − λ)^{1/r}) − (λ + (1 − λ) erf(1))] erf(1) > 0,    (2.14)

    lim_{x→+∞} G(x) = 0,    (2.15)

    G₁(x) = erf((1 − λ)^{1/r}) − [λ + (1 − λ) erf(1)] x^{r−1}(λx^r + 1 − λ)^{1/r − 1} e^{x² − (λx^r + 1 − λ)^{2/r}},    (2.16)

    G₁(1) = erf((1 − λ)^{1/r}) − [λ + (1 − λ) erf(1)] > 0    (2.17)

and

    lim_{x→+∞} G₁(x) = −∞.    (2.18)

Therefore, Lemma 2.3 and (2.16) imply that G₁(x) is strictly decreasing in [1, +∞), thus from (2.17) and (2.18) we conclude that there exists x₁ ∈ (1, +∞) such that G₁(x) > 0 for x ∈ (1, x₁) and G₁(x) < 0 for x ∈ (x₁, +∞). So, G(x) is strictly increasing in [1, x₁] and strictly decreasing in [x₁, +∞).
It follows from (2.14) and (2.15) together with the piecewise monotonicity of G(x) that G(x) > 0 for x ∈ [1, +∞) and −1 ≤ r < 0.
Next, we prove that

    λ + (1 − λ) erf(1) ≤ (λ erf(x) + (1 − λ) erf(1)) / erf((λx^r + 1 − λ)^{1/r})

holds for x ≥ 1 and r > 0.
Let H(x) = λ erf(x) + (1 − λ) erf(1) − [λ + (1 − λ) erf(1)] erf((λx^r + 1 − λ)^{1/r}) and H₁(x) = (√π/(2λ)) e^{x²} H′(x), then we have

    H(1) = (1 − λ)(1 − erf(1)) erf(1) > 0,    (2.19)

    lim_{x→+∞} H(x) = 0,    (2.20)

    H₁(x) = 1 − [λ + (1 − λ) erf(1)] x^{r−1}(λx^r + 1 − λ)^{1/r − 1} e^{x² − (λx^r + 1 − λ)^{2/r}},    (2.21)

    H₁(1) = (1 − λ)(1 − erf(1)) > 0    (2.22)

and

    lim_{x→+∞} H₁(x) = −∞.    (2.23)

Hence, Lemma 2.3 and (2.21) imply that H₁(x) is strictly decreasing in [1, +∞). It follows from the monotonicity of H₁(x) and (2.22) together with (2.23) that there exists x₂ ∈ (1, +∞) such that H₁(x) > 0 for x ∈ (1, x₂) and H₁(x) < 0 for x ∈ (x₂, +∞). Therefore, H(x) is strictly increasing in [1, x₂] and strictly decreasing in [x₂, +∞).
From the piecewise monotonicity of H(x) and (2.19) together with (2.20) we clearly see that H(x) > 0 for x ∈ [1, +∞) and r > 0. □

LEMMA 2.5. For 0 < λ < 1 and x ≥ 1, we have

    λ + (1 − λ) erf(1) ≤ (λ erf(x) + (1 − λ) erf(1)) / erf(x^λ)    (2.24)

and

    λ + (1 − λ) erf(1) ≤ (λ erf(1) + (1 − λ) erf(x)) / erf(x^{1−λ}).    (2.25)

Proof. Let E(x) = λ erf(x) + (1 − λ) erf(1) − [λ + (1 − λ) erf(1)] erf(x^λ), E₁(x) = (√π/(2λ)) e^{x²} E′(x) and E₂(x) = (x^{2−λ} e^{x^{2λ} − x²} / (λ + (1 − λ) erf(1))) E₁′(x), then simple computation leads to

    E(1) = (1 − λ)(1 − erf(1)) erf(1) > 0,    (2.26)

    lim_{x→+∞} E(x) = 0,    (2.27)

    E₁(x) = 1 − [λ + (1 − λ) erf(1)] x^{λ−1} e^{x² − x^{2λ}},

    E₁(1) = (1 − λ)(1 − erf(1)) > 0,    (2.28)

    lim_{x→+∞} E₁(x) = −∞,    (2.29)

    E₂(x) = 1 − λ + 2λx^{2λ} − 2x²,

    E₂(1) = λ − 1 < 0    (2.30)

and

    E₂′(x) = 4x(λ²x^{2λ−2} − 1) < 0    (2.31)

for x ≥ 1.
Therefore, inequalities (2.31) and (2.30) imply that E₁(x) is strictly decreasing in [1, +∞).
From the monotonicity of E₁(x) and (2.28) together with (2.29) we clearly see that there exists x₃ ∈ (1, +∞) such that E₁(x) > 0 for x ∈ (1, x₃) and E₁(x) < 0 for x ∈ (x₃, +∞). Thus, E(x) is strictly increasing in [1, x₃] and strictly decreasing in [x₃, +∞).
Hence, E(x) > 0 follows from the piecewise monotonicity of E(x) and (2.26) together with (2.27).
The proof of (2.25) is similar to that of (2.24), so we omit the details. □

LEMMA 2.6. For 0 < λ < 1, r ≥ 1 and 0 < x < 1, we have

    λ erf(1) / erf(λ^{1/r}) ≤ λ erf(x) / erf(λ^{1/r} x)    (2.32)

and

    λ erf(1) / erf(λ^{1/r}) ≤ (1 − λ) erf(x) / erf((1 − λ)^{1/r} x).    (2.33)

Proof. We only prove (2.32). For 0 < x < 1 and r ≥ 1, let J(x) = λ erf(λ^{1/r}) erf(x) − λ erf(1) erf(λ^{1/r} x), then simple computation leads to

    J(0) = 0,  J(1) = 0    (2.34)

and

    J″(x) = −(4λ/√π) x e^{−x²} [erf(α) − α³ erf(1) e^{(1−α²)x²}],    (2.35)

where 0 < α = λ^{1/r} < 1.
Since

    erf(α) − α³ erf(1) e^{(1−α²)x²} > erf(α) − α³ erf(1) e^{1−α²}    (2.36)

for x ∈ (0, 1).
Next, we prove that I(α) = erf(α) − α³ erf(1) e^{1−α²} > 0 for α ∈ (0, 1).
Elementary computations yield

    I(0) = 0,  I(1) = 0,    (2.37)

    I′(α) = erf′(α) − erf(1)(3α² − 2α⁴) e^{1−α²},

    I′(0) = 2/√π,  I′(1) = 2/(e√π) − erf(1) = −0.4276... < 0    (2.38)

and

    I″(α) = α e^{−α²} I₁(α),

where

    I₁(α) = −4/√π − erf(1) e (6 − 14α² + 4α⁴),    (2.39)

    I₁(0) = −4/√π − 6 erf(1) e = −16.0009... < 0,    (2.40)

    I₁(1) = −4/√π + 4 erf(1) e = 6.9060... > 0.    (2.41)

It is easy to see that the function φ(α) = 6 − 14α² + 4α⁴ is strictly decreasing in (0, 1), so (2.39) yields that I₁(α) is strictly increasing in (0, 1).
It follows from the monotonicity of I₁(α) and (2.40) together with (2.41) that there exists α₁ ∈ (0, 1) such that I₁(α) < 0 for α ∈ (0, α₁) and I₁(α) > 0 for α ∈ (α₁, 1). Therefore, I′(α) is strictly decreasing in [0, α₁] and strictly increasing in [α₁, 1].
From the piecewise monotonicity of I′(α) and (2.38) we conclude that there exists α₂ ∈ (0, 1) such that I′(α) > 0 for α ∈ (0, α₂) and I′(α) < 0 for α ∈ (α₂, 1). Hence, I(α) is strictly increasing in [0, α₂] and strictly decreasing in [α₂, 1].
It follows from the piecewise monotonicity of I(α) and (2.37) that I(α) > 0 for α ∈ (0, 1).
Therefore, (2.36) and (2.35) lead to the conclusion that J(x) is concave on (0, 1), and from (2.34) we have J(x) ≥ min{J(0), J(1)} = 0. □

3. Main results

THEOREM 3.1. Let λ ∈ (0, 1), then the double inequalities

    erf(M_p(x, y; λ)) ≤ λ erf(x) + (1 − λ) erf(y) ≤ erf(M_r(x, y; λ))    (3.1)

hold for all x ≥ 1, y ≥ 1 if and only if p = −∞ and r ≥ −1.

Proof. Firstly, we prove that if r ≥ −1 and p = −∞, then (3.1) hold.
The monotonicity of erf(x) implies that the left-hand side of (3.1) is true with p = −∞. Since the weighted power mean is increasing on R with respect to its order, the function t ↦ erf(M_t(x, y; λ)) is increasing on R. Therefore, it is enough to prove that the right-hand side of (3.1) is valid for r = −1, which follows from (1.1).
Secondly, we prove that the right-hand side of (3.1) implies r ≥ −1.
For x ≥ 1 and y ≥ 1, from the right-hand side of (3.1) we can let

    K(x, y) = erf(M_r(x, y; λ)) − λ erf(x) − (1 − λ) erf(y) ≥ 0.

Then simple computation leads to

    K(y, y) = ∂K(x, y)/∂x |_{x=y} = 0

and

    ∂²K(x, y)/∂x² |_{x=y} = λ(1 − λ) (2/(√π y)) e^{−y²} [r − 1 + 2y²] ≥ 0,

which leads to r ≥ −1.
Thirdly, we suppose that there exists a real number p such that the left-hand side of (3.1) holds for all x ≥ 1 and y ≥ 1. We divide the proof into two cases.
Case A. If p ≥ 0, then for fixed y we have

    lim_{x→+∞} erf(M_p(x, y; λ)) = 1

and

    lim_{x→+∞} [λ erf(x) + (1 − λ) erf(y)] = λ + (1 − λ) erf(y) < 1.

This contradicts the left-hand side of (3.1).
Case B. If −∞ < p < 0, then for x ≥ 1, from the left-hand side of (3.1) we let β = λ^{1/p}, y → +∞ and

    Q(x) = λ erf(x) + 1 − λ − erf(βx) ≥ 0.    (3.2)

Hence we get

    lim_{x→+∞} Q(x) = 0    (3.3)

and

    Q′(x) = (2/√π) e^{−x²} [λ − β e^{(1−β²)x²}].    (3.4)

Since β > 1, (3.4) leads to the conclusion that there exists η₁ ∈ (1, +∞) such that Q′(x) > 0 for x ∈ (η₁, +∞), which implies that Q(x) is strictly increasing in [η₁, +∞).
It follows from (3.3) and the monotonicity of Q(x) that there exists η₂ ∈ (1, +∞) such that Q(x) < 0 for x ∈ (η₂, +∞), which contradicts (3.2). □
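Theorem 3.1 can be illustrated numerically as follows; λ, the grid of sample points and the finite exponent p tried at the end are arbitrary choices (a sketch, not part of the proof).

```python
# Sketch: numerical illustration of Theorem 3.1 for sample points x, y >= 1
# (lam, the grid and the finite p tried at the end are arbitrary choices).
from scipy.special import erf

lam = 0.4
NEG_INF = float("-inf")

def M(x, y, lam, r):
    # weighted power mean; r = -inf corresponds to min(x, y)
    if r == NEG_INF:
        return min(x, y)
    return (lam * x ** r + (1 - lam) * y ** r) ** (1.0 / r)

pts = [(x, y) for x in (1.0, 1.5, 2.0, 4.0) for y in (1.0, 1.3, 3.0, 8.0)]
ok = all(
    erf(M(x, y, lam, NEG_INF))
    <= lam * erf(x) + (1 - lam) * erf(y)
    <= erf(M(x, y, lam, -1.0)) + 1e-12
    for (x, y) in pts
)
print("(3.1) with p = -inf, r = -1:", ok)

# A finite negative p (here p = -5) fails the left-hand side once y is large,
# as in Case B of the proof:
x, y, p = 2.0, 1.0e6, -5.0
print(erf(M(x, y, lam, p)) <= lam * erf(x) + (1 - lam) * erf(y))  # expected: False
```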

THEOREM 3.2. Let λ ∈ (0, 1), then the double inequalities

    erf(M_μ(x, y; λ)) ≤ λ erf(x) + (1 − λ) erf(y) ≤ erf(M_ν(x, y; λ))    (3.5)

hold for all 0 < x, y < 1 if and only if μ ≤ −1 and ν ≥ 1.

Proof. Firstly we prove that if μ ≤ −1 and ν ≥ 1, then (3.5) is valid.
For μ ≤ −1 and 0 < x, y < 1, we let w(z) = erf(z^{1/μ}), s = x^μ and t = y^μ, then s, t > 1. It follows from Lemma 2.1(1) that

    w(λs + (1 − λ)t) ≤ λw(s) + (1 − λ)w(t).

This leads to

    erf(M_μ(x, y; λ)) ≤ λ erf(x) + (1 − λ) erf(y)

for all 0 < x, y < 1.
For ν ≥ 1 and 0 < x, y < 1, we let w(z) = erf(z^{1/ν}), s = x^ν and t = y^ν, then 0 < s, t < 1. From Lemma 2.1(4) we clearly see that

    w(λs + (1 − λ)t) ≥ λw(s) + (1 − λ)w(t).

This leads to

    erf(M_ν(x, y; λ)) ≥ λ erf(x) + (1 − λ) erf(y)

for all 0 < x, y < 1.
Secondly, we prove that the right-hand side of (3.5) implies ν ≥ 1. Let

    T(x, y) = erf(M_ν(x, y; λ)) − λ erf(x) − (1 − λ) erf(y) ≥ 0.

Then

    T(y, y) = ∂T(x, y)/∂x |_{x=y} = 0

and

    ∂²T(x, y)/∂x² |_{x=y} = λ(1 − λ) (2/(√π y)) e^{−y²} [ν − 1 + 2y²] ≥ 0.    (3.6)

Since (3.6) holds for all 0 < y < 1, it leads to ν ≥ 1.
Finally, we prove that the left-hand side of (3.5) implies μ ≤ −1.
Let y → 1, then from the left-hand side of (3.5) we obtain

    L(x) = λ erf(x) + (1 − λ) erf(1) − erf(M_μ(x, 1; λ)) ≥ 0    (3.7)

for 0 < x < 1.
By elementary computations, we get

    L(1) = 0    (3.8)

and

    L′(x) = (2λ/√π) e^{−x²} [1 − x^{μ−1}(λx^μ + 1 − λ)^{1/μ − 1} e^{x² − (λx^μ + 1 − λ)^{2/μ}}].    (3.9)

Let

    L₁(x) = log 1 − log[x^{μ−1}(λx^μ + 1 − λ)^{1/μ − 1} e^{x² − (λx^μ + 1 − λ)^{2/μ}}].    (3.10)

Then

    lim_{x→1⁻} L₁(x) = 0    (3.11)

and

    L₁′(x) = L₂(x) / (x(λx^μ + 1 − λ)),    (3.12)

where

    L₂(x) = (1 − μ)(1 − λ) + 2λ(λx^μ + 1 − λ)^{2/μ} x^μ − 2x²(λx^μ + 1 − λ)

and

    lim_{x→1⁻} L₂(x) = (1 − λ)(−1 − μ).    (3.13)

In fact, if μ > −1, then by the continuity of L₂(x) and (3.13) we know that there exists a small δ₁ > 0 such that L₂(x) < 0 for x ∈ (1 − δ₁, 1). Therefore, (3.12) implies that L₁(x) is strictly decreasing in [1 − δ₁, 1].
From (3.11) and the monotonicity of L₁(x) in [1 − δ₁, 1] we conclude that there exists a small δ₂ > 0 such that L₁(x) > 0 for x ∈ (1 − δ₂, 1). Hence, (3.9) and (3.10) imply that L(x) is increasing in [1 − δ₂, 1].
It follows from (3.8) and the monotonicity of L(x) in (1 − δ₂, 1) that there exists a small δ₃ > 0 such that L(x) < 0 for x ∈ (1 − δ₃, 1). This contradicts (3.7). □
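Theorem 3.2 can be spot-checked numerically with the extremal exponents μ = −1 and ν = 1; in the sketch below λ and the grid in (0, 1)² are arbitrary choices.

```python
# Sketch: spot-check of Theorem 3.2 with mu = -1 and nu = 1 on a grid in (0, 1)^2
# (lam and the grid are arbitrary illustration choices).
from scipy.special import erf

lam = 0.7

def M(x, y, lam, r):
    # weighted power mean of order r (r != 0)
    return (lam * x ** r + (1 - lam) * y ** r) ** (1.0 / r)

pts = [(i / 10.0, j / 10.0) for i in range(1, 10) for j in range(1, 10)]
ok = all(
    erf(M(x, y, lam, -1.0)) - 1e-12
    <= lam * erf(x) + (1 - lam) * erf(y)
    <= erf(M(x, y, lam, 1.0)) + 1e-12
    for (x, y) in pts
)
print("(3.5) with mu = -1, nu = 1:", ok)
```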
The following Theorem 3.3 generalizes Theorem 1.1.

THEOREM 3.3. Let 0 < λ < 1 and r ≥ −1, then the double inequalities

    c₁(λ, r) erf(M_r(x, y; λ)) ≤ λ erf(x) + (1 − λ) erf(y) ≤ c₂(λ, r) erf(M_r(x, y; λ))    (3.14)

hold for all x, y ≥ 1, and the factors

    c₁(λ, r) = (λ + (1 − λ) erf(1)) / erf((1 − λ)^{1/r})  for −1 ≤ r < 0,
    c₁(λ, r) = λ + (1 − λ) erf(1)  for r ≥ 0,

and c₂(λ, r) = 1 are the best possible.

Proof. The right-hand side of (3.14) follows from Theorem 3.1, so we only need to prove the left-hand side of (3.14). We divide the proof into three cases.
Case 1. −1 ≤ r < 0. For x ≥ 1 and y ≥ 1, we let w(z) = erf(z^{1/r}), s = x^r and t = y^r, then 0 < s, t ≤ 1. From (2.1) and (2.2) we clearly see that w′ < 0 and w″ < 0 on (0, 1) for −1 ≤ r < 0. Therefore, −w′ is positive and increasing in (0, 1]. Let

    Aλ(s, t) = λw(s) + (1 − λ)w(t) − c₁(λ, r)w(λs + (1 − λ)t).    (3.15)

Subcase 1.1. If 0 < s ≤ t ≤ 1, then s ≤ λs + (1 − λ)t ≤ t. Differentiating (3.15) leads to

    (1/(1 − λ)) ∂Aλ(s, t)/∂t = w′(t) − c₁(λ, r)w′(λs + (1 − λ)t) < 0.

Thus

    Aλ(s, t) ≥ Aλ(s, 1) = λw(s) + (1 − λ)w(1) − c₁(λ, r)w(λs + 1 − λ).    (3.16)

Therefore, Aλ(s, t) ≥ 0 follows from (3.16) and (2.12).
Subcase 1.2. If 0 < t ≤ s ≤ 1, then t ≤ λs + (1 − λ)t ≤ s. Differentiating (3.15) yields

    (1/λ) ∂Aλ(s, t)/∂s = w′(s) − c₁(λ, r)w′(λs + (1 − λ)t) < 0.

So

    Aλ(s, t) ≥ Aλ(1, t) = λw(1) + (1 − λ)w(t) − c₁(λ, r)w(λ + (1 − λ)t).    (3.17)

Hence, Aλ(s, t) ≥ 0 follows from (3.17) and (2.13).
Case 2. r = 0. For x ≥ 1 and y ≥ 1, we let u(z) = erf(e^z), s = log x and t = log y, then s, t ≥ 0. From (2.3) and (2.4) we know that u′ > 0 and u″ < 0 on [0, +∞). Therefore, u′ is positive and decreasing in [0, +∞). Let

    Bλ(s, t) = λu(s) + (1 − λ)u(t) − c₁(λ, r)u(λs + (1 − λ)t).    (3.18)

Subcase 2.1. If 0 ≤ s ≤ t, then s ≤ λs + (1 − λ)t ≤ t, and (3.18) leads to

    (1/λ) ∂Bλ(s, t)/∂s = u′(s) − c₁(λ, r)u′(λs + (1 − λ)t) > 0.

This implies that

    Bλ(s, t) ≥ Bλ(0, t) = λu(0) + (1 − λ)u(t) − c₁(λ, r)u((1 − λ)t).    (3.19)

Hence, Bλ(s, t) ≥ 0 follows from (3.19) and (2.25).
Subcase 2.2. If 0 ≤ t ≤ s, then t ≤ λs + (1 − λ)t ≤ s, and (3.18) yields

    (1/(1 − λ)) ∂Bλ(s, t)/∂t = u′(t) − c₁(λ, r)u′(λs + (1 − λ)t) > 0.

Thus, we have

    Bλ(s, t) ≥ Bλ(s, 0) = λu(s) + (1 − λ)u(0) − c₁(λ, r)u(λs).    (3.20)

Hence, Bλ(s, t) ≥ 0 follows from (3.20) and (2.24).
Case 3. r > 0. For x ≥ 1 and y ≥ 1, we let w(z) = erf(z^{1/r}), s = x^r and t = y^r, then s, t ≥ 1. It follows from (2.1) and (2.2) that w′ > 0 and w″ < 0 in (1, +∞) for r > 0, therefore w′ is positive and decreasing in [1, +∞).
Subcase 3.1. If 1 ≤ s ≤ t, then s ≤ λs + (1 − λ)t ≤ t, and (3.15) leads to

    (1/λ) ∂Aλ(s, t)/∂s = w′(s) − c₁(λ, r)w′(λs + (1 − λ)t) > 0.

Therefore,

    Aλ(s, t) ≥ Aλ(1, t) = λw(1) + (1 − λ)w(t) − c₁(λ, r)w(λ + (1 − λ)t).    (3.21)

Hence, Aλ(s, t) ≥ 0 follows from (3.21) and (2.13).
Subcase 3.2. If 1 ≤ t ≤ s, then t ≤ λs + (1 − λ)t ≤ s, and from (3.15) we obtain

    (1/(1 − λ)) ∂Aλ(s, t)/∂t = w′(t) − c₁(λ, r)w′(λs + (1 − λ)t) > 0.

Thus

    Aλ(s, t) ≥ Aλ(s, 1) = λw(s) + (1 − λ)w(1) − c₁(λ, r)w(λs + 1 − λ).    (3.22)

Therefore, Aλ(s, t) ≥ 0 follows from (3.22) and (2.12).
The following (3.23) and (3.24) imply that c₁(λ, r) and c₂(λ, r) are the best possible:

    lim_{y→1} lim_{x→+∞} (λ erf(x) + (1 − λ) erf(y)) / erf(M_r(x, y; λ)) = (λ + (1 − λ) erf(1)) / erf((1 − λ)^{1/r})  for −1 ≤ r < 0,
    lim_{y→1} lim_{x→+∞} (λ erf(x) + (1 − λ) erf(y)) / erf(M_r(x, y; λ)) = λ + (1 − λ) erf(1)  for r ≥ 0,    (3.23)

and

    lim_{y→x} (λ erf(x) + (1 − λ) erf(y)) / erf(M_r(x, y; λ)) = 1  (r ≥ −1).    (3.24)

This completes the proof of Theorem 3.3. □
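The left-hand side of (3.14) and the constant c₁(λ, r) can be spot-checked numerically; in the sketch below λ, the values of r and the sample grid are arbitrary choices.

```python
# Sketch: spot-check of the left-hand side of (3.14) for x, y >= 1
# (lam, the values of r and the grid are arbitrary illustration choices).
from scipy.special import erf

lam = 0.35

def M(x, y, lam, r):
    # weighted power mean, with the geometric mean at r = 0
    if r == 0:
        return x ** lam * y ** (1 - lam)
    return (lam * x ** r + (1 - lam) * y ** r) ** (1.0 / r)

def c1(lam, r):
    if -1 <= r < 0:
        return (lam + (1 - lam) * erf(1.0)) / erf((1 - lam) ** (1.0 / r))
    return lam + (1 - lam) * erf(1.0)

pts = [(x, y) for x in (1.0, 1.5, 3.0, 10.0) for y in (1.0, 2.0, 5.0, 50.0)]
for r in (-1.0, -0.5, 0.0, 1.0, 2.0):
    ok = all(
        c1(lam, r) * erf(M(x, y, lam, r)) <= lam * erf(x) + (1 - lam) * erf(y) + 1e-12
        for (x, y) in pts
    )
    print(r, c1(lam, r), ok)
```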
The following Theorem 3.4 complements Theorem 1.1.

THEOREM 3.4. Let 0 < λ < 1 and r ≥ 1, then the double inequalities

    c₃(λ, r) erf(M_r(x, y; λ)) ≤ λ erf(x) + (1 − λ) erf(y) ≤ c₄(λ, r) erf(M_r(x, y; λ))    (3.25)

hold for all 0 < x, y < 1, and the factors

    c₃(λ, r) = λ erf(1) / erf(λ^{1/r})  and  c₄(λ, r) = 1

are the best possible.

Proof. The right-hand side of (3.25) follows from Theorem 3.2, so we only need to prove the left-hand side of (3.25). We let w(z) = erf(z^{1/r}) and

    Dλ(s, t) = λw(s) + (1 − λ)w(t) − c₃(λ, r)w(λs + (1 − λ)t).    (3.26)

For r ≥ 1 and 0 < x, y < 1, let s = x^r and t = y^r, then 0 < s, t < 1. From (2.1) and (2.2) we see that w′ > 0 and w″ < 0, thus w′ is positive and decreasing in (0, 1].
Case 1. If 0 < s ≤ t < 1, then s ≤ λs + (1 − λ)t ≤ t. It follows from (3.26) that

    (1/λ) ∂Dλ(s, t)/∂s = w′(s) − c₃(λ, r)w′(λs + (1 − λ)t) > 0.

This leads to

    Dλ(s, t) > Dλ(0, t) = λw(0) + (1 − λ)w(t) − c₃(λ, r)w((1 − λ)t).    (3.27)

Hence, Dλ(s, t) > 0 follows from (3.27) and (2.33).
Case 2. If 0 < t ≤ s < 1, then t ≤ λs + (1 − λ)t ≤ s. From (3.26) we get

    (1/(1 − λ)) ∂Dλ(s, t)/∂t = w′(t) − c₃(λ, r)w′(λs + (1 − λ)t) > 0.

So

    Dλ(s, t) > Dλ(s, 0) = λw(s) + (1 − λ)w(0) − c₃(λ, r)w(λs).    (3.28)

Hence, Dλ(s, t) > 0 follows from (3.28) and (2.32).
The following (3.29) and (3.30) imply that c₃(λ, r) and c₄(λ, r) are the best possible:

    lim_{y→0} lim_{x→1} (λ erf(x) + (1 − λ) erf(y)) / erf(M_r(x, y; λ)) = λ erf(1) / erf(λ^{1/r})    (3.29)

and

    lim_{y→x} (λ erf(x) + (1 − λ) erf(y)) / erf(M_r(x, y; λ)) = 1    (3.30)

for r ≥ 1. □
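Theorem 3.4 can be illustrated in the same way; in the sketch below λ = 0.4, r = 2 and the grid in (0, 1)² are arbitrary choices.

```python
# Sketch: spot-check of (3.25) on a grid in (0, 1)^2 (lam and r are arbitrary choices).
from scipy.special import erf

lam, r = 0.4, 2.0
c3 = lam * erf(1.0) / erf(lam ** (1.0 / r))
c4 = 1.0

def M(x, y, lam, r):
    # weighted power mean of order r (r != 0)
    return (lam * x ** r + (1 - lam) * y ** r) ** (1.0 / r)

pts = [(i / 20.0, j / 20.0) for i in range(1, 20) for j in range(1, 20)]
ok = all(
    c3 * erf(M(x, y, lam, r))
    <= lam * erf(x) + (1 - lam) * erf(y)
    <= c4 * erf(M(x, y, lam, r)) + 1e-12
    for (x, y) in pts
)
print("(3.25):", c3, ok)
```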

REFERENCES

[1] H. Alzer, Functional inequalities for the error function, Aequationes Math. 66, 1–2 (2003), 119–127.
[2] H. Alzer, Functional inequalities for the error function. II, Aequationes Math. 78, 1–2 (2009), 113–121.
[3] H. Alzer, Error function inequalities, Adv. Comput. Math. 33, 3 (2010), 349–379.
[4] E. Árpád and L. Andrea, The zeros of the complementary error function, Numer. Algorithms 49, 1–4 (2008), 153–157.
[5] B. Bajić, On the power expansion of the inverse of the error function, Bull. Math. Soc. Sci. Math. R. S. Roumanie (N. S.) 16(64), 4 (1972), 371–379.
[6] B. Bajić, On the computation of the inverse of the error function by means of the power expansion, Bull. Math. Soc. Sci. Math. R. S. Roumanie (N. S.) 17(65) (1973), 115–121.
[7] O. S. Berljand and A. Ja. Pressman, Asymptotic representations and some estimates for integral error-functions of arbitrary order, (Russian) Dokl. Akad. Nauk SSSR 140 (1961), 12–14.
[8] R. K. Bhaduri and B. K. Jennings, Note on the error function, Amer. J. Phys. 44, 6 (1976), 590–592.
[9] J. M. Blair, C. A. Edwards and J. H. Johnson, Rational Chebyshev approximations for the inverse of the error function, Math. Comp. 30, 136 (1976), 827–830.
[10] J. M. Blair, C. A. Edwards and J. H. Johnson, Rational Chebyshev approximations for the inverse of the error function, Math. Comp. 30, 136 (1976), 7–68.
[11] L. Carlitz, The inverse of the error function, Pacific J. Math. 13 (1963), 459–470.
[12] S. J. Chapman, On the non-universality of the error function in the smoothing of Stokes discontinuities, Proc. Roy. Soc. London Ser. A 452, 1953 (1996), 2225–2230.
[13] M. A. Chaudhry, A. Qadir and S. M. Zubair, Generalized error functions with applications to probability and heat conduction, Int. J. Appl. Math. 9, 3 (2002), 259–278.
[14] J. T. Chu, On bounds for the normal integral, Biometrika 42 (1955), 263–265.
[15] W. W. Clendenin, Rational approximations for the error function and for similar functions, Comm. ACM 4 (1961), 354–355.
[16] W. J. Cody, Performance evaluation of programs for the error and complementary error functions, ACM Trans. Math. Software 16, 1 (1990), 29–37.
[17] W. J. Cody, Rational Chebyshev approximations for the error function, Math. Comp. 23 (1969), 631–637.
[18] D. Coman, The radius of starlikeness for the error function, Studia Univ. Babes-Bolyai Math. 36, 2 (1991), 13–16.
[19] A. Deaño and N. M. Temme, Analytical and numerical aspects of a generalization of the complementary error function, Appl. Math. Comput. 216, 12 (2010), 3680–3693.
[20] D. Dominici, Some properties of the inverse error function, Contemp. Math. 457 (2008), 191–203.
[21] H. E. Fettis, J. C. Caslin and K. R. Cramer, Complex zeros of the error function and of the complementary error function, Math. Comp. 27 (1973), 401–407.
[22] B. Fisher, F. Al-Sirehy and M. Telci, Convolutions involving the error function, Int. J. Appl. Math. 13 (2003), 317–326.
[23] B. Fisher, M. Telci and E. Özcağ, Results on the error function and the neutrix convolution, Rad. Mat. 12, 1 (2003), 81–90.
[24] W. Gautschi, Efficient computation of the complex error function, SIAM J. Numer. Anal. 7 (1970), 187–198.
[25] W. Gawronski, J. Müller and M. Reinhard, Reduced cancellation in the evaluation of entire functions and applications to the error function, SIAM J. Numer. Anal. 45, 6 (2007), 2564–2576.
[26] R. G. Hart, A close approximation related to the error function, Math. Comp. 20 (1966), 600–602.
[27] D. B. Hunter and T. Regan, A note on the evaluation of the complementary error function, Math. Comp. 26 (1972), 539–541.
[28] J. Kestin and L. N. Persen, On the error function of a complex argument, Z. Angew. Math. Phys. 7 (1956), 33–40.
[29] S. N. Kharin, A generalization of the error function and its application in heat conduction problems, (Russian) Differential equations and their applications, 176 (1981), 51–59.
[30] A. Laforgia and S. Sismondi, Monotonicity results and inequalities for the gamma and error functions, J. Comput. Appl. Math. 23, 1 (1988), 25–33.
[31] F. Matta and A. Reichel, Uniform computation of the error function and other related functions, Math. Comp. 25 (1971), 339–344.
[32] D. S. Mitrinović, Problem 5555, Amer. Math. Monthly 75 (1968), 1129–1130.
[33] S. Morosawa, The parameter space of error functions of the form a ∫₀^z e^{−w²} dw, Complex analysis and potential theory (2007), 174–177.
[34] H. S. Mukunda, Evaluation of some definite integrals involving repeated integrals of error functions, Bull. Calcutta Math. Soc. 66 (1974), 39–54.
[35] K. Oldham, J. Myland and J. Spanier, An atlas of functions. With Equator, the atlas function calculator, Second edition, Springer, New York, 2009.
[36] H. E. Salzer, Complex zeros of the error function, J. Franklin Inst. 260 (1955), 209–211.
[37] V. L. N. Sarma and H. D. Pandey, Hölder's inequality and the error function, Vijnana Parishad Anusandhan Patrika 25, 4 (1982), 307–310.
[38] O. N. Strand, A method for the computation of the error function of a complex variable, Math. Comp. 19 (1965), 127–129.
[39] N. M. Temme, Error functions, Dawson's and Fresnel integrals, NIST handbook of mathematical functions, 159–171, U.S. Dept. Commerce, Washington, DC, 2010.
[40] J. P. Vigneron and Ph. Lambin, Gaussian quadrature of integrands involving the error function, Math. Comp. 35, 152 (1980), 1299–1307.
[41] J. A. C. Weideman, Computation of the complex error function, SIAM J. Numer. Anal. 31, 5 (1994), 1497–1518.
[42] I. H. Zimmerman, Extending Menzel's closed-form approximation for the error function, Amer. J. Phys. 44, 6 (1976), 592–593.

(Received December 22, 2013)

Weifeng Xia
School of Teacher Education, Huzhou Teachers College
Huzhou 313000, Zhejiang
China
e-mail: xwf212@163.com

Yuming Chu
Department of Mathematics, Huzhou Teachers College
Huzhou 313000, Zhejiang
China
e-mail: chuyuming@hutc.zj.cn

Journal of Mathematical Inequalities


www.ele-math.com
jmi@ele-math.com
