1. Introduction
The entropy measure proposed by Shannon [1] has many applications in various fields such as information science, physics, probability, statistics, communication theory, and economics. Let X represent the lifetime of a unit with cumulative distribution function (CDF) F and probability density function (PDF) f. The Shannon differential entropy of X is defined by

H(X) = -\int_0^{\infty} f(x)\log f(x)\,dx,

provided the expectation exists; it quantifies the uncertainty of a random phenomenon. The Tsallis entropy, initiated by Tsallis [2] (see also [3]), is a generalization of the Boltzmann–Gibbs statistics. Very recently, Tsallis and Borges [4] showed that, depending on the initial condition and the size of the time series, time reversal can enable the recovery, within a small error bar, of past information when the Lyapunov exponent is non-positive, notably at the Feigenbaum point (edge of chaos), where weak chaos is known to exist. The practical usefulness of time reversal has very recently been exhibited by decreasing error bars in the predictions of strong earthquakes [5,6]. For a non-negative continuous random variable (RV) X with PDF f, the Tsallis entropy of order \alpha is given by

T_\alpha(X) = \frac{1}{\alpha-1}\left(1 - \int_0^{\infty} f^{\alpha}(x)\,dx\right) = \frac{1}{\alpha-1}\left(1 - \int_0^{1} f^{\alpha-1}(Q(u))\,du\right),

for all \alpha > 0, \alpha \neq 1, where Q(u) = F^{-1}(u), 0 < u < 1, denotes the quantile function. The Tsallis entropy can be negative for some values of \alpha, but it can also be non-negative for appropriate choices of \alpha. The Tsallis entropy converges to the Shannon entropy as \alpha approaches 1, i.e., \lim_{\alpha \to 1} T_\alpha(X) = H(X). The Shannon differential entropy is additive: for independent RVs X and Y, the entropy of the joint distribution equals the sum of the marginal entropies, i.e., H(X,Y) = H(X) + H(Y). The Tsallis entropy does not share this property and instead follows the non-additive rule

T_\alpha(X,Y) = T_\alpha(X) + T_\alpha(Y) + (1-\alpha)\,T_\alpha(X)\,T_\alpha(Y).
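As a quick numerical sanity check of the definitions above, the following sketch computes Tsallis entropies by simple quadrature and verifies the non-additive rule for two independent exponential lifetimes. The rates and the value of \alpha are illustrative choices, not taken from this paper:

```python
import math

def tsallis_entropy(pdf, alpha, a, b, n=200_000):
    """Tsallis entropy T_alpha = (1 - int f^alpha dx) / (alpha - 1),
    with the integral computed by the trapezoid rule on [a, b]."""
    h = (b - a) / n
    ys = [pdf(a + i * h) ** alpha for i in range(n + 1)]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return (1.0 - integral) / (alpha - 1.0)

# Two independent exponential lifetimes (rates are illustrative choices).
lam, mu, alpha = 1.0, 2.0, 1.5
f = lambda x: lam * math.exp(-lam * x)
g = lambda y: mu * math.exp(-mu * y)

t_x = tsallis_entropy(f, alpha, 0.0, 60.0)
t_y = tsallis_entropy(g, alpha, 0.0, 30.0)

# For independent X and Y the joint density factorises, so the joint
# integral of (f*g)^alpha is the product of the marginal integrals.
i_x = 1.0 - (alpha - 1.0) * t_x
i_y = 1.0 - (alpha - 1.0) * t_y
t_xy = (1.0 - i_x * i_y) / (alpha - 1.0)

# Non-additive rule: T(X,Y) = T(X) + T(Y) + (1 - alpha) T(X) T(Y)
rhs = t_x + t_y + (1.0 - alpha) * t_x * t_y
print(round(t_x, 4), abs(t_xy - rhs))
```

For the unit-rate exponential the closed form (1 - \lambda^{\alpha-1}/\alpha)/(\alpha-1) gives 2/3 at \alpha = 1.5, which the quadrature reproduces.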
The uncertainty in the lifetime of a new system can be measured by T_\alpha(X), where X is the RV representing the lifetime. However, in some situations, operators already have some information about the age of the system. For example, they may know that the system is still functioning at time t and want to quantify the uncertainty in the remaining lifetime, i.e., X_t = [X - t \mid X > t]. In such cases, T_\alpha(X) is not appropriate. The PDF of X_t is

f_t(x) = \frac{f(x+t)}{\bar{F}(t)}, \quad x, t > 0,

where \bar{F}(x) = 1 - F(x) represents the survival function of X. Thus, the residual Tsallis entropy (RTE) is defined as

T_\alpha(X;t) = \frac{1}{\alpha-1}\left(1 - \int_t^{\infty}\left(\frac{f(x)}{\bar{F}(t)}\right)^{\alpha} dx\right) = \frac{1}{\alpha-1}\left(1 - \frac{1}{\bar{F}^{\alpha}(t)}\int_{F(t)}^{1} f^{\alpha-1}(Q(u))\,du\right),

where Q(u) = F^{-1}(u) is the quantile function of F. The characteristics, extensions, and uses of T_\alpha(X;t) have been deeply researched by a number of scholars, including Asadi et al. [7], Nanda and Paul [8], and Zhang [9], along with further studies referenced in their work.
Records have applications in various fields, such as reliability engineering, insurance science, and others. An illustrative example can be found in reliability theory. Consider a system that requires k out of n components to function successfully. The lifetime of such a system corresponds to the (n-k+1)-th order statistic from a sample of size n, and upper record values arise naturally as system lifetimes in this framework. In insurance science, the second or third highest values are relevant for some types of non-life insurance claims; see, e.g., Kamps [
10] for more details. The information properties of record values have been studied by many researchers. The paper by Baratpour et al. [
11] explores the information properties of record values using the Shannon differential entropy. Kumar [
12] investigates the Tsallis entropy of
k-record statistics in various continuous probability models and provides a characterization result for the Tsallis entropy of
k-record values. Additionally, the mentioned paper examines the residual Tsallis entropy of
k-record statistics in a summary context. Drawing on the information measures of record data obtained from independent and identically distributed continuous RVs, Ahmadi [
13] offers new characterizations for continuous symmetric distributions based on the cumulative residual (past) entropy, Shannon entropy, Rényi entropy, Tsallis entropy, and the Kerridge inaccuracy measure. Xiong et al. [
14] present the symmetric property of the extropy of record values and provide characterizations of exponential distributions. They also propose a new test for the symmetry of continuous distributions based on a characterization result and investigate a wide range of alternative asymmetric distributions through Monte Carlo simulation. In a recent study, Jose and Sathar [
15] examine the residual extropy of
k-records derived from any continuous distribution, relating it to the residual extropy of
k-records derived from a uniform distribution. They establish lower bounds for the residual extropy of upper and lower
k-records arising from any continuous probability distribution. Furthermore, they discuss the monotone property of the residual extropy for both upper and lower
k-records. Further contributions in this area can be found in the following papers: Paul et al. [
16], Cali et al. [
17], Zamani et al. [
18], Gupta et al. [
19], Paula et al. [
20], Zarezadeh [
21], Baratpour et al. [
22], and Qiu [
23]. These papers, along with their references, provide additional insights and developments related to the topic.
This article is concerned with the study of the residual Tsallis entropy of upper record values obtained from continuous distributions. The uniform distribution is chosen as a benchmark because of its simplicity and the convenience of its density function. It also serves as a versatile tool for modeling other distributions by applying appropriate transformations. Thus, by examining the residual Tsallis entropy of upper record values from the uniform distribution, we can gain insight into the entropy behavior of upper record values from any continuous distribution. Our main insight is that the RTE of upper record values from any continuous distribution can be represented in terms of the RTE of upper record values from the uniform distribution over the interval (0,1), which we denote by U(0,1).
In the present study, we comprehensively investigate the residual Tsallis entropy of upper record values obtained from continuous probability distributions. The following overview outlines the structure of the paper and its main contributions. In Section 2, we first introduce the residual Tsallis entropy of upper record values derived from continuous probability distributions and present a rigorous derivation of the expression for the RTE of upper record values. Moreover, we establish a lower bound for this entropy measure that provides valuable insight into the minimum achievable entropy of upper record values. Furthermore, we investigate how the RTE of record values changes with the aging behavior of their components. Finally, we show the monotonic behavior of the residual Tsallis entropy of the n-th upper record value as a function of n. Section 3 provides an expression for the Tsallis entropy of residual records based on the knowledge that all units have withstood stresses exceeding a level t. Understanding this monotonic property enhances our understanding of entropy dynamics and provides deeper insights into system reliability and performance characteristics. In Section 4, we present a comprehensive summary of the overall results of our study, highlighting the main contributions, implications, and applications of the derived expressions, the lower bound, and the monotonicity properties of the residual Tsallis entropy.
We will use some notation throughout the paper. The order relations “\le_{st}”, “\le_{hr}”, “\le_{lr}”, and “\le_{disp}” represent, respectively, the usual stochastic order, hazard rate order, likelihood ratio order, and dispersive order; for a detailed discussion of these stochastic orders, the reader may be referred to Shaked and Shanthikumar [
24].
2. Residual Tsallis Entropy of Record Values
Let us consider a technical system that is subjected to shocks, such as voltage spikes. The shocks can then be modeled as a sequence of independent and identically distributed (i.i.d.) RVs \{X_i, i \ge 1\}, with a common continuous CDF K, PDF k, and survival function \bar{K} = 1 - K. The shocks represent the stresses on the system at different times. We are interested in the record statistics (the values of the highest stresses observed so far) of this sequence. Let X_{i:n} denote the i-th order statistic from the first n observations. Then, we define the sequences of upper record times \{T_n, n \ge 1\} and upper record values \{U_n, n \ge 1\}, respectively, as follows:

T_{n+1} = \min\{j > T_n :\ X_j > X_{T_n}\}, \qquad U_n = X_{T_n}, \quad n \ge 1,

where T_1 = 1.
It is well known that the PDF and the survival function of U_n, denoted by f_n and \bar{F}_n, respectively, are given by

f_n(x) = \frac{[\Lambda(x)]^{n-1}}{(n-1)!}\,k(x), \quad x > 0,

and

\bar{F}_n(x) = \bar{K}(x)\sum_{j=0}^{n-1}\frac{[\Lambda(x)]^{j}}{j!} = \frac{\Gamma(n,\Lambda(x))}{\Gamma(n)},

where \Lambda(x) = -\log\bar{K}(x) and

\Gamma(a,x) = \int_x^{\infty} u^{a-1}e^{-u}\,du

is known as the (upper) incomplete gamma function (see, e.g., [25]). We use the notation V \sim \Gamma_t(a,b) to indicate that the RV V has a truncated gamma distribution with the following PDF:

f_V(v) = \frac{b^{a}v^{a-1}e^{-bv}}{\Gamma(a,bt)}, \quad v > t,

where a > 0 and b > 0.
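The incomplete-gamma form of the record survival function is easy to check numerically. The sketch below assumes a standard exponential parent, so that \Lambda(x) = x and the n-th upper record equals a sum of n independent unit exponentials (a classical representation), and compares a Monte Carlo estimate with the series form of \Gamma(n,z)/\Gamma(n) for integer n:

```python
import math
import random

def record_survival(n, z):
    """Survival of the n-th upper record at Lambda(t) = z: for integer n,
    Gamma(n, z)/Gamma(n) = exp(-z) * sum_{j<n} z^j / j!."""
    return math.exp(-z) * sum(z ** j / math.factorial(j) for j in range(n))

# For a standard exponential parent, Lambda(x) = x and the n-th upper
# record is distributed as the sum of n i.i.d. Exp(1) variables.
random.seed(7)
n, x, trials = 3, 2.5, 200_000
hits = sum(
    1 for _ in range(trials)
    if sum(random.expovariate(1.0) for _ in range(n)) > x
)
empirical = hits / trials
print(round(empirical, 3), round(record_survival(n, x), 3))
```

The two printed numbers agree to Monte Carlo accuracy.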
In the remainder of this section, we concentrate on examining the residual Tsallis entropy of the RV U_n, regarded as a measure of the uncertainty induced by the density of [U_n - t \mid U_n > t] concerning the system’s residual lifetime and its predictability. To facilitate the computations, we first introduce a lemma that expresses the RTE of record values from a uniform distribution in terms of the incomplete gamma function. This relationship is crucial from a practical point of view and allows for a more convenient computation of the RTE. The proof of this lemma is omitted here since it involves simple calculations.
Lemma 1. Let \{X_i, i \ge 1\} be a sequence of i.i.d. RVs from the uniform distribution on (0,1). Moreover, let U_n^{\star} denote the n-th upper record value of the sequence. Then,

T_\alpha(U_n^{\star};t) = \frac{1}{\alpha-1}\left(1 - \frac{\Gamma(\alpha(n-1)+1,\Lambda_t)}{[\Gamma(n,\Lambda_t)]^{\alpha}}\right), \quad \Lambda_t = -\log(1-t),

for all 0 < t < 1 and \alpha > 0, \alpha \neq 1. By leveraging this lemma, researchers and practitioners can readily compute the RTE of record values from a uniform distribution using the well-known incomplete gamma function. This computational simplification enhances the applicability and usability of the RTE in various contexts. In Figure 1, we present the plot of T_\alpha(U_n^{\star};t) for several values of n, \alpha, and t. The upcoming theorem establishes a relationship between the RTE of record values from an arbitrary continuous distribution and the RTE of record values from a uniform distribution.
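A short numerical sketch of how an incomplete-gamma expression of this kind arises: for a U(0,1) parent, the substitution u = -\log(1-x) turns the tail integral of the record density \Lambda(x)^{n-1}/(n-1)! into an upper incomplete gamma integral. The check below is our own derivation with illustrative parameter values; it integrates both forms numerically over matching ranges:

```python
import math

def trapezoid(fn, a, b, n=200_000):
    h = (b - a) / n
    ys = [fn(a + i * h) for i in range(n + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

n_rec, alpha, t, eps = 2, 1.5, 0.4, 1e-4
lam_t = -math.log(1.0 - t)          # Lambda(t) = -log(1 - t)
u_max = -math.log(eps)              # common truncation point

# Density of the n-th upper record from U(0,1): Lambda(x)^(n-1)/(n-1)!
f_n = lambda x: (-math.log(1.0 - x)) ** (n_rec - 1) / math.factorial(n_rec - 1)

# Record survival at t (regularised incomplete gamma, integer n)
surv = math.exp(-lam_t) * sum(lam_t ** j / math.factorial(j) for j in range(n_rec))

# (a) tail integral of f_n^alpha in the x variable
ix = trapezoid(lambda x: f_n(x) ** alpha, t, 1.0 - eps)

# (b) the same quantity after u = -log(1-x): an upper incomplete gamma integral
iu = trapezoid(lambda u: u ** (alpha * (n_rec - 1)) * math.exp(-u), lam_t, u_max)
iu /= math.factorial(n_rec - 1) ** alpha

rte = (1.0 - ix / surv ** alpha) / (alpha - 1.0)
print(round(abs(ix - iu), 7), round(rte, 3))
```

The two integrals agree to quadrature accuracy; note that the resulting RTE here is negative, which the Tsallis entropy permits.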
Theorem 1. Let \{X_i, i \ge 1\} be a sequence of i.i.d. RVs with CDF F and PDF f. Let U_n denote the n-th upper record value of the sequence. Then, the residual Tsallis entropy of U_n, for all t > 0 and \alpha > 0, \alpha \neq 1, is formulated as below:

T_\alpha(U_n;t) = \frac{1}{\alpha-1}\left(1 - \frac{\Gamma(\alpha(n-1)+1,\Lambda_t)}{[\Gamma(n,\Lambda_t)]^{\alpha}}\,E\Big[f^{\alpha-1}\big(F^{-1}(1-e^{-V})\big)\Big]\right),

where \Lambda_t = -\log\bar{F}(t) and V \sim \Gamma_{\Lambda_t}(\alpha(n-1)+1,1). Proof. By employing the transformation u = -\log\bar{F}(x), we can utilize Equations (2), (4), and (5) to derive the following expression:

\int_t^{\infty} f_n^{\alpha}(x)\,dx = \frac{1}{[\Gamma(n)]^{\alpha}}\int_{\Lambda_t}^{\infty} u^{\alpha(n-1)}e^{-u}\,f^{\alpha-1}\big(F^{-1}(1-e^{-u})\big)\,du,

where the last identity is acquired by applying Lemma 1 after normalizing by \bar{F}_n^{\alpha}(t). Hence, the proof is completed. □
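To illustrate the decomposition, the sketch below takes a standard exponential parent, for which f(F^{-1}(1-e^{-v})) = e^{-v}, and compares the directly integrated residual Tsallis entropy of the record with the product form (an incomplete-gamma factor times a truncated-gamma expectation). All parameter values are illustrative, and the formulas follow our own reading of the decomposition:

```python
import math

def trapezoid(fn, a, b, n=200_000):
    h = (b - a) / n
    ys = [fn(a + i * h) for i in range(n + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

n_rec, alpha, t, upper = 3, 2.0, 1.0, 40.0
a_par = alpha * (n_rec - 1) + 1          # shape of the truncated gamma

# Standard exponential parent: Lambda(x) = x, record density
# f_n(x) = x^(n-1) e^(-x) / (n-1)!
f_n = lambda x: x ** (n_rec - 1) * math.exp(-x) / math.factorial(n_rec - 1)
surv = math.exp(-t) * sum(t ** j / math.factorial(j) for j in range(n_rec))

# (a) direct: residual Tsallis entropy of the record by tail integration
direct = (1.0 - trapezoid(lambda x: f_n(x) ** alpha, t, upper) / surv ** alpha) \
         / (alpha - 1.0)

# (b) decomposition: incomplete-gamma factor times a truncated-gamma
# expectation of f^(alpha-1)(F^{-1}(1 - e^{-v})) = e^{-(alpha-1) v}
gam_a = trapezoid(lambda v: v ** (a_par - 1) * math.exp(-v), t, upper)
expec = trapezoid(lambda v: v ** (a_par - 1) * math.exp(-v)
                  * math.exp(-(alpha - 1.0) * v), t, upper) / gam_a
factor = gam_a / (math.factorial(n_rec - 1) ** alpha)
decomposed = (1.0 - factor * expec / surv ** alpha) / (alpha - 1.0)

print(round(direct, 6), round(decomposed, 6))
```

Both routes give the same value, which can also be confirmed in closed form via \Gamma(5,2) for these parameters.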
Our analysis reveals a significant decomposition of the residual Tsallis entropy of the upper record values. In particular, we have shown that this entropy measure can be expressed in terms of two key components: the residual Tsallis entropy of the upper records from the uniform distribution and the expectation of a truncated gamma RV. From Equation (8), we can also see that the residual Tsallis entropy of the n-th upper record value from an arbitrary continuous distribution F can be expressed through the residual Tsallis entropy of the n-th upper record value from the uniform distribution. The specialized version of this result for the Shannon entropy case (\alpha \to 1) was already obtained by Baratpour et al. [11].
Next, we examine how the residual Tsallis entropy of record values changes with the aging properties of the underlying distribution. The aging property of X affects the behavior of its residual Tsallis entropy of order \alpha. The forthcoming theorem is needed for our aim. We recall that X has the increasing (decreasing) failure rate (IFR (DFR)) property if its hazard rate function \lambda(x) = f(x)/\bar{F}(x) is an increasing (decreasing) function of x.
Theorem 2. If X is IFR (DFR), then T_\alpha(X;t) decreases (increases) as t increases.
Proof. We focus on the case where X is IFR; the case where X is DFR is similar. By the change of variable u = \bar{F}(x)/\bar{F}(t), Equation (3) can be expressed as

T_\alpha(X;t) = \frac{1}{\alpha-1}\left(1 - \int_0^1 u^{\alpha-1}\,\lambda^{\alpha-1}\big(\bar{F}^{-1}(u\bar{F}(t))\big)\,du\right)

for all t > 0, where \lambda(x) = f(x)/\bar{F}(x) denotes the hazard rate function of X. We can easily verify that \bar{F}^{-1}(u\bar{F}(t)) is increasing in t for each fixed 0 < u < 1. Therefore, when X is IFR, we obtain the following inequality for all t_1 \le t_2:

\lambda\big(\bar{F}^{-1}(u\bar{F}(t_1))\big) \le \lambda\big(\bar{F}^{-1}(u\bar{F}(t_2))\big), \quad 0 < u < 1.

If \alpha > 1, then z \mapsto z^{\alpha-1} is increasing, while for 0 < \alpha < 1 it is decreasing. By utilizing the expression (10), we can derive the following inequality:

\frac{1}{\alpha-1}\int_0^1 u^{\alpha-1}\lambda^{\alpha-1}\big(\bar{F}^{-1}(u\bar{F}(t_1))\big)\,du \le \frac{1}{\alpha-1}\int_0^1 u^{\alpha-1}\lambda^{\alpha-1}\big(\bar{F}^{-1}(u\bar{F}(t_2))\big)\,du.

This inequality holds true for all values of \alpha satisfying 0 < \alpha < 1 or \alpha > 1. Consequently, we can conclude that T_\alpha(X;t_1) \ge T_\alpha(X;t_2) for all t_1 \le t_2. □
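Both directions of Theorem 2 can be illustrated numerically. The sketch below uses the standard residual Tsallis form and two illustrative parents: a Weibull with shape 2 (IFR, hazard 2x) and a Pareto type II with shape 2 (DFR, hazard 2/(1+x)):

```python
import math

def residual_tsallis(pdf, sf, alpha, t, upper, n=100_000):
    """T_alpha(X; t) = (1 - int_t^inf (f(x)/S(t))^alpha dx) / (alpha - 1)."""
    h = (upper - t) / n
    s_t = sf(t)
    ys = [(pdf(t + i * h) / s_t) ** alpha for i in range(n + 1)]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return (1.0 - integral) / (alpha - 1.0)

# Weibull with shape 2 (increasing failure rate 2x) versus
# Pareto II with shape 2 (decreasing failure rate 2/(1+x)).
w_pdf = lambda x: 2.0 * x * math.exp(-x * x)
w_sf = lambda x: math.exp(-x * x)
p_pdf = lambda x: 2.0 * (1.0 + x) ** (-3.0)
p_sf = lambda x: (1.0 + x) ** (-2.0)

alpha, ts = 1.5, [0.2, 0.6, 1.0, 1.4]
w_vals = [residual_tsallis(w_pdf, w_sf, alpha, t, t + 12.0) for t in ts]
p_vals = [residual_tsallis(p_pdf, p_sf, alpha, t, t + 4000.0) for t in ts]
print([round(v, 4) for v in w_vals])  # decreasing in t (IFR case)
print([round(v, 4) for v in p_vals])  # increasing in t (DFR case)
```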
Now, we prove that the IFR property of X affects the behavior of the residual Tsallis entropy of record values.
Theorem 3. If X is IFR, then T_\alpha(U_n;t) is decreasing in t for all \alpha > 0, \alpha \neq 1.
Proof. The IFR property of X implies that U_n is also IFR, according to Corollary 1 of Gupta and Kirmani [26]. Therefore, the proof follows from Theorem 2. □
We demonstrate how to use Theorems 1 and 3 with an example.
Example 1. We consider a sequence of i.i.d. RVs \{X_i, i \ge 1\} that follow a common Weibull distribution with CDF F(x) = 1 - e^{-x^{\beta}}, x > 0, \beta > 0. The inverse CDF of X is F^{-1}(u) = (-\log(1-u))^{1/\beta}, 0 < u < 1, and hence f\big(F^{-1}(1-e^{-v})\big) = \beta v^{1-1/\beta}e^{-v}, v > 0. Therefore, using (8), we obtain T_\alpha(U_n;t) in closed form in terms of incomplete gamma functions. We show the plots of T_\alpha(U_n;t) for different values of n, \alpha, and \beta in Figure 2. The plots confirm the result of Theorem 3, which states that the residual Tsallis entropy decreases in t when X is IFR (here, \beta \ge 1).
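The qualitative behavior shown in Figure 2 can be reproduced with a small numeric sketch. It assumes the standard record density \Lambda(x)^{n-1}\Lambda'(x)\bar{F}(x)/(n-1)! and a Weibull parent of shape 2 (an IFR case); the parameter values are illustrative:

```python
import math

def record_rte(lam, lam_deriv, sf, n_rec, alpha, t, upper, steps=100_000):
    """Residual Tsallis entropy of the n-th upper record for a parent with
    cumulative hazard Lambda(x): density Lambda^(n-1) * Lambda' * S / (n-1)!."""
    fac = math.factorial(n_rec - 1)
    f_n = lambda x: lam(x) ** (n_rec - 1) * lam_deriv(x) * sf(x) / fac
    surv = math.exp(-lam(t)) * sum(lam(t) ** j / math.factorial(j)
                                   for j in range(n_rec))
    h = (upper - t) / steps
    ys = [(f_n(t + i * h) / surv) ** alpha for i in range(steps + 1)]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return (1.0 - integral) / (alpha - 1.0)

# Weibull parent with shape 2: S(x) = exp(-x^2), Lambda(x) = x^2 (IFR).
lam = lambda x: x * x
lam_d = lambda x: 2.0 * x
sf = lambda x: math.exp(-x * x)

n_rec, alpha = 2, 1.5
ts = [0.5, 1.0, 1.5, 2.0]
vals = [record_rte(lam, lam_d, sf, n_rec, alpha, t, t + 10.0) for t in ts]
print([round(v, 4) for v in vals])  # decreasing in t, as Theorem 3 predicts
```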
Now, we present a theorem that establishes a lower bound for the residual Tsallis entropy of upper records from any continuous distribution. The lower bound is influenced by two key factors: the residual Tsallis entropy of upper records from the uniform distribution on the interval (0,1) and the mode of the original distribution.
Theorem 4. Given the conditions outlined in Theorem 1, let us assume that f(x) \le f(m) for all x in the support of X, where m represents the mode of the PDF f. Under this assumption, we can derive the following result for any \alpha > 0, \alpha \neq 1:

T_\alpha(U_n;t) \ge \frac{1-f^{\alpha-1}(m)}{\alpha-1} + f^{\alpha-1}(m)\,T_\alpha\big(U_n^{\star};F(t)\big),

where U_n^{\star} denotes the n-th upper record value from the uniform distribution on (0,1). Proof. Since f(x) \le f(m) for all x, it holds that f^{\alpha-1}(x) \le f^{\alpha-1}(m) for \alpha > 1 and f^{\alpha-1}(x) \ge f^{\alpha-1}(m) for 0 < \alpha < 1, so one can write

E\Big[f^{\alpha-1}\big(F^{-1}(1-e^{-V})\big)\Big] \le (\ge)\ f^{\alpha-1}(m).

The result is now easily obtained from relation (8), and this completes the proof. □
Remark 1. It is essential to emphasize that the equality stated in Theorem 4 may not hold universally, as there is typically no distribution in which f(x) = f(m) for all x within the support of X. Nevertheless, the bound established in Theorem 4 proves to be immensely valuable, as it offers significant utility in cases where the computation of the mode for various distributions is relatively simple.
We have presented a theorem that establishes a lower bound on the RTE of U_n. This lower bound depends on the RTE of record values from a uniform distribution and on the mode m of the PDF of the original distribution. This result provides interesting insights into the information properties of record values and yields a computable lower bound for the RTE in terms of the mode of the distribution. In Table 1, we show the lower bounds on the RTE of the record values for some common distributions based on Theorem 4.
In the following theorem, we show the monotonic behavior of the residual Tsallis entropy of the n-th upper record value with respect to n. First, we need the following lemma.
Lemma 2. Let \{X_i, i \ge 1\} be a sequence of i.i.d. RVs with CDF F and PDF f, and let U_n denote the n-th upper record value of the sequence. Then, for all
Proof. We can introduce the RVs
and
with PDFs
and
as follows:
Assuming that
is differentiable in
n, we obtain
where
We can easily see that for
where
We can observe that for
we have
Therefore, since
is an increasing function of
z, we have
by applying Theorem 1.A.3. of [
24]. This implies that (
17) is positive (negative), and hence,
is a decreasing function of
n. □
Theorem 5. Let \{X_i, i \ge 1\} be a sequence of i.i.d. RVs with CDF F and PDF f, and let U_n denote the n-th upper record value of the sequence. If is decreasing in then is decreasing in
Proof. If we assume that
then we can prove that
, and hence,
. Also, for
then
is an increasing (decreasing) function of
x; therefore, we have
So, using relation (
8), for
we obtain
The first inequality comes from the fact that is non-negative. The last inequality comes from Lemma 2. Therefore, we can conclude that for any □
3. Conditional Tsallis Entropy of Record Values
Hereafter, we are interested in evaluating the residual records, based on the knowledge that all units have withstood voltages exceeding a level t.
It follows that the survival function of the
can be written as (see [
27])
and hence, we have
In what follows, we focus on studying the Tsallis entropy of the RV
that measures the amount of uncertainty contained in the density of
about the predictability of the system’s residual lifetime in terms of the Tsallis entropy. The probability integral transformation
plays a crucial role in our aim. It is clear that
has the PDF given by
In the forthcoming theorem, we provide an expression for the Tsallis entropy of the residual record by using the aforementioned transformation.
Theorem 6. Let \{X_i, i \ge 1\} be a sequence of i.i.d. RVs with CDF F and PDF f. The Tsallis entropy of the residual record can be expressed as follows for all \alpha > 0, \alpha \neq 1: Proof. By using the change of variable
from (
2) and (
20) we obtain
In the last equality,
is the PDF of
V given in (
21) and this completes the proof. □
In the next theorem, we investigate how the residual Tsallis entropy of record values changes with respect to the aging properties of their components.
Theorem 7. If X is IFR (DFR), then the Tsallis entropy of the residual record is decreasing (increasing) in t for all \alpha > 0, \alpha \neq 1.
Proof. By using arguments similar to those in the proof of Theorem 2, when
X is IFR, then for all
we have
for all
By utilizing Equation (
22), we can establish the following inequality:
This inequality holds true for all values of satisfying or . As a result, we can conclude that for all . □
In the following example, we provide an illustration of the results presented in Theorems 6 and 7.
Example 2. Suppose we have a sequence of i.i.d. RVs \{X_i, i \ge 1\} with a common Pareto type II distribution, whose survival function is \bar{F}(x) = (1+x)^{-\theta}, x > 0, \theta > 0. Using this, we can calculate the Tsallis entropy of the residual record explicitly.
The implication of this result is that the Tsallis entropy of exhibits a positive correlation with time t, indicating that it increases as t increases. Consequently, the uncertainty associated with the conditional lifetime also increases with the passage of time. It is important to highlight that the distribution under consideration in this context possesses the property of DFR.
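Example 2's qualitative conclusion, that the uncertainty grows with t for this DFR model, can be checked numerically in the simplest case n = 1, where the residual record reduces to the residual lifetime. The closed-form tail integral below is our own derivation for the standard residual Tsallis entropy of a Pareto type II (Lomax) lifetime with scale 1; the parameter values are illustrative:

```python
import math

# Pareto type II parent: S(x) = (1+x)^(-theta), f(x) = theta (1+x)^(-theta-1).
theta, alpha = 2.0, 1.5

def rte_closed(t):
    """Closed form of int_t^inf (f(x)/S(t))^alpha dx, namely
    theta^alpha (1+t)^(1-alpha) / (alpha (theta+1) - 1), then the RTE."""
    i_t = theta ** alpha * (1.0 + t) ** (1.0 - alpha) / (alpha * (theta + 1.0) - 1.0)
    return (1.0 - i_t) / (alpha - 1.0)

def rte_numeric(t, upper=5000.0, steps=200_000):
    """Same quantity by trapezoid quadrature, as an independent check."""
    s_t = (1.0 + t) ** (-theta)
    h = (upper - t) / steps
    ys = [(theta * (1.0 + t + i * h) ** (-theta - 1.0) / s_t) ** alpha
          for i in range(steps + 1)]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return (1.0 - integral) / (alpha - 1.0)

ts = [0.0, 1.0, 3.0, 6.0]
closed = [rte_closed(t) for t in ts]
numeric = [rte_numeric(t) for t in ts]
print([round(v, 4) for v in closed])   # increasing in t (DFR parent)
```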
Theorem 8. If X is IFR (DFR), then the Tsallis entropy of the residual record is bounded above (below) by its value at t = 0 for all t \ge 0. The equality holds when the component lifetimes are exponentially distributed.
Proof. We can use Theorem 7 to show that if X has the IFR (DFR) property, then the Tsallis entropy of the residual record decreases (increases) with time t for any \alpha. This implies that it is always smaller (larger) than or equal to its value at t = 0, which finishes the proof. It should be noted that the memoryless property of the exponential distribution leads to the equality; consequently, Theorem 7 establishes the equality for all t. □
Theorem 9. If X is DFR, then a lower bound for the Tsallis entropy of the residual record is given as follows for all \alpha > 0, \alpha \neq 1. The equality holds when the component lifetimes are exponentially distributed. Proof. We know that X has the DFR property, which means it is also NWU, that is, \bar{F}(x+t) \ge \bar{F}(x)\bar{F}(t) for all x, t \ge 0. Moreover, it is a fact that when X has the DFR property, the PDF f is decreasing, which means that f(x+t) \le f(x) for all x, t \ge 0.
Using (
22), we can infer that
for all
and this completes the proof. It is worth noting that the exponential distribution exhibits not only the memoryless property but also the desirable DFR characteristic, thus qualifying as NWU. Furthermore, with the exponential distribution satisfying the equality
, Theorem 7 establishes that
for all
. □
In Theorems 8 and 9, we have demonstrated that the aforementioned equality holds true in the case of exponentially distributed component lifetimes. This highlights the significance of the exponential distribution in the context of record values, information theory, and reliability.
Considering the uncertainties of two records, we now discuss the partial ordering of residual records based on the knowledge that all units have withstood voltages exceeding a level t. The next theorem compares the residual Tsallis entropies of two records.
Theorem 10. Let and denote two residual records having n i.i.d. component lifetimes from CDFs F and G, respectively. If X \le_{disp} Y and X or Y is IFR, then the corresponding residual Tsallis entropies are ordered for all
Proof. Using the relation (
22), we only need to show that
. Since we assume that
and either
X or
Y has IFR, we can apply the proof of Theorem 5 of [
28] to prove that
. This completes the proof. □
Example 3. Consider two residual records which consist of n i.i.d. component lifetimes drawn from CDFs F and G, respectively. Let us assume that X follows a Weibull distribution with scale parameter 1 and that Y follows a Weibull distribution with scale parameter 1 and a different shape parameter. It can be observed that X is smaller than Y in the dispersive order (X \le_{disp} Y), so that Y is more dispersed than X. Furthermore, both X and Y exhibit the IFR property. So, Theorem 10 yields the ordering of the residual Tsallis entropies for all \alpha > 0, \alpha \neq 1.