Different Kinds of Risk
Paul Embrechts, ETH Zurich, Department of Mathematics, 8092 Zurich, Switzerland, embrechts@math.ethz.ch
Hansjörg Furrer, Swiss Life, General-Guisan-Quai 40, 8022 Zurich, Switzerland, Hansjoerg.Furrer@swisslife.ch
Roger Kaufmann, Swiss Life, General-Guisan-Quai 40, 8022 Zurich, Switzerland, Roger.Kaufmann@swisslife.ch
Summary. Over the last twenty years, the financial industry has developed numerous tools for the quantitative measurement of risk. The need for this was mainly due to changing market conditions and regulatory guidelines. In this article we review these processes and summarize the most important risk categories considered.
1 Introduction
Tumbling equity markets, falling real interest rates, an unprecedented increase in longevity, inappropriate reserving, and wrong management decisions were among the driving forces that put the financial stability of so many (insurance) companies at risk over the recent past. Senior management, risk managers, actuaries, accounting conventions, regulatory authorities: all played their part. With the solvency of many companies put at stake, political intervention led to the revision of the existing regulatory frameworks.

For both the insurance and the banking industry, the aim is to create prudential supervisory frameworks that focus on the true risks being taken by a company. In the banking regime, these principles were set out by the Basel Committee on Banking Supervision (the Committee) and culminated in the so-called Basel II Accord, see [BII]. Initially, and under the original 1988 Basel I Accord, the focus had been on techniques to manage and measure market and credit risk. Market risk is the risk that the value of the investments will change due to moves in the market risk factors. Typical market risk factors are stock prices or real estate indices, interest rates, foreign exchange rates, and commodity prices. Credit risk, in essence, is the risk of loss due to a counter-party defaulting on a contract. Typically, this applies to bonds where the bond holders are concerned that the counter-party may default on the payments (coupon or principal). The goal of the new Basel II Accord was to overturn the imbalances that prevailed in the original 1988 accord. Concomitant with the arrival of Basel II and its more risk-sensitive capital requirements for market and
credit risk, the Committee introduced a new risk category aiming at capturing risks other than market and credit risks. The introduction of the operational risk category was motivated, among other considerations, by events such as the Barings Bank failure. The Basel Committee defines operational risk as the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events. The Basel II definition includes legal risk, but excludes strategic risk, i.e. the risk of a loss arising from a poor strategic business decision. Furthermore, this definition excludes reputational risk. Examples of operational risk include, among others, technology failure, business premises becoming unavailable, errors in data processing, fraud, etc. The capital requirement of Basel II is that banks must hold capital of at least 8% of total risk-weighted assets. This definition was retained from the original accord.

Insurance regulation too is rapidly moving towards risk-based foundations. Based on the findings of the Müller Report [Mul97], it was recognized that a fundamental review of the assessment of the overall financial position of an insurance company should be done, including for example the interactions between assets and liabilities, accounting systems and the methods to calculate the solvency margins. In 2001, the European Commission launched the so-called Solvency II project. The key objective of Solvency II is to secure the benefits of the policyholders, thereby assessing the company's overall risk profile. A prudential supervisory scheme does not strive for a zero-failure target; in a free market, failures will occur. Rather, prudential supervisory frameworks should be designed in such a way that a smooth run-off of the portfolios is ensured in case of financial distress. Phase 1 of the Solvency II project began in 2001 with the constitution of the so-called London Working Group chaired by Paul Sharma from the FSA (Financial Services Authority). The resulting Sharma Report [Sha02] was published in 2002, and contains a survey of actual failures and near misses from 1996 to 2001. The second phase lasts from 2003 to 2007 and is dedicated to the development of more detailed rules. Finally, the third phase should be terminated by 2010, and is devoted to the implementation of the new standards, also in the national laws.

At the heart of both Basel II and Solvency II lies a three pillar structure. Pillar one defines the minimum financial requirements. The second pillar earmarks the supervisory review process, whereas pillar three sets out the disclosure requirements. The minimum financial requirements relate a company's available capital to its economic capital. Economic capital is the amount of capital that is needed to support retained risks in a loss situation. Associating available capital with economic capital is only meaningful if consistency prevails between valuation and risk measurement. The arrival of a robust marked-to-market culture in the 1990s helps to achieve greater harmonization in this context. The three-pillar structure of both risk based insurance and banking supervisory frameworks indicates that the overall assessment of a financial institution's financial stability goes beyond the determination of capital adequacy
ratios. Nevertheless, the focus in this note will be on the capital requirements, that is on pillar one. More specifically, we address the issue of how to quantify market, credit and insurance risk. We also touch upon the measurement of operational risk. But rather than promoting seemingly sophisticated (actuarial) measurement techniques for quantifying operational risk, we focus on the very special nature of this risk category, implying that standard analytical concepts prove insufficient and also yield counter-intuitive results in terms of diversification.

In hindsight, the inexperienced reader could be tempted to believe that only regulators call for distinctive risk management cultures and cutting-edge economic capital models. Alas, there are many more institutions that keep a beady eye on the companies' risk management departments: analysts, investors, and rating agencies, to name a few, have a growing interest in what is going on on the risk management side. Standard & Poor's, for instance, a rating agency, recently added an Enterprise Risk Management (ERM) criterion when rating insurance companies. The ERM rating is based on five key metrics, among which are the risk and economic capital models of insurance undertakings.

The remainder of this note is organized as follows. In Section 2 we provide the basic prerequisites for quantitative risk management by introducing the notion of risk measures and the concept of risk factor mapping. Special emphasis will be given to two widely used risk measures, namely Value at Risk (VaR) and expected shortfall. Section 3 is devoted to the measurement of credit risk, whereas Section 4 deals with market risk. The problem of how to scale a short term VaR to a longer term VaR will be addressed in Section 4.3. The particularities of operational risk loss data and their implications on the economic capital modeling in connection with VaR will be discussed in Section 5. Section 6 is devoted to the measurement of insurance risk. Both the life and the non-life measurement approaches that will be presented originate from the Swiss Solvency Test. In Section 7 we make some general comments on the aggregation of risks in the realm of economic capital modeling. Attention will be drawn to risks that exhibit special properties such as extreme heavy-tailedness, extreme skewness, or a particular dependence structure.
2 Preliminaries
Risk models typically aim at quantifying the losses of a portfolio over a given time horizon that could be incurred from a variety of risks. Formal risk modeling for instance is required under the new (risk-sensitive) supervisory frameworks in the banking and insurance world (Basel II, Solvency II). In this section, we provide the prerequisites for the modeling of risk by introducing risk measures and the notion of risk factor mapping.
2.1 Risk measures

The central notion in actuarial and financial mathematics is the notion of uncertainty or risk. In this article, uncertainty or risk will always be represented by a random variable, say X or X(t), defined on a filtered probability space (Ω, F, (F_t)_{t∈[0,T]}, P). The filtration (F_t)_t is assumed to satisfy the usual conditions, that is a) (F_t)_t is right-continuous and b) F_0 contains all null sets, i.e. if B ⊆ A ∈ F_0 with P[A] = 0, then B ∈ F_0.

Since risks are modeled as (non-negative) random variables, measuring risk is equivalent to establishing a relation between the set of random variables and R, the real numbers. Put another way, a risk measure is a function ρ mapping a risk X to a real number ρ(X). If for example X defines the loss in a financial portfolio over some time horizon, then ρ(X) can be interpreted as the additional amount of capital that should be set aside so that the portfolio becomes acceptable for a regulator, say. The definition of a risk measure is very general, and yet risk measures should fulfill certain properties to make them good risk measures. For instance, it should always hold that ρ(X) is bounded by the largest possible loss, as modeled by F_X. Within finance, Artzner et al. [ADEH99] pioneered the systematic study of risk measure properties, and defined the class of so-called coherent risk measures to be the ones satisfying the following properties:

(a) Translation invariance: ρ(X + c) = c + ρ(X), for each risk X and constant c > 0.
(b) Positive homogeneity: ρ(cX) = c ρ(X), for all risks X and constants c > 0.
(c) Monotonicity: if X ≤ Y a.s., then ρ(X) ≤ ρ(Y).
(d) Subadditivity: ρ(X + Y) ≤ ρ(X) + ρ(Y).

Subadditivity can be interpreted in the way that a merger should not create extra risk; it reflects the idea that risk in general can be reduced via diversification. Note that there exist several variations of these axioms depending on whether or not losses correspond to positive or negative values, or whether a discount rate over the holding period is taken into account. In our case, we consider losses as positive and neglect interest payments. The random variables X, Y correspond to values of risky positions at the end of the holding period, hence the randomness.

Value at Risk

The most prominent risk measure undoubtedly is Value at Risk (VaR). It refers to the question of how much a portfolio position can fall in value over a certain time period with a given probability. The concept of Value at Risk originates from J.P. Morgan's RiskMetrics published in 1993. Today, VaR is the key concept in the banking industry for determining market risk capital charges. A textbook treatment of VaR and its properties is Jorion [Jor00]. Formally, VaR is defined as follows:
Definition 1. Given a risk X with cumulative distribution function F_X and a probability level α ∈ (0, 1), then

VaR_α(X) = F_X^{-1}(α) = inf{x ∈ R : F_X(x) ≥ α}.
Typical values for α are 0.95, 0.99 or 0.999. The Basel II approach for a market risk charge for example requires a holding period of ten days and a confidence level of α = 0.99. At the trading floor level, individual trading limits are typically set for one day, α = 0.95.

Even though VaR has become the benchmark risk measure in the financial world, it has some deficiencies which we shall address here. First, note that VaR only considers the result at the end of the holding period, hence it neglects what happens with the portfolio value along the way. Moreover, VaR assumes the current positions being fixed over the holding period. In practice, however, positions are changed almost continuously. It is fair to say, however, that these weaknesses are not peculiar to VaR; other one-period risk measures have the same shortcomings. More serious though is the fact that VaR does not measure the potential size of a loss given that the loss exceeds VaR. It is mainly for this reason why VaR is not being used in the Swiss Solvency Test for the determination of the so-called target capital. There, the regulatory capital requirement asks for sufficient capital to be left (on average) in a situation of financial distress in order to ensure a smooth run-off of the portfolio.

The main criticism of VaR, however, is that in general it lacks the property of subadditivity. Care has to be taken when risks are extremely skewed or heavy-tailed, or in case they encounter a special dependency structure. In such circumstances, VaR may not be sub-additive, as the following example with two very heavy-tailed risks shows. The implications for the modeling of economic capital are severe as the concept of diversification breaks down. We come back to this issue later in Sections 5 and 7 when we talk about operational risk losses and their aggregation.

Example 1. Let X1 and X2 be two independent random variables with common distribution function F_X(x) = 1 − 1/√x for x ≥ 1. Observe that the risks X1, X2 have infinite mean, and thus are very heavy-tailed. Furthermore, one easily shows that VaR_α(X) = (1 − α)^{-2}. A straightforward calculation then yields, for x > 2,

F_{X1+X2}(x) = P[X1 + X2 ≤ x] = ∫_1^{x−1} F_X(x − y) dF_X(y) < 1 − √(2/x) = F_{2X}(x),

where F_{2X}(u) = P[2X1 ≤ u] for u ≥ 2. From this, we then conclude that VaR_α(X1 + X2) > VaR_α(2X1). Since

VaR_α(2X1) = VaR_α(X1) + VaR_α(X1),

it follows that VaR_α(X1 + X2) > VaR_α(X1) + VaR_α(X2), hence demonstrating that VaR is not sub-additive in this case. Note that a change in the risk measure from VaR to expected shortfall (see Definition 2 below), say, is no reasonable way out in this case, the problem being that expected shortfall is infinite in an infinite mean model.

We conclude this section by showing that VaR is sub-additive for normally distributed risks. In fact, one can show that VaR is sub-additive for the wider class of linear combinations of the components of a multivariate elliptical distribution, see for instance McNeil et al. [MFE05], Theorem 6.8.

Example 2. Let X1, X2 be jointly normally distributed with mean vector μ = (μ1, μ2)' and covariance matrix

Σ = ( σ1²       ρ σ1 σ2
      ρ σ1 σ2   σ2²     ),

where −1 ≤ ρ ≤ 1 and σi > 0, i = 1, 2. Let 0.5 ≤ α < 1; then

VaR_α(X1 + X2) ≤ VaR_α(X1) + VaR_α(X2).   (1)

The main observation here is that, since (X1, X2) is bivariate normally distributed, X1, X2 and X1 + X2 all have univariate normal distributions. Hence it follows that

VaR_α(Xi) = μi + σi q_α(N),   i = 1, 2,
VaR_α(X1 + X2) = μ1 + μ2 + √(σ1² + 2ρσ1σ2 + σ2²) q_α(N).

Here q_α(N) denotes the α-quantile of a standard normally distributed random variable. The assertion in (1) now follows because q_α(N) ≥ 0 (since 0.5 ≤ α < 1) and (σ1² + 2ρσ1σ2 + σ2²)^{1/2} ≤ σ1 + σ2 (since ρ ≤ 1).

Expected shortfall

As mentioned earlier, for a given level α, VaR does not give information on the loss sizes beyond this quantile. To circumvent this problem, Artzner et al. [ADEH99] considered the notion of expected shortfall or conditional tail expectation instead.

Definition 2. Let X be a risk and α ∈ (0, 1). The expected shortfall or conditional tail expectation is defined as the conditional expected loss given that the loss exceeds VaR_α(X):

ES_α(X) = E[X | X > VaR_α(X)].
Intuitively, ES_α(X) represents the average loss in the worst 100(1 − α)% cases. This representation is made more precise by observing that for a continuous random variable X one has

ES_α(X) = (1/(1 − α)) ∫_α^1 VaR_u(X) du,   0 < α < 1.
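The following Python sketch (not part of the original text) illustrates the two risk measures on simulated data: empirical VaR and expected shortfall for a light-tailed normal sample, and a re-run of Example 1, where pooling two independent losses with distribution function F_X(x) = 1 − 1/√x increases rather than decreases VaR. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def var(losses, alpha):
    """Empirical Value at Risk: the alpha-quantile of a sample of losses."""
    return np.quantile(losses, alpha)

def expected_shortfall(losses, alpha):
    """Empirical expected shortfall: average loss beyond VaR_alpha."""
    v = var(losses, alpha)
    return losses[losses > v].mean()

alpha = 0.99

# Light-tailed example: N(0, 1) losses, where both measures are finite.
z = rng.standard_normal(1_000_000)
print(var(z, alpha), expected_shortfall(z, alpha))   # approx 2.33 and 2.67

# Example 1 revisited: F_X(x) = 1 - 1/sqrt(x), x >= 1, simulated by inversion,
# F_X^{-1}(u) = (1 - u)^{-2}.  Each stand-alone VaR_0.99 is about 10,000 ...
x1 = (1.0 - rng.uniform(size=1_000_000)) ** -2
x2 = (1.0 - rng.uniform(size=1_000_000)) ** -2
print(var(x1, alpha) + var(x2, alpha))   # sum of stand-alone VaRs, approx 20,000
print(var(x1 + x2, alpha))               # ... while VaR of the pooled position is markedly larger
```

The last two printed figures make the lack of subadditivity visible: the pooled position requires more capital than the two positions held separately.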
For continuous risks X, expected shortfall as defined in Definition 2 is a coherent risk measure, see Artzner et al. [ADEH99]. For risks which are not continuous, a slight modification of Definition 2 leads to a coherent, i.e. subadditive, risk measure; see McNeil et al. [MFE05], Section 2.2.4.

2.2 Risk factor mapping and loss portfolios

Denote the value of a portfolio at time t by V(t). The loss of the portfolio over the period [t, t + h] is given by

L_{[t,t+h]} = −(V(t + h) − V(t)).

Note our convention to quote losses as positive values. Following standard risk management practice, the portfolio value is modeled as a function of a d-dimensional random vector Z(t) = (Z1(t), Z2(t), ..., Zd(t))' of risk factors Zi. Hence,

V(t) = V(t; Z(t)).   (2)

The representation (2) is known as mapping of risks. Representing a financial institution's portfolio as a function of underlying market-risky instruments constitutes a crucial step in any reasonable risk management system. Indeed, any potential risk factor not included in this mapping will leave a blind spot on the resulting risk map.

It is convenient to introduce the vector X_{[t,t+h]} = Z(t + h) − Z(t) of risk factor changes for the portfolio loss L_{[t,t+h]}. The loss can be approximated by L^Δ_{[t,t+h]}, where

L^Δ_{[t,t+h]} = −(∇V)' X_{[t,t+h]},   (3)

provided the function V: R^d → R is differentiable. Here, ∇f denotes the vector of partial derivatives ∇f = (∂f/∂z1, ..., ∂f/∂zd)'. Observe that in (3) we suppressed the explicit time dependency of V. The approximation (3) is convenient as it allows one to represent the portfolio loss as a linear function of the risk factor changes, see also Section 4.1. The linearity assumption can be viewed as a first-order approximation (Taylor series expansion of order one) of the risk factor mapping. Obviously, the smaller the risk factor changes, the better the quality of the approximation.
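As an illustration of the mapping (2) and the delta approximation (3), consider the following minimal Python sketch. The portfolio (two zero-coupon bonds mapped to two yield risk factors), the notionals, maturities and yield moves are all hypothetical; the point is merely to compare the exact loss with its linearized counterpart.

```python
import numpy as np

# Hypothetical portfolio: two zero-coupon bonds mapped to the risk factors
# Z = (y_5, y_10), the 5- and 10-year continuously compounded yields.
notional = np.array([100.0, 100.0])
maturity = np.array([5.0, 10.0])

def portfolio_value(z):
    """V(t; Z(t)) -- the risk factor mapping of equation (2)."""
    return np.sum(notional * np.exp(-maturity * z))

z0 = np.array([0.02, 0.025])     # current yields
x = np.array([0.003, -0.001])    # risk factor changes over [t, t+h]

# Exact loss (losses quoted as positive values) ...
loss_exact = -(portfolio_value(z0 + x) - portfolio_value(z0))

# ... versus the first-order (delta) approximation of equation (3):
grad = -maturity * notional * np.exp(-maturity * z0)   # partial derivatives dV/dz_k
loss_linear = -grad @ x

print(loss_exact, loss_linear)   # the two figures should be close for small moves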
3 Credit risk
Credit risk is the risk of default or change in the credit quality of issuers of securities to whom a company has an exposure. More precisely, default risk is the risk of loss due to a counter-party defaulting on a contract. Traditionally, this applies to bonds where debt holders are concerned that the counter-party might default. Rating migration risk is the risk resulting from changes in future default probabilities. For the modeling of credit risk, the following elements are therefore crucial:

- default probabilities: the probability that the debtor will default on its obligations to repay its debt;
- recovery rate: the proportion of the debt's par value that the creditor would receive on a defaulted credit;
- transition probabilities: the probability of moving from one credit quality to another within a given time horizon.
In essence, there are two main approaches for the modeling of credit risk, so-called structural models and reduced form or intensity based methods.

3.1 Structural models

Merton [Mer74] proposed a simple capital structure of a firm where the dynamics of the assets are governed by a geometric Brownian motion:

dA(t) = A(t)(μ dt + σ dW(t)),   t ∈ [0, T].
In its simplest form, an obligor's default in a structural model is said to occur if the obligor's asset value A(T) at time T is below a pre-specified deterministic barrier x, say. The default probability can then be calculated explicitly:

P[A(T) ≤ x] = Φ( (log(x/A(0)) − (μ − σ²/2)T) / (σ√T) ).
Here Φ denotes the cumulative distribution function of a standard normal random variable, i.e. Φ(x) = (1/√(2π)) ∫_{−∞}^x exp{−y²/2} dy. Various extensions of Merton's original firm value model exist. For instance, one can let the barrier x be a (random) function of time.

3.2 Reduced form models

In a reduced form pricing framework, it is assumed that the default time is governed by a risk neutral default intensity process λ = {λ(t) : t ≥ 0}. That is, default is defined as the first arrival time (jump time) of a counting process with intensity λ. It can be shown that the conditional probability at time t, given all information at that time, of survival to a future time T, is given by
p(t, T) = E^Q[ exp{ −∫_t^T λ(u) du } | F_t ].   (4)
From (4) one immediately recognizes the analogy between an intensity process λ and a short interest rate process r for the time-t price of a (default free) zero-coupon bond maturing at time T. The latter is given by

P(t, T) = E^Q[ B(t)/B(T) | F_t ] = E^Q[ exp{ −∫_t^T r(u) du } | F_t ],

where B(t) = exp{∫_0^t r(s) ds} denotes the risk free bank account numéraire. As shown by Lando [Lan98], the defaultable bond price at time t (assuming zero recovery) is then given by

P(t, T) = E^Q[ exp{ −∫_t^T (r(u) + λ(u)) du } | F_t ],

provided default has not already occurred by time t. Reduced form models can be extended to allow for non-zero recovery. Duffie and Singleton [DS99] for instance introduce the concept of recovery of market value (RMV), where recovery is expressed as a fraction of the market value of the security just prior to default. Formally, it is assumed that the claim pays (1 − L(t))V(t−), where V(t−) = lim_{s↑t} V(s) is the price of the claim just before default, and L(t) is the random variable describing the fractional loss of market value of the claim at default. Under technical conditions, Duffie and Singleton [DS99] show that

P(t, T) = E^Q[ exp{ −∫_t^T (r(u) + λ(u)L(u)) du } | F_t ].
Here, r(u) + λ(u)L(u) is the default-adjusted short rate.

3.3 Credit risk for regulatory reporting

Compared to the original 1988 Basel accord and its amendments, Basel II better reflects the relative credit qualities of obligors based on their credit ratings. Two approaches are being proposed under Basel II, namely

(A) the standardized approach,
(B) the internal-ratings-based approach.

The standardized approach better recognizes the benefits of credit risk mitigation and also allows for a wider range of acceptable collateral. Under the internal-ratings-based approach, a bank can, subject to the bank's regulator's approval, use its own internal credit ratings. The ratings must correspond to the one-year default probabilities and have to be in place for a minimum of three years. The assessment of credit risk under the Solvency II regime essentially follows the Basel II principles. Within the Swiss Solvency Test for instance, the Basel II standardized approach is being advocated. Portfolio models are acceptable too, provided they capture the credit migration risk. It is for this reason why the CreditRisk+ model for instance would not be permissible within the Swiss Solvency Test, as this model only covers the default risk, but not the credit migration risk.
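To make the two modeling approaches of Sections 3.1 and 3.2 concrete, here is a small Python sketch, under simplifying assumptions (constant short rate, constant deterministic intensity, hypothetical parameter values), of the Merton default probability and of the reduced-form survival probability and zero-recovery bond price.

```python
import numpy as np
from scipy.stats import norm

# Structural (Merton) model: probability that the asset value ends below the barrier x.
def merton_default_prob(A0, x, mu, sigma, T):
    """P[A(T) <= x] for geometric Brownian motion asset dynamics."""
    return norm.cdf((np.log(x / A0) - (mu - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T)))

# Reduced-form model with a constant risk-neutral intensity lam:
# survival probability p(t, T) = exp(-lam * (T - t)), cf. equation (4),
# and a zero-recovery defaultable bond price with constant short rate r.
def survival_prob(lam, t, T):
    return np.exp(-lam * (T - t))

def defaultable_zcb_price(r, lam, t, T):
    return np.exp(-(r + lam) * (T - t))

print(merton_default_prob(A0=100.0, x=70.0, mu=0.05, sigma=0.25, T=1.0))
print(survival_prob(lam=0.02, t=0.0, T=5.0))
print(defaultable_zcb_price(r=0.03, lam=0.02, t=0.0, T=5.0))
```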
4 Market risk
Market risk is the risk that the value of an investment will decrease due to moves in market risk factors. Standard market risk factors are interest rates, stock indices, commodity prices, foreign exchange rates, real estate indices, etc.

4.1 Market risk models

Variance-covariance. The standard analytical approach to estimate VaR or expected shortfall is known as the variance-covariance method. This means that the risk factor changes are assumed to be samples from a multivariate normal distribution, and that the loss is represented as a linear function of the risk factor changes, see Section 2.2 for more details. This approach offers an analytical solution, and it is much faster to calculate the risk measures in a parametric regime than performing a simulation. However, a parametric approach has significant limitations. The assumption of normally distributed risk factor changes may heavily underestimate the severeness of the loss distribution. Moreover, linearization may be a poor approximation of the risk factor mapping.

Historical simulation. The second well-established approach for measuring market risk exposure is the historical simulation method. Instead of estimating the loss distribution under some explicit parametric model for the risk factor changes, one uses the empirical distribution of the historical loss data. VaR and expected shortfall can then either be estimated directly from the simulated data or by first fitting a univariate distribution to the loss data. The main advantage of this approach is its simplicity of implementation. No statistical estimation of the distribution of the risk factor changes is required. In particular, no assumption on the interdependence of the underlying risk factors is made. On the downside, it may be difficult to collect enough historical data of good quality. Also, the observation period is typically not long enough for samples of extreme changes in the portfolio value to be found. Therefore, adding suitable stress scenarios is very important.

Monte Carlo simulation. The idea behind the Monte Carlo method is to estimate the loss distribution under some explicit parametric model for the risk factor changes. To be more precise, one first fits a statistical model to the risk factor changes. Typically, this model is inferred from the observed historical data. Monte Carlo-simulated risk factor changes then allow one to make inferences about the loss distribution and the associated risk measure. This approach is very general, albeit it may require extensive simulation.
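The following Python sketch contrasts the variance-covariance method with a Monte Carlo estimate for a hypothetical linearized portfolio; the sensitivities and the covariance matrix are made-up numbers chosen purely for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical linearized portfolio: loss L = -w' X with X ~ N(0, Sigma).
w = np.array([2.0, -1.0, 0.5])                       # portfolio sensitivities (deltas)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])               # covariance of risk factor changes
alpha = 0.99

# Variance-covariance method: analytical VaR under normality, sigma_L * q_alpha(N).
sigma_L = np.sqrt(w @ Sigma @ w)
var_analytic = sigma_L * norm.ppf(alpha)

# Monte Carlo: simulate risk factor changes and read off the empirical quantile.
X = rng.multivariate_normal(np.zeros(3), Sigma, size=500_000)
losses = -(X @ w)
var_mc = np.quantile(losses, alpha)

print(var_analytic, var_mc)   # the two estimates should agree closely
```

In this purely normal setting the two numbers coincide up to simulation noise; the Monte Carlo route only pays off once non-normal risk factor models or non-linear mappings are used.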
4.2 Conditional versus unconditional modeling

In an unconditional approach, one neglects the evolution of risk factor changes up to the present time. Consequently, tomorrow's risk factor changes are assumed to have the same distribution as yesterday's, and thus the same variance as experienced historically. Such a stationary model corresponds to a long-term view and is often appropriate for insurance risk management purposes. However, empirical analysis often reveals that the volatility σ_t of market risk factor changes X(t), conditionally on their past, fluctuates over time. Sometimes the market is relatively calm, then a crisis happens, and the volatility will suddenly increase. Time series models such as GARCH type models allow the variance σ²_{t+1} to vary through time. They are suited for a short-term perspective. GARCH stands for generalized autoregressive conditional heteroskedasticity, which in essence means that the conditional variance on one day is a function of the conditional variances on the previous days.

4.3 Scaling of market risks

A risk measure's holding period should be related to the liquidity of the assets. If a financial institution runs into difficulties, the holding period should cover the time necessary to raise additional funds for corrective actions. The Basel II VaR approach for market risk for instance requires a holding period of ten days (and a confidence level α = 0.99). The time horizon thus often spans several days, and sometimes even extends to a whole year. While the measurement of short-term financial risks is well established and documented in the financial literature, much less has been done in the realm of long-term risk measurement. The main problem is that long-dated historical data in general is not representative for today's situation and therefore should not be used to make forecasts about future changes in market risk factors. So the risk manager is typically left with little reliable data to make inference of long term risk measures. One possibility to close this gap is to scale a short-term risk estimate to a longer one. The simplest way to do this is to use the square-root-of-time scaling rule, where a k-day Value at Risk VaR^(k) is scaled with √n in order to get an estimate for the nk-day Value at Risk,

VaR^(nk) ≈ √n VaR^(k).

This rule is motivated by the fact that one often considers the logarithm of tradable securities, say S, as risk factors. The return over a 10-day period for example is then expressible as R_{[0,10]} := log(S(10)/S(0)) = X(1) + X(2) + ... + X(10), where X(k) = log(S(k)) − log(S(k−1)) = log(S(k)/S(k−1)), k = 1, 2, ..., 10. Observe that for independent random variables X(i) the standard deviation of R_{[0,10]} equals √10 times the standard deviation of X(1). In this section we analyze under which conditions such scaling is appropriate. We concentrate on unconditional risk estimates and on VaR as risk measure. Recall our convention to quote losses as positive values. Thus the random variable
L(t) will subsequently denote the negative value of the one-day log-return, i.e. L(t) = −log(S(t)/S(t−1)) for some security S.

Scaling under normality

Under the assumption of independent and identically zero-mean normally distributed losses L(t) ∼ N(0, σ²), it follows that the n-day losses are also normally distributed, that is ∑_{t=1}^n L(t) ∼ N(0, nσ²). Recall that for a N(0, σ²)-distributed loss L, VaR is given by VaR_α(L) = σ q_α(N), where q_α(N) denotes the α-quantile of a standard normally distributed variate. Hence the square-root-of-time scaling rule VaR^(n) = √n VaR^(1) works perfectly in this case.

Now let a constant value μ be added to the one-day returns, i.e. μ is subtracted from the one-day loss: L(t) ∼ N(−μ, σ²). Assuming independence among the one-day losses, the n-day losses are again normally distributed, with mean value −nμ and variance nσ², hence ∑_{t=1}^n L(t) ∼ N(−nμ, nσ²). The VaR in this case will be adjusted by the trend of L. This follows from VaR^(n) + nμ = √n (VaR^(1) + μ), or equivalently

VaR^(n) = √n VaR^(1) − (n − √n) μ.

Accounting for trends is important and therefore trends should never be neglected in a financial model. Note that the effect increases linearly with the length n of the time period. To simplify matters, all the models presented below are restricted to the zero-mean case. They can easily be generalized to non-zero mean models, implying that the term (n − √n)μ must be taken into account when estimating and scaling VaR.

Autoregressive models

Next, we consider a stationary autoregressive model of the form L(t) = λ L(t−1) + ε_t, where (ε_t)_{t∈N} is a sequence of iid zero-mean normal random variables with variance σ² and λ ∈ (−1, 1). Not only are the one-day losses normally distributed, but also the n-day losses:

L(t) ∼ N(0, σ²/(1 − λ²)),
∑_{t=1}^n L(t) ∼ N(0, (σ²/(1 − λ)²)(n − 2λ(1 − λ^n)/(1 − λ²))).

Consequently,

VaR^(n) = √( ((1 + λ)/(1 − λ)) (n − 2λ(1 − λ^n)/(1 − λ²)) ) VaR^(1).   (5)

Since the square-root expression in (5) tends to √n as λ → 0, one concludes that the scaled one-day value √n VaR^(1) is a good approximation of VaR^(n) for small values of λ.

For more general models, such as stochastic volatility models with jumps or AR(1)-GARCH(1,1) models, the correct scaling from a short to a longer time horizon depends on the confidence level α and cannot be calculated analytically. In many practical applications, the confidence level varies from 0.95 to 0.99, say. Empirical studies show that for such values of α, scaling a short-term VaR with the square-root-of-time yields a good approximation of a longer-term VaR, see Kaufmann [Kau05]. For smaller values of α, however, the scaled risks tend to overestimate the true risks, whereas larger values of α tend to underestimate the risks. In the limit α → 1, one should abstain from scaling risks, see Brummelhuis and Guégan [BG00a, BG00b]. Sometimes risk managers are also confronted with the problem of transforming a 1-day VaR at the confidence level α = 0.95 to a 10-day VaR at the 0.99 level. From a statistical viewpoint, such scaling should be avoided. Our recommendation is to first try to arrive at an estimate of the 1-day VaR at the 0.99 level and then to make inference of the 10-day VaR by means of scaling.

In this section we analyzed the scaling properties in a VaR context. As a matter of fact, these properties in general do not carry over when replacing VaR by other risk measures such as expected shortfall. In an expected shortfall regime coupled with heavy-tailed risks, scaling turns out to be delicate. For light-tailed risks though, the square-root-of-time rule still provides good results when expected shortfall is being used.
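A small Python sketch of the scaling rules discussed above: the square-root-of-time rule and the exact AR(1) scaling factor of (5), which collapses to √n as λ → 0. The parameter values are illustrative.

```python
import numpy as np

def sqrt_of_time_scaling(var_1day, n):
    """Square-root-of-time rule: VaR^(n) approx sqrt(n) * VaR^(1)."""
    return np.sqrt(n) * var_1day

def ar1_scaling_factor(lam, n):
    """Exact n-day scaling factor for the AR(1) model, cf. equation (5)."""
    return np.sqrt((1 + lam) / (1 - lam) * (n - 2 * lam * (1 - lam**n) / (1 - lam**2)))

var_1day, n = 1.0, 10
print(sqrt_of_time_scaling(var_1day, n))            # 3.162..., i.e. sqrt(10)
for lam in (0.0, 0.05, 0.2):
    print(lam, ar1_scaling_factor(lam, n) * var_1day)
# For small lam the exact factor stays close to sqrt(10), as argued in the text;
# positive autocorrelation pushes the multi-day VaR above the square-root-of-time value.
```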
5 Operational risk
According to the capital adequacy frameworks as set out by the Basel Committee, the general requirement for banks is to hold total capital equivalent to at least 8% of their risk-weighted assets. This definition was retained from the old capital adequacy framework (Basel I). In developing the revised framework, now known as Basel II, the idea was to arrive at significantly more risk-sensitive capital requirements. A key innovation in this regard was that operational risk, besides market and credit risk, must be included in the calculation of the total minimum capital requirements. Following the Committee's wording, we understand by operational risk the risk of losses resulting from inadequate or failed internal processes, people and systems, or external events. The Basel II framework provides a range of options for the determination of an operational risk capital charge. The proposed methods allow banks and supervisors to select approaches that are most appropriate for a bank's operations. These methods are:
(1) basic indicator approach,
(2) standardized approach,
(3) advanced measurement approach.

Both the basic indicator approach and the standardized approach are essentially volume-based measurement methods. The proxy in both cases is the average gross income over the past three years. These measurement methods are primarily destined for small and medium-sized banks whose exposure to operational risk losses is deemed to be moderate. Large internationally active banks, on the other hand, are expected to implement over time a more sophisticated measurement approach. Those banks must demonstrate that their approach is able to capture severe tail loss events. More formally, banks should set aside a capital charge C_Op for operational risk in line with the 99.9% confidence level on a one-year holding period. Using VaR as risk measure, this approach is known as the loss distribution approach (LDA). It is suggested to use ∑_{k=1}^8 VaR_α(L_k) for a capital charge and to allow for a capital reduction through diversification under appropriate dependency assumptions. Here, L_k denotes the one-year operational risk loss of business line k. The choice of 8 business lines and their precise definition is to be found in the Basel II Accord; banks are allowed to use fewer or more.

It is at this point where one has to raise a warning flag. A recent study conducted by Moscadelli [Mos04] reveals that operational loss amounts are very heavy-tailed. This stylized fact has been known before, albeit not in such an unprecedented way. Moscadelli's analysis suggests that the loss data from six out of eight business lines come from an infinite mean model! An immediate consequence is that standard correlation coefficients between two such one-year losses do not exist. Nešlehová et al. [NEC06] carry on with the study of Moscadelli's data and show the serious implications extreme heavy-tailedness can have on the economic capital modeling, in particular when using VaR as a risk measure. Note that it is not the determination of a VaR per se that causes problems in an infinite mean model. Rather, it is the idea of capital reduction due to aggregation or pooling of risks that breaks down in this case, see Example 1 in Section 2.1. We will come back to this issue later in Section 7.

Operational risk is also part of Solvency II and most of the insurance industry's national risk-based standard models. In the realm of the Swiss Solvency Test for instance it suffices to assess operational risk on a purely qualitative basis. Other models, such as the German GDV model for instance, require a capital charge for operational risk. This charge is mainly volume-based, similar to the Basel II basic indicator or standardized approach.
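As a purely illustrative sketch of the loss distribution approach for a single business line (with hypothetical frequency and severity parameters, not calibrated to any real data), one may simulate annual aggregate losses with Poisson frequency and Pareto severities and read off the 99.9% quantile. Note how sensitive the resulting figure is to the severity tail index, in line with the remarks above; for a tail index below one the severity distribution has infinite mean.

```python
import numpy as np

rng = np.random.default_rng(2)

def annual_operational_loss(lam, pareto_alpha, threshold, n_years):
    """Simulate annual aggregate losses: Poisson(lam) frequency with Pareto severities.
    Severity tail: P[severity > x] = (threshold / x)**pareto_alpha for x >= threshold."""
    counts = rng.poisson(lam, size=n_years)
    return np.array([
        (threshold * (1.0 - rng.uniform(size=k)) ** (-1.0 / pareto_alpha)).sum()
        for k in counts
    ])

alpha = 0.999
for pareto_alpha in (0.8, 1.2, 2.0):   # 0.8 corresponds to an infinite-mean severity model
    losses = annual_operational_loss(lam=25, pareto_alpha=pareto_alpha,
                                     threshold=10_000, n_years=200_000)
    print(pareto_alpha, np.quantile(losses, alpha))
```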
6 Insurance risk
6.1 Life insurance risk

Life insurance contracts are typically characterized by long-term financial promises and guarantees towards the policyholders. The actuary's main task has therefore been to forecast the future liabilities, that is, to set up sufficient reserves in order that the company can meet its future obligations. Ideally, the modern actuary should also be able to form an opinion on how many assets will be required to meet the obligations and on how the asset allocation should look from a so-called asset and liability management (ALM) perspective.

Life insurance companies are thus heavily exposed to reserve risk. Under reserve risk, we understand the risk that the actual claims experience deviates from the booked reserves. Booked reserves are always based on some accounting conventions and are determined in such a way that sufficient provisions are held to cover the expected actuarial liabilities based on the tariffs. Typically, these reserves are formula-based, that is, a specific calculation applied individually to each contract in force, then summed up, yields the reserves. Even though they include a margin for prudence, the reserves may prove insufficient in the course of time because of e.g. demographic changes. Reserve risk can further be decomposed into the following sub-categories:

(A) stochastic risk,
(B) parametric risk,
(C) model risk.

The stochastic risk is due to the variation and severity of the claims. In principle, the stochastic risk can be diversified through a greater portfolio and an appropriate reinsurance program. By ceding large individual risks to a reinsurer via a surplus share for instance, the portfolio becomes more homogeneous. Parametric risk arises from the fact that tariffs can be subject to material changes over time. For example, an unprecedented increase in longevity implies that people will draw annuities over a longer period. It is the responsibility of the (chief) actuary to continually assess and monitor the adequacy of the reserves. Periodic updates of experience data give insight into the adequacy of the reserves. By model risk, finally, we understand the risk that a life insurance company has unsuitable reserve models in place. This can easily be the case when life insurance products encompass a variety of policyholder options such as e.g. early surrender or annuity take-up options. Changing economic variables and/or an increase in longevity can result in significant future liabilities, even when the options were far out of the money at policy inception. It was a combination of falling long-term interest rates and booming stock markets coupled with an increase in longevity that put the solvency of Equitable Life, a UK insurer, at stake and led to the closure of new business. The reason for
this was that so-called guaranteed annuity options dramatically increased in value and subsequently constituted a significant liability which was neither priced nor reserved for. Traditional actuarial pricing and reserving methods based on the expectation pricing principle prove useless in this context, were it not for those policyholders who behave in a financially irrational way. Indeed, empirical studies may reveal that there is no statistical evidence supporting a link between the surrender behavior and the level of market interest rates. Arbitrage pricing techniques, however, are always based on the assumption of financially rational policyholder behavior. This means that a person would surrender his or her endowment policy at the first instant when the actual payoff exceeded the value of continuation. The merits of arbitrage pricing techniques are that they provide insight into the mechanism of embedded options, and consequently these findings should be used when designing new products. This will leave an insurance company immune against potential future changes in the policyholders' behavior towards a more rational one from a mathematical economics point of view.

6.2 Modeling parametric life insurance risk

In the following, we present a model that allows for the quantification of parametric life insurance risk. This model is being used within the Swiss Solvency Test. In essence, it is a variance-covariance type model, that is, risk factor changes have a multivariate normal distribution, and changes in the best estimate value of liabilities depend linearly on the risk factor changes.
More formally, it is assumed that for risk factor changes X and weights b,

ΔL = b' X,

where L = L(Z(t)) denotes the best estimate value of liabilities at the valuation date t, and Z(t) = (Z1(t), Z2(t), ..., Zd(t))' is the vector of (underwriting) risk factors. Best estimate values are unbiased (neither optimistic, nor pessimistic, nor conservative) estimates which employ the most recent and accurate actuarial and financial market information. Best estimate values are without any (safety) margins whatsoever. Typically, the value L is obtained by means of projecting the future cash flows and subsequently discounting with the current risk-free yield curve. Again, we denote the risk factor changes by X(t) = Z(t) − Z(t−1). Within the Swiss Solvency Test, the set of risk factors shown in Table 1 is considered.

Table 1. Life insurance risk factors in the Swiss Solvency Test.
(R1) mortality
(R2) longevity
(R3) disability
(R4) recovery
(R5) surrender/lapse
(R6) annuity take-up

The risk factor mortality for example refers to the best estimate one-year mortality rates q_x, q_y respectively (second-order mortality rates). The risk factor longevity refers to the improvement of mortality, which is commonly expressed in exponential form:

q(x, t) = q(x, t_0) exp{−λ_x (t − t_0)},   t ≥ t_0,

where q(x, t_0) stands for the best estimate mortality rate of an x year old male at time t_0 and λ_x for the rate of mortality improvement. Typically, no analytical solutions exist for the partial derivatives b_k = ∂L/∂z_k, and hence they have to be approximated numerically by means of sensitivity calculations:

b_k ≈ (L(Z + ε e_k) − L(Z)) / ε

for ε small, e.g. ε = 0.1. Here, e_k = (0, ..., 0, 1, 0, ..., 0)' denotes the kth basis vector in R^d. Combining everything, one concludes that the change ΔL has a univariate normal distribution with variance b'Σb, i.e. ΔL ∼ N(0, b'Σb). Here it is assumed that the dependence structure of the underwriting risk factor changes is governed by the covariance matrix Σ. Note that Σ can be decomposed into its correlation matrix R and a diagonal matrix Δ comprising the risk factor changes' standard deviations on the diagonal. Hence, Σ = ΔRΔ. Both the correlation coefficients and the standard deviations are based on expert opinion; no historical time series exists from which estimates could be inferred. Knowing the distribution of ΔL, one can apply a risk measure to arrive at a capital charge for the parametric insurance risk. Within the Swiss Solvency Test, one uses expected shortfall at the confidence level α = 0.99. Table 2 shows the correlation matrix R currently being used in the Swiss Solvency Test, whereas Table 3 contains the standard deviations of the underwriting risk factor changes.

Table 2. Correlation matrix R of the life insurance risk factor changes (individual and group business). The displayed risk factor changes are mutually uncorrelated, except that each individual-business risk factor change is perfectly correlated (correlation 1) with the corresponding group-business risk factor change.

Table 3. Standard deviations of the life insurance risk factor changes (in percent).

             R1  R2  R3  R4  R5  R6  R7
Individual    5  10  10  10  25   0  10
Group         5  10  20  10  25   0  10
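A minimal Python sketch of the variance-covariance calculation just described, using made-up sensitivities b and illustrative (not the official SST) standard deviations and correlations; it evaluates b'Σb with Σ = ΔRΔ and the 99% expected shortfall of the resulting normal distribution.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical sensitivities b_k (change of best estimate liabilities per unit
# change of risk factor k) and illustrative correlation matrix R and standard
# deviations; none of these figures are the official SST values.
b = np.array([40.0, 120.0, 30.0, 10.0, 60.0, 20.0])        # one entry per risk factor
stdev = np.array([0.05, 0.10, 0.10, 0.10, 0.25, 0.10])      # standard deviations of X
R = np.eye(6)                                                # uncorrelated, for illustration

Delta = np.diag(stdev)
Sigma = Delta @ R @ Delta            # Sigma = Delta R Delta
sigma_L = np.sqrt(b @ Sigma @ b)     # Delta L ~ N(0, b' Sigma b)

alpha = 0.99
q = norm.ppf(alpha)
es = sigma_L * norm.pdf(q) / (1.0 - alpha)   # expected shortfall of a centred normal
print(sigma_L, es)
```

For a centred normal distribution the expected shortfall is available in closed form, ES_α = σ φ(q_α)/(1 − α), which is what the last lines evaluate.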
6.3 Non-life insurance risk

For the purpose of this article, the risk categories (in their general form) already discussed for life above also apply. Clearly there are many distinctions at the product level. For instance, in non-life we often have contracts over a shorter time period, frequency risk may play a bigger role (think for instance of hail storms), and especially in the realm of catastrophe risk, numerous specific methods have been developed by non-life actuaries. Often, techniques borrowed from (non-life) risk theory are taken over by the banking world. Examples are the modeling of loss distributions, the axiomatization of risk measures, IBNR and related techniques, Panjer recursion, etc. McNeil et al. [MFE05] give an exhaustive overview of the latter techniques and refer to them as Insurance Analytics. For a comprehensive summary of the modeling of loss distributions, see for instance Klugman et al. [KPW04]. An example stressing the interplay between financial and insurance risk is Schmock [Sch99].

In the Swiss Solvency Test non-life model, the aim is to determine the change in risk bearing capital within one year due to the variability of the technical result. The model is based on the accident year principle. That is, claims are grouped according to the date of occurrence (and not according to the date or year when they are reported). Denoting by [T0, T1] with T1 = T0 + 1 the one-year time interval under consideration, the technical result within [T0, T1] is not only determined by the claims occurring in this period, but also by the claims that have previously occurred and whose settlement stretches across [T0, T1]. The current year claims are further grouped into high frequency-small severity claims (small claims) and low frequency-high severity claims (large claims). It is stipulated that the total of small claims has a gamma distribution, whereas in the large claims regime a compound Poisson distribution with Pareto distributed claim sizes is used. As for the claims that have occurred in the past and are not yet settled, the focus is on the annual reserving result; it is defined as the difference between the sum of the claim payments during [T0, T1] plus the remaining provisions after T1 minus the provisions that were originally set up at time T0. Within the Swiss Solvency Test, this one-year reserve risk is modeled by means of a (shifted) log-normally distributed random variable.

To obtain the ultimate probability distribution of the non-life risk, one first aggregates the small claims and the large claims risk, thereby assuming independence between these two risk categories. A second convolution is then required to combine the resulting current year risk with the reserve risk, again assuming independence.
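A simulation sketch of this aggregation logic, with hypothetical parameters throughout: gamma-distributed small claims, a compound Poisson-Pareto large-claims component, a log-normal reserving result shifted to mean zero, and convolution by simulation under the stated independence assumptions. The distributional choices follow the description above; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims = 200_000

# Small (high-frequency) claims: gamma-distributed aggregate.
small = rng.gamma(shape=100.0, scale=1.0, size=n_sims)

# Large claims: compound Poisson with Pareto severities (threshold 5, tail index 2).
counts = rng.poisson(2.0, size=n_sims)
large = np.array([(5.0 * (1.0 - rng.uniform(size=k)) ** -0.5).sum() for k in counts])

# Reserve risk: log-normal reserving result, shifted so that its mean is zero.
reserve = np.exp(rng.normal(3.0, 0.4, size=n_sims)) - np.exp(3.0 + 0.4**2 / 2)

# Aggregate assuming independence between the three components.
total = small + large + reserve
alpha = 0.99
var_ = np.quantile(total, alpha)
es = total[total > var_].mean()
print(var_, es)
```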
7 Aggregation of risks
A key issue for economic capital modeling is the aggregation of risks. Economic capital models are too often based on the tacit assumption that risk can be diversified via aggregation. For VaR in the context of very heavy-tailed distributions, however, the idea of a capital relief due to pooling of risks may shipwreck, see Example 1 in Section 2.1, where it is shown that VaR is not sub-additive for an infinite mean model. The (non-)existence of subadditivity is closely related to Kolmogorov's strong law of large numbers, see Nešlehová et al. [NEC06]. In a formal way, diversification could be defined as follows:

Definition 3. Let X1, ..., Xn be a sequence of risks and ρ a risk measure. Diversification is then expressed as

D := ∑_{k=1}^n ρ(X_k) − ρ(∑_{k=1}^n X_k).
Extreme heavy-tailedness is one reason why VaR fails to be sub-additive. Another reason overthrowing the idea of diversification is extreme skewness of risks, as the following simple example demonstrates. Assume that a loss of EUR 10 million or more is incurred with a probability of 3% and that the loss will be EUR 100,000 with a probability of 97%. In this case the VaR at the 95% level is EUR 100,000, while aggregating two such independent losses yields a VaR of more than EUR 10 million.

The modeling of dependence is a central element in quantitative risk management. In most cases, the assumption of independent (market-) risky instruments governing the portfolio value is too simplistic and unrealistic. Correlation is by far the most used technique in modern finance and insurance to describe dependence between risks. And yet correlation is only one particular measure of stochastic dependence among others. Whereas correlation is perfectly suited for elliptically distributed risks, dangers lurk if correlation is used in a non-elliptical world. Recall that independence of two random variables always implies their uncorrelatedness. The converse, however, does in general not hold. We have shown in Example 2 in Section 2.1 that VaR is sub-additive in a normal risks regime. Indeed, this fact can be used to aggregate market and insurance risk in a variance-covariance type model, see Sections 4.1 and 6.2. There, the required economic capital when combining market and insurance risks will naturally be reduced compared to the stand-alone capital requirements.

The above example with extremely skewed risks also shows that independence can be worse than comonotonicity. Comonotonicity means that the risks X1, ..., Xd are expressible as increasing functions of a single random variable, Z say. In the case of comonotonic risks, VaR is additive, see for instance McNeil et al. [MFE05], Proposition 6.15. For given marginal distribution functions and unknown dependence structure, it is in fact possible to calculate upper and lower bounds for VaR, see Embrechts et al. [EHJ03]. However, these bounds often prove inappropriate in many practical risk management applications. As a consequence, the dependence structure among the risks needs to be modeled explicitly, if necessary by making appropriate assumptions.
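The skewed-loss example above can be checked with a few lines of Python (a sketch using the same hypothetical figures): the 95% VaR of a single loss, of the sum of two independent copies, and of the comonotonic position 2X1, for which VaR is additive.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2_000_000

def skewed_loss(size):
    """3% chance of a EUR 10 million loss, otherwise a loss of EUR 100,000."""
    return np.where(rng.uniform(size=size) < 0.03, 10_000_000.0, 100_000.0)

alpha = 0.95
x1, x2 = skewed_loss(n), skewed_loss(n)        # independent copies

print(np.quantile(x1, alpha))                  # stand-alone VaR: 100,000
print(np.quantile(x1 + x2, alpha))             # VaR of the independent sum: more than 10 million
print(np.quantile(2 * x1, alpha))              # comonotonic case: VaR is additive (200,000)
```

The middle figure exceeds the sum of the stand-alone VaRs, while the comonotonic position attains exactly twice the stand-alone VaR, illustrating that independence can indeed be worse than comonotonicity under VaR.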
8 Summary
In this paper, we have summarized some of the issues underlying the quantitative modeling of risks in insurance and finance. The taxonomy of risk discussed is of course incomplete and very much driven by the current supervisory process within the financial and insurance services industry. We have hardly touched upon the huge world of risk mitigation via financial derivatives and alternative risk transfer, like for instance catastrophe bonds. Nor did we discuss in any detail specific risk classes like liquidity risk and model risk; for the latter, Gibson [Gib00] provides an introduction. Beyond the discussion of quantitative risk measurement and management, there is also an increasing awareness that qualitative aspects of risk need to be taken seriously. Especially through the recent discussions around operational risk, this qualitative aspect of risk management has become more important. Though modern financial and actuarial techniques have strongly influenced the quantitative modeling of risk, there is also a growing awareness that there is an end to the line for this quantitative approach. Though measures like VaR and the whole statistical technology behind it have no doubt had a considerable influence on the handling of modern financial instruments, hardly anybody would believe that a single number like VaR can really summarize the overall complexity of risk in an adequate way. For operational risk, this issue is discussed in Nešlehová et al. [NEC06]; see also Klüppelberg and Rootzén [KR99].
Modern risk management is being applied to areas of industry well beyond the financial ones. Examples include the energy sector and the environment. Geman [Gem05] gives an overview of some of the modeling and risk management issues for these markets. A more futuristic view on the types of risk modern society may want to manage is given in Shiller [Shi03].
Acknowledgment
The authors would like to thank Thomas Mikosch for a careful reading of a first version of the paper.
References
[ADEH99] Artzner, P., Delbaen, F., Eber, J.-M. and Heath, D.: Coherent measures of risk. Mathematical Finance 9, 203-228, (1999)
[BII] Basel Committee on Banking Supervision: International Convergence of Capital Measurement and Capital Standards. A Revised Framework. Bank for International Settlements (BIS), Basel, (2005), www.bis.org/publ/bcbs118.pdf
[BG00a] Brummelhuis, R. and Guégan, D.: Extreme values of conditional distributions of GARCH(1,1) processes. Preprint, University of Reims, (2000)
[BG00b] Brummelhuis, R. and Guégan, D.: Multi period conditional distribution functions for conditionally normal GARCH(1,1) models. Journal of Applied Probability 42, 426-445, (2005)
[CEN06] Chavez-Demoulin, V., Embrechts, P. and Nešlehová, J.: Quantitative models for operational risk: extremes, dependence and aggregation. Journal of Banking and Finance (to appear), (2006)
[DS99] Duffie, D. and Singleton, K.J.: Modeling term structures of defaultable bonds. Review of Financial Studies 12, 687-720, (1999)
[DS03] Duffie, D. and Singleton, K.J.: Credit Risk. Pricing, Measurement and Management. Princeton University Press, Princeton, (2003)
[EHJ03] Embrechts, P., Höing, A. and Juri, A.: Using copulae to bound the Value-at-Risk for functions of dependent risks. Finance and Stochastics 7, 145-167, (2003)
[Gem05] Geman, H.: Commodities and Commodity Derivatives: Modelling and Pricing for Agriculturals, Metals and Energy. John Wiley, Chichester, (2005)
[Gib00] Gibson, R. (Ed.): Model Risk, Concepts, Calibration and Pricing. Risk Waters Group, London, (2000)
[Jor00] Jorion, P.: Value at Risk. McGraw-Hill, New York, (2000)
[Kau05] Kaufmann, R.: Long-term risk management. Proceedings of the 15th International AFIR Colloquium, Zurich, (2005)
[KPW04] Klugman, S.A., Panjer, H.H. and Willmot, G.E.: Loss Models: From Data to Decisions. John Wiley, New York, 2nd ed., (2004)
[KR99] Klüppelberg, C. and Rootzén, H.: A single number can't hedge against economic catastrophes. Ambio 28, 550-555, (1999)
[Lan98] Lando, D.: Cox processes and credit-risky securities. Review of Derivatives Research 2, 99-120, (1998)
[MFE05] McNeil, A.J., Frey, R. and Embrechts, P.: Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press, Princeton, (2005)
[Mer74] Merton, R.: On the pricing of corporate debt: the risk structure of interest rates. Journal of Finance 29, 449-470, (1974)
[Mos04] Moscadelli, M.: The modelling of operational risk: experience with the analysis of the data collected by the Basel Committee. Technical Report 517, Banca d'Italia, (2004)
[Mul97] Müller, H.: Solvency of insurance undertakings. Conference of the Insurance Supervisory Authorities of the Member States of the European Union, (1997), www.ceiops.org
[NEC06] Nešlehová, J., Embrechts, P. and Chavez-Demoulin, V.: Infinite-mean models and the LDA for operational risk. Journal of Operational Risk 1, 3-25, (2006)
[San06] Sandström, A.: Solvency: Models, Assessment and Regulation. Chapman & Hall/CRC, Boca Raton, (2005)
[Sch99] Schmock, U.: Estimating the value of the Wincat coupons of the Winterthur insurance convertible bond: a study of the model risk. ASTIN Bulletin 29, 101-163, (1999)
[Sha02] Sharma, P.: Prudential supervision of insurance undertakings. Conference of Insurance Supervisory Services of the Member States of the European Union, (2002), www.ceiops.org
[Shi03] Shiller, R.J.: The New Financial Order: Risk in the Twenty-First Century. Princeton University Press, Princeton, (2003)