Robust Prediction with Risk Measures
by
Yerlan Duisenbay
Submitted to the Department of Mathematics
in partial fulfillment of the requirements for the degree of Master of Science in Applied Mathematics
at the
NAZARBAYEV UNIVERSITY Apr 2020
© Nazarbayev University 2020. All rights reserved.
Author . . . . Department of Mathematics
Apr 29, 2020
Certified by . . . . Kerem Ugurlu Assistant Professor Thesis Supervisor
Accepted by . . . .
Daniel Pugh
Dean, School of Sciences and Humanities
Robust Prediction with Risk Measures
by
Yerlan Duisenbay
Submitted to the Department of Mathematics on Apr 29, 2020, in partial fulfillment of the
requirements for the degree of Master of Science in Applied Mathematics
Abstract
This thesis deals with coherent risk measures and their simulation with respect to different probability distributions. It gives a numerical scheme for approximating any coherent risk measure via a sum of specific quantiles. We give the theoretical background on coherent risk measures in the first part, and in the second part we illustrate our findings via several simulations.
Thesis Supervisor: Kerem Ugurlu Title: Assistant Professor
Acknowledgments
This thesis is the final and most important work of my graduate degree in applied mathematics. I would like to thank my advisor, Kerem Ugurlu, for accompanying me from the beginning to the end of writing this thesis, giving advice and helping me understand this area of mathematics. I would also like to especially thank Nazarbayev University for providing all the conditions necessary to obtain a high-quality education.
Contents
1 Statistical Background
1.1 Random Variables
1.1.1 Expected value (mean) and variance
1.2 Common random variables
1.2.1 Normal distribution
1.2.2 Chi-square distribution
1.2.3 Exponential distribution
1.2.4 Student-t distribution
1.2.5 Weibull distribution
1.3 Verification of a Random Sample from a Distribution
1.3.1 Kolmogorov-Smirnov Test
1.3.2 Kolmogorov-Smirnov test for 2 samples
1.4 Linear Regression
2 Risk Measures
2.1 Risk measure
2.2 Convex risk measures
2.3 Coherent risk measures
2.4 Value at Risk
2.5 Average Value at Risk
2.6 Tail conditional expectation
2.7 Entropic risk measures
3 Simulations
3.1 Calculating AVaRs of different distributions
3.2 Data
3.3 Linear regression analysis
3.4 Error analysis
3.5 Calculating AVaRs from error
A Code
List of Figures
1-1 Standard Normal Distribution
1-2 Chi-square distribution with different degrees of freedom
1-3 Exponential distribution with different λ
1-4 Student-t distribution with different degrees of freedom
1-5 Weibull distribution with different k and λ = 1
3-1 AVaRs of different distributions [Listing A.9]
3-2 Real data
3-3 Histogram of error [Listing A.4]
3-4 QQ-plot of error [Listing A.7]
3-5 AVaRs from error [Listing A.2]
3-6 AVaRs and tail means [Listing A.8]
3-7 AVaR(X) and -AVaR(-X) [Listing A.3]
3-8 AVaR and confidence levels [Listing A.5]
3-9 Coherent risk measure and entropic risk measure [Listing A.6]
3-10 AVaR and ERM with positive/negative signs [Listing A.10]
Introduction
Expected performance criteria are commonly used for solving optimization problems. Starting from Bellman, dynamic programming techniques have relied on risk-neutral performance evaluation. Because the expected value is not always an adequate performance criterion, risk-averse methods based on utility functions have been developed for the corresponding prediction problems. With the paper of Artzner et al., which placed risk-aversion preferences into an axiomatic framework, the assessment of risks for random outcomes gained a new foundation: the coherent risk measure. However, deriving dynamic programming equations for risk-averse operations is complicated and time-consuming, because the Bellman optimality principle is not valid for risk-averse problems: a solution that is optimal on the later stages of a multistage stochastic decision problem need not remain optimal for the problem as a whole. This difficulty can be tackled by applying one-step Markovian dynamic risk measures.
It can also be addressed with the application of state aggregation and the AVaR decomposition theorem. In this approach a dual representation of AVaR is used, so the method requires optimization over a probability space when solving the Bellman equation. AVaRs are given by specific sums of quantiles and can be used to calculate any coherent risk measure. We visualize this through numerical simulations with quantiles of different distributions.
Chapter 1
Statistical Background
1.1 Random Variables
Consider an experiment where we roll a die 8 times and record whether each outcome is even (E) or odd (O).
Then we have a sample space $\Omega$ containing elements such as $\omega_0 = \{E, O, E, E, O, O, O, E\} \in \Omega$. In practice, we are often not interested in the probability of a particular sequence of even and odd outcomes; we are interested in real-valued functions of the outcomes, such as the number of even outcomes that appear among our 8 rolls, or the length of the longest run of odd outcomes.
These functions are called random variables.

Definition 1.1. A random variable $X(\omega)$ is a function $X : \Omega \to \mathbb{R}$.

Suppose $X(\omega)$ is the random variable counting how many even outcomes appear in $\omega$. Since there are 8 trials, $X(\omega)$ takes a finite number of values; such a random variable is called a discrete random variable. Here, the probability of the set associated with a random variable $X$ taking on some specific value $a$ is

$$P(X = a) := P(\{\omega : X(\omega) = a\}).$$

Now take $X(\omega)$ to be the decay time of a uranium atom. In this case $X(\omega)$ has an infinite number of possible values, so it is called a continuous random variable. We define the probability that $X$ takes a value between two real numbers $a$ and $b$ as

$$P(a \le X \le b) := P(\{\omega : a \le X(\omega) \le b\}).$$
1.1.1 Expected value (mean) and variance

Definition 1.2. For a continuous random variable $X$ with range $[a, b]$ and probability density function $f(x)$, the expected value is defined by the integral

$$\mathbb{E}[X] = \int_a^b x f(x)\,dx. \tag{1.1}$$

Definition 1.3. For a discrete random variable $X$, the expected value is the weighted sum of the values $x_i$, where the weights are the probabilities $p(x_i)$:

$$\mathbb{E}[X] = \sum_{i=1}^{n} x_i\, p(x_i). \tag{1.2}$$

The expected value is also called the mean and denoted by $\mu$. It has the following properties:

- $\mathbb{E}[c] = c$ for all $c \in \mathbb{R}$;
- $\mathbb{E}[aX + b] = a\,\mathbb{E}[X] + b$ for all $a, b \in \mathbb{R}$;
- $\mathbb{E}[X + Y] = \mathbb{E}[X] + \mathbb{E}[Y]$.

Definition 1.4. The variance $\sigma^2$ is a measure of the concentration of the distribution of a random variable $X$ around its mean:

$$\mathrm{Var}[X] = \mathbb{E}[(X - \mathbb{E}[X])^2] = \mathbb{E}[X^2 - 2\mathbb{E}[X]X + \mathbb{E}[X]^2] = \mathbb{E}[X^2] - 2\mathbb{E}[X]\mathbb{E}[X] + \mathbb{E}[X]^2 = \mathbb{E}[X^2] - \mathbb{E}[X]^2. \tag{1.3}$$

Properties:

- $\mathrm{Var}[c] = 0$ for all $c \in \mathbb{R}$;
- $\mathrm{Var}[aX + b] = a^2\,\mathrm{Var}[X]$ for all $a, b \in \mathbb{R}$;
- $\mathrm{Var}[X + Y] = \mathrm{Var}[X] + \mathrm{Var}[Y]$ for independent $X$ and $Y$.

Definition 1.5. The square root of the variance is called the standard deviation and is denoted by $\sigma$.

Definition 1.6. The $q$-th quantile of the distribution of $X$ is the value $x_q$ such that $P(X \le x_q) = q$.
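For illustration (this snippet is not part of the thesis listings in Appendix A), these quantities can be estimated from a sample with NumPy; the parameters below are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100_000)  # sample with mu = 1, sigma = 2

print(np.mean(x))            # approximately E[X] = 1
print(np.var(x))             # approximately Var[X] = sigma^2 = 4
print(np.std(x))             # approximately sigma = 2
print(np.quantile(x, 0.95))  # approximately the 0.95-quantile x_q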
1.2 Common random variables
1.2.1 Normal distribution
Definition 1.7. A random variable $X$ is said to have a normal (Gaussian) distribution if and only if, for $\sigma > 0$ and $-\infty < \mu < \infty$, its density function is

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}. \tag{1.4}$$
Figure 1-1: Standard Normal Distribution
1.2.2 Chi-square distribution
Definition 1.8. The distribution generated by the sum of squares of independent standard normal random variables $Z_1, Z_2, \ldots, Z_k$ is called the chi-square distribution with $k$ degrees of freedom:

$$Q = \sum_{i=1}^{k} Z_i^2.$$

The probability density function of the chi-square distribution is

$$f(x; k) = \begin{cases} \dfrac{x^{k/2-1}\, e^{-x/2}}{2^{k/2}\,\Gamma(k/2)}, & x > 0; \\ 0, & \text{otherwise}, \end{cases} \tag{1.5}$$

where $\Gamma$ is the gamma function.

Figure 1-2: Chi-square distribution with different degrees of freedom
1.2.3 Exponential distribution
Definition 1.9. The exponential distribution with parameter $\lambda > 0$ is the distribution of a random variable $X$ with density function

$$f(x; \lambda) = \begin{cases} \lambda e^{-\lambda x}, & x \ge 0; \\ 0, & x < 0. \end{cases} \tag{1.6}$$

Figure 1-3: Exponential distribution with different λ
1.2.4 Student-t distribution
Definition 1.10. The Student-t distribution is the distribution of a random variable $X$ with probability density function

$$f(x; n) = \frac{\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{n\pi}\,\Gamma\left(\frac{n}{2}\right)} \left(1 + \frac{x^2}{n}\right)^{-\frac{n+1}{2}}, \tag{1.7}$$

where $n > 0$ is the number of degrees of freedom.

Figure 1-4: Student-t distribution with different degrees of freedom
1.2.5 Weibull distribution
Definition 1.11. The Weibull distribution is the distribution of a random variable $X$ with probability density function

$$f(x; k, \lambda) = \begin{cases} \dfrac{k}{\lambda}\left(\dfrac{x}{\lambda}\right)^{k-1} e^{-(x/\lambda)^k}, & x \ge 0; \\ 0, & x < 0, \end{cases} \tag{1.8}$$

where $k > 0$ is the shape parameter and $\lambda > 0$ is the scale parameter.

Figure 1-5: Weibull distribution with different k and λ = 1
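As a quick cross-check (a sketch, not part of Appendix A; SciPy's standard parametrizations are assumed), the densities (1.4)-(1.8) are available in scipy.stats:

import numpy as np
from scipy.stats import norm, chi2, expon, t, weibull_min

x = 1.3
mu, sigma = 0.5, 2.0
# normal density (1.4), by hand and via scipy
manual = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
print(manual, norm.pdf(x, loc=mu, scale=sigma))

print(chi2.pdf(x, df=3))                   # chi-square (1.5), k = 3
print(expon.pdf(x, scale=1 / 2.0))         # exponential (1.6), lambda = 2
print(t.pdf(x, df=5))                      # Student-t (1.7), n = 5
print(weibull_min.pdf(x, 1.5, scale=1.0))  # Weibull (1.8), k = 1.5, lambda = 1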
1.3 Verification of a Random Sample from a Distri- bution
1.3.1 Kolmogorov-Smirnov Test
Suppose that we have an i.i.d. sample $X_1, X_2, \ldots, X_n$ with some unknown distribution $\mathbb{P}$, and we would like to test the hypothesis that $\mathbb{P}$ is equal to a particular distribution $\mathbb{P}_0$, i.e. to decide between the following hypotheses:

$$H_0 : \mathbb{P} = \mathbb{P}_0, \qquad H_1 : \mathbb{P} \ne \mathbb{P}_0.$$

The Kolmogorov-Smirnov test is a non-parametric test of the hypothesis that $X_1, \ldots, X_n$ have a given continuous distribution function $F$, against the one-sided alternative $H_1^+ : \sup_{|x|<\infty}(\mathbb{E}F_n(x) - F(x)) > 0$, where $\mathbb{E}F_n$ is the mathematical expectation of the empirical distribution function $F_n$. The Kolmogorov–Smirnov test is constructed from the statistic

$$D_n^+ = \sup_{|x|<\infty}\big(F_n(x) - F(x)\big) = \max_{1\le m\le n}\left(\frac{m}{n} - F(X_{(m)})\right),$$

where $X_{(1)} \le \ldots \le X_{(n)}$ is the variational series (the set of order statistics) obtained from the sample $X_1, \ldots, X_n$. Thus, the Kolmogorov–Smirnov test is a variant of the Kolmogorov test for testing the hypothesis $H_0$ against the one-sided alternative $H_1^+$. By studying the distribution of the statistic $D_n^+$, N.V. Smirnov showed that
$$P\{D_n^+ \ge \lambda\} = \lambda \sum_{m=0}^{[n(1-\lambda)]} \binom{n}{m} \left(\lambda + \frac{m}{n}\right)^{m-1} \left(1 - \lambda - \frac{m}{n}\right)^{n-m},$$

where $0 < \lambda < 1$ and $[a]$ is the integer part of the number $a$. In addition to the exact distribution of $D_n^+$, Smirnov obtained its limit distribution, namely: if $n \to \infty$ and $0 < \lambda_0 \le \lambda = O(n^{1/6})$, where $\lambda_0$ is any positive number, then

$$P\{\sqrt{n}\,D_n^+ \ge \lambda\} = e^{-2\lambda^2}\left[1 + O\left(\frac{1}{\sqrt{n}}\right)\right].$$

By means of the technique of the asymptotic Pearson transformation it has been proved that if $n \to \infty$ and $0 < \lambda_0 \le \lambda = O(n^{1/3})$, then

$$P\left\{\frac{1}{18n}\,(6nD_n^+ + 1)^2 \ge \lambda\right\} = e^{-\lambda}\left[1 + O\left(\frac{1}{n}\right)\right].$$
According to the Kolmogorov–Smirnov test, the hypothesis $H_0$ must be rejected at significance level $\alpha$ whenever

$$\exp\left[-\frac{1}{18n}\,(6nD_n^+ + 1)^2\right] \le \alpha,$$

since

$$P\left\{\exp\left[-\frac{1}{18n}\,(6nD_n^+ + 1)^2\right] \le \alpha\right\} = \alpha\left[1 + O\left(\frac{1}{n}\right)\right].$$
The testing of $H_0$ against the alternative $H_1^- : \inf_{|x|<\infty}(\mathbb{E}F_n(x) - F(x)) < 0$ is dealt with similarly. In this case the statistic of the Kolmogorov–Smirnov test is the random variable

$$D_n^- = -\inf_{|x|<\infty}\big(F_n(x) - F(x)\big) = \max_{1\le m\le n}\left(F(X_{(m)}) - \frac{m-1}{n}\right),$$

whose distribution is the same as that of the statistic $D_n^+$ when $H_0$ is true.
1.3.2 Kolmogorov Smirnov test for 2 samples
The KS test for two samples is similar to the one-sample KS test. Suppose we have a first sample of size $m$ with c.d.f. $F(x)$ and a second sample of size $n$ with c.d.f. $G(x)$, and we want to test

$$H_0 : F = G \qquad \text{vs} \qquad H_1 : F \ne G.$$

If $F_m(x)$ and $G_n(x)$ are the corresponding empirical c.d.f.s, then the statistic is

$$D_{mn} = \left(\frac{mn}{m+n}\right)^{1/2} \sup_x |F_m(x) - G_n(x)|,$$

and the rest of the procedure is the same as in the one-sample test.
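Both tests are available in SciPy; a minimal sketch with synthetic data (these are standard scipy.stats helpers, not thesis code):

import numpy as np
from scipy.stats import kstest, ks_2samp, norm

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=500)

# one-sample KS test of H0: the sample has the N(0, 1) distribution
stat, p_value = kstest(sample, norm(loc=0.0, scale=1.0).cdf)
print(stat, p_value)   # large p-value: do not reject H0

# two-sample KS test of H0: F = G
other = rng.normal(0.3, 1.0, size=400)
stat2, p2 = ks_2samp(sample, other)
print(stat2, p2)       # small p-value: reject H0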
1.4 Linear Regression
In statistics, linear regression is a linear approach to modeling the relationship between a scalar response (or dependent variable) and one or more explanatory variables (or independent variables). The case of one explanatory variable is called simple linear regression. The model predicts the response by

$$\hat{y} = Xw.$$

As loss function we take the sum of squared errors:

$$L(w) = \sum_{i=1}^{n} (x_i^T w - y_i)^2 = \|Xw - y\|_2^2.$$

Its minimizer is

$$w^* = \operatorname*{argmin}_w L(w) = (X^T X)^{-1} X^T y.$$
Proof: We find the optimal $w$ by minimizing the loss function:

$$L(w) = \sum_{i=1}^{n} (x_i^T w - y_i)^2 = \|Xw - y\|_2^2 = (Xw - y)^T (Xw - y) = (Xw)^T Xw - (Xw)^T y - y^T Xw + y^T y = w^T X^T X w - 2 w^T X^T y + y^T y.$$

We take the gradient of $L(w)$ and equate it to zero:

$$\nabla L(w) = \nabla\big(w^T X^T X w - 2 w^T X^T y + y^T y\big) = 2 X^T X w - 2 X^T y = 0,$$

hence

$$w^* = \operatorname*{argmin}_w L(w) = (X^T X)^{-1} X^T y.$$
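A minimal sketch of this closed-form solution (synthetic data, not the thesis data set):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
w_true = np.array([1.5, -0.7])
y = X @ w_true + 0.1 * rng.normal(size=200)

# normal equations; solving the linear system avoids forming the inverse explicitly
w_star = np.linalg.solve(X.T @ X, X.T @ y)
print(w_star)  # close to w_true

# a numerically more robust equivalent
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_lstsq)

Listing A.1 in the appendix fits the same kind of model with sklearn's LinearRegression, which also handles the intercept.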
Chapter 2
Risk Measures
For the definitions and theorems of this chapter, and for further study of risk measures, we refer the reader to [5].
2.1 Risk measure
Definition 2.1. A risk measure $\rho$ is a function from random variables to real numbers, $\rho : \mathcal{X} \to \mathbb{R}$, with the following properties:

- $\rho(0) = 0$;
- $\rho(X + c) = \rho(X) + c$ for all $c \in \mathbb{R}$;
- if $X_1 \le X_2$, then $\rho(X_1) \le \rho(X_2)$ for all $X_1, X_2 \in \mathcal{X}$.
2.2 Convex risk measures
Definition 2.2. Consider a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and the space $\mathcal{B} := L^1(\Omega, \mathcal{F}, \mathbb{P})$ of measurable functions $X : \Omega \to \mathbb{R}$ (random variables) that have finite first order moments, i.e. $\mathbb{E}_{\mathbb{P}}[|X|] < \infty$, where $\mathbb{E}[\cdot]$ denotes the expectation with respect to the probability measure $\mathbb{P}$. A mapping $\rho : \mathcal{B} \to \mathbb{R}$ is called a convex risk measure if the following axioms hold:

- (A1) (Convexity) $\rho(\lambda X + (1-\lambda)Y) \le \lambda\rho(X) + (1-\lambda)\rho(Y)$ for all $\lambda \in (0,1)$, $X, Y \in \mathcal{B}$;
- (A2) (Monotonicity) if $X \le Y$, then $\rho(X) \le \rho(Y)$ for all $X, Y \in \mathcal{B}$;
- (A3) (Translation Invariance) $\rho(X + c) = c + \rho(X)$ for all $c \in \mathbb{R}$, $X \in \mathcal{B}$.
2.3 Coherent risk measures
Definition 2.3. Consider a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and the space $\mathcal{B} := L^1(\Omega, \mathcal{F}, \mathbb{P})$ of measurable functions $X : \Omega \to \mathbb{R}$ (random variables) that have finite first order moments, i.e. $\mathbb{E}_{\mathbb{P}}[|X|] < \infty$. A mapping $\rho : \mathcal{B} \to \mathbb{R}$ is called a coherent risk measure if the following axioms hold:

- (A1) (Convexity) $\rho(\lambda X + (1-\lambda)Y) \le \lambda\rho(X) + (1-\lambda)\rho(Y)$ for all $\lambda \in (0,1)$, $X, Y \in \mathcal{B}$;
- (A2) (Monotonicity) if $X \le Y$, then $\rho(X) \le \rho(Y)$ for all $X, Y \in \mathcal{B}$;
- (A3) (Translation Invariance) $\rho(X + c) = c + \rho(X)$ for all $c \in \mathbb{R}$, $X \in \mathcal{B}$;
- (A4) (Homogeneity) $\rho(\beta X) = \beta\rho(X)$ for all $X \in \mathcal{B}$, $\beta \ge 0$.
2.4 Value at Risk
Definition 2.4. Let $(\Omega, \mathcal{G}, \mathbb{P})$ be a measurable space, let $X \in L^1(\Omega, \mathcal{G}, \mathbb{P})$ be a real-valued random variable, and let $\alpha \in (0,1)$. Then the Value at Risk is

$$\mathrm{VaR}_\alpha(X) = \inf\{x \in \mathbb{R} : \mathbb{P}(X \le x) \ge \alpha\}. \tag{2.1}$$

However, VaR is not a coherent risk measure, so we will use the Average Value at Risk.
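For concreteness (this snippet is not part of the thesis listings), the VaR in (2.1) is simply the α-quantile, which for a normal distribution can be evaluated with SciPy's percent-point function; the parameters are illustrative:

from scipy.stats import norm

alpha = 0.95
mean, std = 0.0, 1.0
# VaR_alpha(X) = inf{x : P(X <= x) >= alpha}, the alpha-quantile
var_alpha = norm.ppf(alpha, loc=mean, scale=std)
print(var_alpha)  # about 1.645 for the standard normal

This is the same quantity computed by the helper VarNorm in Listing A.1.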
2.5 Average Value at Risk
Definition 2.5. Let $(\Omega, \mathcal{G}, \mathbb{P})$ be a measurable space, let $X \in L^1(\Omega, \mathcal{G}, \mathbb{P})$ be a real-valued random variable, and let $\alpha \in (0,1)$. Then the Average Value at Risk is

$$\mathrm{AVaR}_\alpha(X) = \frac{1}{1-\alpha} \int_\alpha^1 \mathrm{VaR}_t(X)\,dt. \tag{2.2}$$
Definition 2.6. Suppose $\mu$ is a probability measure on $\mathbb{R}$ and $\alpha \in (0,1)$.

- A number $q \in \mathbb{R}$ is called an $\alpha$-quantile of $\mu$ if
$$\mu((-\infty, q]) \ge \alpha \quad \text{and} \quad \mu([q, \infty)) \ge 1 - \alpha. \tag{2.3}$$
- A function $q_\mu : (0,1) \to \mathbb{R}$ is called a quantile function of $\mu$ if for each $\alpha \in (0,1)$, $q_\mu(\alpha)$ is an $\alpha$-quantile of $\mu$.

Remarks:
1. The set of $\alpha$-quantiles of $\mu$ is a non-empty bounded closed interval whose endpoints are $q_\mu^-(\alpha)$ and $q_\mu^+(\alpha)$.
2. The set $\{\alpha \in (0,1) \mid q_\mu^-(\alpha) < q_\mu^+(\alpha)\}$ is countable, because the values of $\alpha$ for which $q_\mu^-(\alpha) < q_\mu^+(\alpha)$ correspond to intervals of constancy of the cumulative distribution function of $\mu$.
Theorem 2.5.1. If $X \in L^1(\Omega, \mathcal{G}, \mathbb{P})$ and $\alpha \in (0,1)$, then the integral

$$\int_0^1 \mathrm{VaR}_\alpha(X)\,d\alpha$$

exists and is equal to $\mathbb{E}[X]$.

Proof: Let $U$ be a standard uniform random variable; then $q_X^-(U)$ has the same distribution as $X$, so

$$\int_0^1 \mathrm{VaR}_\alpha(X)\,d\alpha = \int_0^1 q_X^-(\alpha)\,d\alpha = \mathbb{E}[q_X^-(U)] = \mathbb{E}[X].$$
Theorem 2.5.2. Suppose $q_X$ is a quantile function for the distribution of $X$. Then

$$\mathrm{AVaR}_\lambda(X) = \frac{1}{1-\lambda}\,\mathbb{E}\big[(X - q_X(\lambda))_+\big] + q_X(\lambda). \tag{2.4}$$

Proof: Suppose $U$ is a standard uniform random variable. Then the distributions of $X$ and $q_X(U)$ are the same, and consequently:

$$\frac{1}{1-\lambda}\,\mathbb{E}\big[(X - q_X(\lambda))_+\big] + q_X(\lambda)
= \frac{1}{1-\lambda}\,\mathbb{E}\big[(q_X(U) - q_X(\lambda))_+\big] + q_X(\lambda)
= \frac{1}{1-\lambda}\int_0^1 \big(q_X(\alpha) - q_X(\lambda)\big)_+\,d\alpha + q_X(\lambda)
= \frac{1}{1-\lambda}\int_\lambda^1 \big(q_X(\alpha) - q_X(\lambda)\big)\,d\alpha + q_X(\lambda)
= \frac{1}{1-\lambda}\int_\lambda^1 \mathrm{VaR}_\alpha(X)\,d\alpha
= \mathrm{AVaR}_\lambda(X), \tag{2.5}$$

where the third equality uses the monotonicity of $q_X$: the integrand vanishes for $\alpha < \lambda$.
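As a sanity check (a sketch, not thesis code; the level and parameters are illustrative), formula (2.4) can be compared numerically with the definition (2.2) for a normal random variable:

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

lam, mu, sigma = 0.9, 0.0, 1.0

# definition (2.2): average of the quantiles above level lambda
avar_def = quad(lambda t: norm.ppf(t, mu, sigma), lam, 1)[0] / (1 - lam)

# formula (2.4): (1/(1-lambda)) * E[(X - q_X(lambda))_+] + q_X(lambda)
q = norm.ppf(lam, mu, sigma)
x = np.random.default_rng(0).normal(mu, sigma, 1_000_000)
avar_formula = np.mean(np.maximum(x - q, 0.0)) / (1 - lam) + q

print(avar_def, avar_formula)  # both about 1.755 for the standard normal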
Theorem 2.5.3.

$$\mathrm{AVaR}_\lambda(X) = \sup\left\{\mathbb{E}_Q[X] \;\middle|\; Q \ll \mathbb{P},\ \frac{dQ}{d\mathbb{P}} \le \frac{1}{1-\lambda}\right\}. \tag{2.6}$$

Proof: The supremum on the right-hand side is equal to

$$\sup\left\{\mathbb{E}[X\varphi] \;\middle|\; \varphi \in L^\infty(\Omega, \mathcal{F}, \mathbb{P}),\ \mathbb{E}[\varphi] = 1,\ 0 \le \varphi \le \frac{1}{1-\lambda}\right\}.$$

$\mathbb{E}[X\varphi]$ is large if $\varphi$ takes large values at points where $X$ takes large values. Hence the supremum is attained at

$$\varphi := \begin{cases} \frac{1}{1-\lambda} & \text{on } \{X > q_X(\lambda)\}, \\ 0 & \text{on } \{X < q_X(\lambda)\}, \\ \kappa & \text{on } \{X = q_X(\lambda)\}, \end{cases}$$

where $q_X$ is any quantile function of the distribution of $X$ and $\kappa$ is chosen such that $\mathbb{E}[\varphi] = 1$, i.e.

$$\frac{1}{1-\lambda}\,\mathbb{P}(X > q_X(\lambda)) + \kappa\,\mathbb{P}(X = q_X(\lambda)) = 1.$$

It follows that

$$\sup\left\{\mathbb{E}[X\varphi] \;\middle|\; \varphi \in L^\infty(\Omega, \mathcal{F}, \mathbb{P}),\ \mathbb{E}[\varphi] = 1,\ 0 \le \varphi \le \frac{1}{1-\lambda}\right\} = \frac{1}{1-\lambda}\,\mathbb{E}\big[X \cdot 1_{\{X > q_X(\lambda)\}}\big] + \kappa\,\mathbb{E}\big[X \cdot 1_{\{X = q_X(\lambda)\}}\big] = \mathrm{AVaR}_\lambda(X).$$
2.6 Tail conditional expectation
Tail conditional expectation is used to measure market and non-market risks, presumably for a portfolio of investments. It gives a measure of right-tail risk, one with which actuaries are very familiar, because insurance contracts typically possess exposures subject to "low-frequency but large losses".
Definition 2.7. For $\alpha \in (0,1)$ the tail conditional expectation is defined by

$$\mathrm{TCE}_\alpha(X) := \mathbb{E}[X \mid X \ge \mathrm{VaR}_\alpha(X)]. \tag{2.7}$$

Theorem 2.6.1. For $\alpha \in (0,1)$, $\mathrm{TCE}_\alpha(X)$ and $\mathrm{AVaR}_\alpha(X)$ are equal if and only if $\mathbb{P}(X \ge \mathrm{VaR}_\alpha(X)) = 1 - \alpha$; this holds, in particular, when $X$ has a continuous distribution.

Proof: Suppose $\mathbb{P}(X \ge \mathrm{VaR}_\alpha(X)) = 1 - \alpha$. Then:

$$\mathrm{TCE}_\alpha(X) = \frac{\mathbb{E}\big[X \cdot 1_{\{X \ge \mathrm{VaR}_\alpha(X)\}}\big]}{\mathbb{P}(X \ge \mathrm{VaR}_\alpha(X))}
= \frac{1}{1-\alpha}\,\mathbb{E}\big[X \cdot 1_{\{X \ge \mathrm{VaR}_\alpha(X)\}}\big]
= \frac{1}{1-\alpha}\,\mathbb{E}\big[(X - \mathrm{VaR}_\alpha(X)) \cdot 1_{\{X \ge \mathrm{VaR}_\alpha(X)\}}\big] + \mathrm{VaR}_\alpha(X)
= \frac{1}{1-\alpha}\,\mathbb{E}\big[(X - \mathrm{VaR}_\alpha(X))_+\big] + \mathrm{VaR}_\alpha(X) = \mathrm{AVaR}_\alpha(X). \tag{2.8}$$
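A short numerical illustration of Theorem 2.6.1 (not from the appendix; the sample is synthetic): for a continuous distribution the empirical TCE and AVaR coincide up to sampling error:

import numpy as np

alpha = 0.9
x = np.random.default_rng(1).normal(0.0, 1.0, 1_000_000)
var_alpha = np.quantile(x, alpha)                 # empirical VaR_alpha
tce = x[x >= var_alpha].mean()                    # E[X | X >= VaR_alpha(X)]
avar = np.maximum(x - var_alpha, 0.0).mean() / (1 - alpha) + var_alpha
print(tce, avar)  # both about 1.755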
2.7 Entropic risk measures
In financial mathematics, the entropic risk measure is a risk measure which depends on the risk aversion of the user through the exponential utility function.
Definition 2.8. The entropic risk measure with risk aversion parameter $\theta > 0$ is defined as

$$\rho^{\mathrm{ent}}(X) = \frac{1}{\theta}\log\big(\mathbb{E}[e^{\theta X}]\big) = \sup_Q\left\{\mathbb{E}_Q[X] - \frac{1}{\theta}H(Q \mid \mathbb{P})\right\}. \tag{2.9}$$

If $X$ is normally distributed with mean $0$ and variance $\sigma^2$, then

$$\frac{1}{\theta}\log\big(\mathbb{E}[e^{\theta X}]\big) = \frac{1}{\theta}\log\left(\frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{\infty} e^{\theta x}\, e^{-\frac{x^2}{2\sigma^2}}\,dx\right)
= \frac{1}{\theta}\log\left(\frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{\infty} e^{\frac{\theta^2\sigma^2}{2} - \left(\frac{x}{\sqrt{2}\,\sigma} - \frac{\theta\sigma}{\sqrt{2}}\right)^2}\,dx\right).$$

Substituting $u = \dfrac{x - \theta\sigma^2}{\sqrt{2}\,\sigma}$, so that $\dfrac{du}{dx} = \dfrac{1}{\sqrt{2}\,\sigma}$, we obtain

$$= \frac{1}{\theta}\log\left(\frac{\sqrt{2}\,\sigma}{\sqrt{2\pi}\,\sigma}\, e^{\frac{\theta^2\sigma^2}{2}} \int_{-\infty}^{\infty} e^{-u^2}\,du\right)
= \frac{1}{\theta}\log\left(e^{\frac{\theta^2\sigma^2}{2}}\,\mathrm{erf}(\infty)\right) = \frac{\theta\sigma^2}{2},$$

since $\int_{-\infty}^{\infty} e^{-u^2}\,du = \sqrt{\pi}\,\mathrm{erf}(\infty)$ and $\mathrm{erf}(\infty) = 1$. In particular, for the standard normal distribution ($\sigma = 1$) the entropic risk measure equals $\theta/2$.
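The closed form $\theta\sigma^2/2$ can be checked by Monte Carlo (a sketch with illustrative parameter values, not part of the thesis listings):

import numpy as np

theta, sigma = 0.5, 2.0
x = np.random.default_rng(2).normal(0.0, sigma, 1_000_000)
rho_mc = np.log(np.mean(np.exp(theta * x))) / theta
print(rho_mc, theta * sigma**2 / 2)  # both about 1.0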
Chapter 3
Simulations
3.1 Calculating AVaRs of different distributions
We calculate the AVaRs of common distributions by formula (2.2) and present their graphs with respect to different α values.
Figure 3-1: AVaRs of different distributions [Listing A.9]
3.2 Data
We use data from 19.12.2018 to 18.12.2019 (https://investfunds.kz). It contains the USD/KZT exchange rate and the main factors affecting it (the BRENT price and the USD/RUR exchange rate).
Figure 3-2: Real data
3.3 Linear regression analysis
We fit a linear regression to all of our data. Then we predict new Y values from the given X values of our data. Afterwards we calculate the error by subtracting the exact value from the predicted value.
3.4 Error analysis
From the error we calculate the sample mean and sample standard deviation. To identify the distribution of the error, we run the Kolmogorov-Smirnov test against several distributions with the same mean and standard deviation as our error. The test gives a p-value of 0.52 for the normal distribution. We then draw a histogram and a QQ-plot of our error:
Figure 3-3: Histogram of error [Listing A.4]
It looks very similar to a normal distribution, but since the plot is not exactly bell-shaped, some values may not be normal.
To dispel doubts, we draw a QQ-plot. Since a QQ-plot is a scatter plot of one set of quantiles against another, we plot the quantiles of our error against those of a normal distribution with the same mean and variance. Here we can see that most of the values are consistent with normality, but some values in the left and right tails are not.
Figure 3-4: QQ-plot of error [Listing A.7]
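The p-value of 0.52 reported above could be reproduced along the following lines (a hedged reconstruction: the variables follow Listing A.1, but the exact test call used in the thesis is not shown in the appendix):

from scipy.stats import kstest, norm

# error_1, mean_1, std_1 as computed in Listing A.1
stat, p_value = kstest(error_1, norm(loc=mean_1, scale=std_1).cdf)
print(p_value)  # reported as about 0.52 for the normal distribution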
3.5 Calculating AVaRโs from error
We then calculate the AVaRs by definition; for the VaR we use quantiles of a normal distribution whose mean and standard deviation match those of our error. We plot bars showing the value of AVaR at each level α.
Figure 3-5: AVaRs from error [Listing A.2]
The figure shows that AVaR rises as α rises, because the factor 1/(1−α) in the AVaR formula grows as α increases. We then compute tail means of our sorted error, taking the last 10 percent, then the last 20 percent, and so on up to the whole data set, and plot them together with our AVaRs in order to compare the sorted tail means of the error with the AVaRs. As shown in the figure, the tail mean is less than the AVaR, but after α = 0.5 it becomes bigger than the AVaR.
Figure 3-6: AVaRs and tail means [Listing A.8]
We then calculate −AVaR(−X), taking −X from the error (exact − predicted), and we see that AVaR(X) and −AVaR(−X) are symmetric. We do this in order to use AVaRs as confidence levels in statistics.
Figure 3-7: AVaR(X) and -AVaR(-X) [Listing A.3]
We then compare our AVaR(X) and −AVaR(−X) values with confidence levels and make a plot:
Figure 3-8: AVaR and confidence levels [Listing A.5]
We plot the AVaRs by α values and the confidence levels by percents. We can compare them because, while for positive AVaR the value −AVaR is negative, the confidence levels take negative values until 50 percent and then become positive.
We then compare the coherent risk measure with the entropic risk measure to see which performs better. We calculate the entropic risk measure with the formula

$$\rho^{\mathrm{ent}}(X) = \frac{1}{\theta}\log\big(\mathbb{E}[e^{\theta X}]\big) = \sup_Q\left\{\mathbb{E}_Q[X] - \frac{1}{\theta}H(Q \mid \mathbb{P})\right\}.$$
The figure is drawn for equal values of θ and α so that the two measures can be compared; we see that the AVaR value is greater than the entropic risk measure value only when θ = α = 0.1 and θ = α = 0.9.
We then compare AVaR(X) and −AVaR(−X) with ERM(X) and −ERM(−X) to see how the entropic risk measure behaves when used as a confidence level.
Figure 3-9: Coherent risk measure and entropic risk measure [Listing A.6]
Figure 3-10: AVaR and ERM with positive/negative signs [Listing A.10]
Conclusion
The main purpose of this thesis is to make predictions using risk measures. The main tool was a coherent risk measure, the AVaR. To achieve this, we first collected data on USD/KZT and its main affecting factors from investfunds.kz and performed a linear regression analysis.
Second, we calculated the sample mean and sample standard deviation of the prediction errors. Using the Kolmogorov-Smirnov test, we concluded that the errors come from a normal distribution. Third, we calculated AVaRs and the negative values of AVaRs in order to plot them together with confidence levels. Used as confidence levels, AVaRs with different α values gave suitable intervals for the errors: each interval between the AVaR and minus the AVaR at level α contains about α·100 percent of the errors. We also used the entropic risk measure in place of AVaR and plotted the AVaR confidence levels together with the entropic risk measure confidence levels.
Appendix A Code
import numpy as np
import matplotlib.pyplot as plt
import scipy
import scipy.integrate as integrate
import pandas as pd
import math
import random
import seaborn as sns
from scipy.stats import norm, chi2, weibull_min
from sklearn.linear_model import LinearRegression
from statsmodels.graphics.gofplots import qqplot_2samples

# fit the regression and compute the prediction errors
df = pd.read_excel('data.xls')
y_1 = df['kzt']
y_exact_1 = np.array(y_1)
x_1 = df[['brent', 'rur']]
reg_1 = LinearRegression().fit(x_1, y_1)
y_pred_1 = reg_1.predict(x_1)
error_1 = np.array(y_pred_1 - y_exact_1)
std_1 = np.std(error_1)
mean_1 = np.mean(error_1)

# VaR of a normal distribution as the alpha-quantile, formula (2.1)
def VarNorm(alpha, mean, stddev):
    s = scipy.stats.norm.ppf(alpha, mean, stddev)
    return s

# AVaR by numerical integration of the quantile function, formula (2.2)
def AVaR(k, mean, std):
    result = (1 / (1 - k)) * float(integrate.quad(lambda x: VarNorm(x, mean, std), k, 1)[0])
    return result
Listing A.1: AVAR
X = np.arange(0, 1, 0.1)
data_avar = [[AVaR(0, mean_1, std_1), AVaR(0.1, mean_1, std_1), AVaR(0.2, mean_1, std_1),
              AVaR(0.3, mean_1, std_1), AVaR(0.4, mean_1, std_1), AVaR(0.5, mean_1, std_1),
              AVaR(0.6, mean_1, std_1), AVaR(0.7, mean_1, std_1), AVaR(0.8, mean_1, std_1),
              AVaR(0.9, mean_1, std_1)]]

plt.bar(X, data_avar[0], color='red', width=0.05)

colors = {'AVaR': 'red'}
labels = list(colors.keys())
handles = [plt.Rectangle((0, 0), 1, 1, color=colors[label]) for label in labels]
plt.legend(handles, labels)
plt.xlabel('Alpha')
plt.ylabel('AVaR')
plt.title('AVaR whole data')
Listing A.2: Calculation of AVaR
error_2 = np.array(y_exact_1 - y_pred_1)
std_2 = np.std(error_2)
mean_2 = np.mean(error_2)

X = np.arange(10)
data = -1 * np.array([[AVaR(0, mean_2, std_2), AVaR(0.1, mean_2, std_2), AVaR(0.2, mean_2, std_2),
                       AVaR(0.3, mean_2, std_2), AVaR(0.4, mean_2, std_2), AVaR(0.5, mean_2, std_2),
                       AVaR(0.6, mean_2, std_2),
                       AVaR(0.7, mean_2, std_2), AVaR(0.8, mean_2, std_2), AVaR(0.9, mean_2, std_2)]])
data1 = -1 * data
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.bar(X + 0.00, data1[0], color='b', width=0.5)
ax.bar(X, data[0], color='g', width=0.5)
colors = {'AVaR(X)': 'blue', '-AVaR(-X)': 'green'}
labels = list(colors.keys())
handles = [plt.Rectangle((0, 0), 1, 1, color=colors[label]) for label in labels]
ax.legend(handles, labels)
Listing A.3: AVaR and -AVaR
sns.set_style('darkgrid')
sns.distplot(error_1)
Listing A.4: Histogram
# X is assumed here to be np.arange(0.1, 1, 0.1) (nine alpha levels)
data = np.array([[AVaR(0.1, mean_2, std_2), AVaR(0.2, mean_2, std_2),
                  AVaR(0.3, mean_2, std_2), AVaR(0.4, mean_2, std_2), AVaR(0.5, mean_2, std_2),
                  AVaR(0.6, mean_2, std_2),
                  AVaR(0.7, mean_2, std_2), AVaR(0.8, mean_2, std_2), AVaR(0.9, mean_2, std_2)]])
data1 = np.multiply(data, -1)
data2 = [[VarNorm(0.775, 0, 1) * std_2 + mean_2, VarNorm(0.8, 0, 1) * std_2 + mean_2,
          VarNorm(0.825, 0, 1) * std_2 + mean_2,
          VarNorm(0.85, 0, 1) * std_2 + mean_2, VarNorm(0.875, 0, 1) * std_2 + mean_2,
          VarNorm(0.9, 0, 1) * std_2 + mean_2, VarNorm(0.925, 0, 1) * std_2 + mean_2,
          VarNorm(0.95, 0, 1) * std_2 + mean_2, VarNorm(0.975, 0, 1) * std_2 + mean_2]]
data3 = np.multiply(data2, -1)
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.bar(X + 0.05, data2[0], color='g', width=0.04)
ax.bar(X, data[0], color='b', width=0.04)
ax.bar(X + 0.05, data3[0], color='g', width=0.04)
ax.bar(X, data1[0], color='b', width=0.04)
colors = {'AVaR(X)': 'blue', 'Conf lvl': 'green'}
labels = list(colors.keys())
handles = [plt.Rectangle((0, 0), 1, 1, color=colors[label]) for label in labels]
ax.legend(handles, labels)
Listing A.5: AVaR and Conf lvl
def ERM_hand(teta, sigma):
    # analytically this equals teta * sigma**2 / 2 (see Section 2.7)
    e = 1 / teta * np.log(np.exp(teta * teta * sigma * sigma / 2))
    return e

# mean_normal and std_normal are assumed to be defined earlier
# (0 and 1 for the standard normal used in the text)
b = std_normal
X = np.arange(0.1, 1, 0.1)
data = [[ERM_hand(0.1, b), ERM_hand(0.2, b), ERM_hand(0.3, b), ERM_hand(0.4, b),
         ERM_hand(0.5, b), ERM_hand(0.6, b), ERM_hand(0.7, b), ERM_hand(0.8, b),
         ERM_hand(0.9, b)]]
data2 = [[AVaR(0.1, mean_normal, std_normal), AVaR(0.2, mean_normal, std_normal),
          AVaR(0.3, mean_normal, std_normal), AVaR(0.4, mean_normal, std_normal),
          AVaR(0.5, mean_normal, std_normal), AVaR(0.6, mean_normal, std_normal),
          AVaR(0.7, mean_normal, std_normal), AVaR(0.8, mean_normal, std_normal),
          AVaR(0.9, mean_normal, std_normal)]]
plt.bar(X, data[0], color='red', width=0.05)
plt.bar(X + 0.05, data2[0], color='blue', width=0.05)
colors = {'Entropic risk measure': 'red', 'AVaR': 'blue'}
labels = list(colors.keys())
handles = [plt.Rectangle((0, 0), 1, 1, color=colors[label]) for label in labels]
plt.legend(handles, labels)
plt.xlabel('Teta - Alpha')
plt.ylabel('ERM - AVaR')
plt.title('ERM with Standard Normal dist and AVaR')
Listing A.6: ERM with Standard Normal dist and AVaR
s = np.random.normal(mean_1, std_1, 243)
fig = qqplot_2samples(error_1, s, line='45')
plt.show()
Listing A.7: QQ-plot
# means over the right tails of the error array (last 10%, 20%, ..., 100%)
mean_100 = np.mean(error_1)
mean_10 = np.mean(error_1[225:])
mean_20 = np.mean(error_1[200:])
mean_30 = np.mean(error_1[175:])
mean_40 = np.mean(error_1[150:])
mean_50 = np.mean(error_1[125:])
mean_60 = np.mean(error_1[100:])
mean_70 = np.mean(error_1[75:])
mean_80 = np.mean(error_1[50:])
mean_90 = np.mean(error_1[25:])
data_tail = [[mean_100, mean_90, mean_80, mean_70, mean_60, mean_50,
              mean_40, mean_30, mean_20, mean_10]]
plt.bar(X, data_avar[0], color='red', width=0.05)
plt.bar(X + 0.05, data_tail[0], color='blue', width=0.05)
colors = {'AVaR': 'red', 'Tail_mean': 'blue'}
labels = list(colors.keys())
handles = [plt.Rectangle((0, 0), 1, 1, color=colors[label]) for label in labels]
plt.legend(handles, labels)
plt.xlabel('Tail percent - Alpha')
plt.ylabel('AVaR - Tail_mean')
plt.title('Tail mean and AVaR')
Listing A.8: Tail mean and AVaR
def VarChi2(x, y):
    s = scipy.stats.chi2.ppf(x, df=y)
    return s
def AvarChi2(t, y):
    result = (1 / (1 - t)) * float(integrate.quad(lambda x: VarChi2(x, y), t, 1)[0])
    return result
k = np.arange(0.1, 1, 0.1)
y = []
for j in range(9):
    for i in range(9):
        y.append(AvarChi2((i + 1) / 10, j + 1))
plt.plot(k, y[:9], 'g', label='df=1')
plt.plot(k, y[9:18], 'r', label='df=2')
plt.plot(k, y[18:27], 'blue', label='df=3')
plt.plot(k, y[27:36], 'yellow', label='df=4')
plt.plot(k, y[36:45], 'violet', label='df=5')
plt.plot(k, y[45:54], 'brown', label='df=6')
plt.plot(k, y[54:63], 'purple', label='df=7')
plt.plot(k, y[63:72], 'chocolate', label='df=8')
plt.plot(k, y[72:], 'coral', label='df=9')
plt.legend()
plt.xlabel('alpha', fontsize=20)
plt.ylabel('AVaR at alpha', fontsize=20)
plt.title('Chi2', fontsize=30)
plt.show()

def VarWei(x, y):
    s = scipy.stats.weibull_min.ppf(x, y)
    return s
def AvarWei(t, y):
    AVaR = (1 / (1 - t)) * float(integrate.quad(lambda x: VarWei(x, y), t, 1)[0])
    return AVaR
plt.figure()
k = np.arange(0.1, 1., 0.1)
y = []
for j in range(9):
    for i in range(9):
        y.append(AvarWei((i + 1) / 10, j + 1))
plt.plot(k, y[:9], 'g', label='k=1')
plt.plot(k, y[9:18], 'r', label='k=2')
plt.plot(k, y[18:27], 'blue', label='k=3')
plt.plot(k, y[27:36], 'yellow', label='k=4')
plt.plot(k, y[36:45], 'violet', label='k=5')
plt.plot(k, y[45:54], 'brown', label='k=6')
plt.plot(k, y[54:63], 'purple', label='k=7')
plt.plot(k, y[63:72], 'chocolate', label='k=8')
plt.plot(k, y[72:], 'coral', label='k=9')
plt.legend()
plt.xlabel('alpha', fontsize=20)
plt.ylabel('AVaR at alpha', fontsize=20)
plt.title('Weibull', fontsize=30)
plt.show()

# Student-t via the non-central t with a small fixed non-centrality parameter
def VarStudent(x, y, z):
    s = scipy.stats.nct.ppf(x, y, z)
    return s
def AvarStudent(t, y):
    AVaR = (1 / (1 - t)) * float(integrate.quad(lambda x: VarStudent(x, y, 0.240450313312), t, 1)[0])
    return AVaR
plt.figure()
k = np.arange(0.1, 1., 0.1)
y = []
for j in range(9):
    for i in range(9):
        y.append(AvarStudent((i + 1) / 10, j + 1))
plt.plot(k, y[:9], 'g', label='k=1')
plt.plot(k, y[9:18], 'r', label='k=2')
plt.plot(k, y[18:27], 'blue', label='k=3')
plt.plot(k, y[27:36], 'yellow', label='k=4')
plt.plot(k, y[36:45], 'violet', label='k=5')
plt.plot(k, y[45:54], 'brown', label='k=6')
plt.plot(k, y[54:63], 'purple', label='k=7')
plt.plot(k, y[63:72], 'chocolate', label='k=8')
plt.plot(k, y[72:], 'coral', label='k=9')
plt.legend()
plt.xlabel('alpha', fontsize=20)
plt.ylabel('AVaR at alpha', fontsize=20)
plt.title('Student-T', fontsize=30)
plt.show()

def VarExp(x, y):
    s = scipy.stats.expon.ppf(x, scale=1 / y)
    return s
def AvarExp(t, y):
    AVaR = (1 / (1 - t)) * float(integrate.quad(lambda x: VarExp(x, y), t, 1)[0])
    return AVaR
plt.figure()
k = np.arange(0.1, 1., 0.1)
y = []
for j in range(9):
    for i in range(9):
        y.append(AvarExp((i + 1) / 10, j + 1))
plt.plot(k, y[:9], 'g', label='lambda=1')
plt.plot(k, y[9:18], 'r', label='lambda=2')
plt.plot(k, y[18:27], 'blue', label='lambda=3')
plt.plot(k, y[27:36], 'yellow', label='lambda=4')
plt.plot(k, y[36:45], 'violet', label='lambda=5')
plt.plot(k, y[45:54], 'brown', label='lambda=6')
plt.plot(k, y[54:63], 'purple', label='lambda=7')
plt.plot(k, y[63:72], 'chocolate', label='lambda=8')
plt.plot(k, y[72:], 'coral', label='lambda=9')
plt.legend()
plt.xlabel('alpha', fontsize=20)
plt.ylabel('AVaR at alpha', fontsize=20)
plt.title('Exponential', fontsize=30)
plt.show()

# note: this redefines the VarNorm of Listing A.1 with a (mean 0, std y) signature
def VarNorm(x, y):
    s = scipy.stats.norm.ppf(x, 0, y)
    return s
def AvarNorm(t, y):
    AVaR = (1 / (1 - t)) * float(integrate.quad(lambda x: VarNorm(x, y), t, 1)[0])
    return AVaR
plt.figure()
k = np.arange(0.1, 1., 0.1)
y = []
for j in range(9):
    for i in range(9):
        y.append(AvarNorm((i + 1) / 10, j + 1))
plt.plot(k, y[:9], 'g', label='std=1')
plt.plot(k, y[9:18], 'r', label='std=2')
plt.plot(k, y[18:27], 'blue', label='std=3')
plt.plot(k, y[27:36], 'yellow', label='std=4')
plt.plot(k, y[36:45], 'violet', label='std=5')
plt.plot(k, y[45:54], 'brown', label='std=6')
plt.plot(k, y[54:63], 'purple', label='std=7')
plt.plot(k, y[63:72], 'chocolate', label='std=8')
plt.plot(k, y[72:], 'coral', label='std=9')
plt.legend()
plt.xlabel('alpha', fontsize=20)
plt.ylabel('AVaR at alpha', fontsize=20)
plt.title('Gaussian', fontsize=30)
plt.show()
Listing A.9: AVaRs of different distributions
# mean_normal and std_normal are assumed to be defined earlier
# (0 and 1 for the standard normal used in the text)
b = std_normal
X = np.arange(0.1, 1, 0.1)
data = [[ERM_hand(0.1, b), ERM_hand(0.2, b), ERM_hand(0.3, b), ERM_hand(0.4, b),
         ERM_hand(0.5, b), ERM_hand(0.6, b), ERM_hand(0.7, b), ERM_hand(0.8, b),
         ERM_hand(0.9, b)]]
data2 = [[AVaR(0.1, mean_normal, std_normal), AVaR(0.2, mean_normal, std_normal),
          AVaR(0.3, mean_normal, std_normal), AVaR(0.4, mean_normal, std_normal),
          AVaR(0.5, mean_normal, std_normal), AVaR(0.6, mean_normal, std_normal),
          AVaR(0.7, mean_normal, std_normal), AVaR(0.8, mean_normal, std_normal),
          AVaR(0.9, mean_normal, std_normal)]]
data3 = [[-ERM_hand(0.1, b), -ERM_hand(0.2, b), -ERM_hand(0.3, b), -ERM_hand(0.4, b),
          -ERM_hand(0.5, b), -ERM_hand(0.6, b), -ERM_hand(0.7, b), -ERM_hand(0.8, b),
          -ERM_hand(0.9, b)]]
data4 = [[-AVaR(0.1, mean_normal, std_normal), -AVaR(0.2, mean_normal, std_normal),
          -AVaR(0.3, mean_normal, std_normal), -AVaR(0.4, mean_normal, std_normal),
          -AVaR(0.5, mean_normal, std_normal), -AVaR(0.6, mean_normal, std_normal),
          -AVaR(0.7, mean_normal, std_normal), -AVaR(0.8, mean_normal, std_normal),
          -AVaR(0.9, mean_normal, std_normal)]]

plt.bar(X, data[0], color='red', width=0.05)
plt.bar(X + 0.05, data2[0], color='blue', width=0.05)
plt.bar(X, data3[0], color='red', width=0.05)
plt.bar(X + 0.05, data4[0], color='blue', width=0.05)  # the source is truncated here; width assumed to match the other bars
Listing A.10: AVaR and ERM with positive/negative signs