Upper Bound for RER under VaR

Continuing the previous posts on residual estimation risk (RER), we establish an upper bound on RER when the risk measure is VaR, for an arbitrary error distribution X defined as the difference between the actual loss distribution Y and a loss estimator \hat{Y} (see [1] for more details).

 

Asymmetric Error Distribution

For an arbitrary error distribution we can establish an upper bound on RER under VaR. The result follows from Chebyshev's inequality, which bounds the probability mass an arbitrary distribution can place far from its mean. Mathematically, if X is a random variable with finite mean \mu_X and finite non-zero variance \sigma_X^2, then for any positive k \in \mathbb{R},

\Pr\left( \left| X - \mu_X \right| \ge k\sigma_X \right) \le \frac{1}{k^2}

This bounds the probability of observing values more than k standard deviations from the mean or, equivalently, guarantees a minimum proportion of observations within k standard deviations of the mean. Following from the above inequality,

\frac{1}{k^2} \ge \Pr\left( \left| \mu_X - X \right| \ge k\sigma_X \right)

= \Pr\left( \mu_X - X \ge k\sigma_X \right) + \Pr\left( \mu_X - X \le -k\sigma_X \right) \ge \Pr\left( \mu_X - X \le -k\sigma_X \right)

= \Pr\left( -X \le -\mu_X - k\sigma_X \right) = \Pr\left( X \ge \mu_X + k\sigma_X \right) \ge 1 - \Pr\left( X \le \mu_X + k\sigma_X \right)

Thus,

\frac{1}{k^2} \ge 1 - \Pr\left( X \le \mu_X + k\sigma_X \right)

\Pr\left( X \le \mu_X + k\sigma_X \right) \ge 1 - \frac{1}{k^2}

By setting m = \mu_X + k\sigma_X and p = 1 - \frac{1}{k^2} in the definition of VaR from equation (3), and taking the infimum over m on both sides, we obtain

RER := VaR_{1-\frac{1}{k^2}}\left( X \right) \le \mu_X + k\sigma_X .

This places an upper bound on RER under VaR for any error distribution with known mean and standard deviation.
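As a quick numerical sanity check, here is a minimal sketch (assuming an arbitrary lognormal error sample and a few illustrative values of k) that compares the empirical p-quantile of the error, with p = 1 - \frac{1}{k^2}, against the bound \mu_X + k\sigma_X:

import numpy as np

# Illustrative check of the Chebyshev-based bound VaR_{1-1/k^2}(X) <= mu_X + k*sigma_X.
# The lognormal "error" sample below is an arbitrary choice, not the method in [1].
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

mu_x, sigma_x = x.mean(), x.std()
for k in (1.5, 2.0, 3.0):
    p = 1.0 - 1.0 / k**2                 # confidence level implied by k
    var_p = np.quantile(x, p)            # empirical VaR_p(X), i.e. the p-quantile
    bound = mu_x + k * sigma_x           # Chebyshev upper bound
    print(f"k = {k}: VaR_{p:.3f}(X) = {var_p:.4f} <= {bound:.4f}")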

 

Symmetric Error Distribution

If the error distribution is symmetric about its mean, we may obtain a sharper result: symmetry implies \Pr\left( X \ge \mu_X + k\sigma_X \right) = \frac{1}{2}\Pr\left( \left| X - \mu_X \right| \ge k\sigma_X \right) \le \frac{1}{2k^2}, and hence

\Pr\left( X \le \mu_X + k\sigma_X \right) \ge 1 - \frac{1}{2k^2}

By setting m = \mu_X + k\sigma_X and p = 1 - \frac{1}{2k^2} in the definition of VaR from equation (3), and taking the infimum over m on both sides, we obtain

RER := VaR_{1-\frac{1}{2k^2}}\left( X \right) \le \mu_X + k\sigma_X .

This places an upper bound on RER under VaR for any symmetric error distribution with known mean and standard deviation.

To illustrate the gain in sharpness between the asymmetric and symmetric cases, suppose k = \sqrt{2}. By monotonicity of the risk measure, VaR_{0.5}\left( X \right) \le VaR_{0.75}\left( X \right) \le \mu_X + \sqrt{2}\sigma_X. In other words, the confidence level covered by the same bound improves from 50% to 75% when the error distribution is symmetric.
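A minimal numerical sketch of this comparison (assuming a standard normal error sample, chosen only because it is symmetric):

import numpy as np

# Numerical illustration of the k = sqrt(2) comparison. A standard normal
# error sample is assumed here purely for illustration.
rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)

mu_x, sigma_x = x.mean(), x.std()
k = np.sqrt(2.0)
var_50 = np.quantile(x, 0.50)   # level guaranteed by the general (asymmetric) bound
var_75 = np.quantile(x, 0.75)   # level guaranteed by the symmetric bound
print(var_50, "<=", var_75, "<=", mu_x + k * sigma_x)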

If we instead know the means and standard deviations of the actual and estimated loss distributions, we may decompose the upper bound into these components via

RER := VaR_{1-\frac{1}{k^2}}\left( Y - \hat{Y} \right) \le \mu_Y - \mu_{\hat{Y}} + k\sqrt{\sigma_Y^2 + \sigma_{\hat{Y}}^2 - 2\,\mathrm{cov}\left( Y, \hat{Y} \right)} .

This follows since \mu_X = E\left[ X \right] = E\left[ Y - \hat{Y} \right] = \mu_Y - \mu_{\hat{Y}} and \sigma_X^2 = \sigma_{Y-\hat{Y}}^2 = \sigma_Y^2 + \sigma_{\hat{Y}}^2 - 2\,\mathrm{cov}\left( Y, \hat{Y} \right). The inequality above establishes an upper bound on RER under VaR for arbitrary distributions of Y and \hat{Y}.
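A sketch of the decomposed bound under a purely hypothetical joint distribution for Y and \hat{Y} (a noisy linear estimator of normal losses):

import numpy as np

# Sketch of the decomposed bound. The joint distribution of Y and Y_hat below
# is hypothetical and chosen only for illustration.
rng = np.random.default_rng(2)
y = rng.normal(loc=100.0, scale=10.0, size=100_000)              # actual losses
y_hat = 0.9 * y + rng.normal(loc=8.0, scale=4.0, size=100_000)   # crude estimator

k = 2.0
mu_x = y.mean() - y_hat.mean()
var_x = y.var() + y_hat.var() - 2.0 * np.cov(y, y_hat, bias=True)[0, 1]
bound = mu_x + k * np.sqrt(var_x)

p = 1.0 - 1.0 / k**2
rer = np.quantile(y - y_hat, p)   # empirical VaR_p of the error X = Y - Y_hat
print(rer, "<=", bound)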

 

Critical Value of k when RER=0

Since VaR and the first two moments of the above distributions are known for any empirical distribution, one may solve for k in this implicit relation to obtain the critical value at which RER is completely eliminated. Setting RER = 0 yields

k^{*} \ge -\frac{\mu_X}{\sigma_X}

Since \sigma_X > 0 and k > 0, we have the following corollary: there exists a k^{*} such that RER = 0 if \mu_X < 0. Equivalently, in terms of the moments of Y and \hat{Y},

k^{*} \ge \frac{\mu_{\hat{Y}} - \mu_Y}{\sqrt{\sigma_Y^2 + \sigma_{\hat{Y}}^2 - 2\,\mathrm{cov}\left( Y, \hat{Y} \right)}}

Thus, we may explicitly solve for the number of standard deviations away from the mean of the error distribution X = Y - \hat{Y} that ensures RER equals zero.
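For instance, given hypothetical moments of the error distribution, the critical value follows directly:

# Solving for the critical k* = -mu_X / sigma_X. The moments below are
# hypothetical values chosen so that mu_X < 0 and the bound can reach zero.
mu_x, sigma_x = -3.0, 2.0

if mu_x < 0:
    k_star = -mu_x / sigma_x                       # number of standard deviations
    print("critical k* =", k_star)
    if k_star > 1:
        print("implied confidence level =", 1.0 - 1.0 / k_star**2)
else:
    print("mu_X >= 0: no positive k* drives the upper bound to zero")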

The figures below demonstrate the sharpness of this upper bound for a normal distribution, plotted against the number of standard deviations away from the mean, k, and against the confidence level.

[Figure 1 and Figure 2: sharpness of the upper bound for a normal error distribution, against k and against the confidence level.]

The plots indicate that as the volatility of the distribution increases (green to black curves), so does the tightness of the bound. Furthermore, since the normal distribution has support on \left( -\infty, \infty \right), the error between RER and the upper bound approaches infinity as k \to \infty or k \to 1. An equivalent relationship holds as p \to 0 or p \to 1.
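A minimal sketch of this divergence (assuming a standard normal error and an illustrative grid of k values; the exact quantile is taken from scipy's norm.ppf):

import numpy as np
from scipy.stats import norm

# Gap between the exact VaR of a normal error distribution and the Chebyshev
# bound mu_X + k*sigma_X as k varies (the parameters are illustrative only).
mu_x, sigma_x = 0.0, 1.0
for k in (1.01, 1.5, 2.0, 5.0, 10.0):
    p = 1.0 - 1.0 / k**2
    exact_var = norm.ppf(p, loc=mu_x, scale=sigma_x)   # exact VaR_p for the normal
    bound = mu_x + k * sigma_x
    print(f"k = {k}: bound - VaR = {bound - exact_var:.3f}")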

 

Limiting Case where k \to 1

In the case where an arbitrary error distribution has support on \left[ A, B \right], with -\infty \le A \le B \le \infty, we can deduce the following:

\lim_{k \to 1} \left| \, VaR_{1-\frac{1}{k^2}}\left( X \right) - \left( \mu_X + k\sigma_X \right) \right|

= \left| \, VaR_0\left( X \right) - \left( \mu_X + \sigma_X \right) \right| = \left| A - \left( \mu_X + \sigma_X \right) \right|

This follows since

\lim_{k \to 1} VaR_{1-\frac{1}{k^2}}\left( X \right) = VaR_0\left( X \right) = A .
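A minimal sketch of this limiting behaviour, assuming a Uniform(A, B) error distribution purely for illustration:

import numpy as np

# Limiting behaviour for a bounded-support error distribution, illustrated with
# a Uniform(A, B) example (A and B are arbitrary choices). As k -> 1 the quantile
# tends to A and the gap tends to |A - (mu_X + sigma_X)|.
A, B = -2.0, 3.0
mu_x = (A + B) / 2.0                     # mean of Uniform(A, B)
sigma_x = (B - A) / np.sqrt(12.0)        # standard deviation of Uniform(A, B)

for k in (2.0, 1.5, 1.1, 1.01):
    p = 1.0 - 1.0 / k**2
    var_p = A + p * (B - A)              # exact p-quantile of Uniform(A, B)
    print(f"k = {k}: gap = {abs(var_p - (mu_x + k * sigma_x)):.4f}")

print("limit |A - (mu_X + sigma_X)| =", abs(A - (mu_x + sigma_x)))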

 

Reference

[1] Bignozzi, Valeria and Tsanakas, Andreas, Parameter Uncertainty and Residual Estimation Risk (September 29, 2014). Preprint of an article accepted for publication in the Journal of Risk and Insurance. Available at SSRN: http://ssrn.com/abstract=2158779 or http://dx.doi.org/10.2139/ssrn.2158779