In continuation of some previous posts on residual estimation risk (RER), we establish an upper bound on RER when the risk measure is VaR, for an arbitrary error distribution, where the error distribution is defined as the difference between the actual loss distribution and a loss estimator (see [1] for more details).

**Asymmetric Error Distribution**

For an arbitrary error distribution we can establish an upper bound on RER under VaR. This result follows from Chebyshev's inequality, which places an upper bound on the probability of extreme observations in an arbitrary distribution. Mathematically, if $latex X$ is a random variable with finite mean $latex \mu$ and finite non-zero variance $latex \sigma^2$, then for positive $latex k$,

$latex P(|X - \mu| \geq k\sigma) \leq \frac{1}{k^2}.$

This places an upper bound on the probability of observing values more than $latex k$ standard deviations above or below the mean. Equivalently, it tells us the proportion of observations expected to fall within $latex k$ standard deviations of the mean. It follows from the above inequality that

$latex P(X - \mu \geq k\sigma) \leq \frac{1}{k^2}.$

Thus,

$latex P(X \leq \mu + k\sigma) \geq 1 - \frac{1}{k^2}.$

By setting $latex x = \mu + k\sigma$ and $latex \alpha = 1 - 1/k^2$ in the definition of $latex VaR$ from equation (3), and taking the infimum over $latex x$ of both sides, we obtain

$latex VaR_{1 - 1/k^2}(\epsilon) \leq \mu + k\sigma.$

This result places an upper bound on RER under VaR for any arbitrary error distribution whose mean and standard deviation are known.
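As a numerical sanity check, the bound $latex \mu + k\sigma$ at confidence level $latex 1 - 1/k^2$ can be compared against the empirical VaR of a skewed sample. The lognormal error sample, the sample size, and the `empirical_var` helper below are all assumptions made purely for illustration:

```python
import math
import random
import statistics

def empirical_var(sample, alpha):
    """Empirical VaR: smallest order statistic whose CDF value reaches alpha."""
    s = sorted(sample)
    return s[max(math.ceil(alpha * len(s)) - 1, 0)]

random.seed(0)
# Hypothetical asymmetric error sample: a lognormal "actual" loss minus a
# constant estimator (illustrative only).
errors = [random.lognormvariate(0.0, 0.5) - 1.0 for _ in range(100_000)]

mu = statistics.fmean(errors)
sigma = statistics.stdev(errors)

for k in (1.5, 2.0, 3.0):
    alpha = 1.0 - 1.0 / k**2        # confidence level implied by Chebyshev
    bound = mu + k * sigma          # upper bound on VaR at that level
    var = empirical_var(errors, alpha)
    assert var <= bound
    print(f"k={k}: VaR_{alpha:.3f} = {var:.3f} <= {bound:.3f}")
```

The bound holds with room to spare here, which is expected: Chebyshev's inequality is distribution-free and therefore conservative for any particular distribution.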

**Symmetric Error Distribution**

If the error distribution is symmetric, we may obtain a sharper result, since $latex P(X - \mu \geq k\sigma) = \frac{1}{2}P(|X - \mu| \geq k\sigma)$. Thus,

$latex P(X \leq \mu + k\sigma) \geq 1 - \frac{1}{2k^2}.$

By setting $latex x = \mu + k\sigma$ and $latex \alpha = 1 - 1/(2k^2)$ in the definition of $latex VaR$ from equation (3), and taking the infimum over $latex x$ of both sides, we obtain

$latex VaR_{1 - 1/(2k^2)}(\epsilon) \leq \mu + k\sigma.$

This result places an upper bound on RER under VaR for any symmetric error distribution whose mean and standard deviation are known.

To demonstrate the improvement in sharpness between the asymmetric and symmetric cases, suppose $latex k = \sqrt{2}$. Due to the monotonicity of the risk measure, it is clear that $latex VaR_{0.5}(\epsilon) \leq VaR_{0.75}(\epsilon) \leq \mu + \sqrt{2}\sigma$. In other words, the confidence level of VaR covered by the same bound improves from 50% to 75% when the error distribution is symmetric.
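A quick simulation illustrates this 50% to 75% improvement. The standard normal error sample below is a hypothetical choice for illustration; the single bound $latex \mu + \sqrt{2}\sigma$ covers both confidence levels:

```python
import math
import random
import statistics

def empirical_var(sorted_sample, alpha):
    """Empirical VaR on an already-sorted sample."""
    return sorted_sample[max(math.ceil(alpha * len(sorted_sample)) - 1, 0)]

random.seed(1)
# Symmetric (standard normal) error sample -- illustrative only.
errors = sorted(random.gauss(0.0, 1.0) for _ in range(100_000))

mu = statistics.fmean(errors)
sigma = statistics.stdev(errors)
k = math.sqrt(2.0)
bound = mu + k * sigma                 # one bound, two confidence levels

var_50 = empirical_var(errors, 0.50)   # asymmetric level: 1 - 1/k^2 = 0.50
var_75 = empirical_var(errors, 0.75)   # symmetric level: 1 - 1/(2k^2) = 0.75
assert var_50 <= var_75 <= bound
print(f"VaR_0.50 = {var_50:.3f}, VaR_0.75 = {var_75:.3f}, bound = {bound:.3f}")
```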

If we instead have the means and standard deviations of the actual and estimated loss distributions, then we may decompose the upper bound into these components via

$latex VaR_{1 - 1/k^2}(\epsilon) \leq (\mu_L - \mu_{\hat{L}}) + k(\sigma_L + \sigma_{\hat{L}}),$

since $latex \mu = \mu_L - \mu_{\hat{L}}$ and $latex \sigma \leq \sigma_L + \sigma_{\hat{L}}$. The above inequality establishes an upper bound on RER under VaR for arbitrary distributions of $latex L$ and $latex \hat{L}$.
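This decomposition can be checked numerically. The sketch below assumes the decomposition of the mean and the triangle-inequality step for the standard deviation; the simulated loss and estimator distributions are hypothetical:

```python
import random
import statistics

random.seed(2)
# Hypothetical correlated actual and estimated losses (illustrative only).
L    = [random.gauss(10.0, 2.0) for _ in range(100_000)]
Lhat = [x + random.gauss(-0.5, 1.0) for x in L]   # estimator tracks L with noise

errors = [a - b for a, b in zip(L, Lhat)]
mu_eps, sd_eps = statistics.fmean(errors), statistics.stdev(errors)

# Component-wise moments give a (weaker) bound without needing the joint sample:
mu_comp = statistics.fmean(L) - statistics.fmean(Lhat)
sd_comp = statistics.stdev(L) + statistics.stdev(Lhat)

k = 2.0
assert abs(mu_eps - mu_comp) < 1e-9   # means decompose exactly
assert sd_eps <= sd_comp              # triangle inequality for std. deviations
assert mu_eps + k * sd_eps <= mu_comp + k * sd_comp
```

The component-wise bound is weaker whenever the estimator is positively correlated with the actual loss, as here, since the correlation shrinks the error's standard deviation below $latex \sigma_L + \sigma_{\hat{L}}$.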

**Critical Value of $latex k$ when $latex RER = 0$**

Since the first two moments of the above distributions are known for any empirical distribution, one may solve for $latex k$ in this implicit equation to obtain the critical value at which RER is completely eliminated. Setting $latex \mu + k\sigma = 0$ obtains

$latex k^* = -\frac{\mu}{\sigma}.$

Since $latex \sigma > 0$, we have the following corollary: there exists a $latex k^* \geq 0$ such that $latex VaR_{1 - 1/k^2}(\epsilon) \leq 0$ if $latex \mu \leq 0$.

Thus, we may explicitly solve for the number of standard deviations away from the mean of the error distribution ($latex k^*$) that ensures RER is equal to zero.
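For concreteness, the computation below solves for $latex k^*$ given hypothetical error-distribution moments (the values of $latex \mu$ and $latex \sigma$ are assumptions for illustration):

```python
# Hypothetical error-distribution moments; a negative mean is needed for k* > 0.
mu, sigma = -1.2, 0.8

k_star = -mu / sigma                      # critical number of standard deviations
assert abs(mu + k_star * sigma) < 1e-12   # the upper bound vanishes at k*

# Confidence level at which the asymmetric Chebyshev bound equals zero
# (meaningful when k* > 1):
alpha_star = 1.0 - 1.0 / k_star**2
print(f"k* = {k_star}, alpha* = {alpha_star:.3f}")
```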

The figures below demonstrate the sharpness of this upper bound for a normal distribution with respect to the number of standard deviations away from the mean, $latex k$, and the confidence level, $latex \alpha$.

The plots indicate that as the volatility of the distribution increases (green to black curves), so does the tightness of the bound. Furthermore, since the normal distribution has support on $latex (-\infty, \infty)$, as $latex \alpha \to 0$ or $latex \alpha \to 1$ the error between RER and the upper bound approaches infinity. An equivalent relationship holds as $latex k \to 0$ or $latex k \to \infty$.
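The divergence between the bound and the true normal VaR as $latex \alpha \to 1$ is easy to reproduce numerically; the standard normal parameters below are illustrative:

```python
import math
from statistics import NormalDist

mu, sigma = 0.0, 1.0
nd = NormalDist(mu, sigma)

for alpha in (0.60, 0.90, 0.99, 0.999):
    k = 1.0 / math.sqrt(1.0 - alpha)   # invert alpha = 1 - 1/k^2
    bound = mu + k * sigma             # Chebyshev upper bound
    exact = nd.inv_cdf(alpha)          # true normal quantile (VaR)
    print(f"alpha={alpha}: exact={exact:.3f}, bound={bound:.3f}, "
          f"gap={bound - exact:.3f}")
```

The gap grows because the Chebyshev bound increases like $latex 1/\sqrt{1-\alpha}$ while the normal quantile grows only like $latex \sqrt{-2\ln(1-\alpha)}$.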

**Limiting Case where $latex k \to \infty$**

In the case where an arbitrary error distribution has support on $latex [a, b]$, where $latex b < \infty$, we can deduce the following:

$latex \lim_{k \to \infty} VaR_{1 - 1/k^2}(\epsilon) = b.$

Since $latex VaR_\alpha(\epsilon) \leq b$ for every confidence level $latex \alpha$, while the upper bound $latex \mu + k\sigma$ grows without limit as $latex k \to \infty$, the bound becomes increasingly conservative in this limiting case.

**Reference**

[1] Bignozzi, Valeria and Tsanakas, Andreas, "Parameter Uncertainty and Residual Estimation Risk" (September 29, 2014). This is a preprint of an article accepted for publication in the Journal of Risk and Insurance, (c) 2014 The American Risk and Insurance Association. Available at SSRN: http://ssrn.com/abstract=2158779 or http://dx.doi.org/10.2139/ssrn.2158779