P&L exceeding VaR limits is not an uncommon phenomenon. In fact, an accurate VaR model should be designed such that there is some expectation of P&L exceeding the VaR: the expected number of breaches is determined by the confidence level (e.g. a one-day 99% VaR should be exceeded on roughly 1% of trading days). However, in practice, whenever P&L exceeds VaR it is prudent to investigate why the breach occurred.

For OSFI-regulated financial institutions, it is not only prudent but a regulatory requirement. In addition to being used to assess the quality of the VaR model, breaches feed directly into the market risk capital charge (see Chapter 9 of OSFI's Capital Adequacy Requirements (CAR) guideline). It is therefore important for an institution to produce effective and accurate VaR models for its traded risk products and portfolios.

When an institution investigates a VaR breach, it may use *Risk Factor Back-Testing* to determine the source of the breach. Simply put, risk factor back-testing is a hypothesis test in which a risk factor of a VaR model is compared against its historical distribution to determine how extreme the observation is. Repeating this for multiple observations produces a Bernoulli sequence that can be scored with a binomial test.
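The mechanics of that hypothesis test can be sketched in a few lines. The script later in this post is in R; below is a minimal, illustrative Python version (the function names and crude percentile rule are my own, not the author's): each validation observation outside a percentile band of the training sample is flagged as a breach, and the resulting Bernoulli sequence is scored with a binomial CDF.

```python
from math import comb

def breach_indicators(train, valid, p_lo=0.01, p_hi=0.99):
    """Flag each validation observation falling outside the
    [p_lo, p_hi] percentile band of the training distribution."""
    s = sorted(train)
    # Simple empirical percentile (R's quantile() interpolates instead)
    lo = s[int(p_lo * (len(s) - 1))]
    hi = s[int(p_hi * (len(s) - 1))]
    return [1 if (x < lo or x > hi) else 0 for x in valid]

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Toy data: 100 'historical' observations, 3 'recent' ones
flags = breach_indicators(list(range(100)), [-1, 50, 99])
print(flags)                                   # Bernoulli sequence
print(round(binom_cdf(sum(flags), len(flags), 0.02), 4))
```

With a two-sided 1%/99% band, the per-observation breach probability under the null hypothesis is 2%, which is the `p` passed to the binomial CDF.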

The value of a traded risk product (e.g. a bond or swap) depends on a set of variables such as FX spot prices, interest rates, and time; these variables determine the value (and risk) of the product at any point in time. By isolating each variable, we can test whether a recent change in the value, and hence the VaR, of a product can be attributed to a particular movement in one of the underlying variables.
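The back-test below works with day-over-day relative changes of the isolated risk factor rather than its levels. As a quick illustration of that transformation (prices here are hypothetical):

```python
def relative_changes(prices):
    """Day-over-day relative changes: (p[t] - p[t-1]) / p[t-1]."""
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

# Hypothetical USD/CAD closes over three days
print(relative_changes([1.30, 1.32, 1.31]))
```

Note that each change is expressed relative to the *previous* day's level, which is the convention the R script below follows.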

The following code performs risk factor back-testing by specifying a risk factor input file in ascending chronological order ("USDCAD.csv", where column #2 "PX_LAST" contains the factor values), the period used to build the historical distribution (window_train), the number of recent values compared against that distribution (window_valid), and the percentile bands used for the binomial test. In the example below, the 25 most recent percentage changes in the USD/CAD spot rate are compared against a 500-day historical distribution.

```r
#####################################################################
#                 MARKET RISK FACTOR BACKTESTING                    #
#####################################################################
# This script uses a predefined portion of the spot data to build a
# historical distribution and compares the most recent observations
# against it; the oldest "window_train" observations are used to
# construct the distribution.
#####################################################################

############: GLOBAL VARS
directory    <- 'C:\\'
filename     <- 'USDCAD.csv'
spot_name    <- "PX_LAST"
window_valid <- 25
window_train <- 500
pct_min      <- 0.01
pct_max      <- 0.99

#############: MAIN

# INPUT
setwd(directory)
fx_data <- read.csv(filename)

# Day-over-day relative changes: diff(x) divided by the lagged level
fx_chg <- data.frame(Date    = fx_data$Date[-1],
                     rel_chg = diff(fx_data[, spot_name]) /
                               head(fx_data[, spot_name], -1))

n <- nrow(fx_chg)
fx_chg_train <- fx_chg[(n - window_train - window_valid + 1):(n - window_valid), ]
fx_chg_valid <- fx_chg[(n - window_valid + 1):n, ]

# CALCULATE THE BREACHES
min_val <- quantile(fx_chg_train[, 2], pct_min)
max_val <- quantile(fx_chg_train[, 2], pct_max)

BINOMIAL_COUNT <- as.integer(fx_chg_valid[, 2] < min_val |
                             fx_chg_valid[, 2] > max_val)
BINOMIAL_COUNT

# Cumulative probability of observing 0..n breaches in n trials,
# where the per-day breach probability is 1 - (pct_max - pct_min)
pbinom(0:length(BINOMIAL_COUNT), length(BINOMIAL_COUNT),
       1 - (pct_max - pct_min))

# OUTPUT
final_table <- data.frame(Date = fx_chg_valid[, 1], BINOMIAL_COUNT)
hist(fx_chg_train[, 2], breaks = 20, col = 'lavender')
abline(v = fx_chg_valid[, 2], col = 'blue')
abline(v = min_val, col = 'red')
abline(v = max_val, col = 'red')
```

The output is a vector of 25 zeros and ones (where a one indicates a breach of the bounds), the cumulative probability of observing n breaches in m = 25 trials, and a histogram of the training distribution with all 25 recent values plotted relative to the bounds.

In summary, only one day exceeded the bounds of the FX spot distribution (corresponding to a cumulative probability of 91.14%). This 25-day sequence of spot rate changes is therefore considered appropriate and not unlike historical conditions. If we were investigating a VaR model over the past 25 days and this was the outcome, there would not be sufficient evidence to suggest that the market is producing observations inconsistent with the VaR model (constructed using historical simulation on the same 500-day training period).
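That 91.14% figure can be sanity-checked by hand: with a two-sided 1%/99% band, the per-day breach probability under the null is 1 - (0.99 - 0.01) = 2%, and P(at most one breach in 25 days) follows from the binomial distribution. A few lines of standard-library Python reproduce it:

```python
from math import comb

n, p = 25, 0.02  # 25 trading days, 2% per-day breach probability
p_at_most_one = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(2))
print(round(p_at_most_one, 4))  # -> 0.9114
```

This matches the value returned by `pbinom(1, 25, 0.02)` in the R script above.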