In: Finance
Because VaR has certain limitations, managers will often backtest their VaR models. In addition, there are measures that can be used to supplement the information provided by VaR.
List and describe three complementary measures that can be used as supplements to VaR.
Value at risk (VaR) is a statistic that quantifies the level of financial risk within a firm, portfolio, or position over a specific time frame. This metric is most commonly used by investment and commercial banks to determine the extent and likelihood of potential losses in their institutional portfolios.
Risk managers use VaR to measure and control the level of risk exposure. One can apply VaR calculations to specific positions or whole portfolios or to measure firm-wide risk exposure.
Understanding Value at Risk (VaR)
VaR modeling determines the potential for loss in the entity being assessed and the probability of occurrence for the defined loss. One measures VaR by assessing the amount of potential loss, the probability of occurrence for the amount of loss, and the timeframe.
For example, a financial firm may determine an asset has a 3% one-month VaR of 2%, representing a 3% chance of the asset declining in value by 2% during the one-month time frame. Converted to a daily basis, that 3% chance of occurrence puts the odds of a 2% loss at roughly one day per month.
Using a firm-wide VaR assessment allows for the determination of the cumulative risks from aggregated positions held by different trading desks and departments within the institution. Using the data provided by VaR modeling, financial institutions can determine whether they have sufficient capital reserves in place to cover losses or whether higher-than-acceptable risks require them to reduce concentrated holdings.
Example of Problems with Value at Risk (VaR) Calculations
There is no standard protocol for the statistics used to determine asset, portfolio or firm-wide risk. For example, statistics pulled arbitrarily from a period of low volatility may understate the potential for risk events to occur and the magnitude of those events. Risk may be further understated using normal distribution probabilities, which rarely account for extreme or black-swan events.
The assessment of potential loss represents a floor, not a ceiling, on what could be lost in the tail of the distribution. For example, a VaR determination of 95% with 20% asset risk represents an expectation of losing at least 20% on one of every 20 days on average. In this calculation, a loss of 50% still validates the risk assessment.
The financial crisis of 2008 exposed these problems, as relatively benign VaR calculations understated the potential occurrence of risk events posed by portfolios of subprime mortgages. Risk magnitude was also underestimated, which resulted in extreme leverage ratios within subprime portfolios. As a result, the underestimation of occurrence and magnitude left institutions unable to cover billions of dollars in losses as subprime mortgage values collapsed.
The Idea Behind VAR
The most popular and traditional measure of risk is volatility. The main problem with volatility, however, is that it does not care about the direction of an investment's movement: a stock can be volatile because it suddenly jumps higher. Of course, investors aren't distressed by gains.
For investors, risk is about the odds of losing money, and VAR is based on that common-sense fact. By assuming investors care about the odds of a really big loss, VAR answers the question, "What is my worst-case scenario?" or "How much could I lose in a really bad month?"
Now let's get specific. A VAR statistic has three components: a time period, a confidence level and a loss amount (or loss percentage). Keep these three parts in mind as we consider some variations of the question that VAR answers: What is the most I can, with a 95% or 99% level of confidence, expect to lose in dollars over the next month? What is the maximum percentage I can, with 95% or 99% confidence, expect to lose over the next year?
You can see how the "VAR question" has three elements: a relatively high level of confidence (typically either 95% or 99%), a time period (a day, a month or a year) and an estimate of investment loss (expressed either in dollar or percentage terms).
Methods of Calculating VAR
Institutional investors use VAR to evaluate portfolio risk, but in this introduction, we will use it to evaluate the risk of a single index that trades like a stock: the Nasdaq 100 Index, which is traded through the Invesco QQQ Trust. The QQQ is a very popular ETF that tracks the largest non-financial stocks trading on the Nasdaq exchange.
There are three methods of calculating VAR: the historical method, the variance-covariance method, and the Monte Carlo simulation.
1. Historical Method
The historical method simply re-organizes actual historical returns, putting them in order from worst to best. It then assumes that history will repeat itself, from a risk perspective.
As a historical example, let's look at the Nasdaq 100 ETF, which trades under the symbol QQQ (sometimes called the "cubes"), and which started trading in March of 1999. If we calculate each daily return, we produce a rich data set of more than 1,400 points. Let's put them in a histogram that compares the frequency of return "buckets." For example, at the highest point of the histogram (the highest bar), there were more than 250 days when the daily return was between 0% and 1%. At the far right, you can barely see a tiny bar at 13%; it represents the one single day (in January 2000) within a period of five-plus years when the daily return for the QQQ was a stunning 12.4%.
Notice the red bars that compose the "left tail" of the histogram. These are the lowest 5% of daily returns (since the returns are ordered from left to right, the worst are always the "left tail"). The red bars run from daily losses of 4% to 8%. Because these are the worst 5% of all daily returns, we can say with 95% confidence that the worst daily loss will not exceed 4%. Put another way, we expect with 95% confidence that our gain will exceed -4%. That is VAR in a nutshell. Let's re-phrase the statistic in both percentage and dollar terms: with 95% confidence, we expect that our worst daily loss will not exceed 4%; and if we invest $100, we are 95% confident that our worst daily loss will not exceed $4 ($100 x -4%).
You can see that VAR indeed allows for an outcome that is worse than a return of -4%. It does not express absolute certainty but instead makes a probabilistic estimate. If we want to increase our confidence, we need only to "move to the left" on the same histogram, to where the first two red bars, at -8% and -7%, represent the worst 1% of daily returns: with 99% confidence, we expect that the worst daily loss will not exceed 7%.
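The historical calculation can be expressed in a few lines of code. The snippet below is a minimal Python sketch, not the article's own workbook: the function name historical_var and the synthetic return series are illustrative assumptions, and in practice you would feed in the actual QQQ daily returns.

```python
# Minimal sketch of the historical method: sort actual daily returns and
# read off the cutoff of the worst (1 - confidence) share of days.
import numpy as np
import pandas as pd

def historical_var(returns: pd.Series, confidence: float = 0.95) -> float:
    """Return historical VaR as a positive loss fraction (e.g., 0.04 = 4%)."""
    cutoff = np.percentile(returns, (1 - confidence) * 100)  # e.g., the 5th percentile
    return -cutoff  # express the loss as a positive number

# Illustrative usage with synthetic daily returns (mean 0, sigma 2.64%),
# standing in for the ~1,400 actual QQQ observations described above.
daily_returns = pd.Series(np.random.default_rng(0).normal(0.0, 0.0264, 1400))
print(f"95% one-day VaR: {historical_var(daily_returns, 0.95):.2%}")
print(f"99% one-day VaR: {historical_var(daily_returns, 0.99):.2%}")
```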
2. The Variance-Covariance Method
This method assumes that stock returns are normally distributed. In other words, it requires that we estimate only two factors - an expected (or average) return and a standard deviation - which allow us to plot a normal distribution curve. Here we plot the normal curve against the same actual return data:
The idea behind the variance-covariance method is similar to the idea behind the historical method - except that we use the familiar curve instead of actual data. The advantage of the normal curve is that we automatically know where the worst 5% and 1% lie on the curve. They are a function of our desired confidence and the standard deviation, as shown in the table and the short sketch that follow.
Confidence | # of Standard Deviations (σ)
95% (high) | -1.65 × σ
99% (really high) | -2.33 × σ
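These multipliers are simply the 5% and 1% quantiles of the standard normal distribution. A quick way to confirm them, assuming SciPy is available (an assumption of this sketch, not something the article uses):

```python
# The multipliers in the table are standard-normal quantiles.
from scipy.stats import norm

print(norm.ppf(0.05))  # ≈ -1.645 -> the 95%-confidence multiplier
print(norm.ppf(0.01))  # ≈ -2.326 -> the 99%-confidence multiplier
```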
The blue curve above is based on the actual daily standard deviation of the QQQ, which is 2.64%. The average daily return happened to be fairly close to zero, so we will assume an average return of zero for illustrative purposes. Here are the results of plugging the actual standard deviation into the formulas above:
Confidence | # of σ | Calculation | Equals
95% (high) | -1.65 × σ | -1.65 × 2.64% | -4.36%
99% (really high) | -2.33 × σ | -2.33 × 2.64% | -6.15%
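For completeness, here is a tiny Python sketch (my own illustration, not from the article) that reproduces the numbers in the table from the assumed zero mean and the 2.64% daily standard deviation:

```python
# Variance-covariance (parametric) VaR: mean assumed zero, sigma = 2.64% daily.
sigma = 0.0264
for label, z in [("95%", 1.65), ("99%", 2.33)]:
    var = z * sigma                        # worst expected daily loss at this confidence
    print(f"{label} one-day VaR: -{var:.2%}")
# Prints approximately -4.36% and -6.15%, matching the table above.
```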
3. Monte Carlo Simulation
The third method involves developing a model for future stock price returns and running multiple hypothetical trials through the model. The term "Monte Carlo simulation" refers to any method that randomly generates trials, but by itself it does not tell us anything about the underlying methodology.
For most users, a Monte Carlo simulation amounts to a "black box" generator of random, probabilistic outcomes. Without going into further details, we ran a Monte Carlo simulation on the QQQ based on its historical trading pattern. In our simulation, 100 trials were conducted. If we ran it again, we would get a different result--although it is highly likely that the differences would be narrow. Here is the result arranged into a histogram (please note that while the previous graphs have shown daily returns, this graph displays monthly returns):
To summarize, we ran 100 hypothetical trials of monthly returns for the QQQ. Among them, two outcomes were between -15% and -20%, and three were between -20% and -25%. That means the worst five outcomes (that is, the worst 5%) were worse than -15%. The Monte Carlo simulation, therefore, leads to the following VAR-type conclusion: with 95% confidence, we do not expect to lose more than 15% during any given month.
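A minimal Monte Carlo sketch along these lines might look as follows. The return model here is a plain normal distribution with illustrative parameters (my assumption; the article does not specify its model), and the 95% monthly VaR is read off as the 5th-percentile outcome of the simulated trials.

```python
# Monte Carlo sketch: simulate hypothetical monthly returns from an assumed
# model, then take the 5th percentile as the 95% one-month VaR.
import numpy as np

rng = np.random.default_rng()              # unseeded, so each run differs slightly
monthly_mu, monthly_sigma = 0.0, 0.10      # illustrative model parameters, not the article's
trials = rng.normal(monthly_mu, monthly_sigma, size=100)  # 100 hypothetical months

var_95 = -np.percentile(trials, 5)         # loss not expected to be exceeded 95% of the time
print(f"95% one-month VaR from this run: {var_95:.1%}")
```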