Oct 23, 2022

Why VaR fails and how actuaries can do better

Perhaps the most important challenge for an actuary is to develop and train the ability to explain complex matters in a simple way. One of the best examples of this 'complexity reduction ability' has been given by David Einhorn, president of Greenlight Capital. In a nutshell, David explains with a simple example why VaR models fail. Take a look at the next excerpt from David's interesting article in Point-Counterpoint.

Why VaR fails
A risk manager’s job is to worry about whether the bank is putting itself at risk in unusual times - or, in statistical terms, in the tails of the distribution. Yet, VaR ignores what happens in the tails. It specifically cuts them off. A 99% VaR calculation does not evaluate what happens in the last 1%. This, in my view, makes VaR relatively useless as a risk management tool and potentially catastrophic when its use creates a false sense of security among senior managers and watchdogs.
VaR is like an airbag that works all the time, except when you have a car accident
By ignoring the tails, VaR creates an incentive to take excessive but remote risks.
Example
Consider an investment in a coin flip. If you bet $100 on tails at even money, your VaR to a 99% threshold is $100, as you will lose that amount 50% of the time, which obviously is within the threshold. In this case, the VaR will equal the maximum loss.

Compare that to a bet where you offer 127 to 1 odds on $100 that heads won’t come up seven times in a row. You will win more than 99.2% of the time, which exceeds the 99% threshold. As a result, your 99% VaR is zero, even though you are exposed to a possible $12,700 loss.

In other words, an investment bank wouldn’t have to put up any capital to make this bet. The math whizzes will say it is more complicated than that, but this is the idea. Now we understand why investment banks held enormous portfolios of “super-senior triple A-rated” whatever. These securities had very small returns. However, the risk models said they had trivial VaR, because the possibility of credit loss was calculated to be beyond the VaR threshold. This meant that holding them required only a trivial amount of capital, and a small return over a trivial capital can generate an almost infinite revenue-to-equity ratio. VaR-driven risk management encouraged accepting a lot of bets that amounted to accepting the risk that heads wouldn’t come up seven times in a row. In the current crisis, it has turned out that the unlucky outcome was far more likely than the backtested models predicted. What is worse, the various supposedly remote risks that required trivial capital are highly correlated; you don’t just lose on one bad bet in this environment, you lose on many of them for the same reason. This is why in recent periods the investment banks had quarterly write-downs that were many times the firm-wide modelled VaR.
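To make Einhorn's coin-flip arithmetic concrete, here is a minimal sketch in Python (my own illustration, not part of Einhorn's article; the discrete_var helper and its zero floor are assumptions chosen for this example, not a standard library function):

```python
import numpy as np

def discrete_var(losses, probs, level=0.99):
    """VaR of a discrete loss distribution: the smallest loss L with
    P(loss <= L) >= level. Negative results (certain gains at that
    confidence level) are floored at zero, as VaR reports required capital."""
    losses, probs = np.asarray(losses, float), np.asarray(probs, float)
    order = np.argsort(losses)
    cum = np.cumsum(probs[order])
    return max(0.0, float(losses[order][np.searchsorted(cum, level)]))

# Bet 1: $100 on tails at even money -- you lose $100 half the time.
print(discrete_var([-100, 100], [0.5, 0.5]))    # 100.0: VaR equals the maximum loss

# Bet 2: offer 127-to-1 odds on $100 that heads won't come up 7 times in a row.
p_lose = 0.5 ** 7   # ~0.78%: the loss sits beyond the 99% threshold
print(discrete_var([-100, 12700], [1 - p_lose, p_lose]))  # 0.0, despite $12,700 at risk
```

The $12,700 exposure simply vanishes from the risk measure, because its probability (about 0.78%) falls outside the 99% window.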

The Real Risk Issues
What, besides the 'art of simple communication', can we - actuaries - learn from David Einhorn? What David essentially tries to tell us is that we should focus on the real risk management issues that are in the x% tail, and not on the other (100-x)%. Of course, we're inclined to agree with David. But are we actuaries truly focusing on the 'right' risks in the tail? I'm afraid the answer to this question is most often: No! Let's look at a simple example that illustrates how we are - biased - focusing on the wrong side of the VaR curve.

Example Longevity
For years (decades) now, longevity risk has been structurally underestimated. Yes, undoubtedly we have learned some of our lessons. Today's longevity calculations are no longer based on simple, straightforward mortality observations from the past. Nevertheless, in our search to grasp, analyze and explain the continuous increase in life span, we've been caught in an interesting but dangerous habit of examining more and more interesting details that might explain the variance of future developments in mor(t)ality rates. As 'smart' longevity actuaries and experts, we consider a lot of sophisticated additional elements in our projections and calculations. Just a small inventory of actuarial longevity refinement:
  • Difference in mortality rates: Gender, Marital or Social status, Income or Health related mortality rates
  • Size: Standard deviation, Group-, Portfolio-size
  • Selection effects, Enhanced annuities
  • Extrapolation: Generation tables, longitudinal effects, Autocorrelation, 'Heat Maps'
X-Tails
In our increasing enthusiasm to capture the longevity monster, we got engrossed in our work. As experienced actuaries we know the devil is always in the De-Tails; the question is, however: in which details? We all know perfectly well that probably the most essential triggers for future longevity risk cannot be found in our data. These triggers depend on the effect of new developments like:

It's clear that investigating and modeling the soft risk indicators of extreme longevity is no longer a luxury, as an explosive increase in life span of 10-20% in the coming decades seems not unlikely. By stretching our actuarial research into the medical arena, we would be able to develop new, more future- and shock-proof longevity models and stress tests. Regrettably, we don't like to skate on thin ice.....
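To give a feel for what such a shock-proof stress test could look like, here is a toy sketch in Python (my own illustration: the Gompertz mortality parameters, the 2% discount rate and the 40% 'medical breakthrough' mortality shock are arbitrary assumptions, not a calibrated model):

```python
import numpy as np

def annuity_factor(qx, rate=0.02):
    """Present value of a whole-life annuity-due of 1 per year, given
    one-year death probabilities qx starting at the current age."""
    survival = np.cumprod(np.insert(1.0 - qx, 0, 1.0))[:-1]  # P(alive at start of year k)
    discount = (1.0 + rate) ** -np.arange(len(qx))
    return float(np.sum(survival * discount))

# Toy Gompertz mortality for a 65-year-old (illustrative parameters only).
ages = np.arange(65, 121)
qx = np.clip(0.0002 * np.exp(0.11 * (ages - 30)), 0.0, 1.0)

base = annuity_factor(qx)
shocked = annuity_factor(0.6 * qx)  # crude longevity shock: mortality rates -40%
print(f"annuity factor {base:.2f} -> {shocked:.2f} ({shocked / base - 1:+.0%})")
```

Even this crude shock moves the annuity value by a double-digit percentage in this toy setup, exactly the order of magnitude that the refined in-data analyses listed above cannot reveal.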
Ostrich Management
If we - actuaries - took longevity and our profession as 'Risk Manager' more seriously, we would warn the world about the estimated global (financial) impact of these medical developments on pension and health topics, and we would advise on which measures to take in order to absorb and manage this future risk. Instead of taking appropriate action, we hide in the dark, maintaining our belief in Fairy-Tails. As unworldly savants, we joyfully keep our eyes on research into relatively small variances in longevity, while neglecting the serious mega risks ahead of us.

This kind of Ostrich Management is a worrying threat to the actuarial profession. As we are aware of these (medical) future risks, not including or disclaiming them in our models and advice could even have a major liability impact. In order to prevent serious global losses, society expects actuaries to estimate and advise on risk, instead of explaining afterwards what, why and how things went wrong, what we 'have learned' and what we 'could or should' have done. This way of denying reality reminds me of an amusing Jewish story, the story of the Lost Key...

The lost Key
One early morning, just before dawn, as the folks were on their way to the synagogue for the Shaharit (early morning prayer), they noticed Herscheleh under the lamp post, circling the post and scanning the ground. “Herschel,” said the rabbi, “what on earth are you doing here at this time of the morning?” “I lost my key,” replied Herscheleh. “Where did you lose it?” inquired the rabbi. “There,” said Herscheleh, pointing into the darkness away from the light of the lamp post. “So why are you looking for your key here, if you lost it there?” persisted the puzzled rabbi. “Because the light is here, Rabbi, not there,” replied Herschel with a smug smile.

Let's conclude with a quote that - just like this blog - probably didn't help either:

Risk is not always apparent,
but its invisibility is no longer an excuse for ignoring it.

-- Bankers Trust on risk management, 1995 --


Apr 16, 2017

All Models are Wrong

A 2016 paper by James Stuart (et al.) from the Institute & Faculty of Actuaries about 'Ersatz Models' (substitute models) states:

All models are deliberate simplifications of the real world. Attempts to demonstrate a model’s correctness can be expected to fail, or apparently to succeed because of test limitations, such as insufficient data.

We can explain this using an analogy involving milk. Cows’ milk is a staple part of European diets. For various reasons some people avoid it, preferring substitutes, or ersatz milk, for example made from soya. In a chemical laboratory, cows’ milk and soya milk are easily distinguished. Despite chemical differences, soya milk physically resembles cows’ milk in many ways - colour, density, viscosity for example. For some purposes, soya milk is a good substitute, but other recipes will produce acceptable results only with cows’ milk. The acceptance criteria for soya milk should depend on how the milk is to be used.



In the same way, with sufficient testing, we can always distinguish an ersatz model from whatever theoretical process drives reality. We should be concerned with a more modest aim: whether the ersatz model is good enough in the aspects that matter, that is, whether the modelling objective has been achieved.

The Model Problem
The paper starts with stories of models gone bad. Can our proposed generated-data tests prevent a recurrence?



The Model Risk Working Party has explained how model risks arise not only from quantitative model features but also from social and cultural aspects relating to how a model is used. When a model fails, a variety of narratives may be offered to describe what went wrong. There may be disagreements between experts about the causes of any crisis, depending on who knew, or could have known, about model limitations. Possible elements include:
  • A new risk emerged from nowhere and there is nothing anyone could have done to anticipate it - sometimes called a “black swan”.
  • The models had unknown weaknesses, which could have been revealed by more thorough testing.
  • Model users were well acquainted with model weaknesses, but these were not communicated to the senior management accountable for the business.
  • Everyone knew about the model weaknesses but they continued to take excessive risks regardless.
Ersatz testing can address some of these, as events too rare to feature in actual data may still occur in generated data (see the sketch after the list below). Testing on generated data can also help to improve corporate culture towards model risk, as:
  • Hunches about what might go wrong are substantiated by objective analysis. While a hunch can be dismissed, it is difficult to suppress objective evidence or persuade analysts that the findings are irrelevant.
  • Ersatz tests highlight many model weaknesses, of greater or lesser importance. Experience with generated data testing can de-stigmatise test failure and so reduce the cultural pressure for cover-ups.
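
As a minimal sketch of such a generated-data test (my own illustration, not taken from the paper): generate data from a deliberately heavy-tailed reference model, fit the candidate ersatz model to it, and test the one aspect that matters. The Student-t reference model, the normal ersatz model and the 99.5% quantile below are assumptions chosen for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=1)

# Reference model: a heavy-tailed 'reality' (Student-t, 3 degrees of freedom)
# from which we generate data, including events too rare for most real datasets.
generated = rng.standard_t(df=3, size=100_000)

# Ersatz model: a normal distribution fitted to the generated data.
mu, sigma = generated.mean(), generated.std()

# Test the aspect that matters here: the 99.5% quantile (a Solvency II-style tail).
q_reference = np.quantile(generated, 0.995)
q_ersatz = norm.ppf(0.995, loc=mu, scale=sigma)
print(f"reference 99.5% quantile: {q_reference:.2f}, ersatz: {q_ersatz:.2f}")
# The normal ersatz model materially understates the tail -- a weakness that a
# small sample of real data would likely never reveal.
```
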
We recognise that there is no mathematical solution to determine how extreme the reference models should be. This is essentially a social decision.

Corporate cultures may still arise where too narrow a selection of reference models is tested, and so model weaknesses remain hidden.

  1. Source: Ersatz Model Tests
    https://www.actuaries.org.uk/documents/ersatz-model-tests-0
(Conclusions: page 35)
  2. Model Risk Working Party
    https://www.actuaries.org.uk/documents/sessional-paper-model-risk-daring-open-black-box
  3. The skinny kids are all drinking full cream milk
    http://heffalumpgeneration.co.za/the-skinny-kids-are-all-drinking-full-cream-milk/


Jan 1, 2017

Happy Risk New Year 2017

Happy New Year to all Actuary-Info readers.


The year 2017 will be another year that'll empower us to develop new insights on risk management. Driven by economic turbulence and desperate rule-based regulation, we will probably keep trying to capture, control and even eliminate risk, instead of trying to understand and anticipate it.

Dutch Insurance Merger
At the end of 2016, two large Dutch insurers - Nationale Nederlanden (NN) and Delta Lloyd (DL) - decided to go ahead with their merger. Formally, it's a takeover of DL by NN.

Driven by a declining DL Solvency II ratio and supported by the concerned Dutch regulator (DNB), DL now finds shelter within NN. Besides the takeover price of € 2.5 billion, NN Group faces a decline in solvency ratio from around 250% (pre-merger, Q2 2016) to 185% (post-merger, Q3 2016).

A strong merger (background) driver is DL's expectation: "Delta Lloyd's 4Q16 Solvency II ratio is to be adversely affected by the LAC-DT review by DNB, the possible removal of the risk margin benefit of the longevity hedge and adverse longevity developments."

However, keep in mind that 'all' life insurance companies with 'long tails' have a serious (business case) problem, one that's not solvable with money (capital) alone, but necessitates the formulation of a new strategy that goes beyond just "cost control".

As low interest rates will continue and Solvency-II requirements will only increase, more mergers of life companies (with long tail risks) are to be expected.

When is a merger the right solution?
Although a merger often looks like a perfect solution for 'the problem', it isn't always.....


Several studies estimate the failure rate of mergers and acquisitions somewhere between 70% and 90% (and in any case above 50%).

The most common general merger fallacies and attention points are addressed in a McKinsey presentation:
 


When a merger or takeover is considered, first check the following key points from a risk perspective:

1. Strategy: Bigger is not always better
It's surprising how inherently correct analyses always lead to 'bigger is better', while we know that 'bigger' contributes to 'too big to fail', decreasing cost efficiency, less flexibility (less agile) and less innovative capacity (e.g. Fintech applications).

For successful mergers or takeovers, applying traditional capital management (and Solvency II rules) just isn't enough. In all cases, a well-defined, checked and supported 'new strategy' (plan), including a strong 'business case', is a first requirement.


Always investigate these (adverse) merger effects and the new strategy concept in the due diligence phase of a merger.


2. Increasing complexity effects
Is the change in complexity (IT, communication, products, distribution channels, etc.) measured and addressed in the merger/takeover? If complexity increases beyond certain levels, targeted cost reductions may not be met. Often these costs are underestimated.

Always try to measure and address complexity in the due diligence phase of a merger.

3. Consistency 
Always check the consistency of (financial) analyses. If certain (actuarial) analyses, audits or valuation methods are applied (one-sidedly) only to the company to be acquired and not to the acquiring company, consistency clearly fails and merger conclusions are probably biased.

Whether it's a 'takeover' or a 'merger', or how the power in the board is managed, doesn't really matter: both companies should be compared on the same basis.
Always check consistency in the due diligence phase of a merger.

Finally
May 'success with risk' be on your merger table in 2017!


Links/Sources:
- Bigger is better wine glass
- The Big Idea: The New M&A Playbook
- Mergers and Acquisitions failure rates and perspectives on why they fail
- NN Group and Delta Lloyd agree on recommended transaction: http://tinyurl.com/NNDLtakeover
- DNB research; Bikker: Is there an optimal pension fund size?
- DNB examination into complex IT environments
- 70% of Transformation Programs Fail - McKinsey
- McKinsey: Where mergers go wrong