Find the Mistake
Robert Waldmann
Alex Tabarrok wrote
In Too Big To Save Robert Pozen gives a clever example, based on an excellent paper by Coval, Jurek and Stafford, which explains both the lure of structured finance and why the model exploded so quickly.
Suppose we have 100 mortgages that pay $1 or $0. The probability of default is 0.05. We pool the mortgages and then prioritize them into tranches such that tranche 1 pays out $1 if no mortgage defaults and $0 otherwise, tranche 2 pays out $1 if 1 or fewer mortgages defaults, $0 otherwise. Tranche 10 then pays out $1 if 9 or fewer mortgages default and $0 otherwise. Tranche 10 has a probability of defaulting of 2.82 percent. A fortiori tranches 11 and higher all have lower probabilities of defaulting. Thus, we have transformed 100 securities each with a default of 5% into 9 with probabilities of default greater than 5% and 91 with probabilities of default less than 5%.
The quoted claim is false. I explain why after the jump.
In his post, Tabarrok assumes that the correlation between payments on any two mortgages must be zero. The calculation is correct only if the correlation is zero. This crucial assumption was not stated. The claim is false as written.
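Under that unstated independence assumption, the 2.82% figure is just a binomial tail probability. A minimal sketch (plain Python, standard library only) reproduces the number:

```python
from math import comb

def tranche_default_prob(k, n=100, p=0.05):
    """P(tranche k defaults) = P(at least k of n independent
    mortgages default), each defaulting with probability p."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Tranche 10 pays $1 unless 10 or more of the 100 mortgages default.
print(f"{tranche_default_prob(10):.4f}")  # about 0.0282, i.e. 2.82%
```

The calculation is valid only because each term treats the 100 defaults as independent coin flips; with correlated defaults the joint distribution is no longer binomial and the number can be very different.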
One way in which structured finance almost destroyed the world economy was very close to this, though not quite so extreme. CDO designers and raters assumed that they could estimate correlations between defaults on different bonds by assuming that default occurred if a latent variable fell below zero and that the latent variables were jointly normal. There was no justification for the assumption of joint normality. Then they assumed that the CDS market was efficient (schizzo-finance) so they could estimate the correlations of the latent variables.
This is less extreme than just assuming that correlations are zero without even stating the assumption. However, it was extreme enough to cause a disaster.
Amazingly, after the disaster based on casual assumptions about correlation, Alex Tabarrok makes a much more extreme assumption and doesn’t even state it.
Note there are no ellipses in my quote of Tabarrok. That is the post in full from the first word to the false claim of 2.82%.
I am sincerely shocked and appalled.
update: There is another problem with the example. Given independence of defaults across mortgages, the probability of default of the 10th tranche is dramatically different if the probability of default of each mortgage is 6% rather than 5%. However, the probability of default of the 12th tranche is tiny in either case, and the probability of default of the 8th tranche is large in either case.
The example illustrates properties of the particular numbers 10, 5 and 6, not a general effect of 5% versus 6%.
A small difference in the probability of default for uncorrelated default risks matters a lot, but only for a small interval of tranches. For probabilities other than 5% and 6%, the change from very small to high probability of default would occur at a range of tranches different from the 8th through 12th, but still only over a small range.
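A quick check of this, under the same independence assumption: compare tranches 8, 10 and 12 at individual default probabilities of 5% and 6%.

```python
from math import comb

def tranche_default_prob(k, p, n=100):
    """P(at least k of n independent mortgages default, each w.p. p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

for k in (8, 10, 12):
    p5 = tranche_default_prob(k, 0.05)
    p6 = tranche_default_prob(k, 0.06)
    print(f"tranche {k}: {p5:.4f} at 5%  ->  {p6:.4f} at 6%")
# Tranche 8 is clearly risky in both cases, tranche 12 stays small
# in both cases, but tranche 10's default probability roughly triples.
```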
The market value of such small ranges was not large enough that serious mispricing of them (anywhere in the range from safe to worthless) could bring down the financial system.
Similarly the tranches of pools of 10th tranches are very different but tranches of pools of 12th tranches would be safe in either case and the 10th tranche of a pool of 8th tranches would be risky in either case. Again pools of 10th tranches of pools weren’t important.
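The point about pools of tranches can be seen by iterating the same binomial calculation once, assuming (unrealistically, for illustration) that the 100 underlying pools are independent of one another:

```python
from math import comb

def pool_default_prob(k, p, n=100):
    """P(at least k of n independent assets default, each w.p. p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

for p in (0.05, 0.06):
    q = pool_default_prob(10, p)      # a 10th tranche's default probability
    cdo2 = pool_default_prob(10, q)   # 10th tranche of a pool of 100 such tranches
    print(f"p={p}: tranche 10 defaults w.p. {q:.4f}, CDO^2 10th tranche w.p. {cdo2:.4f}")
# The second-round tranche goes from nearly riskless at p=5% to
# seriously risky at p=6%: leverage on leverage.
```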
You just can’t make huge unexpected losses out of a tiny change in the mean and variance of the flow of money from debtors unless people make huge side bets and there are huge unexpected gains too.
In contrast, underestimating the correlation of default across mortgages can lead to a major underestimate of the variance of the flow of money from debtors to the financial system. That error, although much less extreme than the unstated independence assumption in the example, was an important part of the crisis.
The example ignores the important issue and emphasizes a very unimportant issue.
It looks like you would need to have some sort of multivariate distribution of correlated binary variables to get the probability of defaults correct. What also happened in 2008 is that the correlation between these binary variables increased.
For notation: you have a sequence {X_i} of {0,1}-valued random variables, and the probability that the nth one defaulted, given the earlier outcomes, would be given by an equation of the form

P[X_n = 1 | X_0 = x_0, X_1 = x_1, …, X_{n-1} = x_{n-1}] = p_n
This is laid out in “Discrete Multivariate Distributions” by Johnson, Kotz, and Balakrishnan.
But let's get real: equations like this are too much for the MBAs at investment banks, or the guys who worked their way up from the bottom, and also too much for most of the fallen physicists they hire to both understand the math and explain it to the former types of people. I've never seen one of these functions estimated or used to generate random variables.
Ha, the only place that things like that are used is in videogames (seriously, they’re used to update player skill ratings that are used to match players up on Xbox Live).
The point about the correlation increasing is a good one. You could do analysis of the historical correlations between mortgages in different parts of the country etc, and probably create an instrument that would have very low correlation 99.99% of the time, but when that swan comes to your pond, all bets are off.
It’s the Dragon King theory, when circumstances are just right, the extreme events become far more frequent, and might even be the norm.
First, linky – http://www.marginalrevolution.com/marginalrevolution/2010/05/the-dark-magic-of-structured-finance.html#comments
The balance of Tabarrok’s post talks about the susceptibility to change in the default rate. Isn’t a change in the overall default rate what happened, not single instruments that had some kinds of connections between the component mortgages?
Sure, there were some super-garbage, designed-to-fail instruments, but in general I haven't heard about anybody building them out of, say, 500 loans to GM employees.
Downpuppy: “The balance of Tabarrok’s post talks about the susceptibility to change in the default rate. Isn’t a change in the overall default rate what happened, not single instruments that had some kinds of connections between the component mortgages? “
There are multiple ways of being correlated. Consider a run of 100 heads from a coin known to be fair. What is the probability that the next flip will come up heads? Of course it is 1/2. Now consider a run of 100 heads from a coin not known to be fair. Do you think it is 1/2? The point is that the run is evidence that the coin is not fair.
In the real world assumptions of independence rarely hold. The state of the economy can provide a causal mechanism for correlation of otherwise unrelated defaults.
The “it” in “Do you think it is 1/2?” is, OC, the probability that the next flip will come up heads.
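To make the coin point concrete, here is a toy Bayesian calculation. The prior and the biased coin's heads probability are assumed values for illustration, not anything from the thread:

```python
# Two hypotheses: the coin is fair (p=0.5) or biased toward heads (p=0.9).
prior_fair = 0.999              # start out almost certain the coin is fair
prior_biased = 1 - prior_fair

heads = 100                     # then observe 100 heads in a row
like_fair = 0.5 ** heads        # likelihood of the data under each hypothesis
like_biased = 0.9 ** heads

post_fair = (prior_fair * like_fair) / (
    prior_fair * like_fair + prior_biased * like_biased)

# Probability the NEXT flip is heads, averaged over the two hypotheses:
next_heads = post_fair * 0.5 + (1 - post_fair) * 0.9
print(post_fair, next_heads)
# The posterior on "fair" is astronomically small, so the predictive
# probability of heads is essentially 0.9, not 1/2.
```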
Agree it omits dependence but it does illustrate CDO leverage. “Shocked and appalled” is a bit dramatic.
They assumed that default of asset i occurred if a latent variable y_i fell below zero and that the latent variables were jointly normal. The problem is that the joint normal is a very special distribution in which the expected value of y_i conditional on y_j is linear in y_j, so the correlation of large changes and of small changes is the same. Asset prices include uncorrelated tiny changes due to price pressure as someone takes or liquidates a position. These tell us nothing about the risk of joint default of different assets, yet they were the basis for the design of CDOs and all those AAA ratings of toxic waste. Google "Gaussian copula".
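A minimal one-factor sketch of that latent-variable setup shows why the correlation assumption matters so much. The correlation value rho = 0.3 here is an illustrative guess, not an estimate from any data:

```python
import random
from statistics import NormalDist

c = NormalDist().inv_cdf(0.05)   # latent threshold: P(y_i < c) = 5%

def tranche10_default_rate(rho, trials=5000, n=100):
    """Monte Carlo estimate of P(10 or more of n mortgages default)
    under a one-factor Gaussian copula:
    y_i = sqrt(rho)*m + sqrt(1-rho)*e_i, default when y_i < c."""
    a, b = rho ** 0.5, (1 - rho) ** 0.5
    hits = 0
    for _ in range(trials):
        m = random.gauss(0, 1)   # common factor shared by all mortgages
        n_def = sum(a * m + b * random.gauss(0, 1) < c for _ in range(n))
        hits += n_def >= 10
    return hits / trials

random.seed(1)
print(tranche10_default_rate(0.0))   # close to the independent 2.82%
print(tranche10_default_rate(0.3))   # several times larger
```

Each mortgage still defaults 5% of the time; only the joint behavior changes, and the supposedly safe 10th tranche goes from a rare event to a routine one.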
The correct calculations aren't just beyond the knowledge of financiers. A joint distribution cannot be estimated with a finite amount of data without imposing parametric restrictions (not necessarily ones as strong as joint normality). Even if the specification is valid, the estimate is not exact, so one must integrate over a posterior over the parameters to get a valid estimate of the probability of default.
Anyway we agree that there is no way that anyone involved could price or rate CDOs. They just decided to make assumptions so that they could come up with a conclusion. This is what economists do (assume we have a can opener). At least we usually don’t destroy the world financial system.
The rule should be very simple. If you have to make an assumption you don’t believe to reach a conclusion, don’t accept that conclusion as the truth. For some reason this truism is very heterodox.
That the state of the economy, or bad assumptions about loan quality, could change the default rate was Tabarrok's jumping-off point, not something he overlooked.
A little further meditation. It seems to me that the lender’s control over the profitability of a mortgage pool lies in the selection of collateral, interest rate, and loan term. Once the loan is made, the future cash flows for each loan in the pool and for their sum over time has been more or less locked in.
That being the case, manipulations like tranching the pool can only affect how the returns from the pool are divvied up. If one believes in efficient markets, it doesn't matter how the returns are partitioned, since the pricing of the components will be set by the invisible hand of the market and the return from the pool — no matter how subdivided — will total up the same for any possible partitioning.
So, I conclude that probably all this tranching and risk management really amounts more or less to a con job based on assumed inefficiency of markets.
CDOs (including synthetics) and other combinations of mortgage securities and indices of mortgage securities used models to determine their pricing and expected returns. All models contain a finite set of explicit and implicit assumptions, and no set of assumptions will reflect real-world events under all future scenarios. CDOs therefore have, in addition to the normal market price risk of expected cashflows, modeling risk. All pricing based on models contains modeling risk, and no pricing model will accurately predict outcomes under all circumstances.
Traded securities' prices reflect the cashflow expectations of investors who think the security is fairly valued, overvalued, and undervalued. Private placement underwritings, such as CDOs, do not have a trading market price. A potential investor who thinks a CDO underwriting is too risky or overpriced will walk away from the deal, not participate, and have little if any effect on valuation. In a trading scenario, an investor who thinks a security overvalued will sell their holdings or find ways to short, and will affect valuations.
In a trading market, there is a tension between investors who believe a security is over- or under-valued. In CDOs, that tension did not exist and all pricing relied on a single valuation model. Investors who believed the deals were overvalued or too risky did not participate and did not influence the pricing of the deals.
It does not take a PhD in math or structured finance to understand that it is naïve to rely solely on models of future real world events. Investors in CDOs failed to account for modeling risk of the pricing and expected return.
One of the reasons that investors overlooked modeling risk was the desire to invest in highly credit rated securities. Regulators (Basel capital requirements among others) created the appetite for safe credit rated securities. The regulators embedded the credit ratings in their oversight criteria for capital, solvency and safety of financial institutions.
While many are blaming the credit rating agencies, the credit raters also will use models and under some set of scenarios, their models will fail.
The solution is not to blame the model of CDOs or the credit raters' failure to accurately rate the securities. The solution is to remove credit raters from the regulatory process of overseeing financial institutions. Removal of the credit raters from the regulatory process would have reduced the appetite for these securities by financial institutions. A lower appetite would mean fewer CDOs, fewer loans with poor credit scores, and a reduced investor appetite in general for home mortgages.
Milton Recht,
The solution is to remove credit raters from the regulatory process of overseeing financial institutions.
What are the regulators supposed to base their oversight on? Do they have enough people with the right kinds of skills to do their own oversight? I don't think so. And as a side point, they don't have the skills to provide oversight of offshore drilling either. The well-functioning economy with competent government regulators making everything problem-free is a fantasy and a waste of time.
***What are the regulators supposed to base their oversight on? Do they have enough people with the right kind of skills to do their own oversight. I don’t think so.***
You might be right, but that doesn't mean that they can't/won't do better than the rating agencies do/did. At least the regulators don't necessarily have a built-in conflict of interest. Ideally, we'd require the credit raters — whoever they are — to underwrite a meaningful amount of insurance on each item they rate. If their rating turns out to be wrong, it costs them money.
Did they screw up the latent variable? I think the oil price should have been a factor, since it made long drives to your far-away McMansion, and heating it, much more expensive. The oil price run-up was like raising household taxes by $2-3K per year.
With the equation that I gave above, P[X_n = 1 | X_0 = x_0, X_1 = x_1, …, X_{n-1} = x_{n-1}] = p_n, I'm not sure how you would implement it. If you say that the probability that a single mortgage security defaults depends on how many other securities have defaulted, then I'm not sure how you do a simulation. In setting probabilities for i = 1 to n, when you start on 1 in the simulation you don't yet know whether your simulation has set the other securities to defaulted or not.
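Actually the conditioning only looks backward, so there is no circularity: draw X_0 from its marginal, then X_1 given the realized x_0, and so on in order. A toy sketch, where the contagion form of p_i is purely an assumed example:

```python
import random

def simulate_defaults(n=100, base=0.05, contagion=0.5):
    """Draw X_0, X_1, ... sequentially. Each conditional default
    probability depends only on earlier realized draws. Here (an
    assumed functional form) each prior default pushes the next
    default probability up from the 5% baseline."""
    xs = []
    defaulted = 0
    for i in range(n):
        frac = defaulted / i if i else 0.0
        p_i = min(1.0, base + contagion * frac)
        x = 1 if random.random() < p_i else 0
        defaulted += x
        xs.append(x)
    return xs

random.seed(0)
trials = 3000
tranche10 = sum(sum(simulate_defaults()) >= 10 for _ in range(trials)) / trials
print(tranche10)   # far above the 2.82% independent benchmark
```

Each mortgage still has a modest marginal default probability, but the positive feedback fattens the upper tail of the number of defaults, which is exactly what kills the senior tranches.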
Going back to the problem: rather than using a binary sequence, the defaults could be modeled from beta distributions using copulas with a strong correlation between them. If nobody believes the model then they probably should not use it. The price of a security is based on what someone is willing to pay for it, not its intrinsic worth. It's up to people in the market to decide if they want to buy and sell things they don't totally understand.
On screwing up the financial system, this seems to be a false charge that is starting to gain traction, not from a basis in fact but rather as an example of the big-lie technique. No individual is responsible for the lie, but the lie and myth seem to be growing anyway as an exercise in collective myth-making. It reminds me of blaming the entire Depression on the 1929 stock market crash.
VtCodger,
Forecasts are always wrong. I don't see anyone going after Romer with her botched employment forecast used to sell the stimulus bill. The healthcare bill cost estimates are also likely to be wildly off and we just made a big decision based on them. Same with the initial cost estimates for the Iraq war. I don't see the rating agencies doing any worse than anyone else.
Ratings
Seems to me I’ve seen presentations where correlations between defaults were all zero. All this ex post sophisticated econometric reasoning was utterly irrelevant to predicted loss rates. A first-semester stat student could easily critique ’em.
Interesting, WaltFrench. The post and thread are buried, so I fear you won't be back, but I am very curious what sort of “presentations” you have in mind. Were the presenters trying to explain high finance to students or the general public, or were they presenting to investment bankers and/or other financial operators who actually had some control over money? If you are discussing presentations which actually had something to do with actual finance, then I am impressed that the system didn't collapse sooner.
Robert,
Your observation is right that the thought example provided omits correlation, and correlation assumptions were baked into attempts to model and rate the CDOs. But one thing that’s been hard for me (and, I suspect, many others) to visualize has been the way structured finance creates acute sensitivity to leverage and input assumptions. How could so-called AAA securities become worthless? The cartoon model takes us a step along the way, so I think it has some utility.
The biggest headscratcher is the fundamental assumption that house prices could only rise. I bought my first house in the UK in 1990 for £56K, and sold it three years later for £42K. The whole securitization chain was built on the premise that borrowers could refinance when the teaser rates ended, and we know how that movie turned out.
Min, your example makes no sense. You can have a biased coin and still have each throw be independent, and 100 throws coming up all heads on an unbiased coin has a non-zero probability – actually the same as 100 throws alternating head, then tail, then head. Would you assume after such a series that the coin was biased, or that somehow independent events were correlated?
And there is only one way to be correlated….