In: Physics
Thinking back to The Signal and the Noise, what does calibration have to do with prediction? In particular, why is it a problem if we “calibrate” with what Silver calls “retrocasts” or “postdictions”?
Prediction failures due to the out-of-sample problem (the events being forecast lay outside the historical data the models were built on):
1. Homeowners predicted prices would continue to rise (there had never before been such a large housing boom in the US)
2. Ratings agencies and banks failed to predict correlated default rates (they had never rated such novel and complex securities)
3. Hardly anyone predicted that a US housing crisis could cause a global financial crisis (the financial system had never been so highly leveraged before)
4. Economists and policy makers failed to predict the severity of the impact (financial crises cause more severe and longer-lasting damage than other economic downturns)
Complex models can exhibit non-linear effects: small errors in inputs or assumptions can produce huge mis-predictions, as in the sketch below.
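A quick sketch of my own (not from the book, all numbers made up): two models of a small mortgage pool with the exact same 5% marginal default rate per mortgage, differing only in whether defaults are independent or share a common "housing bust" shock. The seemingly small change in assumptions moves the probability of the worst-case outcome by orders of magnitude.

```python
# Minimal sketch: a small change in correlation assumptions blows up the
# probability of a worst-case outcome. All numbers are hypothetical.

P_DEFAULT = 0.05           # assumed marginal default probability of one mortgage
N = 5                      # mortgages pooled into one security
P_CRASH = 0.05             # hypothetical probability of a housing bust (common shock)
P_DEFAULT_IN_CRASH = 0.50  # default probability for every mortgage during a bust

# Model A: defaults are independent.
p_all_default_indep = P_DEFAULT ** N

# Model B: same 5% marginal default rate, but a shared shock correlates defaults.
# Solve for the "normal times" default rate so the marginal rate stays at 5%.
p_default_normal = (P_DEFAULT - P_CRASH * P_DEFAULT_IN_CRASH) / (1 - P_CRASH)
p_all_default_corr = (P_CRASH * P_DEFAULT_IN_CRASH ** N
                      + (1 - P_CRASH) * p_default_normal ** N)

print(f"independent: P(all {N} default) = {p_all_default_indep:.2e}")
print(f"correlated:  P(all {N} default) = {p_all_default_corr:.2e}")
print(f"ratio: {p_all_default_corr / p_all_default_indep:,.0f}x")
```

With these made-up numbers the correlated model puts the all-default event roughly several thousand times higher than the independent one, even though both agree on every individual mortgage.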
Large amounts of data can exacerbate over-confidence. Intuitively it feels like error should shrink with so much data, but we forget about, e.g., model error and sampling error.
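Another sketch of my own to make this concrete: fit the wrong model (a straight line) to data that is really quadratic. As n grows the fit looks ever more statistically "certain", but the extrapolated prediction stays just as wrong, because model error does not shrink with n.

```python
# Sketch: more data shrinks sampling error but does nothing about model error.
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return x ** 2                      # the real (unknown to the modeller) relationship

for n in [100, 10_000, 1_000_000]:
    x = rng.uniform(0, 2, size=n)
    y = true_f(x) + rng.normal(0, 0.5, size=n)     # noisy observations

    slope, intercept = np.polyfit(x, y, 1)          # wrong model: a straight line
    pred_at_3 = slope * 3 + intercept               # extrapolate to x = 3
    # Rough proxy for statistical uncertainty of the fit; shrinks like 1/sqrt(n).
    se = np.std(y - (slope * x + intercept)) / np.sqrt(n)

    print(f"n={n:>9,}  sampling error ~{se:.4f}  "
          f"prediction at x=3: {pred_at_3:.2f}  (truth: {true_f(3):.2f})")
```

The reported uncertainty collapses towards zero while the prediction stays stuck around 5.3 against a true value of 9.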
Nate Silver's general strategy is to pick areas where the bar is so low that improving on it is easy. Worth thinking about in terms of future direction: what areas are sorely lacking in skills that I have?
Principle 1: Think probabilistically. There is noise, and you must account for it.
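A sketch in the spirit of the book's election-forecasting chapters (the margin and error figures are hypothetical): rather than the point call "ahead by 2, therefore wins", account for the noise and report a probability.

```python
# Sketch: turn a noisy point estimate into a probabilistic forecast.
import numpy as np

rng = np.random.default_rng(1)

polled_margin = 2.0      # candidate A leads by 2 points in the polling average
polling_error_sd = 3.0   # assumed std. dev. of total polling + sampling error

# Simulate many plausible election days consistent with that uncertainty.
simulated_margins = rng.normal(polled_margin, polling_error_sd, size=100_000)
p_win = (simulated_margins > 0).mean()

print(f"Point forecast: A wins by {polled_margin:.1f}")
print(f"Probabilistic forecast: A wins with probability {p_win:.0%}")
```

A 2-point lead with 3 points of noise is only about a 75% chance of winning, which reads very differently from "A will win".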
Principle 2: Today's forecast is the first forecast of the rest of your life. Don't be afraid to change predictions in the face of new information.
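Silver frames this kind of revision in Bayesian terms, so here is a minimal Bayes' rule sketch (the prior and likelihoods are made-up numbers):

```python
# Principle 2 in code: Bayes' rule as the mechanism for revising a forecast
# when new evidence arrives. All numbers are illustrative.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) given the prior and the two likelihoods."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Example: prior belief that an event occurs is 30%; a new signal is three
# times as likely if the event is coming (60%) as if it is not (20%).
prior = 0.30
posterior = bayes_update(prior, p_evidence_if_true=0.60, p_evidence_if_false=0.20)
print(f"prior:     {prior:.0%}")
print(f"posterior: {posterior:.0%}")   # the forecast moves from 30% to about 56%
```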
Principle 3: Look for consensus. Being the lone dissenter who is proved right is rare. If your prediction is different from that of other similarly informed forecasters, you should worry.
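A final sketch of my own on why consensus matters: averaging several imperfect, partly independent forecasts usually beats a typical individual forecast, which is why being a lone outlier should make you nervous. The error model here is an assumption for illustration.

```python
# Sketch: the average of several forecasts has lower error than a typical
# individual forecaster, as long as their errors are not fully shared.
import numpy as np

rng = np.random.default_rng(2)

truth = 10.0
n_forecasters, n_trials = 8, 10_000

# Each forecast = truth + shared error (a common blind spot) + individual error.
shared = rng.normal(0, 1.0, size=(n_trials, 1))
individual = rng.normal(0, 2.0, size=(n_trials, n_forecasters))
forecasts = truth + shared + individual

individual_rmse = np.sqrt(((forecasts - truth) ** 2).mean())
consensus_rmse = np.sqrt(((forecasts.mean(axis=1) - truth) ** 2).mean())

print(f"typical individual RMSE: {individual_rmse:.2f}")
print(f"consensus (average) RMSE: {consensus_rmse:.2f}")
```

The shared-error term also shows the limit: averaging cancels individual noise but not a blind spot everyone shares.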
Beware of magic bullets. Reality is complicated. Any model that claims to boil it down to a few variables should be regarded with suspicion.
Quantitative methods are not magical. Don't forget model error and sampling error. Unknown unknowns can kill you.