What the World Cup Tells Us About Investing Models

The Farce of World Cup and Investment Models
Most of the models for predicting the winner of the World Cup got it wrong. The same is true of models used to make investment decisions.
Bloomberg, July 16, 2014
“All models are wrong; some are useful.” — George E. P. Box

The quote above comes from George Box, a brilliant statistician and professor who thought long and hard about the use and misuse of statistics.

I was reminded of Box this weekend while watching the thrilling World Cup final between Germany and Argentina. (If you didn’t find Germany’s 1-0 win thrilling, that simply means you don’t understand soccer.) From Goldman Sachs to FiveThirtyEight, just about every major modeler with the temerity to forecast the outcome of the Cup got it wrong. Not merely wrong, but wildly so. Give credit to Macquarie for choosing Germany to win (me too!), despite getting almost everything else wrong. (I did even worse.)

There are trillions of dollars invested based on models. Many of the world’s biggest hedge funds, pension funds and foundations are highly dependent upon some form of modeling to put their capital to work. What does it say about the world of investing that nearly all of these folks in the business of modeling markets and economies were so far afield when they tried to predict the outcome of the World Cup? I believe it is more than just a matter of sports being different from investing.

A look at models and the process of creating them can give us some useful information. The trouble is, we expect them to foretell the future. As we have explained recently, that can never happen.

How could so many smart modelers have been so wrong? If you are asking that question, then you don’t understand the purpose and construction of statistical models. Allow me to clarify: All models are by definition wrong.

Models are only approximations. They are a numerical depiction of a tiny slice of a complex universe. Perhaps a better perspective would emerge if investors, strategists and economists began by asking: “How wrong is this model?”

Over the years, I have learned a few things about models. I offer the following observations for your consideration:

• The illusion of precision: Models are imprecise. Whenever I see a forecast written out to two decimal places, I always think of the old joke: Economists use decimal points to prove they have a sense of humor. But it also makes me wonder whether the forecaster understands the limitations of the data.

• Direction and magnitude matter more: Forget the exact numbers, and instead concentrate on these two elements. When reviewing the output of any model, look at it over time. Does it get the general direction correct? Are the magnitude measures more or less in line with what we see in the real world? Looked at this way, the nonfarm payroll reports are much better indicators for the job market than people give them credit for.

• Models are of limited utility: Here we get to the core of the problem: They can only do so much. We run into issues when we think they are going to solve an especially complex problem. I have heard from former Federal Reserve analysts that if the central bank can’t model something, then it doesn’t exist. I shudder to think how absurd that viewpoint is, and what its implications are for policy.

• Context can be problematic: This is more troubling than it looks. What I mean by this is that forcing everything into an intellectual or ideological framework may create further errors. Once everything is viewed through an imperfect lens — and they are all imperfect — the output will be similarly imperfect.

• Narrative leads to errors: This is the corollary of the context issue. Everything seems to be part of a story, and how that story is told often leads to critical errors. Phrases with a mathematical component to them — stall speed, muddle through, Minsky moment, escape velocity, etc. — can lead to rich tales filled with emotional resonance. In model creation, this can be a disaster.

• Confusing correlation with causation: The oldest statistical foible in the book. Look no further than the Fed’s obsession with the wealth effect — consumers spend more because they either are, or perceive themselves to be, better off — for a classic correlation error.
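To see how easily that last foible arises, here is a minimal sketch (assuming Python with NumPy; the series names are hypothetical stand-ins for illustration, not actual Fed data): two independent random walks that share an upward drift will show a strong correlation even though neither one drives the other.

```python
# Hypothetical illustration: two unrelated series that both drift upward
# will look tightly "linked" by correlation alone.
import numpy as np

rng = np.random.default_rng(2014)

# Stand-ins for "household wealth" and "consumer spending" -- each is just
# an independent random walk with a mild upward drift.
wealth = np.cumsum(rng.normal(0.5, 1.0, size=500))
spending = np.cumsum(rng.normal(0.5, 1.0, size=500))

corr = np.corrcoef(wealth, spending)[0, 1]
print(f"Correlation between two unrelated random walks: {corr:.2f}")

# The shared trend alone produces a correlation near 1. A model that reads
# this as causation will fail the moment the trend changes.
```

Detrend the two series, or compare period-to-period changes instead of levels, and the apparent relationship largely vanishes.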

All of which brings us back to the World Cup. The models used were doomed from the start, as they tried to predict the precise outcome of the event. Unless you are modeling the accurate and precise world of physics, that is a recipe for failure.

When it comes to modeling, there’s physics, and then there is everything else.

Place investing and economics in the everything-else file.

~~~

I originally published this at Bloomberg, July 16, 2014.