*“Salil Mehta is a two-time Administration executive, leading Treasury/TARP’s analytics team, as well as PBGC’s policy, research, and analysis, as well as their first risk analysis function. Salil is the creator of one of the most popular free statistics blogs, Statistical Ideas.”*

~~~

In what’s now an annual ritual for Wall Street, its market “strategists” come out to the major media outlets and celebrate what they think is a prophetic view of the market’s future. Year after year, though, what we see is a generally tight, and optimistic, view of the next year’s market level. Of course markets don’t go up by 9% or so every year (in the past two decades the average has been roughly **half** of that!), which raises the question: where is the value in these strategists’ forecasts? Given that many on Wall Street have been dishing out their views to the media for nearly two decades (a span that includes multiple business cycles), we can now scrutinize their forecasts more closely and examine the quality of these firms individually (as opposed to only as a group for a single year). What we see in this article is that in some random circumstances a predicting firm may be just ok, but most of the time the firms’ prediction results over the past 18 years are slightly worse than if you had flipped a coin about your own fixed guess as to where markets are headed over that entire time. We laboriously put together 186 public forecasts, culled from 19 years of media coverage of the “annual market call” story, and the data is now available for your own public consumption here. What you’ll see below is that these optimistic predictions didn’t change immediately after the dot-com bubble burst of 2000. After the 2008 financial crisis, however, strategist predictions have generally been brought down slightly, and now run even **more closely to each other** than before the 2008 crisis. Of course there isn’t much risk for the forecaster in making such lousy calls. For example, of the 29 forecasting firms included in this study, 13 provided forecasts in at least one of these disastrous market years: 2001, 2002, 2008. And **not one** of these 13 firms ever envisioned a **down** market year in any of those 3 years.
All of the banks employing them did blow up, however, unless bailed out through TARP; and even then, the analyst sometimes continued singing optimistic market praises at the new acquiring bank. If Wall Street prophets had to provide an explicit confidence interval with each of their forecasts, with the risk of being fired if the market falls outside of their own interval, would this whole game immediately come to an end? Let’s now dig into these publicly tabulated strategist calls, drawn mostly from the Barron’s reports.

It’s helpful to first simply draw out the raw data used for this analysis. We do so in the diagram below, and we use continuous returns for all of our calculations, for greater analytical potential. We see below that in December 2014, the 10 strategists surveyed predicted that the S&P would rise (on average by 7%) through December 2015 (with a standard deviation of 4% among these 10 strategists). We show this with “X” data about 0.07. The market, of course, instead fell 1% during 2015, which we show with the blackened-in “O” data at -0.01. So in this example, all of the market forecasts were more optimistically biased than actual reality.
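As a small aside on the “continuous returns” convention used throughout, here is a hedged sketch (the 2015 figures are the rounded ones quoted above):

```python
import math

# "Continuous returns" here means log returns: r = ln(price_end / price_start).
# Using the rounded 2015 example above: strategists forecast roughly +0.07
# continuously, while the market delivered roughly -0.01.
forecast_growth = math.exp(0.07)        # a +0.07 log return is a ~7.3% simple gain
actual_log_return = math.log(1 - 0.01)  # a -1% simple move is ~-0.01 log return

print(round(forecast_growth, 4))    # 1.0725
print(round(actual_log_return, 4))  # -0.0101
```

For moves this small, simple and continuous returns nearly coincide, which is why the rounded figures in the text read the same either way.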

Across the entire 18 years of predictions shown (the 19th year of data, for 2016, was removed here since the actual results are not yet known), analysts have provided an average annual forecast of 9%, with a typical annual deviation (**even** for the annual averages among all firms) of ~8%. Meanwhile the actual returns over these 18 years have instead averaged **half** of that (~4.5%), with a standard deviation of more than **double**, at 19%! The easier part of probability theory therefore tells us that the bias in Wall Street forecasters is about 4.5% (9%-4.5%). Further in the realm of probability theory, the variance of these data is an issue from the start: the deviations in the predictions are less than ½ the deviations in the target price itself. In other words, if one wants credit for “predicting” at least ½ of the price move for investors, then one’s target needs to move at least ½ of the actual price move itself! Here the market is typically gyrating 19% a year, but the target price is swinging only 8%.
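A quick check of the arithmetic above, using only the summary statistics quoted in the text (not the underlying 186 forecasts):

```python
# Summary statistics quoted above (continuous annual returns).
mean_forecast = 0.09  # average annual strategist forecast
mean_actual = 0.045   # average actual return over the 18 years
sd_forecast = 0.08    # typical deviation of the forecasts
sd_actual = 0.19      # standard deviation of actual returns

bias = mean_forecast - mean_actual     # the ~4.5% over-optimism
swing_ratio = sd_forecast / sd_actual  # targets move only ~42% as much as prices

print(f"bias: {bias:.1%}")                # bias: 4.5%
print(f"swing ratio: {swing_ratio:.0%}")  # swing ratio: 42%
```

The swing ratio of ~42% falls below the ½ threshold described above, which is the point: the targets simply do not move enough to track the market.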

For technical details, we assembled this data set mostly using Barron’s published data, going back as far in time as it is continuously available. For a couple of years, when Barron’s data wasn’t easily available, the market predictions made in USA Today’s similar surveys were used instead. One can also see that for the first few years of this data (starting back in 1998), the survey forecasts substituted the parallel Dow for the S&P. Over the 19 years, the number of analysts shown has also fluctuated about the average of 10 analysts per year. Sometimes the answer format varied, showing the evolution of wild ingenuity on the part of inappropriate strategists. For example, early on some analysts were eager to provide mid-year projections as well (taken to the extreme, why not offer mid-nanosecond projections?) Other times analysts tinkered with their style, by providing an S&P price target to the fraction of a point (which is idiotic given that their error is anywhere from 200 to 400 points on average), and once by providing a narrow and inaccurate range. These novelties, in the face of no ability **at all** to predict market crashes, are something else. They show that strategists (more so than the media) have busied themselves weaving authoritative-sounding yet alien economic logic, while completely out of touch with their own impotence to divine random noise.

It’s marvelous to see that the financial media has also tried to be even-handed in its approach to reporting on these Wall Street strategist surveys, even historically when they carried far more rock-star status than they now do. In the late 1990s, Barron’s itself showed, alongside its market prediction cover story, how wrong these analysts had been, **collectively**. Nothing has changed today, except that here in this article we can study in great detail the track record and novelties of individual forecasting firms.

Another part of probability theory involves looking at the standard deviation (**σ**) of this “error”, to make sure that forecasters add value by providing tight errors (the gap between what they envision and reality). Here we can go to the basics of variance math:

*Variance(Error)*

*= Variance(Prediction − Actual)*

*= Variance(Prediction) + Variance(Actual) − 2ρσ_{Prediction}σ_{Actual}*

*= 0.045² + 0.19² − 2ρ(0.045)(0.19)*

If forecasters tended **at worst** to have the same forecasts year after year (not even moving up or down with the market), then we’d expect the standard deviation of the forecasting errors to be **no worse** than the price swings of the market itself, or:

*√(0² + 0.19² − 2(0)(0)(0.19)) = 19%*

But see, below, a reflection of the Wall Street forecasting errors:

We see that the errors (**either** individually or for the annual averages) of ~21% are even greater than the 19% typical market price moves! We see from the previous formulae that this worsening of error risk is only possible **if** the firms’ strategist calls actually tend to move in the **wrong direction** (**ρ**<0) from the actual market change! This regrettable truth is what we see, to a **small degree**, in this article. In the graphic below we show all 3 of the distributions noted above: predictions, actuals, and errors. Except below we collapse the information along the horizontal axis (the time-series component), and show all 3 distributions alongside one another. Given the averages and standard deviations we noted above, one can get a sense for how the historic data skews (an extreme tilt in just one direction).
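The implied correlation between calls and outcomes can be backed out of the variance identity given earlier. A sketch using the article’s figures (the 0.045 and 0.19 deviations from the formula, and the ~21% error deviation):

```python
# Solve Var(Error) = Var(Pred) + Var(Actual) - 2*rho*sd_pred*sd_actual for rho.
sd_pred, sd_actual, sd_error = 0.045, 0.19, 0.21

rho = (sd_pred**2 + sd_actual**2 - sd_error**2) / (2 * sd_pred * sd_actual)
print(f"implied rho: {rho:.2f}")  # implied rho: -0.35
```

A negative ρ is the only way the error deviation can exceed the market’s own deviation here, which is exactly the wrong-direction tendency described above.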

It’s also interesting that the most optimistic forecast from the past 18 years was nearly **7% more optimistic** than the best market year of nearly 25%. On the other side, the most cataclysmic of the 176 forecasts was -22%, and even that was still **nearly 26% more optimistic** than the worst market year.

So it is now fair play to ask whether the results shown are biased by some possible disproportionate distribution of the 186 predictions into the market crash years, where the strategists fail so badly. As an extreme hypothetical example, say 9 firms gave predictions each year except for 2008, when 15 firms made a prediction; the weighted-average results would then be disproportionately influenced by the high number of calls during an ugly year. As a general matter, shouldn’t analysts be given a break if more of them were unfortunately quizzed for their views right before a **highly ill-timed market crash**? No. Hard-working savers worldwide, heeding market calls, don’t sympathetically get a financial bailout for extreme market losses just because those losses happened in a bad market year. Instead their accounts are forever saddled with the scars of those losses.

Recall that we have described both the individual forecasters and an annual average among the firms. So the bleak answer to whether such an adjustment of the weightings, to soften the blow of extreme market years, would change anything should be mathematically obvious.

Nonetheless, for our probability edification let’s explore standardizing our data and results, based on the variance of the deviations in a given year. The results are shown below.
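One plausible reading of that standardization step is to divide each year’s forecasts, actual return, and errors by that year’s dispersion. A sketch under that assumption, with made-up numbers (the forecasts below are hypothetical, not from the data set):

```python
import statistics

# Hypothetical strategist calls for one year (NOT the actual survey data).
forecasts = [0.03, 0.05, 0.07, 0.09, 0.11]
actual = -0.01

# Standardize by the year's cross-sectional deviation among forecasters.
sd_year = statistics.stdev(forecasts)
z_forecasts = [f / sd_year for f in forecasts]
z_actual = actual / sd_year
z_errors = [zf - z_actual for zf in z_forecasts]

print(round(sd_year, 4))  # 0.0316
```

The effect is the one described next: volatile years get shrunk, while calm years (with tight forecast dispersion) get amplified.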

We see now that the wild gyrations in the markets have been reined in, along with the corresponding errors for those years. But conversely, for the greater portion of the time, when the market was far less volatile than during these rare crash years, a counter-balancing penalty is shown, and those returns and associated errors are therefore amplified. Note how the predictions and the errors are now >25% (again **either** on an individual or an annual-average basis), so much greater than the market’s typical 19% swings.

As a technical note, in 3 of the years (2005, 2011, 2015) the market returns were essentially flat (defined as ±3%), and so those years were removed from the standardizing analysis. Also worth noting: unlike with our raw (non-standardized) results earlier, in these standardized results immediately above we have some **positive** autocorrelation among analyst **prognostications**: once market strategists become more bullish, they continue to be **slightly** more bullish in the subsequent year. That’s not a great sign, and for this article we are looking for any set of **great** signs.

Now that we have explored the overall distribution of Wall Street strategist forecasts assuming a variety of weightings, let’s discuss their performance during **just the riskiest market years** and see to what degree that impacts our analysis. A Wall Street strategist, after all, might want you to believe that they can only tell you when the market is going up (by pure chance), but not when it is going down. Not bad, huh? But it is bad, as we’ll show in a moment.

At the start of this article, we stated that the worst 3 years in the market over the past 18 years were 2001, 2002, and 2008, with mind-numbing “returns” of -14%, -27%, and -49%, respectively. If we simply banish these worst years from our minds and from our analysis, then the “typical” market return will **of course** rise above that of the Wall Street strategists’ views. Hurray, we suddenly swing to the opposite bias (now generally **underestimating** market returns by 3%). Would you feel comfortable simply casting aside those 3 market prediction years as some sort of freak event never to return? Out of 18 years of performance history, 3 seems like a high portion of pardons.

In the table below we show the math of positive and negative market years, and a strategist’s capacity to be even **directionally** correct in their calls. The table below shows **both** the counts out of 144 individual targets (of the 186 calls, the 10 associated with 2016 year-end and an additional 32 associated with flat years were obviously removed), and the averages out of 15 prior years (the 3 flat years were removed).

From this we see that Wall Street strategists typically guess an “up” year over 95% of the time, but during the past 18 years the market has been up **only** 73% of the time. That’s a large **22 percentage point** difference toward the extreme. Such excessively bullish calls mean that within the 73% of years in which the market is up, over 95% of the time the analyst rightly called the direction. While that’s high, it is also no higher than the 95% of the time (agnostic to market direction) that they make up-calls. In other words, they are making market up-calls at a rate of 95% regardless of what might happen to the market! Now, the high level of 95% up-calls when the market is up only 73% of the time exposes a more crucial error on the flip side: within the 27% of the time the market is down, <5% of the time did the analyst successfully admonish you of the direction! That’s of course because over 95% of the time when the market is about to fall, the naïve strategist is still dishing out a rosy forecast.
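The directional arithmetic above can be sketched with the rounded rates from the text (95% up-calls made independent of direction, 73% up years):

```python
p_up_call = 0.95    # fraction of all forecasts that call an "up" year
p_market_up = 0.73  # fraction of the 18 years that were up years

# If up-calls come at the same 95% rate regardless of the market's direction,
# then in down years only the remaining 5% of calls get the direction right.
hit_rate_in_down_years = 1 - p_up_call
excess_bullishness = p_up_call - p_market_up

print(f"down-year hit rate: {hit_rate_in_down_years:.0%}")  # down-year hit rate: 5%
print(f"excess bullishness: {excess_bullishness:.0%}")      # excess bullishness: 22%
```

The independence assumption is the text’s own point: the up-call rate hardly budges with the market, so the down-year hit rate collapses mechanically.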

Leaving some of these important probability ideas aside, we recognize that investors ultimately care about the returns they would earn by following the calls of these popular strategists. Is it possible that, despite the desolate statistics shown thus far, investors’ portfolios over time were wise to follow all (or at least some) of these Wall Street strategists? We track the firms here. And generally the broader culture and company pressures that the strategists are subjected to (and the economists too, who before the age of GDPNow went right into the financial crisis declaring the economy may just be “softening”) stay the same, even as new revolving faces of young strategists inevitably show up over the course of such a long history. I personally provided published work for one of the more well-known strategists in here, earlier in my career, and can appreciate the burdensome corporate-culture aspect of this work.

Let’s look at the 9 firms who have provided the most price targets to the media, from the data we gathered. JP Morgan provided the most target prices, at 16. And the firms providing the 8th and 9th most targets are the now-defunct banks of Bear Stearns and Lehman Brothers, at 8 forecasts each. What does it say that these firms, in the shorter amount of time they were around within the past couple decades, provided **so many** optimistic price targets? In only **one** case -JP Morgan- did the collective price targets (consecutively strung together) give investors a more conservative sense of where the markets were actually going. For JP Morgan, for example, we contrasted their 16 previous price targets (so not including the 17th data point, for 2016) with the actual returns for the matching 16 years in the market.

On a technical aside, the lengthiest streaks among these 29 firms belong to JP Morgan and Morgan Stanley, tied at 14 consecutive years each. The 7th lengthiest streak is UBS’s, at 6 years (followed by 7 firms tied at 5 years each). In order to perform serial-correlation analysis at an individual firm level, one should only look at the lengthy streaks in the data, and not cluster together the disparate published target dates. There is of course some positive correlation between the firms with the most price targets in the past 19 years and the firms with the lengthiest streaks of price targets given annually for these once-revered media cover stories.

As a final integral chunk of this analysis, we need to take a deeper look at the standard deviation of the analysts’ errors, in order to get a sense for whether the disheartening results (or the ok results in the case of JP Morgan) are a result of chance, or whether there is some nugget an investor can reliably use. And here we see that Prudential has among the lowest volatility in errors, at 9% (they still overestimate the market by a large amount, but their 9% error volatility is narrow relative to others’). On the other hand, we noticed in the chart above that JP Morgan has the most attractive error (among the 9 firms with the most price targets sampled by the press), **but this is only on average**. The year-to-year gyration in their error (which we’ll show below at 19%) is extraordinarily high.

These variations in errors matter, since over a long course of forecasts we would expect there may be value in picking winning from losing strategists, **possibly if** the dispersion among seers was widely scattered. And here we see that the typical long-term forecasts indeed have **higher** deviations (relative to one another) versus the deviations from the market. So there is some point to at least **probing** what it means to find a better forecasting firm versus another, in terms of whether such differences are due to karma or something greater.

And the overall conclusion is that while firms such as JP Morgan and Prudential barely perform in a distinguished way, the other firms are statistically **very poor** in their predictions. In fact, they offer **negative value** to you. There is only a 1 in 10 or so chance that any strategist would be as bullish as we see in the data by luck alone. The mediocre results for JP Morgan and Prudential fall in line with that, and all but conclusively show that the **entire system** of Wall Street predictions remains buoyantly biased. This includes the faulty exercise of trying to be clever by **taking the average of 10 or so analysts in a year** and thinking that is closer to market reality, when instead it is still just as certainly overly bullish.

The only **value** in an immense net of incorrect analysts is to evaluate the range of forecasts, and **know** that the correct answer is **likely** somewhere closer to **one of the two ends** of the forecast distribution rather than the center of it! Even better if it comes from the same analyst who is repeatedly correct in the outer-end calls, since it is then less likely that such an aberration is a product of arbitrary luck. Unfortunately for the Wall Street strategists studied here, however, there is not much of that, even for JP Morgan.

To show how bad things are with these upwardly partisan and very erroneous predictions (many times in the **outright wrong direction**), it is possible to outperform a Wall Street strategist **merely** by having picked a general market-rise call each year (say, **randomly selecting anything** between 0% and 9%). You can even go through the extra flaky effort of flipping a coin beyond that, if you like, and adjusting your annual call slightly. And it’s telling that your outperformance would be on a raw, and **also** on a risk-adjusted, basis (adjusted based upon the error **σ** we discussed previously).
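A sketch of that comparison, using only the summary statistics quoted in this article (the strategists’ ~4.5% bias and ~21% error deviation) rather than the underlying forecasts:

```python
import random

mean_actual, sd_actual = 0.045, 0.19          # market: mean and std of returns
strategist_bias, strategist_error_sd = 0.045, 0.21

random.seed(1)
fixed_call = random.uniform(0.0, 0.09)        # any fixed bullish call in 0%-9%

# A fixed call has zero variance, so its error deviation is simply the
# market's own 19% swing; and its bias can never exceed 4.5%, since no point
# in [0%, 9%] is further than 4.5% from the 4.5% actual mean.
naive_bias = abs(fixed_call - mean_actual)
naive_error_sd = sd_actual

print(naive_bias <= strategist_bias)          # True
print(naive_error_sd < strategist_error_sd)   # True
```

Any fixed pick in that range ties or beats the strategists on bias and strictly beats them on error deviation, which is the outperformance claimed above.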

These days the strategists and the investment community seem to understand that their abilities are at least somewhat hollow, and thus they tend to push out the same bullish forecasts, except couched in a lot of defensive political (or is it regulatory?) posturing about how, even if they don’t know where the markets are going, they and their banks are still handily beating the market for their clients, just in other **less transparent** ways having nothing to do with the honestly incorrect market forecasts they are happy to give each year. Such as sector or factor rotation!

Fancy. But can it be true that they have really great skill in knowing where all the sector components are going, yet really bad skill in knowing at all where the market (the sum of all of these sector components) is going? Could such “sector rotation skills” indoctrination even be practically useful in those 27% of years when the market crashed hard at a macro level, when even the “good” stocks in all sectors got whacked, as they should? Of course this is more creative game-playing by Wall Street in order to keep the music booming. The fact is, as anyone can see from this article’s transparent analysis of their publicly paraded market calls, that strategists have human fallibilities at plentiful interlocking levels. If they can’t be trusted to pick the stock market level better than a coin flipper, then how can they be trusted to pick out things just as random (e.g., short-term outlooks, sector rotation, long-term projections, etc.)? The market provides random noise in all of these, and it’s foolish to act as if one can generally see into the future. **For 18 years**, the market has given you only a 4.5% annualized return, but if you had bullishly trusted Wall Street strategists with your money over this lengthy period, they would have had you thinking you would grow your portfolio at an average of 9% a year. That’s a long period of time to be so ill-advised. And the 9 firm charts above give you a starting sense (in fact an underestimation) of what your massive wealth shortfall from such a percentage error looks like after compounding over 18 years.

And now for 2016, **all** 10 market strategists once again give you another bullish estimate, averaging to a market rise of 8%. And 10 trading days into the year, we are in fact already more than 8% changed on the year, except it’s sadly **all to the downside**!

Source: Statistical Ideas