Robert P. Seawright is the Chief Investment & Information Officer for Madison Avenue Securities, a boutique broker-dealer and investment advisory firm headquartered in San Diego, California. Bob is also a columnist for Research magazine, a Contributing Editor at Portfolioist, and a contributor to the Financial Times, The Big Picture, The Wall Street Journal’s MarketWatch, Pragmatic Capitalism, and ThinkAdvisor. He blogs at Above the Market.
~~~
Investment Belief #2: Smart Investing is Reality-Based
Anytime is a good time to talk baseball. I’ve done it pretty much my whole life. If you’re watching a game, its pace is perfectly conducive to discussing (arguing about) players, managers, strategy, tactics, the standings, the pennant races, the quality of ballpark peanuts, and pretty much anything else. In the off-season, the “hot stove league” allows for myriad possible conversations (arguments) about how to make one’s favorite team better. And now that spring training camps have opened, baseball talk about the upcoming season and its prospects has officially begun again in earnest. The coming of Spring means the return of hope — maybe this will finally be the year (Go Padres!) — which of course means talking (arguing) about it.
Our neighborhood quarrels about the National Pastime when I was a kid were incessant and invigorating, and didn’t have to include the vagaries of team revenues and revenue-sharing, player contracts, free agency and the luxury tax, as they do now. We could focus on more important stuff. Who should be the new catcher? Who should we trade for? Do we have any hot phenoms? Who’s the best player? The best pitcher? The best hitter? The best third baseman? Who belongs in the Hall of Fame? Which team will win it all this year? How do the new baseball cards look? Is the new Strat-O-Matic edition out yet?
Early on, my arguments were rudimentary and, truth be told, plenty stupid. They were ideological (the players on my team were always better than those on your team), authority-laden (“The guy in the paper says…”), narrative-driven (“Remember that time…”), overly influenced by the recent (“Did you see what Jim Northrup did last night?”) and loaded with confirmation bias.
Quickly I came to realize that it’s really hard to change an entrenched opinion, and not just because I was arguing with dopes. Slowly it became clear that if I wanted to have at least a chance of winning my arguments, I needed to argue for a position that was reality-based. I needed to bring facts, data and just-plain solid evidence to the table if I wanted to make a reasonable claim to being right, much less convince anyone. Arguments and beliefs that are not reality-based are bound to fail, and to fail sooner rather than later.
Fortunately, I had weapons for this fight, even as a kid. I pulled out my baseball cards and pored over the data from previous seasons on the back. I subscribed to the then-unquestioned “Bible of baseball,” The Sporting News (my local paper didn’t even carry box scores, much less comprehensive statistics), to get current data and information. I kept checking out The Baseball Encyclopedia and other books from my local library and studied them intently. And I paid attention to what real experts said.
The good news was that there was knowledge to be had. Baseball has more statistics than any other sport, after all. Clearly, a .320 hitter is better than a .220 hitter, a 20-win Cy Young Award winner is better than a journeyman who goes 2-7, and 40 home runs is better than 10.
But the bad news was the knowledge base’s remarkable limitations. It didn’t take any great insight to figure out that a pitcher on a good team ought to have better stats than one on a poor team, or that right-handed power hitters for the Red Sox had a significant advantage over their counterparts on the Yankees on account of their respective stadium configurations, or that fielding percentage alone didn’t tell much about a player’s defensive value. RBI opportunities would obviously impact RBI totals. Players at the top of the order would score more runs. Pitchers in big ballparks would benefit therefrom. My Dad always insisted that “a walk’s as good as a hit.” But was he right? Examining issues and concepts like these and what they actually meant with respect to players, wins and losses was simply not possible with the information that was then available to me.
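Jumping ahead for a moment: with modern data and tools, Dad’s claim is testable. Here is a minimal sketch in Python using rough per-event run values in the spirit of Pete Palmer’s linear weights (the figures are approximate and illustrative, not precise modern estimates):

```python
# Rough per-event run values in the spirit of Pete Palmer's linear
# weights. Approximate, illustrative figures only.
RUN_VALUES = {
    "walk":     0.33,
    "single":   0.47,
    "double":   0.78,
    "triple":   1.09,
    "home_run": 1.40,
}

def batting_runs(event_counts):
    """Estimate total runs contributed from a dict of event counts."""
    return sum(RUN_VALUES[event] * n for event, n in event_counts.items())

# Was Dad right? Compare the run value of a walk to that of a single.
print(f"walk / single = {RUN_VALUES['walk'] / RUN_VALUES['single']:.2f}")  # ~0.70
```

By that rough measure a walk is worth about 70 percent of a single. Dad was closer to right than the old-school types allowed, but a walk is not quite as good as a hit.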
Moreover, I was disappointed at how difficult it was to get down to the “real nitty-gritty.” It was easy to show that Johnny Bench was better than Joe Azcue. But objectively differentiating between great players like Tony Perez and Brooks Robinson, or any set of roughly equivalent players (always bound to be exceedingly difficult), remained essentially impossible (but Brooks was a bit better, if you want to know). The tools simply weren’t available to make the fine distinctions required. There was more than enough basis to argue (there always is), but the body of available evidence and analysis remained tiny and limited. Good, solid, objective conclusions were few and hard to come by.
In a fascinating bit of historical serendipity, these problems — that had seemed insoluble to me as a kid – began to be met head-on due to the work of a really smart security guard. In 1977, Stokely-Van Camp (the pork & beans people) night watchman Bill James created a 68-page “book” of mimeographed sheets stapled together that he called The Bill James Baseball Abstract. He marketed it via small classified ads in the back pages of The Sporting News. This seemingly minor event was actually the dawning of a new era in baseball research. James’ approach, which he termed sabermetrics in reference to the Society for American Baseball Research (SABR), set out to analyze baseball scientifically through the use of objective evidence in an attempt to determine why teams win or lose. Along the way, he got to debunk traditional baseball dogma, “baseball’s Kilimanjaro of repeated legend and legerdemain.”
This approach began inauspiciously, but slowly built a following. James even got a major publishing deal after a few years. Sabermetrics allowed the young professional me to build better-informed arguments and beliefs about baseball by marshalling far more pertinent facts and data, even if and as “baseball people” ignored it.
The Baseball Abstract books by James — which I read as carefully as I read the backs of baseball cards as a kid — were the modern predecessors to the sports analytics movement, which has since spread to all major sports. This analysis eventually went mainstream and came to be used by the teams themselves, allowing all of us who engaged in such arguments to begin “checking our work” by seeing if and how what would come to be known as “moneyball” worked in the real world. James’ innovations — such as runs created, range factor, and defensive efficiency rating — were significant. More importantly, his approach has led to many other noteworthy sabermetric developments, some proprietary to individual teams, which now even have entire departments dedicated to data analysis.
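To give a flavor of the approach, here is the basic version of James’ runs created formula, which estimates a player’s run contribution from nothing more than the counting stats on the back of a baseball card (James refined the formula many times; this is the simplest published form, and the season line below is hypothetical):

```python
def runs_created(hits, walks, total_bases, at_bats):
    """Bill James' basic runs created: (H + BB) * TB / (AB + BB)."""
    return (hits + walks) * total_bases / (at_bats + walks)

# A hypothetical season line: 160 hits, 70 walks, 280 total bases, 550 at-bats.
print(f"{runs_created(160, 70, 280, 550):.1f} runs created")  # ~103.9
```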
With the success of Moneyball (both Michael Lewis’s book and the subsequent movie), the Bill James approach was put on display for everyone to see, and “baseball people” could no longer readily ignore it. Moneyball focused on the 2002 season of the Oakland Athletics, a team with one of the smallest budgets in baseball, at a time when sabermetrics was still pretty much the sole province of stats geeks. The A’s had just lost three of their star players to free agency because they could not afford to keep them. General Manager Billy Beane was armed with reams of performance and other statistical data, his interpretation of which (heavily influenced by James) was rejected by “traditional baseball men,” including many in his own organization. He was also armed with three terrific young starting pitchers, terrific according to both traditional and newfangled measures. Beane used that data to assemble a team of seemingly undesirable players on the cheap, and that team proceeded to win 103 games and the division title. After that, sabermetrics went mainstream in a hurry, despite some very prominent detractors. Winning does that. In fact, the “Curse of the Bambino” was lifted (the Red Sox won the 2004 World Series) largely on account of sabermetrics, and nearly every team today emphasizes its use.
By 2006, Time magazine had named James one of the 100 most influential people in the world. Then Red Sox (and current Cubs) General Manager Theo Epstein commented to Time about James’ impact on the game: “The thing that stands out for me is Bill’s humility. He was an outsider, self-publishing invisible truths about baseball while the Establishment ignored him. Now 25 years later, his ideas have become part of the foundation of baseball strategy.” The bottom line is that baseball analysis has become far more objective, relevant and useful because of Bill James. Most importantly, the James approach works (if not as well as some might hope — there are other factors involved, as James concedes).
The crucial lesson of Moneyball relates to Beane being able to find value via underappreciated player assets (some assets are cheap for good reason) by way of an objective, disciplined, data-driven (Jamesian) process. In other words, as Lewis explains, “it is about using statistical analysis to shift the odds [of winning] a bit in one’s favor.” Beane sought out players he could obtain cheaply because their actual (statistically verifiable) value was greater than their generally perceived value. Despite the now widespread use of James’ approach, smart baseball people are still finding underappreciated value and lunkheads are still making big mistakes. Data-driven, reality-based analysis is crucial because it works. Indeed, it works far better than any alternative approach.
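A toy version of Beane’s screen might look like the sketch below: rank players by how much statistically verifiable production (here on-base percentage, the Moneyball stat of choice) each salary dollar buys. A real screen would measure production above a replacement-level baseline, and the players and numbers below are invented, but the principle is the same:

```python
# Hypothetical player pool: (name, on-base percentage, salary in $ millions).
# All names and numbers are invented for illustration.
pool = [
    ("Star",       0.390, 12.0),
    ("Veteran",    0.330,  6.0),
    ("Journeyman", 0.360,  1.5),
    ("Prospect",   0.345,  0.4),
]

# Rank by production per salary dollar; underpriced production floats up.
for name, obp, salary in sorted(pool, key=lambda p: p[1] / p[2], reverse=True):
    print(f"{name:<11} OBP {obp:.3f}  ${salary:4.1f}M  OBP per $M: {obp / salary:.3f}")
```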
The Oxford English Dictionary defines the scientific method as “a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.” What it means is that we observe and investigate the world and build our knowledge base on account of what we learn and discover, but we check our work at every point and keep checking our work. It is inherently experimental. In order to be scientific, then, our inquiries and conclusions need to be based upon empirical, measurable evidence. Science is inherently reality-based.
The scientific method can and should be applied to traditional science as well as to all types of inquiry and study about the nature of reality. The great physicist Richard Feynman even applied such experimentation to hitting on women. To his surprise, he learned that he (at least) was more successful by being aloof than by being polite or by buying a woman he found attractive a drink.
Science progresses not via verification (which can only be inferred) but by falsification (which, if established and itself verified, provides relative certainty only as to what is not true). That makes it unwieldy. Thank you, Karl Popper. In our business and in baseball, as in science generally, we need to build our processes from the ground up, with hypotheses offered only after a careful analysis of all relevant facts and tentatively held only to the extent the facts and data allow. Yet the markets demand action. Running a baseball team requires action. There is nothing tentative about them. That’s the conundrum we face.
In essence, as he had always intended, what Bill James did was to make baseball more scientific. He did so using the common scientific tools of investigation, reason, observation, induction and testing with an attitude of skepticism. The bottom line is that Bill James tested a variety of traditional baseball dogmas and found them wanting and demonstrably so.
Because the scientific approach works, it seems as though it would be readily adopted and employed in every arena in which it might work. That it so often isn’t (and, more pertinently, that it isn’t in much of the investment world) is partly a function of human nature and partly a function of the nature of the investment business. I’ll start with us.
We love stories. They help us to explain, understand and interpret the world around us. They also give us a frame of reference we can use to remember the concepts we take them to represent. Perhaps most significantly, we inherently prefer narrative to data — often to the detriment of our understanding because, unfortunately, our stories are also often steeped in error.
In the context of the markets, as elsewhere, we all like to think that we carefully gather and evaluate facts and data before coming to our conclusions and telling our stories. But we don’t.
Instead, we tend to suffer from confirmation bias and thus reach a conclusion first. Only thereafter do we gather facts, but even so we tend to do so to support our pre-conceived conclusions. We then take our selected “facts” (or thereafter examine any alleged new facts) and cram them into our desired narratives, because narratives are so crucial to how we make sense of reality. Keeping one’s analysis and interpretation of the facts reasonably objective – since analysis and interpretation are required for data to be actionable – is really, really hard even in the best of circumstances.
That difficulty is exacerbated because we simply aren’t very good with data and probability. In one experiment involving electric shocks, scientists found that subjects were willing to pay up to $20 to avoid a 99 percent chance of a painful shock. On its face, that seems reasonable. However, those same subjects were also willing to pay up to $7 to avoid a mere one percent chance of the same shock. It turned out that the subjects had only the vaguest concept of what the math means and represents. They were pretty much only thinking about the shock.
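The arithmetic makes the inconsistency plain. If $20 fairly prices avoiding a 99 percent chance of the shock, a probability-weighted price for a one percent chance should be roughly a hundredth of that, about 20 cents rather than $7. A quick sketch using the figures reported above:

```python
# Willingness-to-pay figures from the experiment described above.
wtp_99_pct, wtp_1_pct = 20.00, 7.00  # dollars

# If $20 fairly prices a 99% chance of a shock, an expected-value-consistent
# price for a 1% chance should scale with the probability.
ev_consistent = wtp_99_pct * (0.01 / 0.99)
print(f"EV-consistent price for a 1% chance: ${ev_consistent:.2f}")   # ~$0.20
print(f"Actual overpayment factor: {wtp_1_pct / ev_consistent:.0f}x")  # ~35x
```

The subjects paid roughly 35 times what the probabilities warranted. The prospect of the shock, not the math, was doing the pricing.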
Nassim Taleb calls our tendency to create false and/or unsupported stories in an effort to legitimize our pre-conceived notions the “narrative fallacy.” That fallacy threatens our analysis and judgment constantly. Therefore, while we may enjoy the stories and even be aided by them (we love hearing about Babe Ruth “calling his shot,” for example), we should put our faith in the actual data, especially because the stories are so often in conflict. Our interpretations of the data need to be constantly reevaluated too. As mathematician John Allen Paulos noted in The New York Times: “There is a tension between stories and statistics, and one under-appreciated contrast between them is simply the mindset with which we approach them. In listening to stories we tend to suspend disbelief in order to be entertained, whereas in evaluating statistics we generally have an opposite inclination to suspend belief in order not to be beguiled.”
How badly are we beguiled? Let’s take a look at a bit of the data.
We all live in an overconfident, Lake Wobegon world (“where all the women are strong, all the men are good-looking, and all the children are above average”). We are only correct about 80 percent of the time when we are “99 percent sure.” Despite the experiences of anyone who has gone to college, fully 94 percent of college professors believe they have above-average teaching skills. Since 80 percent of drivers say that their driving skills are above average, I guess none of them are on the freeway when I am. While 70 percent of high school students claim to have above-average leadership skills, only two percent say they are below average, no doubt taught by above average math teachers. In a finding that pretty well sums things up, 85-90 percent of people think that the future will be more pleasant and less painful for them than for the average person.
Our overconfident tendencies, as well as other cognitive and behavioral shortcomings (more on them in our next installment of this series), are now well-known, of course, and obvious in others if not in ourselves (there’s that bias blind spot thing again). Even (especially?!) experts get it wrong far too often.
For example, Milton Friedman called Irving Fisher “the greatest economist the United States has ever produced.” However, Fisher made a statement in 1929 that all but destroyed his credibility for good. Shortly before the famous Wall Street crash he claimed that “stock prices have reached what looks like a permanently high plateau.”
Sadly, mistakes like that are anything but uncommon. Y2K anyone? Or how about the book by James Glassman and Kevin Hassett forecasting Dow 36,000? Philip Tetlock’s excellent Expert Political Judgment: How Good Is It? How Can We Know? examines, in sometimes excruciating detail, why experts are so often wrong. Even worse, when wrong, experts are rarely held accountable and they rarely admit it. They insist that they were just off on timing, or blindsided by an impossible-to-predict event, or almost right, or wrong for the right reasons. Tetlock even goes so far as to claim that the better known and more frequently quoted experts are, the less reliable their guesses about the future are likely to be, largely due to overconfidence. As John Kenneth Galbraith famously pointed out, we have two classes of forecasters: those who don’t know and those who don’t know they don’t know.
What that means is that as a matter of fact and belief I need to be committed to evidence-based investing. Likewise with the evaluation of baseball players. That requires being data-driven – ideology alone is not enough. Being sold on a story isn’t enough. A good idea isn’t enough. A good investment strategy, like the proper evaluation of baseball players and teams, will be – must be – supported by the data. Reality must rule.
But being truly data-driven also requires that we go no further than the data allows. Honoring our limitations is particularly difficult because we so readily “see” more than is really there. As Charlie Munger said to Howard Marks, “none of this is easy, and anybody who thinks it is easy is stupid.” And a key reason investing is so hard is that the data tells us so much less than we’d like it to (similarly, that may well explain why MLB teams today value prospects more highly than current major-leaguers of similar stature and age, a huge change from when I was a kid).
Good investing, like good player evaluation, demands humility. We need to be humble so as to be able to recognize our mistakes and correct our errors. We need to remember that we don’t know everything (or even necessarily all that much). And we need to be able to recognize what and when we just don’t know. There is always a good deal of randomness to factor in.
In a world where the vast majority of funds and managers underperform, there is little evidence that “experienced investors” really do understand data, math, evidence or the fundamentals of investing. As I have argued before, “all in all, we suck at math. It isn’t just the ‘masses’ either — it’s the vast majority of us and often even alleged experts. Thus an analyst [or MLB GM] who understands math and utilizes it correctly will have a major advantage.”
Most investors, even professional investors, are frequently motivated by hope, fear, greed, ego, recency, narrative and ideology. Baseball GMs too. We would all be far better off if our processes were reality-based and data-driven at every point. If only our human make-up didn’t make it so difficult for us to do that. And if only business realities weren’t in on the conspiracy.
To quote Tadas Viskanta (and myself), investing successfully is really hard. But we can see generally what works and what doesn’t work. That we see and don’t do (or try to do) what works is partly due to poor analysis and partly due to cognitive biases that limit our success, but it’s also partly a commercial judgment. In the words of Upton Sinclair, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Sadly, scientific reasoning isn’t often practiced (or is often ignored) in the investment world because there isn’t always money in it. The money follows and is in magical thinking (“You can outperform in every environment!”). Since the primary goal of the vast majority of people in the investment industry is to make money (irrespective of what they tell their clients and customers), magical thinking carries the day.
For example, it is clear that high fees are a huge drag on returns and hurt consumers. But they benefit us. In fact, the higher they are the better it is for us (at least in the short-term). So we generally make our fees as high as we can get away with. Closet indexing keeps assets sticky while doing right (value, small, concentration, low beta, momentum, low fees, etc.) risks underperformance for significant periods and thus losing assets. We want sticky assets, so….
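The drag is easy to quantify. A sketch with assumed, hypothetical numbers: a $100,000 portfolio compounding for 30 years at a 7 percent gross annual return, paying an annual fee of 0.1 percent versus 1 percent:

```python
def terminal_wealth(initial, gross_return, annual_fee, years):
    """Compound an initial stake at (gross return minus fee) each year."""
    return initial * (1 + gross_return - annual_fee) ** years

cheap = terminal_wealth(100_000, 0.07, 0.001, 30)  # 0.1% annual fee
dear  = terminal_wealth(100_000, 0.07, 0.010, 30)  # 1.0% annual fee
print(f"0.1% fee: ${cheap:,.0f}")                       # ~$740,000
print(f"1.0% fee: ${dear:,.0f}")                        # ~$574,000
print(f"Cost of the extra 0.9%: ${cheap - dear:,.0f}")  # ~$166,000
```

Under these assumptions the extra 0.9 percent in fees consumes roughly $166,000, more than the entire original stake.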
Ignoring the facts we know is practically and effectively no different from not knowing. Ignoring the conclusions the data suggests are true isn’t an analytical problem. It’s much worse than that. It’s a moral problem. Otherwise, how could such dreadful investment management performance be so commonplace?
The Wall Street culture is craven, as I know first-hand. Even worse (in the words of Jack Bogle), our industry is organized around “salesmanship rather than stewardship,” and is “the consummate example of capitalism gone awry.” In the real world, money trumps reality far too often.
Even when objectively analyzed and appropriately utilized, numbers alone never tell the whole story, however. Consider, for instance, the school of industrial management spawned by Frederick Winslow Taylor (“scientific management”) over a century ago. Taylor claimed that it was possible to improve worker productivity through a scientific evaluation process. This process included, among other components, measuring each worker’s physical movements and the speed at which they were undertaken in the production process, in order to assess the optimal length of time each step should take and thus the optimal output expectation for each worker. These expectations could then be used to link worker output and pay via a piece rate system.
The purported benefits of scientific management, however, proved to be spurious and the school was supplanted by another — one that emphasized the human relations of production (thus “Human Resources” departments). Accordingly, excessive obsession with quantification at the expense of human relations met with failure.
The best data analysis won’t necessarily mean winning the pennant. Data isn’t everything, but good investing and good baseball management are impossible over the long haul today without its careful analysis and use. Good analytics is necessary but not sufficient, as the philosophers would have it, for ongoing baseball and investment success. Luck can work in the near-term.
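How far can luck carry a team in the near term? A quick Monte Carlo sketch, with assumed win probabilities for illustration:

```python
import random

random.seed(42)  # make the simulation reproducible

def upset_rate(p_better, games=7, trials=100_000):
    """Fraction of best-of-`games` series lost by the better team, where
    the better team wins each individual game with probability p_better."""
    need = games // 2 + 1
    upsets = 0
    for _ in range(trials):
        better_wins = weaker_wins = 0
        while better_wins < need and weaker_wins < need:
            if random.random() < p_better:
                better_wins += 1
            else:
                weaker_wins += 1
        upsets += weaker_wins == need
    return upsets / trials

# A genuinely better team that wins 55% of individual games still loses
# a best-of-seven series to the weaker club roughly 39% of the time.
print(f"Upset rate: {upset_rate(0.55):.0%}")
```

Over a 162-game season, skill tends to win out; over a short series (or a single quarter of investment returns), luck very much can.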
When I was a kid in the 1960s, obsessing over baseball players and stats, plenty of people were telling me to question everything, but the implicit (and erroneous) suggestion was that I reject everything. Instead, I suggest honoring the past without being bound by it. Sabermetrics doesn’t eliminate the need for old-fashioned scouting. Consistent with Robert Hagstrom’s idea that investing is the last liberal art, we should always explore and learn, combine thoughts from multiple sources and disciplines, try to think nimbly because the need for new approaches is ongoing, and we should test and re-test our ideas. I think that idea applies to baseball too.
If we are going to succeed, we’re going to have to ask questions and keep asking questions. Data won’t give us all the answers, but all of our good, objective answers — upon which we should build our investment (and baseball) beliefs — will be consistent with the data. Thus our processes should be data-driven at every point. Smart investing is reality-based. As James Thurber (and later Casey Stengel) would have it, “You could look it up.”
______________
This post is the third in a series on Investment Beliefs. Such stated beliefs can suggest a framework for decision-making amidst uncertainty. More specifically, one’s beliefs can provide a basis for strategic investment management, inform priorities, and be used to ensure an alignment of interests among all relevant stakeholders.
Originally posted at Above the Market