“Investing is the art of using imperfect information to make probabilistic assessments about an inherently unknowable future.”
Over the years, I have defined investing using various combinations of these words (setting aside, for another time, the many behavioral elements and cognitive biases that might derail your thinking).
The very same process of making probabilistic assessments in deploying capital can be applied to other areas of interest. Indeed, it is an excellent exercise to apply these skills to challenges outside of finance. I am not suggesting dabbling in high-skill, high-risk fields, but rather engaging in the intellectual rigor of trying to reach a defensible conclusion about perplexing issues. Identifying logic errors elsewhere can be useful.
A good example: comparing how various regional Covid-related policies — lockdowns, mask mandates, testing, etc. — impacted states’ success in managing the pandemic. I have seen this play out in hot takes on California versus Florida, and in debates about whether states reopened too soon or locked down too late. Outdoor gatherings or indoor dining?
Variations of these arguments are endless, but the same errors repeat with remarkable consistency. (I cannot figure out why my Libertarian friends have abandoned all rationality on this, but that, too, is a discussion for another time.)
I have read a lot of hot takes — some good, some bad, some awful — but with each one, I could not help but play “Spot the logical fallacy.” There is plenty of data to work with, especially infections, hospitalizations, and deaths, all of which have been tracked throughout. Those three metrics are just the starting point.
Let’s see if we can identify some instructive errors that also apply to other fields, including investing:
Counterfactual: An all-time favorite. Too few people bother to ask this very simple question: But for variable X, how might the outcome have been different? Thus, if a state did not impose aggressive lockdowns yet posted lower rates on comparable metrics, what might its metrics have been had it locked down? (This is the difference between Relative and Absolute performance.)
It always surprises me when this simple mental model gets overlooked.
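To make the relative-versus-absolute distinction concrete, here is a minimal Python sketch. The numbers and the counterfactual baselines are invented for illustration; in practice, estimating that baseline is the hard, contestable part.

```python
# A minimal sketch, with invented numbers, of relative vs. absolute performance.
# The "counterfactual" is whatever baseline you estimate would have occurred
# without the policy -- estimating it is the hard, contestable part.

def relative_performance(actual: float, counterfactual: float) -> float:
    """Outcome relative to the estimated no-policy baseline (negative = better than baseline)."""
    return (actual - counterfactual) / counterfactual

# Hypothetical deaths per 100k: State A looks worse in absolute terms,
# yet did better relative to its own estimated counterfactual.
states = {
    "A": {"actual": 150.0, "counterfactual": 250.0},  # dense, urban, early outbreak
    "B": {"actual": 120.0, "counterfactual": 110.0},  # sparse, rural, late outbreak
}

for name, s in states.items():
    rel = relative_performance(s["actual"], s["counterfactual"])
    print(f"State {name}: absolute = {s['actual']:.0f} per 100k, "
          f"vs. counterfactual = {rel:+.0%}")
```

On these made-up figures, State A is worse in absolute terms but roughly 40% better than its estimated baseline, while State B is slightly worse than its own.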
Causation (or lack thereof): When a state does X, how does that affect people’s behavior? What does the local populace do in the face of lockdowns (or not), re-openings (or not), and mask mandates (or not)? As it turned out, behavioral variables loomed large; a substantial amount of human behavior during the pandemic was independent of government guidance.
Meaning, actual causation was far more tenuous than widely assumed; it is an error to infer causation from simple correlation when comparing edicts to outcomes.
Measurement Error: Any infectious disease with a roughly 50% asymptomatic rate is going to be difficult to assess. There have been credible estimates putting actual infections at 2X, 3X, 5X, even 9X the reported numbers. These studies all contain broad assumptions, but we should be able to agree that of the three broad metrics, only hospitalizations were close to accurate.
Note: Barring fraud, we have no reason to assume the underreporting was uniform; it could vary dramatically within and between states.
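A quick back-of-the-envelope sketch of why that multiplier matters: the 2X–9X range reflects the estimates mentioned above, while the reported counts below are hypothetical placeholders, not real state data.

```python
# Back-of-the-envelope: how the assumed underreporting multiplier changes the picture.
# The 2x-9x range reflects the estimates cited above; the reported figures below
# are hypothetical placeholders, not real state data.

reported_infections = 1_000_000
reported_deaths = 15_000

for multiplier in (1, 2, 3, 5, 9):
    estimated_infections = reported_infections * multiplier
    implied_fatality_rate = reported_deaths / estimated_infections
    print(f"{multiplier}x underreporting -> ~{estimated_infections:,} infections, "
          f"implied fatality rate {implied_fatality_rate:.2%}")
```

Depending on which multiplier you assume, the implied fatality rate swings by nearly an order of magnitude, which is exactly why comparisons built on reported infection counts are shaky.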
Fraud: New York underreported the deaths of elderly residents of assisted-living facilities; Florida purposefully underreported infections and deaths. Other states, like North Dakota, have suspect reporting policies. “Where there is smoke, there is often fire” leads to the conclusion that the actual metrics in these states (and others) are far worse than reported.
Assuming Uniformity: Larger states like California, Texas, and Florida contain a variety of geographies, climates, social norms, communities, etc. It is helpful to compare the best and worst counties within each state, to learn why some government actions were useful and others were ineffectual. Assuming a uniformity that does not exist can easily lead to a faulty conclusion.
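One way to put that into practice is to rank counties within each state rather than compare state headlines. A rough sketch, assuming county-level data is available; the county names and figures here are invented:

```python
import pandas as pd

# Hypothetical county-level data; names and figures are invented for illustration.
df = pd.DataFrame({
    "state":  ["CA", "CA", "CA", "FL", "FL", "FL"],
    "county": ["Urban-1", "Suburb-1", "Rural-1", "Urban-2", "Suburb-2", "Rural-2"],
    "deaths_per_100k": [210, 140, 95, 230, 150, 80],
})

# The within-state spread between best and worst counties is often as
# informative as the headline state-vs-state comparison.
for state, grp in df.groupby("state"):
    best = grp.loc[grp["deaths_per_100k"].idxmin()]
    worst = grp.loc[grp["deaths_per_100k"].idxmax()]
    print(f"{state}: best = {best['county']} ({best['deaths_per_100k']}), "
          f"worst = {worst['county']} ({worst['deaths_per_100k']})")
```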
Single vs Multiple Variables: Examining a single variable (masks, school closings, social gatherings) in isolation ignores the complexity of this issue. And yet, this is a popular line of reporting. The world is a far more complex place than that.
If you are going to consider a single variable, I might suggest infection rates during the first few weeks of the pandemic relative to whether a region has, or is near, an international airport. Beyond that, over-simplifying complexity is sheer folly.
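A stylized illustration of how the single-variable view can mislead: in the simulated data below, a confounder (population density) drives both the policy and the outcome, so the raw correlation between mask mandates and deaths points one way while the fit that accounts for density points the other. This is simulated data, not real Covid data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data, not real Covid data: a confounder (density) drives both the
# policy (mask mandates are more common in dense areas) and the outcome.
density = rng.uniform(0, 1, n)
masks = (density + rng.normal(0, 0.2, n) > 0.5).astype(float)
deaths = 100 * density - 10 * masks + rng.normal(0, 5, n)

# Single-variable view: the raw correlation between mask mandates and deaths
print("corr(masks, deaths):", round(np.corrcoef(masks, deaths)[0, 1], 2))

# Multivariable view: least-squares fit of deaths on masks *and* density
X = np.column_stack([np.ones(n), masks, density])
coef, *_ = np.linalg.lstsq(X, deaths, rcond=None)
print("mask coefficient, controlling for density:", round(coef[1], 1))
```

The toy model is obviously not reality; the point is that any single-variable headline quietly assumes the omitted variables away.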
We can discuss other elements: randomness, climate differences, the variable quality of regional health-care systems, and many other factors. It is a big, complex problem, and over-simplification is not especially useful.
I do not want to suggest that you can use logic and deductive reasoning to solve every problem, or even to generate alpha. I am suggesting that when you ignore basic logic or fail to recognize reasoning errors, you can not only forget about alpha, you may fail to achieve beta.
However, if we are intelligent about risk, if we understand what we do not know, and if we recognize the limits of our ability to forecast the future, we can at least begin to improve our process and avoid costly errors — regardless of the field we may be working in.