Fascinating discussion over at Edge about the nature of expert judgment and what it means. (It veers into a bit of the Hedgehog-versus-Fox argument.) This has enormous relevance for investors.
Here’s the intro, plus a link to the video.
How to Win at Forecasting
There’s a question that I’ve been asking myself for nearly three decades now and trying to get a research handle on, and that is why is the quality of public debate so low and why is it that the quality often seems to deteriorate the more important the stakes get?
About 30 years ago I started my work on expert political judgment. It was the height of the Cold War. There was a ferocious debate about how to deal with the Soviet Union. There was a liberal view; there was a conservative view. Each position led to certain predictions about how the Soviets would be likely to react to various policy initiatives.
One thing that became very clear, especially after Gorbachev came to power and confounded the predictions of both liberals and conservatives, was that even though nobody predicted the direction that Gorbachev was taking the Soviet Union, virtually everybody after the fact had a compelling explanation for it. We seemed to be working in what one psychologist called an “outcome irrelevant learning situation.” People drew whatever lessons they wanted from history.
There is quite a bit of skepticism about political punditry, but there’s also a huge appetite for it. I was struck 30 years ago and I’m struck now by how little interest there is in holding political pundits who wield great influence accountable for predictions they make on important matters of public policy.
Source: Edge
~~~
Philip E. Tetlock is Annenberg University Professor at the University of Pennsylvania (School of Arts and Sciences and Wharton School). He is the author of Expert Political Judgment: How Good Is It? How Can We Know?, which describes a twenty-year study in which 284 experts in many fields, including government officials, professors, and journalists, and ranging from Marxists to free-marketeers, were asked to make 28,000 predictions about the future. He found they were only slightly more accurate than chance, and worse than simple extrapolation algorithms. The book has received many awards, including the 2006 Woodrow Wilson Award from the American Political Science Association and the 2008 Grawemeyer Award for Ideas Improving World Order.