The opening arguments of the book suggest that, unlike in other fields, forecasting accuracy has not been consistently tracked over time – which was a big surprise to me. How can anyone expect to become better at making predictions without feedback data on the accuracy of past predictions? This resonates with me: through my investments in the consumer internet space, I have seen how important data-driven product design is, and how constant iteration based on actual data is the key to success.
The book's co-author, Philip Tetlock, is a professor of psychology and organizational behavior who ran forecasting tournaments from 1984 to 2003. These tournaments focused on politics and current events, and their purpose was to analyze and track forecasting accuracy among different groups of people. The book is essentially a summary of his observations from this study and his other work on the topic. Many of the findings presented are based on results from the "Good Judgment Project" (GJP), a research collaborative co-led by Tetlock. This project used more than 10,000 carefully selected volunteers making predictions about geopolitical and economic events over a 1–4-year timespan, resulting in more than one million individual forecasts – which gives the findings a lot of credibility.
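The GJP tracked accuracy with the Brier score: the mean squared difference between the probability a forecaster stated and what actually happened. As a minimal sketch of the idea (the binary form, with invented numbers – not the project's actual tooling):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and
    realized outcomes (1 if the event happened, 0 if it did not).
    Lower is better: 0.0 is perfect, 0.25 matches constant 50/50 guessing."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record: three probability forecasts and what occurred.
predictions = [0.8, 0.3, 0.9]
actual = [1, 0, 1]
print(round(brier_score(predictions, actual), 3))  # 0.047
```

A perfect forecaster scores 0.0 and a coin-flipper scores 0.25, so the hypothetical track record above is well ahead of blind guessing. Without this kind of scoring against realized outcomes, there is no feedback loop to improve on.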
What makes a good prediction?
The data showed that the average expert was roughly as accurate as a dart-throwing chimpanzee – one might just as well have been guessing the outcomes without putting any real effort into the forecasts. However, as is often the case, it turns out not to be that black and white. There were two distinct groups of experts: one that performed worse than random guessing, and another that performed better. What did this second group do differently?
The key to accurate forecasting was – to my surprise – not explained by specific skills like math, nor by raw power like high intelligence or endless hours thrown at the problem (perhaps to my relief). The traits that better explained success were open-mindedness, curiosity, carefulness, and the ability to be self-critical. Most interestingly, people who believed they could become better – and constantly updated their forecasts – were significantly better at making accurate predictions. In fact, this tendency was a 3x better predictor of success than raw intelligence.
The traits of a Superforecaster
The better-performing forecasters did a number of things: they broke large issues down into sub-problems, explored the similarities and differences between their views and those of others, were especially careful not to overreact or underreact to new evidence, balanced under- and overconfidence, expressed their judgments on a probability scale that was as fine-grained as possible, updated their forecasts when new information emerged, and were able to change their minds. Each of these behaviors requires and deserves a closer look, and this is only a partial list, but the book does a great job going into detail on each of the key traits that "superforecasters" exhibit. Taking an example from the list, one of the worst decisions I have made in venture investing came down to an inability to change my mind and an underreaction to new information. We were evaluating an investment in an innovative tech company that had many things going for it. Late in the process I discovered very negative background information about the CEO, but I underreacted to it and was not able to change my mind. The investment was completed but never met our expectations, due to the leadership.
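Reacting proportionately to new evidence is, at its core, Bayesian updating. As a hedged illustration of the mechanics – the scenario and all the numbers below are invented for this post, not taken from the book:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after one piece of evidence,
    computed via Bayes' theorem."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Invented scenario: prior belief that the company succeeds is 60%, and
# negative background information on the CEO is assumed to be four times
# as likely to surface at companies that eventually fail.
posterior = bayes_update(0.60, p_evidence_if_true=0.10, p_evidence_if_false=0.40)
print(round(posterior, 2))  # 0.27 -- the forecast should drop, not hold at 60%
```

The point is not the exact figures but the discipline: new evidence should move the stated probability by a computable amount, rather than being dismissed – which is precisely what I failed to do in the deal above.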
Not surprisingly, the best forecasters were also very good at managing their cognitive biases. We all have biases, and they are often unconscious. My own example of a poor investment decision above is related to a bias called belief perseverance: the tendency to hold a belief as true even when there is ample evidence to discredit it. When faced with evidence that contradicts their beliefs, people may choose to discredit, dismiss, misinterpret, or place little significance on the contradictory information. One bias I was not familiar with before, highlighted a few times in the book, is "attribute substitution." This occurs when an individual has to make a judgment (about a target attribute) that is computationally complex, and instead substitutes a more easily computed heuristic attribute. In other words, we sometimes subconsciously swap the (difficult) question at hand for a similar (easier to answer) question, and make our prediction based on that question instead, without necessarily realizing that we have made any substitution at all. Not only does this lead to cognitive biases, but the forecaster may be totally unaware that any biases are at the root of their predictions.
To summarize: be curious, careful, and self-critical, and keep an open mind. You can become better at making predictions and decisions, but make sure you keep your cognitive biases in check. All in all, a highly recommended read for any decision maker in any industry – I enjoyed it a lot.