My top 10 highlights from the book: The Signal and the Noise: The Art and Science of Prediction by Nate Silver
The need for prediction arises not necessarily because the world itself is uncertain, but because understanding it fully is beyond our capacity. We can never make perfectly objective predictions. They will always be tainted by our subjective point of view. Prediction is important because it connects subjective and objective reality.
Risk, as first articulated by the economist Frank H. Knight in 1921, is something that you can put a price on. Uncertainty, on the other hand, is risk that is hard to measure.
Whenever you have a large number of candidate variables applied to a rarely occurring phenomenon, there is the risk of overfitting your model and mistaking the noise in the past data for a signal. “With four parameters I can fit an elephant,” the mathematician John von Neumann once said of this problem. “And with five I can make him wiggle his trunk.”
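The overfitting trap above is easy to reproduce. Here is a toy sketch (my own illustration, not from the book): the underlying signal is a straight line, yet a nine-degree polynomial, echoing von Neumann's many-parameter elephant, fits the past noise almost perfectly and does worse on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "truth" is a simple line; everything else in y is noise.
def sample(x):
    return 2 * x + rng.normal(scale=0.3, size=len(x))

x_train = np.linspace(0, 1, 12)
y_train = sample(x_train)
x_test = np.linspace(0, 1, 200)      # fresh data from the same process
y_test = sample(x_test)

def errors(degree):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

train1, test1 = errors(1)   # two parameters: tracks the signal
train9, test9 = errors(9)   # ten parameters: also memorizes the noise

# The flexible fit looks better on past data (lower training error)
# and worse on new data (higher test error): overfitting in miniature.
```

The nine-degree model "wins" on the data it was trained on, which is exactly why in-sample fit is such a seductive and misleading measure of a model's quality.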
The idea behind frequentism is that uncertainty in a statistical problem results exclusively from collecting data among just a sample of the population rather than the whole population. The frequentist approach toward statistics seeks to wash its hands of the reason that predictions most often go wrong: human error. It views uncertainty as something intrinsic to the experiment rather than something intrinsic to our ability to understand the real world. The frequentist method also implies that, as you collect more data, your error will eventually approach zero: this will be both necessary and sufficient to solve any problems.
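The frequentist picture can be made concrete with a short sketch (my own illustration, with arbitrary parameters): the standard error of a sample mean shrinks like 1/sqrt(n), which is the only kind of error the framework promises to drive toward zero.

```python
import numpy as np

rng = np.random.default_rng(2)

def standard_error(n):
    # Draw n observations from a population with mean 10 and std 5,
    # then estimate the standard error of the sample mean.
    sample = rng.normal(loc=10.0, scale=5.0, size=n)
    return sample.std(ddof=1) / np.sqrt(n)

errors = [standard_error(n) for n in (100, 10_000, 1_000_000)]

# Each 100x increase in data cuts the sampling error by about 10x --
# but this says nothing about errors in the model or the modeler.
```

Note what shrinks here: only the sampling error. A biased measurement, a wrong model, or a misunderstood question keeps its full error no matter how large n grows, which is the gap the passage above is pointing at.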
The bigger problem, however, is that the frequentist methods—in striving for immaculate statistical procedures that can’t be contaminated by the researcher’s bias—keep him hermetically sealed off from the real world. These methods discourage the researcher from considering the underlying context or plausibility of his hypothesis, something that the Bayesian method demands in the form of a prior probability. Thus, you will see apparently serious papers published on how toads can predict earthquakes, or how big-box stores like Target beget racial hate groups.
Making predictions based on our beliefs is the best (and perhaps even the only) way to test ourselves. One of the nice characteristics of the Bayesian perspective is that, in explicitly acknowledging that we have prior beliefs that affect how we interpret new evidence, it provides for a very good description of how we react to the changes in our world.
Absolutely nothing useful is realized when one person who holds that there is a 0 percent probability of something argues against another person who holds that the probability is 100 percent. Many wars—like the sectarian wars in Europe in the early days of the printing press—probably result from something like this premise.
One property of Bayes’s theorem, in fact, is that our beliefs should converge toward one another—and toward the truth—as we are presented with more evidence over time. A heuristic approach to problem solving consists of employing rules of thumb when a deterministic solution to a problem is beyond our practical capacities.
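The convergence property can be sketched numerically (a toy illustration of my own, not from the book): two observers start with opposite priors about a coin's bias, apply Bayes's theorem to the same sequence of flips, and end up agreeing with each other and with the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0.01, 0.99, 99)  # candidate values for P(heads)

# Two observers with sharply different prior beliefs about the coin.
prior_a = np.exp(-((grid - 0.2) ** 2) / 0.02)   # "the coin favors tails"
prior_b = np.exp(-((grid - 0.8) ** 2) / 0.02)   # "the coin favors heads"
prior_a /= prior_a.sum()
prior_b /= prior_b.sum()

def update(posterior, heads):
    # Bayes's theorem on a grid: posterior is proportional to
    # prior times the likelihood of the observed flip.
    likelihood = grid if heads else (1 - grid)
    posterior = posterior * likelihood
    return posterior / posterior.sum()

true_p = 0.5
post_a, post_b = prior_a, prior_b
for _ in range(1000):
    flip = rng.random() < true_p   # the same evidence reaches both observers
    post_a = update(post_a, flip)
    post_b = update(post_b, flip)

mean_a = (grid * post_a).sum()
mean_b = (grid * post_b).sum()
# After 1,000 flips both posterior means sit near the true bias of 0.5,
# despite the observers starting from opposite convictions.
```

The priors matter early on, but as evidence accumulates the likelihood swamps them, which is the mechanism behind the convergence the passage describes. Note that an observer who starts at exactly 0 or 1, as in the standoff above, assigns zero prior weight to the alternative and never moves at all.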
Some theorists have proposed that we should think of the stock market as constituting two processes in one. There is the signal track, the stock market of the 1950s that we read about in textbooks. This is the market that prevails in the long run, with investors making relatively few trades, and prices well tied down to fundamentals. It helps investors to plan for their retirement and helps companies capitalize themselves. Then there is the fast track, the noise track, which is full of momentum trading, positive feedbacks, skewed incentives and herding behavior. Usually it is just a rock-paper-scissors game that does no real good to the broader economy—but perhaps also no real harm. It’s just a bunch of sweaty traders passing money around. However, these tracks happen to run along the same road, as though some city decided to hold a Formula 1 race but by some bureaucratic oversight forgot to close one lane to commuter traffic. Sometimes, like during the financial crisis, there is a big accident, and regular investors get run over.
How good we think we are at prediction and how good we really are may even be inversely correlated.