On Predictions and Improving One's Rationality
There’s no chance that the iPhone is going to get any significant market share. No chance.
– Steve Ballmer, Microsoft CEO (2007)
At the time, Microsoft's shares traded around $30 USD, while Apple's traded at about $9 USD – although anecdotal, it is amusing to note that today they trade at roughly $55 and $105 USD respectively, with Apple's share price almost twice Microsoft's.
Predictions are an integral part of decision-making: their purpose is to guide us through uncertainty. Steve Ballmer's prediction had the form of a terrible one: it was an absolute statement. Of course, we don't know what Steve Ballmer was really thinking at the time; he might have been terrified at the idea of the iPhone becoming successful, but simply hid it well during the interview. He obviously had a strong incentive, as one of the largest Microsoft shareholders, to say that competitors would fail – but what is more troubling is that he might actually have believed what he said.
Let us make an observation: random processes – in particular, sums of many small independent effects – tend to follow normal (Gaussian) distributions, while biological and social systems can exhibit both normal and power-law distributions.
Because I have the luxury of being able to speak my mind (unlike Steve Ballmer) and since I don't want to look like a fool (as Steve Ballmer did), I used plenty of hedging phrases to attenuate the strength of my previous statement (i.e. "tend to", "can exhibit").
While such hedges lengthen our statements, they protect us from saying utterly wrong things. We should in fact try to give our opinions and predictions a more probabilistic character.
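To make the distinction between these two families of distributions concrete, here is a minimal sketch (using numpy, with parameters I picked arbitrarily) that draws samples from a normal and from a Pareto (power-law) distribution and compares their typical values to their extremes:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

# Normal: most mass sits near the mean, extremes are rare and mild.
normal_samples = rng.normal(loc=100.0, scale=15.0, size=n)

# Pareto (power law): most values are small, but a few are enormous.
pareto_samples = (rng.pareto(a=1.5, size=n) + 1) * 10.0

for name, samples in [("normal", normal_samples), ("pareto", pareto_samples)]:
    print(f"{name:>7}: median={np.median(samples):10.1f}  "
          f"mean={samples.mean():10.1f}  max={samples.max():12.1f}")
```

For the normal samples, the median, mean and maximum all stay within the same ballpark; for the power-law samples the maximum can sit orders of magnitude above the median – which is exactly why hedged, distribution-aware statements beat absolute ones.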
It is useful to be aware of the type of distribution we are dealing with before making a prediction. If you are predicting the outcome of a random or, at least, pseudo-random process, your best bet is probably to simply pick the middle value, which is the most likely to end up close to the median. On the other hand, if you have to make a prediction about a biological phenomenon exhibiting network effects, you should pick whichever extreme you think is more likely and then weigh it against similar phenomena (if there are 10 independent outcomes whose probabilities are hard to estimate, start by assigning each a 10% probability and then run multiple scenarios, updating the probabilities accordingly).
Notwithstanding how crude this heuristic is, it should at least serve as a reminder that it is crucial to minimize biases arising from partial information, and instead focus on first principles (i.e. which kind of distribution we are likely dealing with and what the baseline is). Having done this, you can proceed to fine-tune your predictions and weigh the different factors you think will have an impact on the result; but this should come second, not first. Finally, make sure that your prediction is inherently probabilistic and doesn't contain any unnecessary absolute statement.
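As a rough illustration of that heuristic (the ten outcomes and their likelihoods below are entirely made up), one can start from the uninformative 10% baseline and let Bayes' rule shift the probabilities as evidence is weighed in:

```python
import numpy as np

# Ten mutually exclusive outcomes, none of which we can estimate well:
# start from the uninformative baseline of 10% each.
priors = np.full(10, 0.10)

# Hypothetical likelihoods: how compatible some observed evidence is
# with each outcome (these numbers are invented for the example).
likelihoods = np.array([0.9, 0.7, 0.5, 0.4, 0.3, 0.3, 0.2, 0.2, 0.1, 0.1])

# Bayes' rule: posterior is proportional to prior * likelihood, then renormalize.
posteriors = priors * likelihoods
posteriors /= posteriors.sum()

for i, p in enumerate(posteriors):
    print(f"outcome {i}: {p:.1%}")
```

Running several scenarios simply means repeating this update step with different hypothetical pieces of evidence and checking how robust the ranking of outcomes is.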
Let us make a small list of principles to abide by:
- No event that doesn't break the rules of logic a priori has probability 0% – although its probability could be arbitrarily low.
- Incompatible events reduce each other's likelihood.
- If the process is akin to summing several dice rolls within a given range (many small independent contributions), assume a roughly normal distribution. Pick the center as the median.
- If the process is likely to exhibit a network effect (or a cumulative effect), consider the possibility of exponential growth or decay. Pick an extreme and weigh it against comparable data.
- Predictions should be phrased in such a way that they reflect the nature of the probability distribution we believe best fits the data. In general, statements such as "within the range [a, b]" should be favored over picking exact points.
- Prediction is a human endeavour – it is not purely mathematical, or even physical. Be aware of the stance you take and compare the probability of being wrong times its cost against the probability of being right times its payoff.
- There are different kinds of predictions: truth-focused, impact-focused and black-swan-focused – the first aims at correctly assessing an event's likelihood, the second tries to change the course of an event, while the third bets on an unlikely event where the payout of being right is larger than the cumulative cost of being wrong multiple times (think of economists predicting crashes: huge payout for calling a crisis, little to no cost for being wrong).
- To assess someone's prediction, try to place it in one of the three categories given in the previous point.
- If truth is your goal, avoid absolute statements. Only apodictic propositions can be safely stated in an absolute way.
- Whenever there are two competing predictions, favor the simplest (incompetence over maliciousness, normal cases over extreme ones, first principles over anecdotes, natural over supernatural) – which is essentially Occam's razor.
- Predictions should be written down and verified a posteriori, serving as a window into one's cognitive biases – this might be an excellent way to improve one's rationality (see the small scoring sketch after this list).
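Here is a minimal sketch of that last point (the prediction records are invented): store each prediction as a stated probability together with what actually happened, then score the whole log afterwards, for instance with the Brier score (0 is perfect, 0.25 is what a coin-flipping forecaster gets):

```python
# Each record: (stated probability that the event happens, what actually happened).
# These entries are invented for the example.
predictions = [
    (0.90, True),   # "very likely" and it happened
    (0.70, False),  # overconfident: it did not happen
    (0.20, False),  # correctly doubted
    (0.60, True),
]

# Brier score: mean squared gap between forecast and outcome.
brier = sum((p - float(outcome)) ** 2 for p, outcome in predictions) / len(predictions)
print(f"Brier score over {len(predictions)} predictions: {brier:.3f}")
```

Keeping such a log over time makes systematic overconfidence (or underconfidence) show up as a pattern rather than a feeling.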
This list may come in handy as predictions become more prevalent, in the… future. Oh, did I just make an impact- and black-swan-focused prediction about predictions? This might have gotten unnecessarily meta.
Anyway, I am excited to soon be able to participate in – and improve my rationality thanks to – Ethereum and blockchain-based prediction markets like Augur and GroupGnosis!