Chapter 8: Motivation Without Deception
Is it too early to calculate Elon Musk’s calibration? Tesla seems like a success by now, and you could argue that SpaceX is as well. That’s at least two 10% predictions that came out true, so he’ll need to fail with 18 similar companies if he wants Scouts to take his opinion seriously...
EDIT: This was a joke.
It could also be due to selection bias.
I think this is a reason why focusing on ‘calibration’ is sort of a mistake? Like, look, the thing calibration does is make the probabilities you state useful — to other people, and to the explicit EV calculations you do yourself. It’s one of the skills in the toolbox; you also want good concepts, you also want accuracy, and so on.
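For concreteness, here’s a minimal sketch of what a calibration check computes: group predictions by stated probability and compare each stated probability with the observed frequency of the predicted outcome. The function name and the example predictions are made up for illustration; the 2-in-20 numbers echo the Musk joke above.

```python
# Minimal sketch of a calibration check: for each stated probability,
# compare it against the observed frequency of the predicted outcome.
from collections import defaultdict

def calibration(predictions):
    """predictions: list of (stated_probability, outcome_happened) pairs.
    Returns {stated_probability: observed_frequency}."""
    buckets = defaultdict(list)
    for p, happened in predictions:
        buckets[p].append(happened)
    return {p: sum(hits) / len(hits) for p, hits in buckets.items()}

# Twenty hypothetical 10% predictions, two of which came true:
# observed frequency matches the stated 10%, i.e. well calibrated.
preds = [(0.10, True)] * 2 + [(0.10, False)] * 18
print(calibration(preds))  # {0.1: 0.1}
```

Note that this only scores the probabilities you attach to claims, not whether the claims themselves were well-chosen — which is the point about calibration being just one tool among several.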
Looking at the early section on motivational advice, I was reminded of Antifragile (my review, Scott’s review). Motivational advice which assures success if one believes hard enough and encourages people to try for things despite long odds doesn’t look like it helps those individuals. If this advice is widely spread and followed, who benefits? Possibly society as a whole. If individuals in general overestimate their chances of success, try, and largely fail, then there’s a much larger pool to select from, and hopefully the best successes are better than they otherwise would be. Deceptive advice transfers antifragility from individuals to the system.
On the same subject, I’ve long felt a disdain for that sort of motivational rhetoric as trite, but I’m still not sure why. The connection to self-deception provided by Galef is one possible explanation. Has anyone else experienced something similar, or does anyone have an explanation for why that might be the case?