I wrote about how rationality made me better at Mario Kart, which I linked here a while ago. In short, it's a reminder to think about your evidence sources and how much weight you should give each.
More recently, I’ve been watching The International, a Dota 2 competition. Last night I was watching yet another game where I wasn’t at all sure who would win. That said, I thought Team Liquid might win (p = 60%). When I saw Team Secret win a minor skirmish (teamfight) against Team Liquid, I made a new prediction of “Team Secret will win (p = 75%)”. However, my original guess was correct: Team Secret eventually won that game.
I then thought about the current metagame and how, this year, any team can go from "winning" to "losing" with only a small error or two, so the outcome of any individual skirmish doesn't matter much.
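To put rough numbers on that (these are made-up figures for illustration, not measured Dota statistics): if the team that eventually wins the game is only slightly more likely to take any given teamfight, Bayes' rule says a single teamfight result should barely move the estimate.

```python
# A minimal Bayes'-rule sketch with invented numbers: how much should
# p(Liquid wins the game) move after Secret takes one teamfight?

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) given P(H), P(E | H), and P(E | not H)."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

prior = 0.60  # my pre-teamfight credence that Liquid wins the game

# Assumption (invented for illustration): the eventual game winner takes
# any given early teamfight 55% of the time, the eventual loser 45%.
p_fight_given_liquid_wins = 0.45  # Secret takes the fight, Liquid still wins
p_fight_given_secret_wins = 0.55  # Secret takes the fight and wins the game

updated = posterior(prior, p_fight_given_liquid_wins, p_fight_given_secret_wins)
print(f"p(Liquid wins) after the teamfight: {updated:.2f}")  # ~0.55
```

With those numbers, the teamfight should only move me from 60% to about 55% on Liquid, nowhere near flipping all the way to 75% on Secret.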
I then imagined Bart Simpson repeatedly writing “I WILL NOT MAKE LARGE UPDATES BASED ON THE OUTCOME OF A SINGLE TEAMFIGHT” on a large blackboard and stopped making that mistake.
I think the major takeaway I’ve gotten from reading The Sequences is the vocabulary around updating beliefs, by varying amounts, based on evidence.
Vocabulary is big. What I'm about to say is anecdotal, but I think having the words to express a concept makes that concept a LOT more readily available when it's relevant. Thanks for the response!
I thought Team Liquid might win (p = 60%). When I saw Team Secret win a minor skirmish (teamfight) against Team Liquid, I made a new prediction of “Team Secret will win (p = 75%)”. However, my original guess was correct: Team Secret eventually won that game.
I think you mean “Team Liquid eventually won the game” here, since that seems to have been your original guess.
Also, it would be interesting to see how the Dota Plus win probabilities at, say, 15 minutes into the match hold up against the actual wins/losses in those games. On the one hand, it seems very difficult to make good predictions in a game like Dota, where things can turn around at the drop of a hat; on the other hand, we have OpenAI Five claiming an 85% win chance right at the end of the drafting phase.
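If someone had those numbers, the check itself would be simple: bin the 15-minute predictions and compare each bin's predicted probability against the actual win rate. A minimal sketch, assuming a hypothetical list of (prediction, outcome) pairs (the data below is invented, not real Dota Plus output):

```python
# A minimal calibration check on hypothetical data -- imagine scraped
# Dota Plus 15-minute win probabilities paired with actual results.
from collections import defaultdict

# (predicted p(Radiant wins) at 15:00, did Radiant actually win?)
games = [(0.72, True), (0.31, False), (0.55, True), (0.81, True),
         (0.45, False), (0.62, False), (0.90, True), (0.23, True)]

buckets = defaultdict(list)
for p, won in games:
    buckets[round(p, 1)].append(won)  # group predictions into ~10% bins

for p in sorted(buckets):
    outcomes = buckets[p]
    print(f"predicted ~{p:.0%}: actual win rate "
          f"{sum(outcomes) / len(outcomes):.0%} over {len(outcomes)} games")
```

A well-calibrated predictor would show each bin's actual win rate landing close to its predicted probability; large gaps at 15 minutes would suggest the in-game estimates are overconfident.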