I think this is a bad exercise or test of rationality skill. First, it’s massively time-consuming, as a LOT has been written about it. Second (and perhaps more important), there’s no reasonable scoring rubric (so it’s not good as a test), and no feedback loop to improve on (so it’s not good as an exercise).
I have, in fact, followed the topic—I used to play poker at semi-professional levels (played in big games and cashed in many small and medium tourneys, net positive over many years, never actually devoted the energy to make it a big part of my income), and still have close friends in the biz (organizers, authors, and players). There is a consensus among those I know well enough to have a positive opinion on their honesty and epistemology, but it’s complex enough that it’s not a very good topic for abstract rationality practice.
More standard prediction contests would seem strictly superior for testing and practice. Pick some Metaculus medium-term predictions, make individual bets, then discuss reasoning and make new bets. Practice crux-finding and identifying input metrics you can use to resolve actual work disagreements.
I used to be quite an active and profitable trader on PredictIt. I’ve also looked into this incident a bit myself. I think the rationality skills needed to do well in prediction contests are important, but different from the kind needed to investigate a question like this, the Amanda Knox case, or the Sabatini incident.