I think this question is maybe logically flawed.

Say I have a shuffled deck of cards. You say the probability that the top card is the Ace of Spades is 1/52. I show you the top card: it is the 5 of diamonds. I then ask, knowing what you know now, what probability you should have given.
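As a sanity check on the arithmetic, the shuffled-deck setup can be simulated directly (a toy sketch of my own, not anything from the thread): over many fresh shuffles, the top card is the Ace of Spades about 1/52 of the time, regardless of what any one reveal shows.

```python
import random

SUITS = ["spades", "hearts", "diamonds", "clubs"]
RANKS = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
DECK = [(rank, suit) for suit in SUITS for rank in RANKS]

def top_card_is_ace_of_spades(trials: int = 100_000, seed: int = 0) -> float:
    """Estimate P(top card is the Ace of Spades) over many shuffles."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        deck = DECK[:]
        rng.shuffle(deck)
        hits += deck[0] == ("A", "spades")
    return hits / trials

print(top_card_is_ace_of_spades())  # close to 1/52 ≈ 0.0192
```

The point being that 1/52 was the right number to report before the reveal, even in the runs where the top card turns out to be the 5 of diamonds.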
I picked a card analogy, and you picked a dice one. I think the card one is better in this case, for weird idiosyncratic reasons I give below that might just be irrelevant to the train of thought you are on.
Cards vs Dice: If we could reset the whole planet to its exact state one week before the election, then we would, I think, get the same result (I don’t think quantum effects will mess with us in one week). What if we do a coarser-grained reset? So if there was a kettle of water at 90 degrees one week before the election, that kettle is reset to contain the same volume of water in the same part of my kitchen, and the water is still 90 degrees, but the individual water molecules have different momenta. For some value of “macro”, the world is reset to the same macrostate, but not the same microstate, that it had one week before election day. If we imagine this experiment, I still think Trump wins every (or almost every) time, given what we know now. For me to think this kind of thermal-level randomness made a difference in one week, the election would have to have been much closer.
In my head, things that change on the coarse-grained reset feel more like not-yet-rolled dice, and things that don’t feel more like face-down cards. Although in detail the distinction is fuzzy: it rests on an arbitrary line between micro and macro, and it is time-sensitive, because cards that are going to be shuffled in the future are in the same category as dice.
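The coarse-grained reset intuition can be made concrete with a toy model (entirely my own construction; the margin and noise numbers are arbitrary): treat the outcome as a fixed "macro" margin plus small "micro" (thermal-level) noise that is redrawn on each reset. The winner flips only when the margin is comparable to the noise scale.

```python
import random

def flip_rate(margin: float, noise_scale: float,
              resets: int = 10_000, seed: int = 1) -> float:
    """Fraction of coarse-grained resets in which micro-level noise
    overturns a fixed macro-level margin."""
    rng = random.Random(seed)
    flips = sum(margin + rng.gauss(0, noise_scale) < 0
                for _ in range(resets))
    return flips / resets

print(flip_rate(margin=2.0, noise_scale=0.1))   # ~0.0: resets never change the winner
print(flip_rate(margin=0.05, noise_scale=0.1))  # substantial: a "much closer" race
```

With a margin many noise-widths wide, every reset lands the same way, which is the "it would have to have been much closer" claim in miniature.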
EDIT: I did as asked, and replied without reading your comments on the EA forum. Reading that I think we are actually in complete agreement, although you actually know the proper terms for the things I gestured at.
Cool, thanks for reading my comments and letting me know your thoughts!
I actually just learned the term “aleatory uncertainty” from chatting with Claude 3.5 Sonnet (New) about my election forecasting in the last week or two post-election. (Turns out Claude was very good for helping me think through mistakes I made in forecasting and giving me useful ideas for how to be a better forecaster in the future.)
I then ask, knowing what you know now, what probability you should have given.
Sounds like you might have already predicted I’d say this (after reading my EA Forum comments), but to say it explicitly: the probability I should have given is different from the aleatoric probability. I think that by becoming informed and making a good judgment I could have reduced my epistemic uncertainty significantly, but I would still have had some. And the forecast that I should have made (or what market prices should have been) reflects epistemic uncertainty on top of aleatoric uncertainty. I think some people who were really informed could have gotten that to ~65-90%, but due to lingering epistemic uncertainty could not have gotten it to >90% Trump (even if, as I believe, the aleatoric probability of a Trump win was >90%, and probably >99%).
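The epistemic-plus-aleatoric point can be illustrated with a toy mixture (the credences and probabilities below are my own hypothetical numbers, not anything claimed in the thread): a forecaster who is unsure which world-model is right should report the credence-weighted mean of the aleatoric probabilities, which can sit well below the probability in their single most-favored model.

```python
# Hypothetical belief state: 0.6 credence that the "true" aleatoric
# probability of the outcome is 0.95, and 0.4 credence in a rival
# world-model where it is only 0.45.
scenarios = [(0.6, 0.95), (0.4, 0.45)]

# The forecast to report is the credence-weighted mean.
forecast = sum(credence * p for credence, p in scenarios)
print(forecast)  # 0.75
```

So the reported forecast lands at 0.75, inside the ~65-90% band, even though the forecaster's most-favored model puts the aleatoric probability above 90%.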