So far, we have a decisive argument against a classical reference machine and the Copenhagen interpretation.
My problem with this type of probabilistic proof is that it predictably leads to wrong results for some (maybe very few) observers. The simplest example is the Doomsday Argument: assume everybody reasons this way; then the very early (or very late) people who apply the argument will arrive at the wrong conclusion that the end is near (or far).
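To make that concrete, here’s a quick Monte Carlo sketch (my own illustration; the uniform prior over world sizes and the 10% cutoff for “early” are arbitrary choices, not part of the argument itself). The self-sampling point estimate “total ≈ 2 × my birth rank” lands within a factor of two of the truth for roughly three quarters of observers, yet every observer in the earliest tenth of their world underestimates the total and so wrongly concludes that doom is near:

```python
import random

random.seed(0)
NUM_WORLDS = 10_000

within_factor_2 = 0  # estimates within a factor of 2 of the true total
early_total = 0      # sampled observers in the earliest 10% of their world
early_wrong = 0      # ...who conclude there are far fewer observers than there are

for _ in range(NUM_WORLDS):
    total = random.randint(1, 1_000_000)  # true number of observers ever born
    rank = random.randint(1, total)       # birth rank of a uniformly sampled observer
    estimate = 2 * rank                   # Doomsday-style point estimate of `total`
    if total / 2 <= estimate <= 2 * total:
        within_factor_2 += 1
    if rank <= total / 10:                # an "early" observer
        early_total += 1
        if estimate <= total / 2:         # underestimates badly: "the end is near"
            early_wrong += 1

print(f"within 2x of the truth: {within_factor_2 / NUM_WORLDS:.0%}")                  # ~75%
print(f"early observers who underestimate: {early_wrong / max(early_total, 1):.0%}")  # 100%
```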
Seems like a general issue with Bayesian probabilities? Like, I’m making an argument at a >1000:1 odds ratio; it’s not meant to be 100% certain.
No? With normal probabilities, I can make bets and check my calibration. That’s not possible here.
Then the problem is that you can’t make bets and check your calibration, not that some people will arrive at the wrong conclusion, which is inevitable with probabilistic reasoning.
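Concretely, here’s what the check looks like when it is available (a minimal sketch of my own, not anything from the argument above):

```python
import random

random.seed(1)

# Repeatable bets: a forecaster who states p = 0.9 can compare that stated
# probability against the observed frequency over many outcomes.
stated_p = 0.9
wins = sum(random.random() < stated_p for _ in range(10_000))
print(f"stated {stated_p:.2f}, observed {wins / 10_000:.3f}")  # close to 0.90

# A Doomsday reasoner observes exactly one outcome (their own birth rank),
# so no analogous sequence of outcomes exists to check calibration against.
```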