Well yeah and I could trivially “defeat” any argument of yours by declaring my prior for it to be very low. My priors for the future are broadly distributed because the world we are in would seem very weird to a hunter-gatherer, so I think it’s likely that the world of 6,000 years from now will seem very weird to us. Heck, World War II would probably sound pretty fantastic if you described it to Columbus.
Priors can’t go arbitrarily high before the sum over incompatible propositions becomes greater than 1.
If we were to copy your brain a trillion times over and ask it to give your “broadly distributed” priors for various mutually incompatible and very specific propositions, the results should sum to 1 (or to less than 1 if the set is non-exhaustive), which means that most propositions should receive very, very low priors. I strongly suspect that this wouldn’t even remotely be the case: you’d be given a proposition, you couldn’t be sure it’s wrong “because the world of the future would look strange,” and so you’d give it some prior heavily biased towards 0.5; summed over all the propositions, the total would be enormous.
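The consistency constraint here can be shown with a toy calculation (the numbers are hypothetical, just for illustration): if each of many mutually exclusive future scenarios gets a “can’t rule it out” prior anywhere near 0.5, the total probability mass blows past 1, which a coherent probability assignment cannot do.

```python
# Toy illustration (hypothetical numbers): priors over mutually
# exclusive propositions must sum to at most 1.

n_propositions = 1000   # distinct, mutually exclusive future scenarios
biased_prior = 0.3      # a "can't be sure it's wrong" prior per scenario

# Assigning this prior to every scenario yields a total far above 1,
# violating the axioms of probability.
total = n_propositions * biased_prior
print(total)            # 300.0

# A coherent assignment over n exclusive scenarios can give an
# *average* prior of at most 1/n -- so most must be very small.
average_upper_bound = 1 / n_propositions
print(average_upper_bound)  # 0.001
```

The point of the sketch: the more specific (and hence more numerous) the competing scenarios, the lower the prior each one can coherently receive.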
When you’re making very specific stuff up about what the world of 6000 years from now will look like, it’s necessarily quite unlikely that you’re right and quite likely that you’re wrong, precisely because that future would seem strange. That the future is unpredictable works against specific visions of the future.
I’ll let you have the last word :)