Thank you for your clear response. How about another example? If somebody offers to flip a fair coin, paying me $11 if it lands Heads and taking $10 from me if it lands Tails, then I will happily take this bet. If they say we’re going to repeat the same bet 1000 times, I will take that too, since I expect to gain and am unlikely to lose much. If instead they show me five unfair coins and tell me their Heads biases range from 20% to 70%, then I’m taking on more risk. The three coins between those extremes could all be 21% Heads or all 69% Heads, but if I had to pick a side I’d pick Tails: I know nothing about those three coins and nothing about whether the other person wants me to make or lose money, so I’d figure they’re randomly biased within that range. (I could still be playing a loser’s game for 1000 rounds if each flip uses one of the five coins chosen at random, but picking Tails is still better than picking Heads.) Is this the situation we’re discussing?
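To make the arithmetic I have in mind explicit, here is a quick sketch. It assumes the stakes above (win $11 if the flip matches my pick, lose $10 otherwise) and that the three unknown coins are biased uniformly at random within [0.2, 0.7]; the function names are just for illustration.

```python
import random

# Assumed stakes (my reading of the bet): I pick a side, win $11 if the
# flip matches my pick, and lose $10 otherwise.
WIN, LOSE = 11, -10

def ev_per_flip(p_heads, pick_heads):
    """Expected value of a single flip given the coin's Heads bias and my pick."""
    p_win = p_heads if pick_heads else 1 - p_heads
    return p_win * WIN + (1 - p_win) * LOSE

# Fair coin: either pick is worth 0.5 * 11 - 0.5 * 10 = +$0.50 per flip.
print(ev_per_flip(0.5, pick_heads=True))

# Unfair coins: if the unknown biases average 0.45 (midpoint of [0.2, 0.7]),
# picking Heads is -$0.55 per flip and picking Tails is +$1.55 per flip.
print(ev_per_flip(0.45, pick_heads=True))
print(ev_per_flip(0.45, pick_heads=False))

def simulate(pick_heads, rounds=1000, seed=0):
    """Rounds where one of the five coins is chosen at random for each flip.

    The two extreme coins are 20% and 70% Heads; the other three are drawn
    uniformly from [0.2, 0.7] (my assumption about 'randomly biased').
    """
    rng = random.Random(seed)
    biases = [0.2, 0.7] + [rng.uniform(0.2, 0.7) for _ in range(3)]
    total = 0
    for _ in range(rounds):
        coin = rng.choice(biases)
        heads = rng.random() < coin
        total += WIN if heads == pick_heads else LOSE
    return total

print("Tails pick, 1000 rounds:", simulate(pick_heads=False))
print("Heads pick, 1000 rounds:", simulate(pick_heads=True))
# Averaged over many draws of the three unknown biases, the Tails pick gains
# and the Heads pick loses, though any single draw can go either way.
```

(Of course, if the three unknown biases happen to be skewed toward Heads, the Tails pick can still lose over the 1000 rounds; that’s the “loser’s game” caveat above.)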
I think I’m happy to say that in this example, you’re warranted in reasoning like: “I have no information about the biases of the three coins except that they’re in the range [0.2, 0.7]. The space ‘possible biases of the coin’ seems like a privileged space with respect to which I can apply the principle of indifference, so there’s a positive motivation for having a determinate probability distribution over each of the three coins’ biases, centered on 0.45 (the midpoint of that range).”
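Spelling out what that determinate distribution licenses, under the win-$11/lose-$10 stakes from your example (the numbers are yours; the only step I’m adding is the indifference-based expectation):

$$
\mathbb{E}[p_{\text{Heads}}] = \tfrac{0.2 + 0.7}{2} = 0.45, \qquad
\mathbb{E}[\text{payoff} \mid \text{pick Heads}] = 0.45(11) - 0.55(10) = -0.55, \qquad
\mathbb{E}[\text{payoff} \mid \text{pick Tails}] = 0.55(11) - 0.45(10) = +1.55 .
$$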
But many epistemic situations we face in the real world, especially when reasoning about the far future, are not like that. We don’t have a clear, privileged range of numbers to which we can apply the principle of indifference. Rather we have lots of vague guesses about a complicated web of things, and our reasons for thinking a given action could be good for the far future are qualitatively different from (hence not symmetric with) our reasons for thinking it could be bad. (Getting into the details of the case for this is better left for top-level posts I’m working on, but that’s the prima facie idea.)