If probability makes sense at all, then “I believe that the odds are 2:1, but I *know* that in a minute I’ll believe that it’s 1:1” destroys it as a coherent formalisation of beliefs. Should the 2:1 version of you force their future copy to stick with 2:1 rather than 1:1? If not, why do they think their own beliefs are right?
Which interpretation of probability do you use? I go with standard subjective Bayesianism: probabilities are your credences, which are your degrees of belief.
So there’s nothing contradictory or incoherent about believing that you will believe something else in the future. Trivial case: someone will brainwash you in the future, and you know this.

As for “why do you think your own beliefs are right?”: first of all, why do I need to answer that question in order to coherently hold those beliefs? Not every belief can be justified in that way. Secondly, if I follow SSA (the self-sampling assumption), here’s my justification: “Well, here are my priors. Here is my evidence. I then conditionalized on the evidence, and this is what I got. That future version of me has the same priors but different evidence, so they got a different result.” Why is that not justification enough?
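To make that bookkeeping concrete, here is a minimal sketch in Python with made-up hypotheses and likelihoods: both versions of me start from the same prior and apply the same conditionalization rule, but feed it different evidence, so one ends up at 2:1 odds and the other at 1:1.

```python
# Toy illustration of "same priors, different evidence, different posteriors".
# The hypotheses and likelihoods below are invented purely for illustration.

def conditionalize(prior, likelihoods):
    """Bayes' rule: P(H | E) is proportional to P(E | H) * P(H)."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

prior = {"H1": 0.5, "H2": 0.5}  # shared by me and my future copy

# Evidence I have now: twice as likely under H1 as under H2.
my_likelihoods = {"H1": 0.8, "H2": 0.4}
# Evidence my future copy will have: equally likely under both hypotheses.
future_likelihoods = {"H1": 0.6, "H2": 0.6}

me_now = conditionalize(prior, my_likelihoods)        # {'H1': 0.67, 'H2': 0.33} -> odds 2:1
future_me = conditionalize(prior, future_likelihoods)  # {'H1': 0.5,  'H2': 0.5}  -> odds 1:1

print(me_now, future_me)
```

Both agents run the same rule on the same prior; the posteriors only come apart because the inputs do.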
Yes, it’s weird when you are motivated to force your future copy to do things. Perhaps we should do for probability what we did for decision theory, and talk about agents that have the ability to irrevocably bind their future selves. (Isn’t this basically what you think we should do?)
But it’s not incoherent or senseless to think that yes, I have credence X now and in the future I will have credence Y, just as it isn’t incoherent or senseless to wish that your future self would refuse the blackmail even though your future self would actually decide to give in.
If you couple these probability theories with the right decision theories, this should never come up. FNC (full non-indexical conditioning) yields the correct answer if you use a decision theory that lets you decide for all your identical copies (but not the ones who have had different experiences), and SIA (the self-indication assumption) yields the correct answer if you assume that you can’t affect the choices of your other copies.
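As a sanity check, here is a toy calculation in Python. The setup is one I’m supplying for illustration (an incubator-style bet: fair coin, Heads creates one copy, Tails creates two copies with identical experiences, and each copy may stake $1 to win some gain if the coin landed Tails), and, if I’ve applied the two theories correctly, FNC-with-joint-control and SIA-with-individual-control both endorse the bet exactly when the outside, non-anthropic expected value is positive.

```python
# Toy incubator bet, invented for illustration: fair coin; Heads -> 1 copy,
# Tails -> 2 copies with identical experiences. Each copy may stake $1,
# winning `gain` (net) if the coin landed Tails and losing the $1 if Heads.

def outside_view(gain):
    """Total expected winnings across all copies if everyone bets (bookie's perspective)."""
    return 0.5 * (-1) + 0.5 * (2 * gain)

def fnc_joint_control(gain):
    """FNC gives P(Tails) = 1/2 here (my exact experience occurs either way),
    but my decision binds every identical copy, so Tails counts both copies' winnings."""
    return 0.5 * (-1) + 0.5 * (2 * gain)

def sia_individual_control(gain):
    """SIA gives P(Tails) = 2/3, and I decide (and count winnings) only for myself."""
    return (1 / 3) * (-1) + (2 / 3) * gain

for gain in [0.4, 0.5, 0.6, 1.0]:
    verdicts = [ev(gain) > 0 for ev in (outside_view, fnc_joint_control, sia_individual_control)]
    print(f"gain={gain}: bet? outside={verdicts[0]}, FNC+joint={verdicts[1]}, SIA+solo={verdicts[2]}")
```

All three expected values flip from negative to positive at the same threshold (gain > 0.5), which is the sense in which the pairings agree; crossing them the other way (e.g. FNC probabilities with purely individual control) gives a different threshold.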