After reading this, I feel that how one should deal with anthropics depends strictly on one’s goals. I’m not sure which cognitive algorithm does the correct thing in general, but it seems that sometimes it reduces to “standard” probabilities and sometimes it doesn’t. May I ask what UDT says about all of this, exactly?
Suppose you’re rushing an urgent message back to the general of your army, and you fall into a deep hole. Down here, conveniently, there’s a lever that can create a duplicate of you outside the hole. You can also break open the lever and use the wiring as ropes to climb to the top. You estimate that the second course of action has a 50% chance of success. What do you do?
Obviously, if the message is your top priority, you pull the lever, and your duplicate delivers it. This succeeds every time, while the wire-rope has only a 50% chance of working.
Agree.
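For concreteness, here’s the expected-utility arithmetic as a minimal sketch (the payoff of 1 for a delivered message and 0 otherwise is an illustrative assumption, not part of the original scenario):

```python
# Illustrative payoffs: 1 if the message is delivered, 0 otherwise.
p_wire_holds = 0.5

eu_lever = 1.0                 # the duplicate outside always delivers
eu_climb = p_wire_holds * 1.0  # succeeds only if the wire-rope holds

print(f"EU(pull lever) = {eu_lever}, EU(climb wire) = {eu_climb}")
```

Pulling the lever dominates whenever delivery is all that matters.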
After pulling the lever, do you expect to be the copy on top or the copy at the bottom?
A question without meaning per se; agree.
What if the lever initially creates one copy, and then, five seconds later, creates a million? How do you update your probabilities during those ten seconds?
Before pulling the lever, I commit to doing the following.
For the first five seconds, I will think (all copies of me will think) “I am above”. This way, 50% of all my copies will be wrong.
For the remaining five seconds, I will think (all copies of me will think) “I am above”. This way, one millionth of all my copies will be wrong.
If each of my copies were paid for correctly guessing which copy it is, then only one millionth of all my copies would end up poor.
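To make the copy-counting concrete, here’s a minimal sketch (the exact copy counts and the two phases are illustrative assumptions drawn from the scenario above):

```python
from fractions import Fraction

# Phase 1: the lever creates one duplicate -> 1 copy above, 1 below.
# Phase 2: five seconds later, a million more copies appear above.
phases = [
    ("first five seconds", {"above": 1, "below": 1}),
    ("after the million copies", {"above": 1_000_001, "below": 1}),
]

for name, copies in phases:
    total = sum(copies.values())
    # Every copy follows the committed policy and thinks "I am above",
    # so exactly the copies below are wrong.
    wrong = copies["below"]
    frac = Fraction(wrong, total)
    print(f"{name}: fraction of copies wrong = {frac} (~{float(frac):.7f})")
```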
This sounds suspiciously like updating probabilities the “standard” way, especially if you replace “copies” with “measure”.
In practice, UDT can update in that way (you need that to avoid Dutch books). It just doesn’t take a position on the anthropic probability itself, only on behaviour under evidence updates.
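A minimal sketch of the Dutch-book point (the ticket structure, stakes, and copy counts are my own illustrative assumptions, not part of UDT itself):

```python
# Each copy is offered a ticket that pays 1 if that copy is above, at a
# given price. If every copy buys, the aggregate outcome across copies
# breaks even only when the price equals the copy-fraction, i.e. the
# "standard" posterior; any other betting odds can be exploited.

def aggregate_profit(price: float, n_above: int, n_below: int) -> float:
    """Total profit over all copies if each buys one ticket at `price`."""
    return n_above * 1.0 - (n_above + n_below) * price

n_above, n_below = 1_000_001, 1
fair_price = n_above / (n_above + n_below)  # the copy-fraction

for price in (0.5, fair_price, 0.99999999):
    profit = aggregate_profit(price, n_above, n_below)
    print(f"price {price:.8f}: aggregate profit {profit:+.6f}")
```

Betting at the copy-fraction odds is exactly the policy that can’t be pumped, which is why the behaviour matches a “standard” update even though UDT stays silent on the probability itself.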