If you have a 50% chance of existing in a universe with trillions of trillions of observers, and a 50% chance of existing in a universe with merely a trillion observers, would you take a bet at trillion-to-one odds that you’re in the first one?
The SIA odds seem to imply that you should.
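(To make the arithmetic concrete, here is a rough sketch that is not part of the original exchange: it reads “trillions of trillions” as 10^24 observers and “a trillion” as 10^12, and treats the trillion-to-one bet as risking $10^12 to win $1 — all illustrative choices. Under the flat 50/50 prior the bet is ruinous; under the SIA-weighted credence it comes out roughly break-even, which is why SIA seems to say you should take it.)

```python
# Illustrative sketch only: expected value of the trillion-to-one bet under an
# SIA-weighted credence versus a flat 50/50 prior. The observer counts and the
# $1 prize / $1e12 stake are my assumptions, not numbers from the thread.

N_BIG = 10**24    # "trillions of trillions" of observers (illustrative reading)
N_SMALL = 10**12  # "merely a trillion" observers

# SIA: weight each world's 50/50 prior by its number of observers.
p_big_sia = (0.5 * N_BIG) / (0.5 * N_BIG + 0.5 * N_SMALL)
p_big_flat = 0.5

def ev_of_bet(p_big, win=1.0, lose=1e12):
    """Expected value of betting, at trillion-to-one odds, that you're in the big world."""
    return p_big * win - (1 - p_big) * lose

print(f"SIA credence in the big world: {p_big_sia:.15f}")
print(f"EV with SIA credence:  {ev_of_bet(p_big_sia):+.3e}")   # on the order of -1e-12: essentially break-even
print(f"EV with flat 50/50:    {ev_of_bet(p_big_flat):+.3e}")  # about -5e+11: ruinous
```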
Oh, okay. Is there any formal reason to think that modeling indifference as the average of altruism and hate is better than just going with the probabilities (modeling indifference as selfishness)?
There is a huge debate as to what the probabilities should be. I have strong arguments as to what the correct path is for altruism, so I’m trying to extend it into a contentious area.
Seems too arbitrary to succeed. “Anti-SB” is constructed solely to cancel out SB when they’re averaged, so it’s no surprise that it works that way.
Besides, the structure of the presumptuous philosopher doesn’t seem like an average between those two structures. PP claims to know the probability that he’s in world (2) (where world (1) has 1 person and (2) has 2). How would you turn this into a decision problem? You say you’ll give him a dollar if he guesses right. Following your rules, he adds indexical utilities. So guessing (2) will give him $2, and guessing (1) will give him $1. This structure is identical to the sleeping beauty problem.
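(A rough formalization of that arithmetic, mine rather than the commenter’s: it assumes a fair-coin prior over the two worlds, which the comment doesn’t state explicitly, and computes the expected summed payoff under the “add indexical utilities” rule.)

```python
# Sketch of the comment's decision problem: world 1 has one copy, world 2 has
# two; a correct guess pays each copy $1 and the copies' payoffs are summed.
# The fair-coin prior is my assumption.

COPIES = {1: 1, 2: 2}
PRIOR = {1: 0.5, 2: 0.5}

def expected_summed_payoff(guess, prize=1.0):
    # The guess pays off only in the world it names, once per copy living there.
    return PRIOR[guess] * COPIES[guess] * prize

print(expected_summed_payoff(2))  # 1.0 -- guessing the bigger world dominates,
print(expected_summed_payoff(1))  # 0.5 -- the same betting structure as Sleeping Beauty
```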
My rules only apply to copies of the same person. It’s precisely because he doesn’t care about the other ‘copies’ of himself that the presumptuous philosopher is different from sleeping beauty.
EDIT: I should note I’m talking about inserting the “presumptuous philosopher” program into the sleeping beauty situation. Your first sentence seems to imply the original problem, and then your second sentence is back to the sleeping beauty problem.
Still edit: It seems that you are solving the wrong problem. You’re solving the decision problem where you give someone in the universe a dollar and show that an indifferent philosopher is indifferent to living in a universe where 10 other people get a dollar vs. a universe where 1 other person gets a dollar. But he was designed to be indifferent to whether other people get a dollar, so it’s all good. However, that doesn’t appear to have any bearing on probabilities. You can only get probabilities from decisions if you also have a utility function that uses probabilities; then you can run it in reverse to get probabilities from utility functions. However, the indifferent philosopher is indifferent! He doesn’t care what other people do. His utility function cannot be solved for probabilities, because all the terms that would depend on them are zero.
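(Again a rough formalization on my part, not the commenter’s: write the philosopher’s expected utility with a weight w on other people’s dollars. With w = 0, every term that depends on the probability p of the bigger world drops out, so the function is flat in p and cannot be “run in reverse” to recover p.)

```python
# Sketch of why the inversion fails: the indifferent philosopher's expected
# utility as a function of p, the probability of the 10-other-person world.
# The weight w on other people's dollars and u_self are my notation.

def expected_utility(p, w, u_self=0.0, u_dollar=1.0):
    world_big   = u_self + w * 10 * u_dollar  # 10 other people get a dollar
    world_small = u_self + w * 1  * u_dollar  # 1 other person gets a dollar
    return p * world_big + (1 - p) * world_small

# Indifferent (w = 0): the same value for every p, so his choices reveal nothing about p.
print([expected_utility(p, w=0.0) for p in (0.1, 0.5, 0.9)])  # [0.0, 0.0, 0.0]
# Altruistic (w = 1): the value varies with p, so decisions do constrain p.
print([expected_utility(p, w=1.0) for p in (0.1, 0.5, 0.9)])  # roughly [1.9, 5.5, 9.1]
```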