Your proposed solution seems to introduce some arbitrary structure to the decision algorithm. Specifically, there is a large number of alternatives to CDP (“if N correlated decisions are needed to gain some utility, then each decision maker is estimated to have contributed 1/N of the effort towards the gain of that utility”) that all give the same solutions. For example, suppose we replace it with:
if N correlated decisions are needed to gain some utility, then the decision maker highest in the decision tree is assigned full responsibility for it
Then the expected return for the driver at X is (1-p)x + [p(1-p)y + p^2 z] and the expected return for the driver at Y is 0, and the sum is still R = (1-p)x + p(1-p)y + p^2 z. Or we can replace CDP with:
if N correlated decisions are needed to gain some utility, then the decision maker lowest in the decision tree is assigned full responsibility for it
Then the expected return for the driver at X is (1-p)x and the expected return for the driver at Y is [(1-p)y + pz] (weighted by its probability of existence, p), so the total, (1-p)x + p[(1-p)y + pz], is again R.
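For concreteness, here is a quick numerical sanity check (a minimal sketch in Python; the values of p, x, y, z are arbitrary stand-ins, not from the original set-up) that both alternative assignments total R:

```python
# Check that both responsibility assignments above total R for the
# absent-minded-driver set-up; p, x, y, z are arbitrary test values.
p, x, y, z = 0.3, 4.0, 1.0, 2.0

R = (1 - p) * x + p * (1 - p) * y + p**2 * z  # total expected return

# "Highest in the tree takes full responsibility": the driver at X is
# credited with everything, the driver at Y with 0.
highest = ((1 - p) * x + (p * (1 - p) * y + p**2 * z)) + 0

# "Lowest in the tree takes full responsibility": the driver at X keeps
# only (1-p)x; the driver at Y gets (1-p)y + pz, weighted by its
# probability of existence p.
lowest = (1 - p) * x + p * ((1 - p) * y + p * z)

assert abs(highest - R) < 1e-12
assert abs(lowest - R) < 1e-12
```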
Basically, how you assign responsibility is irrelevant, as long as the total responsibility adds up to 1. So why pick equal assignment of responsibility?
Also, your proposed computation

1 * ((1-p)x + [p(1-p)y + p^2 z]/2) + p * [(1-p)y + pz]/2

is not an expected utility computation (since the probabilities 1 and p don’t sum up to 1). Nor does it compute or use “the probability that I’m at X” so it’s no better than UDT in satisfying the epistemic intuition that demands that probability.
At this point, it seems to me that “contribution”/”responsibility” is not a useful concept. It’s just adding complexity without any apparent benefit. Do you agree with this assessment? If not, what advantage do you see in your proposal over UDT?
Your proposed solution seems to introduce some arbitrary structure to the decision algorithm. Specifically, there is a large number of alternatives to CDP (“if N correlated decisions are needed to gain some utility, then each decision maker is estimated to have contributed 1/N of the effort towards the gain of that utility”) that all give the same solutions.
Ah, so someone noticed that :-) I didn’t put it in, since it all gave the same result in the end and would have made the whole thing more complicated than needed. For instance, consider these set-ups:
1) Ten people: each one causes £10 to be given to everyone in the group.
2) Ten people: each one causes £100 to be given to themselves.
3) Ten people: each one causes £100 to be given to the next person in the group.
Under correlated decision making, each set-up comes to the same thing (the first is CDP; the others are harder to work with). I chose to go with the simplest model, since they are all equivalent.
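A minimal numerical illustration of that equivalence (a sketch assuming fully correlated deciders, so either all ten act or none do):

```python
# Under fully correlated decisions, all ten people either act or not.
# Check that each set-up gives every person the same payoff (£100)
# when everyone acts, so the three models are interchangeable.
N = 10

# 1) Each actor causes £10 to be given to everyone in the group.
payoff_1 = N * 10   # each person receives £10 from each of the N actors

# 2) Each actor causes £100 to be given to themselves.
payoff_2 = 100

# 3) Each actor causes £100 to be given to the next person in the group.
payoff_3 = 100      # each person has exactly one predecessor

assert payoff_1 == payoff_2 == payoff_3 == 100
```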
1 * ((1-p)x + [p(1-p)y + p^2 z]/2) + p * [(1-p)y + pz]/2 is not an expected utility computation (since the probabilities 1 and p don’t sum up to 1).
It is a sum of expected utilities: the expected utility you gain from the decision of the driver at X, plus the expected utility you gain from the decision of the driver at Y (under CDP).
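To make that concrete, here is a short symbolic check (a sketch using sympy; the grouping of terms follows the computation quoted above) that the two per-driver expected utilities add back up to the total expected return R:

```python
# Symbolic check: the CDP sum of per-driver expected utilities equals
# the overall expected return R of the absent-minded-driver set-up.
from sympy import symbols, simplify

p, x, y, z = symbols('p x y z')

R = (1 - p) * x + p * (1 - p) * y + p**2 * z

# Driver at X: full credit for (1-p)x, plus half credit for the jointly
# caused terms. Driver at Y: the other half, weighted by its probability
# of existence p.
cdp_sum = 1 * ((1 - p) * x + (p * (1 - p) * y + p**2 * z) / 2) \
          + p * ((1 - p) * y + p * z) / 2

assert simplify(cdp_sum - R) == 0
```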
At this point, it seems to me that “contribution”/”responsibility” is not a useful concept. It’s just adding complexity without any apparent benefit. Do you agree with this assessment? If not, what advantage do you see in your proposal over UDT?
The starting point was Eliezer’s recent post on outlawing anthropics, where he seemed ready to throw out anthropic reasoning entirely, based on the approach he was using. The above style of reasoning correctly predicts your expected utility as a function of your decision. Similarly, this type of reasoning solves the Anthropic Trilemma.
If UDT does the same, then my approach has no advantage (though some people may find it conceptually easier, and others conceptually harder).