Your proposed solution seems to introduce some arbitrary structure into the decision algorithm. Specifically, there are many alternatives to CDP (“if N correlated decisions are needed to gain some utility, then each decision maker is estimated to have contributed 1/N of the effort towards the gain of that utility”) that all give the same solutions.
Ah, so someone noticed that :-) I didn’t put it in, since it all gave the same result in the end and would have made the whole thing more complicated than needed. For instance, consider these set-ups:
1) Ten people: each one causes £10 to be given to everyone in the group.
2) Ten people: each one causes £100 to be given to themselves.
3) Ten people: each one causes £100 to be given to the next person in the group.
Under correlated decision-making, each set-up gives the same result (the first is plain CDP; the others are harder to compute directly). Since they’re all equivalent, I chose to go with the simplest model; the sketch below tallies the payouts to illustrate.
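To make the equivalence concrete, here is a minimal sketch (mine, not part of the original exchange) that tallies each person’s payout in the three set-ups, assuming all ten correlated decision makers act:

```python
# Payout tallies for the three set-ups, assuming all ten correlated
# decision makers act. Group size and amounts are from the examples above.
N = 10
payouts = {}

# Set-up 1: each actor gives £10 to everyone in the group,
# so each person receives £10 from each of the N actors.
payouts[1] = [10 * N for person in range(N)]

# Set-up 2: each actor gives £100 to themselves.
payouts[2] = [100 for person in range(N)]

# Set-up 3: each actor gives £100 to the next person (cyclically).
receipts = [0] * N
for actor in range(N):
    receipts[(actor + 1) % N] += 100
payouts[3] = receipts

for setup, amounts in payouts.items():
    print(f"set-up {setup}: payouts {amounts}")
```

All three set-ups pay every person £100: once the decisions are fully correlated, the set-ups are interchangeable, which is why the simplest model suffices.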
1·((1−p)x + [p(1−p)y + p²z]/2) + p·[(1−p)y + pz]/2 is not an expected utility computation (since the probabilities 1 and p don’t sum up to 1).
It is a sum of expected utilities—the expected utility you gain from driver 1, plus the expected utility you gain from driver 2 (under CDP).
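To spell the decomposition out (this is my own check with illustrative payoff values, not figures from the original thread): the first summand is the expected utility credited to driver 1, the second is the expected utility credited to driver 2, and their sum reduces algebraically to the ordinary expected utility (1−p)x + p(1−p)y + p²z.

```python
# Checking that the quoted expression is a sum of two per-driver
# expected utilities and equals the ordinary overall expected utility.
# p is the probability of continuing; x, y, z are illustrative payoffs.
p = 0.5
x, y, z = 0.0, 4.0, 1.0

# Expected utility credited to driver 1 (whose decision always happens).
eu_driver1 = 1 * ((1 - p) * x + (p * (1 - p) * y + p**2 * z) / 2)
# Expected utility credited to driver 2 (reached only with probability p).
eu_driver2 = p * ((1 - p) * y + p * z) / 2

total = eu_driver1 + eu_driver2
ordinary_eu = (1 - p) * x + p * (1 - p) * y + p**2 * z

print(total, ordinary_eu)  # both 1.25: the two computations agree
```

Neither summand on its own is an expectation over a full probability distribution, but the total matches the standard expected-utility calculation.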
At this point, it seems to me that “contribution”/”responsibility” is not a useful concept. It’s just adding complexity without any apparent benefit. Do you agree with this assessment? If not, what advantage do you see in your proposal over UDT?
The starting point was Eliezer’s recent post on outlawing anthropics, where he seemed ready to throw out anthropic reasoning entirely, based on the approach he was using. The above style of reasoning correctly predicts your expected utility conditional on your decision. Similarly, this type of reasoning solves the Anthropic Trilemma.
If UDT does the same, then my approach has no advantage (though some people may find it conceptually easier, and others conceptually harder).