I think the moral-uncertainty version of the problem is fatal unless you make further assumptions about how to resolve it, such as by fixing some arbitrary intertheoretic-comparison weights (which seems to be what you’re suggesting) or using the parliamentary model.
Regardless of whether the problem can be resolved, I confess that I don’t see how it’s related to the original two-envelopes problem, which is a case of doing incorrect expected-value calculations with sensible numbers. (The contents of the envelopes are entirely comparable and can’t be rescaled.)
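For concreteness, here's a minimal sketch of the miscalculation I mean; the dollar amounts are purely illustrative:

```python
# The classic miscalculation, with concrete dollar amounts (illustrative numbers
# only). One envelope holds 100, the other 200: fully comparable quantities.
envelopes = [100, 200]

# Naive reasoning: "whatever amount X my envelope holds, the other holds X/2 or
# 2X with equal probability, so switching is worth 1.25X." The slip is treating
# X as the same fixed number in both branches, even though it isn't.
for mine in envelopes:
    print(mine, 0.5 * (mine / 2) + 0.5 * (2 * mine))
# 100 -> 125.0 and 200 -> 250.0: the naive calculation says to switch no matter
# which envelope you hold, which is the paradox.
```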
Meanwhile, it seems to me that the elephants problem just comes about because the numbers are fake. You can do sensible EV calculations, get (a + b/4) for saving two elephants versus (a/2 + b/2) for saving one human, but because a and b are mostly unconstrained (they just have to be positive), you can't go anywhere from there.
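To spell that out, here's one setup that yields those expressions; the 50/50 credences and the 1:1 and 1:4 ratios are just for illustration:

```python
# Assumed setup, for illustration: 50% credence in theory 1, on which an
# elephant counts the same as a human and a human is worth `a` in that theory's
# units; 50% credence in theory 2, on which an elephant counts 1/4 as much as a
# human and a human is worth `b` in that theory's units.

def ev_save_two_elephants(a: float, b: float) -> float:
    return 0.5 * (2 * a) + 0.5 * (2 * b / 4)  # = a + b/4

def ev_save_one_human(a: float, b: float) -> float:
    return 0.5 * a + 0.5 * b                  # = a/2 + b/2

# The comparison hinges entirely on how a and b relate, and nothing pins that
# down beyond a > 0 and b > 0:
print(ev_save_two_elephants(1, 1), ev_save_one_human(1, 1))    # 1.25 vs 1.0 -> elephants win
print(ev_save_two_elephants(1, 10), ev_save_one_human(1, 10))  # 3.5 vs 5.5 -> human wins
```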
These strike me as just completely unrelated problems.
The naive form of the argument is the same in the classic and moral-uncertainty two-envelopes problems. But yes: the classic version has a resolution based on taking expected values of absolute rather than relative measurements, while the moral-uncertainty version has no similar resolution, because there are no unique absolute measurements.
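To make the contrast concrete, here's a toy sketch; the credences and ratios are made up purely for illustration:

```python
# Classic case: the amounts are absolute and comparable. Averaging over the
# actual contents (x and 2x), each envelope is worth 1.5x, so switching gains
# nothing; the paradox only appears in the relative ("X/2 or 2X") framing.
x = 100
print(0.5 * x + 0.5 * (2 * x))   # 150.0 whichever envelope you hold

# Moral-uncertainty case: there is no unique absolute unit, and anchoring the
# same credences to different sides gives different exchange rates.
# Toy credences: 50% that an elephant is worth 0.01 humans, 50% that it is
# worth 1 human.
ev_elephant_in_human_units = 0.5 * 0.01 + 0.5 * 1.0    # 0.505 humans per elephant
ev_human_in_elephant_units = 0.5 * 100.0 + 0.5 * 1.0   # 50.5 elephants per human

print(ev_elephant_in_human_units)       # 0.505 -> roughly 2 elephants per human
print(1 / ev_human_in_elephant_units)   # ~0.0198 -> roughly 50 elephants per human
```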
There’s nothing wrong with using relative measurements, and using absolute measurements doesn’t resolve the problem. (It hides from the problem, but that’s not the same thing.)
The actual resolution is explained in the wiki article better than I could explain it here.
I agree that the naive version of the elephants problem is isomorphic to the envelopes problem. But the envelopes problem doesn’t reveal an actual difficulty with choosing between two envelopes, and the naive elephants problem as described doesn’t reveal an actual difficulty with choosing between humans and elephants. They just reveal a particular math error that humans are bad at noticing.
I think most thinkers on this topic wouldn’t think of those weights as arbitrary (I know you and I do, as hardcore moral anti-realists), and they wouldn’t find it prohibitively difficult to introduce those weights into the calculations. Not sure if you agree with me there.
I do agree with you that you can’t do moral weight calculations without those weights, assuming you are weighing moral theories and not just empirical likelihoods of mental capacities.
I should also note that I do think intertheoretic comparisons become an issue in other cases of moral uncertainty, such as with infinite values (e.g. a moral framework that absolutely prohibits lying). But those cases seem much harder than moral weights between sentient beings under utilitarianism.
Some other people at Open Phil have spent more time thinking about two-envelope effects than I have, and fwiw some of their thinking on the issue is in this post (e.g. see section 1.1.1.1).