Regardless of whether the problem can be resolved, I confess that I don’t see how it’s related to the original two-envelopes problem, which is a case of doing incorrect expected-value calculations with sensible numbers. (The contents of the envelopes are entirely comparable and can’t be rescaled.)
Meanwhile, it seems to me that the elephants problem just comes about because the numbers are fake. You can do a sensible EV calculation and get (a + b/4) for saving two elephants versus (a/2 + b/2) for saving one human, but because a and b are mostly unconstrained (they just have to be positive), you can’t go anywhere from there.
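For concreteness, here’s a minimal sketch of one way those numbers could arise, assuming a 50/50 split between two moral theories: under theory 1 an elephant and a human are each worth a, and under theory 2 a human is worth b and an elephant is worth b/4. The specific weights and helper names are my illustration, not anything the problem pins down; the only real constraint is that a and b are positive.

```python
# Hedged illustration of the EV comparison, not a canonical setup:
# assume two equally likely moral theories,
#   theory 1: elephant = human = a
#   theory 2: human = b, elephant = b / 4
# (the weights are illustrative; the problem only requires a, b > 0).

def ev_two_elephants(a: float, b: float) -> float:
    # 0.5 * 2a + 0.5 * 2 * (b/4)  =  a + b/4
    return 0.5 * (2 * a) + 0.5 * (2 * (b / 4))

def ev_one_human(a: float, b: float) -> float:
    # 0.5 * a + 0.5 * b  =  a/2 + b/2
    return 0.5 * a + 0.5 * b

# Because a and b are unconstrained positive numbers, either option can win:
print(ev_two_elephants(1, 10), ev_one_human(1, 10))   # 3.5 vs 5.5   -> save the human
print(ev_two_elephants(10, 1), ev_one_human(10, 1))   # 10.25 vs 5.5 -> save the elephants
```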
These strike me as just completely unrelated problems.
The naive form of the argument is the same in the classic and the moral-uncertainty two-envelopes problems. But yes: while there is a resolution to the classic version based on taking expected values of absolute rather than relative measurements, there’s no similar resolution for the moral-uncertainty version, where there are no unique absolute measurements.
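To spell out what I mean by absolute rather than relative, here is a quick sketch under the simplifying assumption that the envelopes hold fixed amounts x and 2x (the variable names are mine): computed in absolute terms, keeping and switching have the same expected value, and the familiar 1.25·A argument only looks like a gain because it treats the relative quantity A as a single fixed number across both branches.

```python
# Sketch of the classic resolution, assuming the envelopes hold x and 2x.
x = 100.0  # illustrative stake; any positive amount works

# Absolute calculation: you hold x or 2x with equal probability,
# and the other envelope then holds 2x or x respectively.
ev_keep   = 0.5 * x + 0.5 * (2 * x)
ev_switch = 0.5 * (2 * x) + 0.5 * x
assert ev_keep == ev_switch == 1.5 * x  # switching is exactly neutral

# The naive relative calculation treats "the amount A in my envelope" as fixed:
#     E[other] = 0.5 * (2 * A) + 0.5 * (A / 2) = 1.25 * A
# but A stands for x in one branch and 2x in the other, so the two branches
# cannot be averaged against a single A.
```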
There’s nothing wrong with using relative measurements, and using absolute measurements doesn’t resolve the problem. (It sidesteps the problem, but that’s not the same thing.)
The actual resolution is explained in the wiki article better than I could explain it here.
I agree that the naive version of the elephants problem is isomorphic to the envelopes problem. But the envelopes problem doesn’t reveal an actual difficulty with choosing between two envelopes, and the naive elephants problem as described doesn’t reveal an actual difficulty with choosing between humans and elephants. They just reveal a particular math error that humans are bad at noticing.