I did some figuring and it looks like I came to the same conclusion.
I’ve only skimmed this and small portions of the links about the two-envelopes thing. As for the original mathematical exercise, it’s kind of fun to construct a probability distribution where it’s always advantageous to switch envelopes. But Wiki says:
Suppose E(B | A = a) > a for all a. It can be shown that this is possible for some probability distributions of X (the smaller amount of money in the two envelopes) only if E(X) = ∞.
Which seems probably true. And comparing infinities is always a dangerous game. Though you can have finite versions of the situation (e.g. a 1/10 chance of each of “$1, $2”, “$2, $4”, …, “$512, $1024”) where switching envelopes is advantageous in every case except one (namely, when you’re holding the $1024).
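Here’s a quick sketch of that finite version in Python (pair amounts and the 1/10 probabilities as above), computing the expected gain from switching conditional on the amount you see:

```python
from fractions import Fraction

# Ten equally likely envelope pairs: ($1, $2), ($2, $4), ..., ($512, $1024).
pairs = [(2**k, 2**(k + 1)) for k in range(10)]
p_pair = Fraction(1, 10)

# For each amount you might observe, compute the expected gain from switching,
# conditioning on which pairs (and which side of the pair) could have produced it.
amounts = sorted({a for pair in pairs for a in pair})
for a in amounts:
    cases = []  # (probability of this case, gain from switching)
    for small, large in pairs:
        if a == small:
            cases.append((p_pair * Fraction(1, 2), large - a))  # you hold the smaller amount
        if a == large:
            cases.append((p_pair * Fraction(1, 2), small - a))  # you hold the larger amount
    total_p = sum(p for p, _ in cases)
    expected_gain = sum(p * g for p, g in cases) / total_p
    print(f"observe ${a}: expected gain from switching = {expected_gain}")
```

The gain is positive for every observed amount except $1024, where switching loses $512 for sure.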
Anyway, onto the moral version from Tomasik’s article. I tried stating it in terms of utility.
Suppose (helping) a human is worth 1 util. In the first scenario (to which we give probability 0.5), an elephant is worth 1/4 as much as a human, so 0.25 utils, so two elephants are worth 0.5 utils. In the second scenario (also probability 0.5), an elephant is worth the same as a human, so 1 util, and two elephants are worth 2 utils. Then the expected-value calculation for helping the human is: “E(h) = 0.5 * 1 + 0.5 * 1 = 1”, while for the elephants it’s “E(2e) = 0.5 * 0.5 + 0.5 * 2 = 1.25”, and thus E(h) = 1 < E(2e) = 1.25, so helping the elephants is better.
On the other hand, if we decide that an elephant is worth 1 util, then our calculations become:
e = 1 util. With probability 0.5 (an elephant is worth 1/4 of a human): h = 4 utils, 2e = 2 utils. With probability 0.5 (an elephant is worth as much as a human): h = 1 util, 2e = 2 utils.
Then E(h) = 0.5 * 4 + 0.5 * 1 = 2.5 utils and E(2e) = 0.5 * 2 + 0.5 * 2 = 2 utils -> prefer h.
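Here’s a minimal sketch of both bookkeeping exercises (same probabilities and worth-ratios as above), just to make the unit-dependence explicit:

```python
# Two equally likely worlds: in world 1 an elephant is worth 1/4 of a human,
# in world 2 an elephant is worth the same as a human.
p = 0.5
ratios = [0.25, 1.0]  # elephant worth / human worth, per world

# Fix the human at 1 util in every world:
E_h = sum(p * 1.0 for _ in ratios)     # 0.5*1   + 0.5*1 = 1.0
E_2e = sum(p * 2 * r for r in ratios)  # 0.5*0.5 + 0.5*2 = 1.25
print("human as unit:   ", E_h, E_2e)  # 1.0 < 1.25 -> prefer the elephants

# Fix the elephant at 1 util in every world:
E_h = sum(p * (1 / r) for r in ratios)  # 0.5*4 + 0.5*1 = 2.5
E_2e = sum(p * 2 for _ in ratios)       # 0.5*2 + 0.5*2 = 2.0
print("elephant as unit:", E_h, E_2e)   # 2.5 > 2.0 -> prefer the human
```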
This reproduces the “always advantageous to switch” problem.
The trouble is that our unit isn’t consistent between the two scenarios. The mere information about the ratio between h and e doesn’t fix an absolute value that can be compared across the two worlds; we could scale all values in one world up or down by an arbitrary factor, which can make the calculation go either way. To illustrate, let’s assign absolute values. First, let’s suppose that h is worth $100 in all worlds (you might imagine h = “hammer”, e = “earbuds” or something):
With probability 0.5: h = $100, 2e = $50. With probability 0.5: h = $100, 2e = $200. Then E(h) = $100 and E(2e) = $125 -> prefer 2e.
Next, let’s imagine that h is worth $100 in the first world, but $1 in the second world:
With probability 0.5: h = $100, 2e = $50. With probability 0.5: h = $1, 2e = $2. Then E(h) = $50.50 and E(2e) = $26 -> prefer h.
We see that giving h a much bigger value in one world effectively gives that world a much bigger weight in the expected-value calculation. The effect is similar to if you gave that world a much higher probability than the other.
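Here’s a small sketch of the two dollar-valued comparisons above; it also checks the claim that scaling one world’s values is equivalent to scaling that world’s weight in the expected-value comparison (the helper function is just for illustration):

```python
# World 1: h = $100, 2e = $50.  World 2: h = $100, 2e = $200.
h_vals = [100, 100]
e_vals = [50, 200]

def prefers(h_vals, e_vals, w1, w2):
    """Compare (possibly unnormalized) expected values of helping h vs. 2e."""
    E_h = w1 * h_vals[0] + w2 * h_vals[1]
    E_2e = w1 * e_vals[0] + w2 * e_vals[1]
    return "h" if E_h > E_2e else "2e"

# Same $100 human in both worlds: E(h) = $100 vs E(2e) = $125.
print(prefers(h_vals, e_vals, 0.5, 0.5))        # -> 2e

# Scale everything in world 2 down by 100 (h = $1, 2e = $2): the verdict flips.
print(prefers([100, 1], [50, 2], 0.5, 0.5))     # -> h

# Equivalently, leave the values alone and down-weight world 2 by that same factor:
# the two terms for world 2 shrink by 100 either way, so the numbers come out identical.
print(prefers(h_vals, e_vals, 0.5, 0.5 / 100))  # -> h
```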
And we see that Tomasik’s original situation amounts to, the first time around, having “h = $100, 2e = $50 or $200”, and, the second time, having “h = $50 or $200, 2e = $100”.
So picking the right consistent cross-universal unit is important, and is the heart of the problem… Finally looking back at your post, I see that your first sentence makes the same point. :-)
Now, I’ll remark: it could be that, in one world, everyone has much less moral worth—or their emotions are deadened or something—and therefore your calculations should care more about the other world. Just as, if picking option 1 in world A gets you +$500 whereas picking option 2 in world B gets you +$0.50, then you act like you’re in world A and don’t care about world B, because A is more important, in all situations except where B is >1000x as likely as A.
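A tiny sketch of that threshold, with the toy payoffs above (the helper name is just for illustration):

```python
def best_option(p_A, p_B, gain_in_A=500.0, gain_in_B=0.50):
    """Pick option 1 (pays off only in world A) or option 2 (pays off only in world B)."""
    return 1 if p_A * gain_in_A > p_B * gain_in_B else 2

print(best_option(1.0, 999.0))   # 1: even at 999x more likely, B's tiny payoff loses
print(best_option(1.0, 1001.0))  # 2: past the 1000x threshold, B finally dominates
```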
It is possible that the value of human life or happiness or whatever should in fact be considered worth a lot more in certain worlds than others, and that this co-occurs with moral worth being determined by brain cell count rather than organism count (or vice versa). But whatever the cross-world valuation is, it must be explicitly stated, and hopefully justified.