Suppose instead that the benefits of thing X grow proportionally to how much it happens: for example, maybe every person who learns about biology makes roughly the same amount of incremental progress in learning how to cure disease and make humans healthier. Also suppose that every person who does thing X has a small probability of causing bad effect Y for everyone that negates all the benefits of X: for example, perhaps 0.01% of people would cause a global pandemic killing everyone if they learned enough about biology.
I don’t get the math on this. Suppose I have N balls in an urn, and pulling out all but one of them has value A and the final one has value −B. Then the expected value of drawing one ball is ((N−1)/N) x A − B/N. Assuming that B > (N−1)A (which is how I gloss “causing bad effect Y for everyone that negates all the benefits of X”), isn’t this already negative EV?
I see how it works if you have some selection effect that’s getting degraded or something like that.
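A quick numerical sketch of the single-draw case above, with made-up values of N, A, and B chosen so that B > (N−1)A; rewriting the expectation as ((N−1)A − B)/N makes it clear why it comes out negative under that gloss:

```python
# Single draw from an urn with one bad ball among N (illustrative values,
# not taken from the discussion above).
N, A, B = 100, 2.0, 300.0                # B = 300 > (N - 1) * A = 198

ev_one_draw = ((N - 1) / N) * A - B / N  # = ((N - 1) * A - B) / N
print(ev_one_draw)                       # about -1.02: negative, as claimed
```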
Nope: what I mean is that if you draw K balls and one of them has value −B, the overall value is −B, but if instead all of them have value A, the overall value is K x A.
Yeah, but the expected value would still be K x (((N−1)/N) x A − B/N).
No. The probability that all K balls are good balls (assuming you’re drawing with replacement) is ((N−1)/N)^K, so the expected value is ((N−1)/N)^K x (K x A) − (1 − ((N−1)/N)^K) x B.
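A short runnable sketch of this formula, with illustrative N, A, and B chosen so that a single draw has positive expected value; the expectation stays positive for moderate K but eventually goes negative as K grows:

```python
# Expected value of K draws with replacement: one bad ball among N,
# per-draw benefit A, and a catastrophe B that wipes out all benefits.
# N, A, B are illustrative values, not from the discussion above.
N, A, B = 1_000, 1.0, 500.0

def expected_value(k: int) -> float:
    p_all_good = ((N - 1) / N) ** k        # chance no draw hits the bad ball
    return p_all_good * (k * A) - (1 - p_all_good) * B

for k in (1, 100, 1_000, 5_000):
    print(k, round(expected_value(k), 2))  # positive at first, then negative
```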
OK, that’s fair, I should have written down the precise formula rather than an approximation. My point though is that your statement

the expected value of X happening can be high when it happens a little (because you probably get the good effects and not the bad effects Y)

is wrong because a low probability of large bad effects can swamp a high probability of small good effects in expected value calculations.
A low probability of large bad effects can swamp a high probability of small good effects, but it doesn’t have to, so you can have the high probability of small good effects dominate.
Let me be concrete: imagine you have a one in a hundred chance of a bad outcome of utility −100 (where if it happens all good effects get wiped out), and with the rest of the probability you get a good outcome of utility 2 (and the utility of these good outcomes stacks with how many times they happen). Then the expected utility of doing this once is 2 x 0.99 − 100 x 0.01 = 0.98 > 0, but the expected utility of doing it one thousand times is 2 x 1000 x 0.99^1000 − 100 x (1 − 0.99^1000) ≈ 2000 x 0.000043 − 100 x 0.999957 ≈ 0.086 − 99.996 < 0.
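A two-line check of this arithmetic (same model as above: the goods stack, and a single bad outcome wipes everything out):

```python
p_bad, good, bad = 0.01, 2.0, 100.0

def ev(k: int) -> float:
    p_all_good = (1 - p_bad) ** k  # chance the bad outcome never happens
    return p_all_good * (k * good) - (1 - p_all_good) * bad

print(ev(1))     # 0.98
print(ev(1000))  # about -99.91, since 0.99 ** 1000 is roughly 4.3e-5
```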
OK, that makes sense.
True, but this doesn’t apply to the original reasoning in the post: he assumes constant probability, while you need increasing probability (as with the balls) to make the math work.
Or decreasing benefits, which probably is the case in the real world.
Edit: misread the previous comment, see below
My comment involves a constant probability of the bad outcome with each draw, and no decreasing benefits. I think this is a good exposition of this portion of the post (which I wrote), if you assume that each unit of bio progress is equally good, but that the goods don’t materialize if we all die of a global pandemic:

suppose that every person who does thing X has a small probability of causing bad effect Y for everyone that negates all the benefits of X: for example, perhaps 0.01% of people would cause a global pandemic killing everyone if they learned enough about biology. Then, the expected value of X happening can be high when it happens a little (because you probably get the good effects and not the bad effects Y), but low when it happens a lot (because you almost certainly get bad effect Y, and the tiny probability of the good effects isn’t worth it).