This calculation was badly wrong. You allowed for the possibility that cryonically suspended people would wind up living very long and happy lives through future technology (which provided the great majority of the QALYs), but not for the possibility that African children saved from malaria would do the same; many of them would live another 60-70 years and could benefit from any big wave of technological change, life extension, etc. If you selectively count the biggest QALY benefits for some interventions but not others, you will get seriously misleading results.
Also it omitted all the compounding effects of more people in Africa over coming decades and centuries.
It’s a back-of-the-envelope calculation on vast unknowns. I wrote it up because it seemed pointless to try making a decision if we weren’t even going to involve numbers. I happily concede that it is deeply speculative.
First, given it’s a back-of-the-envelope calculation, I assume that anything LESS than a 100% difference (2:1 value ratio) can effectively be treated as within the margin of error. So if the ratio I got was 1.5 : 1, I’d still say they were approximately equal. I can’t off-hand defend this intuition, beyond that it’s sloppy math, so we have to assume I made at least a few mistakes.
Calculating for your life-extension black swan, the math works out to a ratio of 27X + 1 : 1, where X = the chance of such a radical life-extension event (i.e. within the next 50 years, the entire world is effectively immortal). At 4%, that’s about a 2:1 ratio, which is the point where I’d call it a significant difference. At 33%, it’s a 10:1 ratio, which is the point where I’d concede it’s clearly the correct decision. I personally assume this is < 1%, which means it doesn’t affect the result.
(Math note: 27X + 1 is because of the 28:1 cost ratio with cryonics. If the black swan occurs, we’re 28 times more efficient. If it doesn’t occur, our original equation says we’re still equally efficient. Thus we get a ratio of (28 × X) + (1 × (1 − X)) : 1, which simplifies to 27X + 1 : 1.)
(Stylistic note: For back-of-the-envelope calculations, as you can see, even astounding events like “radical life extension” require fairly solid odds before they affect things. We can reasonably ignore anything 1% or less, since there are probably other black swans pointed the other direction, and we need at least a 2:1 difference before we stop calling it “approximately equal” :))
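A minimal sketch of that arithmetic in Python, assuming the 28:1 cost ratio and the 2:1 / 10:1 thresholds described above (expected_ratio is just an illustrative name, not anything from the original calculation):

```python
# Expected-efficiency ratio described above, as a function of x, the
# probability of the radical life-extension black swan.
# If the black swan occurs, the donation side is 28 times as cost-efficient
# (the 28:1 cost ratio with cryonics); if it doesn't, the original estimate
# treats the two options as roughly equal (1x).
def expected_ratio(x, cost_ratio=28):
    return cost_ratio * x + 1 * (1 - x)  # simplifies to 27x + 1

# Thresholds used above: under 2:1 is within the margin of error
# ("approximately equal"); around 10:1 is "clearly the correct decision".
for x in (0.01, 0.04, 0.33):
    print(f"X = {x:.0%}: ratio = {expected_ratio(x):.2f} : 1")
# X = 1%: ratio = 1.27 : 1  (within the margin of error)
# X = 4%: ratio = 2.08 : 1  (right at the 2:1 line)
# X = 33%: ratio = 9.91 : 1  (roughly the 10:1 line)
```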
As to compounding effects, well… yeah, I have to concede that QALY doesn’t cover that. If you have research that says this should specifically bias our calculations one way or another, that’s useful information. Otherwise I’d just have to conclude insufficient information, or assume that any given QALY compounds approximately identically to any other QALY.
From here, while you may well still disagree with me, I think my methods and assumptions are open enough that you can just plug in your own numbers and get your own results.
Feel free to post your own revised estimates if you do disagree, but I’d appreciate you actually running the numbers first—it’s much easier to discuss if I can tell we disagree because you think the black swan has 25% odds and I think it’s <1%.
It’s also a good way to notice how even radical black swans still require decent odds before they affect the calculations at all, which is important to understand when using a tool like this (I still sometimes find it surprising myself! Which is why I like doing these :))
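For example, using the same illustrative expected_ratio sketch from above, the 25%-versus-under-1% disagreement cashes out like this:

```python
def expected_ratio(x):  # same sketch as above: 27x + 1
    return 27 * x + 1

print(f"{expected_ratio(0.25):.2f} : 1")   # 7.75 : 1 -- a clear difference in favor of the donation side
print(f"{expected_ratio(0.005):.2f} : 1")  # 1.14 : 1 -- within the 2:1 margin of error
```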
Cryonics working requires most of the same “black swan”: cryonics providing very long lifespans is heavily correlated with very long lifespans for ordinary people. The chance of cryonic organizations failing increases with time (among other things, their financials are rickety, and there have been past failures), so much of the chance of cryonics working depends on big technological advances happening within a century.
I would say that conditional on cryonics working (which is a major thing to condition on), the chance of the “black swan” (which I would define more precisely) is over 10%, which is enough for it to do better on that front. And the black swan can occur even if the cryopreservation process doesn’t store the key information, which would further increase its advantage.
If we assume cryonics requires advances within the century, it’s still true that those advances are more likely to come LATER rather than sooner. Cryonics means I survive whether the advance comes tomorrow or the day before the company would have thrown in the towel.
So the odds still favor the cryonaut over the African kids, because the cryonaut has longer for that advance to occur. Also, the cryonaut is someone who has the resources and culture to invest in a long shot like cryonics DESPITE it being unpopular and fringe, whereas the African kids are unlikely to do any such thing. The African kids only survive if there’s a massive worldwide change like the Singularity or Friendly AI.
Keep in mind, we live in a society where millions die of starvation simply because we’re inefficient at distributing food—we have enough to go around, it just doesn’t end up where it’s needed. We’re talking about a VERY radical change, and it needs to happen before most of the kids are already dead (if it happens in exactly 60 years, statistically most of the kids are probably already dead).