Unfortunately, Holden has struggled to clearly express his reasons for rejecting astronomical waste arguments.
It looks to me like he is using a bounded utility function with a really low bound. See this passage:
I feel that humanity’s future may end up being massively better than its past, and unexpected new developments (particularly technological innovation) may move us toward such a future with surprising speed. Quantifying just how much better such a future would be does not strike me as a very useful exercise, but very broadly, it’s easy for me to imagine a possible future that is at least as desirable as human extinction is undesirable. In other words, if I somehow knew that economic and technological development were equally likely to lead to human extinction or to a brighter long-term future, it’s easy for me to imagine that I could still prefer such development to stagnation.
If the best possible future that Holden can imagine (which the rest of the post makes clear does include space colonization) doesn’t have much more than twice the utility of stagnation (setting extinction to be the zero point), then “astronomical waste” obviously isn’t very astronomical in terms of Holden’s utility function.
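To spell out the arithmetic behind that reading (my normalization, not anything Holden states explicitly): pin extinction at 0 and stagnation at 1. Then “a possible future that is at least as desirable as human extinction is undesirable” says the gain over stagnation at least matches the loss from extinction,

$$U(\text{best future}) - U(\text{stagnation}) \;\ge\; U(\text{stagnation}) - U(\text{extinction}) \;\Longrightarrow\; U(\text{best future}) \;\ge\; 2,$$

which is exactly the indifference point in Holden’s 50/50 hypothetical: $\tfrac{1}{2}\cdot 0 + \tfrac{1}{2}\cdot U(\text{best future}) \ge 1$. Reading that figure as roughly where his imagination tops out, rather than as a floor, is what yields “not much more than twice.”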
He gave a lower bound, sufficient to motivate the view that we should not seek stagnation, which is what he seems to be talking about there. Why interpret a lower bound (which is all that is needed to establish the point, and is less controversial), one he merely calls “easy” to imagine, as a near-upper bound?
Stagnation on Earth involves almost exactly as much astronomical waste as near-term extinction does (and also cuts us off from very high standards of living that might otherwise be achieved). Holden is saying that the conclusion that growth with plausible risk levels beats permanent stagnation is robust. Talking about 100:1 tradeoffs would be less robust.
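For concreteness, here is the break-even condition in a simple illustrative model (my formalization, not from either post), with extinction again at 0: if development carries extinction probability $p$ and otherwise reaches the better future, it beats permanent stagnation whenever

$$(1-p)\,U(\text{best future}) \;\ge\; U(\text{stagnation}), \qquad\text{i.e.}\qquad p \;\le\; 1 - \frac{U(\text{stagnation})}{U(\text{best future})}.$$

The weak “at least twice” bound already covers any $p \le 1/2$, well above plausible risk levels, so the conclusion never requires defending a 100:1 or larger ratio.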
I guess I was doing a Bayesian update based on what he wrote. Yes, technically he gave a lower bound, but while someone who thinks that the best possible future is 10 times better than stagnation (relative to extinction) might still write “Quantifying just how much better such a future would be does not strike me as a very useful exercise, but very broadly, it’s easy for me to imagine a possible future that is at least as desirable as human extinction is undesirable”, someone who thinks it’s at least a thousand or a billion times better probably wouldn’t.
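The update can be written out as an odds calculation (an illustrative formalization of mine, not anything stated in the thread). Let $E$ be the evidence that Holden calls such a future merely “easy to imagine” and declines to quantify further, $H_{\text{low}}$ the hypothesis that he puts the best future at only a few times stagnation’s value, and $H_{\text{astro}}$ the hypothesis that he puts it at a thousand times or more. Then

$$\frac{P(H_{\text{low}} \mid E)}{P(H_{\text{astro}} \mid E)} \;=\; \frac{P(E \mid H_{\text{low}})}{P(E \mid H_{\text{astro}})} \cdot \frac{P(H_{\text{low}})}{P(H_{\text{astro}})},$$

and since someone holding $H_{\text{astro}}$ would very probably have said something much stronger, $P(E \mid H_{\text{astro}})$ is small, the likelihood ratio is large, and the posterior shifts toward the low-bound reading even though the passage is, strictly speaking, only a lower bound.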