He gave a lower bound, sufficient to motivate the view that we should not seek stagnation, which is what he seems to be talking about there. Why reinterpret a lower bound that is “easy” (and that is all that is needed to establish the point, as well as less controversial) as a near-upper-bound?
Stagnation on Earth means astronomical waste almost exactly as much as near-term extinction does (and it also cuts us off from the very high standards of living that might otherwise be achieved). Holden is saying that the conclusion that growth with plausible risk levels beats permanent stagnation is robust. A conclusion that depended on 100:1 tradeoffs would be less robust.
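To make the robustness point concrete, here is a minimal sketch of the expected-value comparison (my own illustration with my own symbols, not a calculation Holden gives): take permanent stagnation as the baseline, let $U$ be how much better the good future is than stagnation, $D$ how much worse extinction is than stagnation, and $p$ the chance that pursuing growth ends in extinction. Then growth beats stagnation exactly when

$$(1-p)\,U - p\,D > 0 \iff p < \frac{U}{U+D}.$$

Holden’s lower bound, $U \ge D$, already puts that threshold at $1/2$ or higher, so the conclusion survives any plausible extinction risk; that is what makes it robust. Asserting $U \ge 100\,D$ would raise the threshold further, but the assertion itself is far more contestable than the bare lower bound, which is the sense in which talking about 100:1 tradeoffs would be less robust.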
I guess I was making a Bayesian update based on what he wrote. Yes, technically he gave a lower bound. But while someone who thinks that the best possible future is 10 times better than stagnation (relative to extinction) might still write “Quantifying just how much better such a future would be does not strike me as a very useful exercise, but very broadly, it’s easy for me to imagine a possible future that is at least as desirable as human extinction is undesirable”, someone who thinks it’s at least a thousand or a billion times better probably wouldn’t.