I guess I was doing a Bayesian update based on what he wrote. Yes, technically he only gave a lower bound. But someone who thinks the best possible future is 10 times better than stagnation (relative to extinction) might still write “Quantifying just how much better such a future would be does not strike me as a very useful exercise, but very broadly, it’s easy for me to imagine a possible future that is at least as desirable as human extinction is undesirable”; someone who thinks it’s at least a thousand or a billion times better probably wouldn’t.