When arguing about the future, the imaginable is not all there is. You essentially gave several imaginable futures (some in which risks continue to arise, and others in which they do not) and did some handwaving about which class you considered likely to be larger. There are three ways to dispute this: to dispute your handwaving (e.g., that you treat compression of subjective time as a conclusive argument, as if it were inevitable), to propose not-yet-considered classes of futures (e.g., technology continues to advance, but some immutable law of the universe means there are only a finite number of apocalyptic technologies), or to maintain that there are large classes of futures which cannot be imagined at all because they do not fall cleanly into any categories we are likely to define in the present. If you take the last route, arguing about probability is just arguing about which uninformative prior to use.
I’m not pretending this is an airtight case. If you previously assumed that existential threats converge to zero as rationality increases, or that rationality is always the best policy, or that rationality means expectation maximization, and you now question one of those things, then you’ve gotten something out of it.
homung suggests that there may be immutable laws of the universe under which there are only a finite number of apocalyptic technologies. Note that even if the probability of such technological limits is small, for Phil’s argument to work, either that probability would have to be infinitesimal, or some of the doomsday devices would have to remain threatening even after the various attack/defense strategies reach a very mature level of development. None of those probabilities look infinitesimal to me.
No; that probability concerns a property of the universe, so it is a one-shot trial. It only has to come out false once, out of that one trial.
So your thesis is not that rationality dooms civilization, but only that as far as we know, it might. I get it now.