I argue that we may be underinvesting in scenarios where AI comes soon even though these scenarios are relatively unlikely, because we will not have time later to address them.
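To make the shape of this argument concrete, here is a minimal illustrative sketch; the numbers and the labels S, V_soon, V_late are hypothetical and not part of the original claim:

```latex
% Illustrative only: hypothetical numbers, not taken from the original argument.
% Let S = "AI comes soon", with, say, P(S) = 0.1 and P(not S) = 0.9.
% Work aimed at the late-arrival scenario can still be done later,
% whereas work aimed at the soon scenario cannot be deferred.
\[
\underbrace{P(S)\cdot V_{\text{soon}}}_{\text{only possible now}}
\quad\text{vs.}\quad
\underbrace{P(\lnot S)\cdot V_{\text{late}}}_{\text{can be deferred}}
\]
% Even with P(S) much smaller than P(not S), acting on S now can be the better
% marginal use of effort, because the deferred work keeps most of its value
% while the soon-scenario work loses all of it once the window closes.
```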
Edit: Separately...
p(X) denotes the probability that we will face problem X. Note that this is meant to be an absolute probability, not conditional on getting to the point where we might face X.
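To spell out the distinction, the standard decomposition can be written as follows; R is a label I am introducing here for "we reach the point where we might face X":

```latex
% R = "we reach the point where we might face X" (label introduced for clarity).
% p(X) as used above is the unconditional probability, so
\[
p(X) \;=\; P(X \mid R)\,P(R),
\]
% which means p(X) is always at most the conditional probability P(X | R).
```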
Are you assuming a hard takeoff intelligence explosion? If not, shouldn’t you also be interested in the probability of UFAI given future advances that may lead to it?
Kurzweil seems to think we will pass some unambiguous signposts on the way to superhuman AI. I would grant this scenario a nonzero probability.