Thanks for the reply. A couple of responses:

> it doesn’t seem useful to get a feeling for “how far off of ideal are we likely to be” when that is composed of: 1. What is the possible range of AI functionality (as constrained by physics)? - i.e. what can we do?
No, these cases aren’t included. The definition is: “an existential catastrophe that could have been avoided had humanity’s development, deployment or governance of AI been otherwise”. Physics cannot be changed by humanity’s development/deployment/governance decisions. (I agree that cases 2 and 3 are included).
> Knowing that experts think we have a (say) 10% chance of hitting the ideal window says nothing about what an interested party should do to improve those chances.
That’s correct. The survey wasn’t intended to elicit respondents’ views on interventions. It was only intended to understand what respondents think the failure would be, if something does go wrong. Someone could run another survey that asks about interventions (in fact, this other recent survey does that). For the reasons given in the Motivation section of this post, we chose to limit our scope to threat models rather than interventions.