That seems like a really bad conflation? Is one question combining the risk of “too much” AI use and “too little” AI use?
That’s even worse than the already widely smashed distinctions between “can we?” “should we?” And “will we?”
Yes, it is. Combining these cases seems reasonable to me, though we definitely should have clarified this in the survey instructions. They’re both cases where humanity could have avoided an existential catastrophe by making different decisions with respect to AI.
But the action needed to avoid or mitigate the catastrophe in those two cases is very different, so it doesn’t seem useful to get a feeling for “how far off of ideal are we likely to be” when that is composed of:
1. What is the possible range of AI functionality (as constrained by physics)? - i.e. what can we do?
2. What is the range of desirable outcomes within that range? - i.e. what should we do?
3. How will politics, incumbent interests, etc. play out? - i.e. what will we actually do?
Knowing that experts think we have a (say) 10% chance of hitting the ideal window says nothing about what an interested party should do to improve those chances. It could be “attempt to shut down all AI research”, or “put more funding into AI research”, or “it doesn’t matter, because the two majority cases are ‘General AI is impossible (40%)’ and ‘General AI is inevitable and will wreck us (50%)’”.
Thanks for the reply—a couple of responses:
No, the cases under your point 1 aren’t included. The definition is: “an existential catastrophe that could have been avoided had humanity’s development, deployment or governance of AI been otherwise”. Physics cannot be changed by humanity’s development/deployment/governance decisions. (I agree that cases 2 and 3 are included.)
That’s correct. The survey wasn’t intended to understand respondents’ views on interventions. It was only intended to understand: if something goes wrong, what do respondents think that was? Someone could run another survey that asks about interventions (in fact, this other recent survey does that). For the reasons given in the Motivation section of this post, we chose to limit our scope to threat models, rather than interventions.