To me, the arguments both for and against worrying about existential risk from AI make sense. People have different priors and biased access to information. However, even if everyone agreed on all matters of fact that can currently be established, the disagreement would persist. The issue is that predicting the future is very hard, and we can’t expect to be in any way certain about what will happen. I think the interesting difference between how people “pro” and “contra” AI-x-risk think about this lies in how they deal with this uncertainty.
Imagine you have a model of the world, which is the best model you have been able to come up with after trying very hard. This model is about the future and predicts catastrophe unless something is done about it now. It’s impossible to check if the model holds up, other than by waiting until it’s too late. Crucially, your model seems unlikely to make true predictions: it’s about the future and rests on a lot of unverifiable assumptions. What do you do?
People “pro-x-risk” might say: “We made the best model we could, and it says we should not build AI. So let’s not do that, at least until our models improve and say it’s safe enough to try. The default option is not to do something that seems very risky.”
The opponents might say: “This model is almost certainly wrong, so we should ignore what it says. Building risky stuff has kinda worked so far, let’s just see what happens. Besides, somebody will do it anyway.”
My feeling when listening to elaborate and abstract discussions is that people mainly disagree on this point: “What’s the default action?” or, in other words, “Who has the burden of proof?” That proof is basically impossible to give for either side.
It’s obviously great that people are trying to improve their models. That might get harder to do the more politicized the issue becomes.