I basically agree with the premise, though I'm not so sure that bad-but-not-existential disasters are more likely than very good or very bad outcomes.
The only way I see us getting the global momentum necessary to ban strong AI is if it causes some huge but non-existential disaster. Short of that, I think the average human is too dumb and too ignorant to identify risk from AI, let alone do anything about it.
My worry isn't so much about the average human. It's that collective action problems like this are impossible to solve without imposing your own values, and even if you do, it's still very hard to actually get those values embodied in law, because every individual actor is rational to race toward AI, conditional on AI having massive impacts.
How is it individually rational to race to AI if it’s very likely to kill you?