My worry isn’t so much about the average human. Rather, collective action problems like these seem impossible to solve without imposing your own values, and even if you’re willing to do that, it’s still very hard to actually enforce those values through law, because conditional on AI having massive impacts, it’s individually rational for everyone to race to AI.
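To make the race dynamic concrete, here is a minimal sketch in Python of the standard prisoner’s-dilemma structure behind this claim; the payoff numbers are illustrative assumptions of mine, not anything from the comment above:

```python
# Sketch: the race to AI as a two-player prisoner's dilemma.
# Payoff numbers are illustrative assumptions; the only thing that matters
# is their ordering: being the sole holdout is worst, racing alone is best,
# and mutual restraint beats mutual racing.

payoffs = {
    # (my_choice, their_choice): my payoff
    ("hold", "hold"): 3,  # coordinated restraint: best collective outcome
    ("hold", "race"): 0,  # unilateral restraint: they capture AI's benefits, I don't
    ("race", "hold"): 4,  # I capture the benefits first
    ("race", "race"): 1,  # everyone races: risky for all, but no one is left behind
}

for their_choice in ("hold", "race"):
    # Find my payoff-maximizing response to each choice the other actor could make.
    best = max(("hold", "race"), key=lambda mine: payoffs[(mine, their_choice)])
    print(f"If they {their_choice}, my best response is to {best}")
# Output: racing is the best response either way (a dominant strategy),
# even though (hold, hold) beats (race, race) for both players.
```

Under that ordering of payoffs, each actor races regardless of what the others do, which is the sense in which racing is individually rational even though everyone would prefer coordinated restraint.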
How is it individually rational to race to AI if it’s very likely to kill you?