This is a good question. If this assumption is the basis of our whole plan, it's worth examining.
When I say “we”, I mean “the currently listening audience”, roughly the AI safety community. We don’t have the power to convince humanity to shut down AI research.
There are a few reasons I think this. The primary one is that humanity isn't a single individual. People have different perspectives, and some are unlikely to change their minds. There are even some individuals for whom building an AGI would actually be a good idea, from their own perspective: people who care more about personal gain than about the safety or future of humanity. Sociopaths of one sort or another are thought to make up perhaps 10% of the population (the 1% who are diagnosed are the ones who get caught). For a sociopath, risking the future of humanity against a chance of becoming the most powerful person alive is a good bet. And sociopaths are thought to be common in government, even in democratic countries.
So, sooner or later, some government or wealthy individual will be working on AGI with the vastly improved compute and algorithmic resources that continued advances in hardware and software will bring. The only way to enforce a permanent ban would be to ban computers, or to run a global panopticon that monitors what every computer is doing. That could well entrench a repressive global regime that stays in power permanently. That is an S-risk: a scenario in which humanity suffers forever, which is arguably worse than dying in an attempt to achieve AGI.
Those are weak and loose arguments, but I think they capture the core of my thinking on the topic, and probably many others' as well.