If you estimate a high chance of this action destroying humanity, then trying to get through that bottleneck with a slightly better than 75% chance of surviving is almost certainly better than trying to stamp out such research, which buys a few years in exchange for replacing that 75% survival chance with near-certain destruction. The only argument against this that I can see, if one accepts the 75% number, is that forced delay until we have uploads might help matters: uploads would have moral systems close to those of their original humans, and uploads will have a better chance at solving the FAI problem, or, failing that, at counteracting any unFriendly or unfriendly AI.
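To make that trade-off concrete, here is a toy expected-survival calculation. Only the 75% figure comes from the comment above; the ban-scenario numbers (how likely a ban holds until uploads arrive, how likely uploads then handle FAI) are illustrative assumptions, not claims.

```python
# Toy expected-survival comparison for the two policies in the parent comment.
# All numbers except the 0.75 from the thread are illustrative assumptions.

p_survive_now = 0.75  # from the comment: odds of surviving the bottleneck today

# Ban scenario, decomposed as the comment suggests: the ban must hold long
# enough to reach uploads, and uploads must then solve or contain UFAI.
p_ban_holds_until_uploads = 0.30  # assumed: most bans leak before uploads arrive
p_uploads_handle_fai = 0.90       # assumed: uploads' edge on the FAI problem

p_survive_ban = p_ban_holds_until_uploads * p_uploads_handle_fai  # = 0.27

print(f"proceed now: {p_survive_now:.0%}, ban until uploads: {p_survive_ban:.0%}")
```

On these made-up numbers the ban route loses badly, but the comparison is very sensitive to the two assumed factors, which is exactly where the disagreement below lies.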
AI research is hard. It’s not clear to me that a serious, global ban on AI would only delay the arrival of AGI by a few years. Communication, collaboration, recruitment, funding… all of these would become much more difficult. Moreover, since current AI researchers are open about their work, they would be the easiest to track after a ban, so any new AI research would have to come from green researchers.
That aside, I agree that a ban whose goal is simply the indefinite postponement of AGI is unlikely to work (and I’m dubious of bans in general). Still, it isn’t hard for me to imagine that a ban could buy us 10 years, and that a similar amount of political muscle could also greatly accelerate an upload project.
The biggest argument against, in my opinion, is that the political will could only form once the threat of AGI were already so imminent that a ban really would be worse than worthless.
AI research is hard. It’s not clear to me that a serious, global ban on AI would only delay the arrival of AGI by a few years.
The other thing to consider is just what the ban would achieve. I would expect it to lower the 75% survival chance by giving us the opportunity to go extinct in some other way before we ever make a mistake with AI. When I say ‘extinct’ I include (d)evolving to an equilibrium (such as those Robin Hanson describes from time to time).
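As a rough sketch of that effect, assuming the ten-year delay mentioned above and an invented 0.5% annual chance of extinction (or lock-in to an equilibrium) from other causes:

```python
# How a delay erodes the 75% figure via unrelated extinction risks.
# The 10-year delay comes from the grandparent comment; the per-year
# chance of going extinct (or locking into an equilibrium) by some
# other route is an assumed, illustrative 0.5%.

p_survive_ai = 0.75
annual_other_risk = 0.005
ban_years = 10

p_survive_delay = (1 - annual_other_risk) ** ban_years  # ~0.951
p_overall = p_survive_delay * p_survive_ai              # ~0.713

print(f"survival with ban: {p_overall:.1%} vs {p_survive_ai:.0%} without")
```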
How well defined is AI research? My assumption is that if AI is reasonably possible for humans to create, then it’s going to become much easier as computers become more powerful and human minds and brains become better understood.
A ban seems highly implausible to me. What is the case for considering it? Do you really think that enough people will become convinced that there is a significant danger?
I agree, it seems highly implausible to me as well. However, the subject at hand (AI, AGI, FAI, uploads, etc.) is riddled with extremes, so I’m hesitant to throw out any possibility simply because it would be incredibly difficult.
Do you really think that enough people will become convinced that there is a significant danger?
See the last line of the comment you responded to.