I think there are two more general questions that need to be answered first.
1. How would we find out for sure whether there are any tractable methods to put the brakes on a particular arm of technological progress?
2. What would be the tradeoffs of such a potent investigation into our civilizational capacity?
We clearly do have the capacity to do (1) to some extent:
Religious activists have managed to slow progress in stem cell research, though the advent of induced pluripotent stem cells (iPSCs) has created a way to bypass this issue to some extent.
The anti-nuclear movement has probably helped slow down progress in nuclear power research, though ironically it doesn't seem to have slowed down research on nuclear bombs (I could be wrong here).
Some people argue that the current structure of using cost-benefit analysis to allocate research funding does more harm than good, and thus it could be considered a decelerating force. But I'm not sure that's true, and even if it is, I'm not sure it applies to a field like AI with so many commercial applications.
But these are clearly not the durable, carefully calibrated brakes we’re talking about.
Why do you think we need to find out (1) before trying? I would say that if postponing is indeed a good idea, then we can just start trying to postpone. Why would we need to know beforehand how effective that will be? Can't we find that out by trial and error if needed? Worst case, we would be postponing less. That is, of course, as long as the flavor of postponement does not have serious negative side effects.
Or rephrased, why do these brakes need to be carefully calibrated?
Presuming that this is a serious topic, then we need to understand what the world would look like if we could put the brakes on technology. Right now, we can’t. What would it look like if we as a civilization were really trying hard to stop a certain branch of research? Would we like that state of affairs?
I’m imagining an international treaty, national laws, and enforcement from police. That’s a serious proposal.
So we’d have all major military nations agreeing to a ban on artificial intelligence research, while all of them simultaneously acknowledge that AI research is key to their military edge? And then trusting each other not to carry out such research in secret? While policing anybody who crosses some undefinable line about what constitutes banned AI research?
That sounds both intractable and like a policing nightmare to me—one that would have no end in sight. If poorly executed, it could be both repressive and ineffective.
So I would like to know what a plan to permanently and effectively repress a whole wing of scientific inquiry on a global scale would look like.
The most tractable approach seems like it would be to treat it the way we treat an illegal biological weapons program. That might be a model.
The difference is that people interested in studying bacteria and viruses generally still have many other outlets. Also, bioweapons haven't been a crucial element of any nation's arsenal, and they don't have a positive purpose.
None of this applies to AI. So I see it as having some important differences from a bioweapons program.
Would we be willing to launch such an intrusive program of global policing, with all the attendant dangers of permanent infringement on human rights, and risk setting up a system that both fails to achieve its intended purpose and sucks to live under?
Would such a system actually reduce the chance of unsafe GAI long term? Or, as you’ve pointed out, would it risk creating a climate of urgency, secrecy, and distrust among nations and among scientists?
I'd welcome work to investigate such plans, but this doesn't seem on its face to be an obviously great solution.