Why do you think we need to answer (1) before trying? I would say that if it is indeed a good idea to postpone, then we can just start trying to postpone. Why would we need to know beforehand how effective that will be? Can't we find that out by trial and error if needed? Worst case, we end up postponing less than we hoped. That is, of course, as long as the flavor of postponement does not have serious negative side effects.
Or rephrased, why do these brakes need to be carefully calibrated?
Presuming that this is a serious topic, then we need to understand what the world would look like if we could put the brakes on technology. Right now, we can’t. What would it look like if we as a civilization were really trying hard to stop a certain branch of research? Would we like that state of affairs?
I’m imagining an international treaty, national laws, and enforcement from police. That’s a serious proposal.
So we'd have all major military nations agreeing to a ban on artificial intelligence research, while simultaneously acknowledging that AI research is key to their military edge? And then trusting each other not to carry out such research in secret? While policing anybody who crosses some undefinable line about what constitutes banned AI research?
That sounds both intractable and like a policing nightmare to me—one that would have no end in sight. If poorly executed, it could be both repressive and ineffective.
So I would like to know what a plan to permanently and effectively repress a whole wing of scientific inquiry on a global scale would look like.
The most tractable approach would seem to be treating it the way we treat illegal biological weapons programs. That might be a model.
The difference is that people interested in the study of bacteria and viruses generally still have many other outlets. Also, bioweapons haven't been a crucial element in any nation's arsenal, and they don't have a positive purpose.
None of this applies to AI. So I see it as having some important differences from a bioweapons program.
Would we be willing to launch such an intrusive program of global policing, with all the attendant risks of permanent infringement of human rights, and risk setting up a system that both fails to achieve its intended purpose and sucks to live under?
Would such a system actually reduce the chance of unsafe GAI long term? Or, as you’ve pointed out, would it risk creating a climate of urgency, secrecy, and distrust among nations and among scientists?
I’d welcome work to investigate such plans, but it doesn’t seem on its face to be an obviously great solution.