Hm… So we’re not talking about banning GPUs, we’re talking about banning certain kinds of organizations. Like, DeepMind isn’t allowed to advertise as an AI research place, isn’t allowed to publish results, and so on; and they have to have a bunch of operational security and buy-in from employees and lie to their governments, or else relocate to somewhere with less restrictive regulations; and investors and clients maybe have to do shenanigans. Is the commitment to the ban strong enough to lead to military invasions to enforce the ban globally? Relocating to a less Western country is enough of a cost to slow down research a little, maybe, yeah. There’s still nuclear power plants in non-US places, and my impression is that there’s biotech research that’s pretty sketchy by US / Western standards going on in other places (e.g. Wuhan?).
Hm… So we’re not talking about banning GPUs, we’re talking about banning certain kinds of organizations. Like, DeepMind isn’t allowed to advertise as an AI research place, isn’t allowed to publish results, and so on; and they have to have a bunch of operational security and buy-in from employees and lie to their governments, or else relocate to somewhere with less restrictive regulations; and investors and clients maybe have to do shenanigans.
Correct, and a bunch of those things you listed even push them towards operational adequacy instead of being just delaying tactics. I’d be pedantic and say DeepMind is probably the faction that causes the disaster in this tail-end scenario and is thus completely dismantled, but that’s not really getting at the point.
Is the commitment to the ban strong enough to lead to military invasions to enforce the ban globally?
Not necessarily, and that would depend on the particular severity of the event. If AI killed a million-plus young people, I think it’s not implausible.
If all of the relevant researchers are citizens of or present in the U.S. and U.K., however, and thus subject to U.S. and U.K. law, and there’s no other country with strong enough network effects, then a ban can still have a tremendous, outsized effect on research progress. Note that the FDA seems to degrade the ability of the global medical establishment to accomplish groundbreaking research without having any sort of global pseudo-jurisdiction of its own, just by preventing that research from happening in the U.S., plus the downstream effects of that on developing nations. People have tried going to e.g. the Philippines and doing good nuclear power work there (link pending). Unfortunately, the “go to ${X} and do ${Y} there if it’s illegal in ${Z}” strategy rarely tends to be workable in practice for goods more complicated than narcotics; you lose all of that nice Google funding, for one.
There’s still nuclear power plants in non-US places, and my impression is that there’s biotech research that’s pretty sketchy by US / Western standards going on in other places (e.g. Wuhan?).
Like what? It seems qualitatively apparent to me that there is less going on in biotech than in IT, because the country that does most of the world’s innovation has outlawed it. When China’s researchers get caught doing sketchy stuff like CRISPR, the global medical establishment applies some light pressure and they go to jail. We would outlaw AI research the way gene editing has effectively been outlawed. There would be a bunch of second-order effects on the broader IT industry, but we would still, kind of, accomplish the primary goal.
(I’m not sure about this, thinking aloud; you may be right.)
AI is hard to regulate because:
It’s hard to understand what it is, hence hard to point at it, hence hard to enforce bans. For nuclear stuff, you need lumps of material dug out of mines that can be detected by waving a little device over them. For bio, you have to have, like, big expensive machines? If you’re not just banning GPUs, what are you banning? Banning certain kinds of organizations is banning branding, and it doesn’t seem that hard to do AGI research under different branding that still works for recruitment. (This is me a little bit changing my mind; I think I agree that a ban could cause a temporary slowdown by breaking up conspicuous AGI research orgs, like DM or whatnot, but I think it’s not that much of a slowdown.) How could you ban compute? Could you ban having large clusters? What about networked piecemeal compute? How much slower would the latter be? (See the rough back-of-envelope sketch after this list.)
It looks like the next big superweapon. Nuclear plants are regulated, but before that, and after we knew what nuclear weapons meant, there was an arms race and thousands of nukes were made. This hasn’t happened as much for biotech? The ban on chemical / bio weapons basically worked?
Its inputs are ubiquitous. You can’t order a gene synthesis machine for a couple hundred bucks with less-than-a-week shipping, and you can’t order a pile of uranium, but you can order GPUs, on your own, as many as you want. Compute is fungible, easy to store, cheap, safe (until it’s not), robust, and has a thriving, multifarious economy supporting its production and R&D.
It’s highly shareable. You can’t stop the signal, so you can’t stop source code, tools, and ideas from being shared. (Which is a good thing, except for AGI...) And there’s a fairly strong culture of sharing in AI.
It’s highly scalable. Source code can be copied and run wherever, whenever, and by whoever, and to a lesser extent so can ideas. Costly inputs do more to temper the scalability of nuclear and bio work.
Prerequisite knowledge is privately, individually accessible. It’s easy to get a laptop and, on your own without anyone knowing, start learning to program, learning to program AI, and learning to experiment with AI. If you’re super talented, people might pay you to do this! I would guess that this is a lot less true of nuclear and bio stuff?
There are lots of externally checkable benchmarks and test applications that make it easy to notice progress.
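(Not part of the original exchange: a hedged back-of-envelope on that last question, i.e. how much slower networked piecemeal compute would be than a co-located cluster, assuming training needs frequent gradient synchronization. Every number below, from model size to link bandwidths, is an illustrative assumption rather than a measurement.)

```python
# Rough, illustrative sketch only; the model size and bandwidth figures
# below are assumptions picked for round numbers, not measurements.

model_params = 1e9           # assume a 1-billion-parameter model
bytes_per_param = 2          # fp16 gradients
grad_bytes = model_params * bytes_per_param   # ~2 GB exchanged per sync step

# Bandwidth of the link each worker uses to exchange gradients.
link_bandwidth_bytes_per_s = {
    "datacenter interconnect (~100 GB/s)": 100e9,
    "fast home broadband (~1 Gbit/s)": 0.125e9,   # 1 Gbit/s is about 0.125 GB/s
}

for name, bw in link_bandwidth_bytes_per_s.items():
    seconds = grad_bytes / bw    # ignores latency, overlap, and compression
    print(f"{name}: ~{seconds:.2f} s per full gradient exchange")

# Prints roughly:
#   datacenter interconnect (~100 GB/s): ~0.02 s per full gradient exchange
#   fast home broadband (~1 Gbit/s): ~16.00 s per full gradient exchange
```

On those assumptions the naive communication penalty is around three orders of magnitude per synchronization step, before counting latency; gradient compression and asynchronous schemes narrow the gap, but it’s a real tax on the piecemeal approach.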