Feel free to elaborate on that. As I point out in this parable, nuclear tech looks eerily like slow takeoff and arms race scenarios once you delete the names, and elites failed to deal with it in any meaningful way other than accelerating the arms race & hoping we’d all survive.
Well let’s see:

1) AGI doesn’t require obscure, hard-to-process materials that can be physically controlled.
2) AGI is software and therefore trivially copyable—you can have the design for a nuclear bomb and the materials, but still need lots of specialists with experience in constructing nuclear bombs in order to build one. An AGI, on the other hand, could be built to run on commodity hardware.
3) AGI is merely instrumental to weaponization in the high-probability risk scenarios. It’s a low-cost Manhattan Project in a box. A pariah country would use an AGI to direct their nuclear bomb project for example, not actually “deploy an AGI in the field”, whatever that means. So there’s a meta-level difference here: whereas peaceful nuclear technology actually generates weapons-grade material as a side-product, AGI itself doesn’t carry that risk.
4) It’s hard to analyze the exact risk you are predicating this story on. What was the slow-takeoff failure that cost hundreds of thousands of people their lives? It’s hard to criticize specifically without knowing what you had in mind, other than that it involved a human-hostile, confrontational hard takeoff in a “properly” regulated project. As a general category of failures, I assign very little probability mass there.
5) I would argue that the first AGI doesn’t require a Manhattan-scale project to construct, although I recognize that is a controversial opinion.
1) AGI doesn’t require obscure, hard-to-process materials that can be physically controlled.
Yes, it does: it requires obscene amounts of computing power, which requires enormous, extremely efficient, multi-billion-dollar chip fabs to create, each of which currently costs more than the entire Manhattan Project did and draws upon exotic materials & specialties; see my discussion in http://www.gwern.net/Slowing%20Moore%27s%20Law
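To give a sense of the compounding involved: Rock’s law, the flip side of Moore’s law, observes that the cost of building a leading-edge fab roughly doubles every four years. A toy projection (the $5B starting figure is a made-up round number for illustration, not an actual quoted fab cost):

```python
# Rock's law: the construction cost of a leading-edge fab roughly doubles
# every ~4 years. The $5B starting figure is a made-up round number chosen
# purely for illustration, not an actual quoted fab cost.
cost_billion = 5.0
for years_out in range(0, 21, 4):
    print(f"+{years_out} years: ~${cost_billion:.0f}B per leading-edge fab")
    cost_billion *= 2.0
```

At that rate, the number of entities able to build the hardware substrate keeps shrinking, which is exactly what makes fabs a physical chokepoint.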
2) AGI is software and therefore trivially copyable—you can have the design for a nuclear bomb and the materials, but still need lots of specialists with experience in constructing nuclear bombs in order to build one. An AGI, on the other hand, could be built to run on commodity hardware.
You also need a lot of specialists to run a supercomputer. Amusingly, supercomputers have always been closely linked to nuclear bomb development, from the Manhattan Project to national laboratories like Livermore.
whereas peaceful nuclear technology actually generates weapons-grade material as a side-product
Only some nuclear technologies are inherent proliferation risks. Specific kinds of reactors, sure. But lots of other things like cesium for medicine? No way.
A pariah country would use an AGI to direct their nuclear bomb project for example, not actually “deploy an AGI in the field”, whatever that means.
Dead is dead. Does it matter if you’re dead because an AGI hacked your local dam and drowned you or piloted a drone or developed a nuclear bomb?
AGI itself doesn’t carry that risk.
Any AGI is going to carry the risk of being misapplied in all the ways that humans have done harm with their general intelligence throughout history. What are you going to do, install DRM on it?
4) It’s hard to analyze the exact risk you are predicating this story on. What was the slow-takeoff failure that cost hundreds of thousands of people their lives?
The slow takeoff was the development of atomic bombs from custom low-kilotonnage bombs, which weighed tons and could only be delivered by slow, vulnerable heavy bombers deployed near the target, to mass-produced lightweight megatonnage warheads, which could fit on ICBMs and represented a global threat with no defense. I thought this was straightforward; was I assuming too much knowledge of the Cold War and nuclear politics when I wrote it?
5) I would argue that the first AGI doesn’t require a Manhattan-scale project to construct
Maybe, maybe not. If it didn’t, I think that would support my thesis, by implying that an AGI arms race could be much faster and more volatile than the nuclear arms race was.
I would argue that the first AGI doesn’t require a Manhattan-scale project to construct
I would argue that the high-tech world is, mostly unwittingly, currently undertaking a project much bigger than Manhattan-scale to construct AGI. Think of all the resources going into making computers smarter, faster, and cheaper. I don’t believe that the Internet is going to wake up and automatically become an AGI, but markets are strongly pushing tech companies towards creating the hardware likely necessary for AGI.
I thought this was straightforward; was I assuming too much knowledge of the Cold War and nuclear politics when I wrote it?
It was very straightforward and transparent. But it was supposed to be an allegory, right? So what’s the analog in the AGI interpretation?
Maybe, maybe not. If it didn’t, I think that would support my thesis, by implying that an AGI arms race could be much faster and more volatile than the nuclear arms race was.
My point is that this isn’t an arms race. The whole cold war concept doesn’t make sense for AGI.
The analog would be an early buggy AGI which is not particularly powerful and is slow, and it & its developers improving it over a few years. (This is different from the hard takeoff scenario, which suggests the AGI improves rapidly at an exponential rate due to the recursiveness of the improvements; a toy illustration of the difference is sketched below.)
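To make the distinction concrete, here is a toy model of the two regimes (all constants and the starting capability are arbitrary, chosen only to show the shape of the curves, not to predict anything):

```python
# Toy contrast of slow vs. hard takeoff. All numbers are arbitrary and
# purely illustrative.

def slow_takeoff(capability: float, years: int, human_effort: float = 1.0) -> float:
    """Developers add a roughly constant increment of improvement per year."""
    for _ in range(years):
        capability += human_effort
    return capability

def hard_takeoff(capability: float, years: int, feedback: float = 0.5) -> float:
    """Each gain feeds back into the rate of further gains (recursive self-improvement)."""
    for _ in range(years):
        capability += feedback * capability  # growth proportional to current level
    return capability

print(slow_takeoff(1.0, 10))  # linear growth: 11.0
print(hard_takeoff(1.0, 10))  # exponential growth: ~57.7
```

The slow curve leaves years of visibly improving prototypes for people to react to; the exponential one compresses most of the change into the very end.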
My point is that this isn’t an arms race. The whole cold war concept doesn’t make sense for AGI.

How would it not be an arms race?

How does that lead to hundreds of thousands dying in some impoverished foreign country?
Huh? That was what happened with the first use of nuclear bombs; it’s not necessarily what will happen with AGI. We should be so lucky!

Chernobyl.

I’m asking about AGI technology....
I think you aren’t understanding the point of the parable here. I thought it was clear in the middle, but to repeat myself… Even with nuclear bombs, which are as textbook a case of x-risk as you could ever hope to find, with as well-established physics endorsed by brainy specialists as possible, with hundreds of thousands of dead bodies due to an early weak version to underscore for even the most moronic possible politician ‘yes, this is very real and these weapons are really fucking dangerous’ as a ‘Sputnik moment’, politicians still did not take meaningful preventive action.
Hence, since AGI will on every dimension be a less clean, simple case (harder to understand, harder to predict the power of, less likely to present a clear signal of danger in time to be useful, more useful in civilian applications) than nuclear weapons were, a fortiori, politicians will not take meaningful preventive action about AGI. Political elites failed an easy x-risk test, and so it is unlikely they will pass a harder x-risk test. This is in direct contrast to what lukeprog seems to believe, and you’ll note I allude to his previous posts about how well he thinks elites dealt with past issues.
No, I don’t expect the early AGI prototypes to tip their hand and conveniently warn us like that. Life is not a Hollywood movie where the Evil AI character conveniently slaughters a town and then sits around patiently waiting for the heroes to defeat them. I expect AGI either to not be particularly powerful/dangerous, with our concerns entirely groundless, or to not look like a major problem until it’s too late.
Gwern, it’s your argument. The onus is on you to show there is any parallel at all. You’ve asserted there is. Why?
Why do you think there won’t be any arms race? If AGIs are militarily powerful and increase in power, that sets up the conditions for an arms race: countries will need to acquire and develop AGI merely to maintain parity, which in turn encourages further development by other countries to maintain their relative level of military power. What part of this do you disagree with? An ‘arms race’ is a common and well-understood pattern (the feedback loop is sketched below); it would be helpful if you explained your disagreement (which you still haven’t so far) rather than demand I explicate something fairly obvious.
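The feedback loop here is just the classic Richardson arms-race model; a minimal sketch (the coefficients are invented for illustration, not fitted to anything):

```python
# Richardson arms-race model: each side's arsenal grows in proportion to the
# rival's (reaction term k), shrinks with its own cost burden (fatigue term a),
# and has a baseline grievance term g. Coefficients invented for illustration.
def richardson_step(x: float, y: float, k: float = 0.9, a: float = 0.5,
                    g: float = 0.1) -> tuple[float, float]:
    dx = k * y - a * x + g
    dy = k * x - a * y + g
    return x + dx, y + dy

x, y = 1.0, 1.0
for year in range(1, 11):
    x, y = richardson_step(x, y)
    print(f"year {year}: x={x:.1f} y={y:.1f}")
# When the reaction term k outweighs the fatigue term a, both arsenals escalate
# without bound: neither side can unilaterally stop without falling behind.
```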
It’s only obvious to you, apparently.

I don’t believe AGI will be militarily useful, at least not more so than any other technology.
Other technologies have sparked arms races, so that seems like an odd position to take.
Nor do I believe that AGI will be developed on a long enough time scale for an “arms race”.

Nor do I think politicians will be involved, at all.
If you’re a ‘fast takeoff’ proponent, I suppose the parallels to nukes aren’t of much value and you don’t care whether politicians would handle a slow takeoff well or poorly. I don’t find fast takeoffs all that plausible, so these are relevant matters to me and many other people interested in AI safety.
Eh... timescales are relative here. Typically when someone around here says “fast takeoff” I assume they mean something along the lines of That Alien Message—hard takeoff on the order of a literal blink of an eye, which is pure sci-fi bunk. But I find the other extreme, parroted by Luke Muehlhauser and Stuart Armstrong and others (50 to 100 years), equally bogus. From the weak inside view, my best predictions put the entire project on the order of 1-2 decades, and the critical “takeoff” period measured in months or a few years, depending on the underlying architecture. That’s not what most people around here mean by a “fast takeoff”, but it is still too fast for meaningful political reaction.