The analog would be an early buggy AGI which is not particularly powerful and is slow, with it and its developers improving it over a few years. (This is different from the hard takeoff scenario, which suggests the AGI improves rapidly at an exponential rate due to the recursive nature of the improvements.)
My point is that this isn’t an arms race. The whole cold war concept doesn’t make sense for AGI.
How would it not be an arms race?
How does that lead to hundreds of thousands dying in some impoverished foreign country?
Gwern, it’s your argument. The onus is on you to show there is any parallel at all. You’ve asserted there is. Why?
Huh? That was what happened with the first use of nuclear bombs; it’s not necessarily what will happen with AGI. We should be so lucky!
I think you aren’t understanding the point of the parable here. I thought it was clear in the middle, but to repeat myself… Even with nuclear bombs, which are as textbook a case of x-risk as you could ever hope to find, with as well-established physics endorsed by brainy specialists as possible, with hundreds of thousands of dead bodies due to an early weak version to underscore for even the most moronic possible politician ‘yes, this is very real and these weapons are really fucking dangerous’ as a ‘Sputnik moment’, politicians still did not take meaningful preventive action.
Hence, since AGI will on every dimension be a less clean and simple case than nuclear weapons were (harder to understand, harder to predict the power of, less likely to present a clear signal of danger in time to be useful, more useful in civilian applications), a fortiori, politicians will not take meaningful preventive action about AGI. Political elites failed an easy x-risk test, so it is unlikely they will pass a harder one. This is in direct contrast to what lukeprog seems to believe, and you’ll note I allude to his previous posts about how well he thinks elites dealt with past issues.
No, I don’t expect the early AGI prototypes to tip their hand and conveniently warn us like that. Life is not a Hollywood movie where the Evil AI character slaughters a town and then sits around patiently waiting for the heroes to defeat it. I expect AGI either to not be particularly powerful or dangerous, with our concerns entirely groundless, or to not look like a major problem until it’s too late.
Why do you think there won’t be any arms race? If AGIs are militarily powerful and increasing in power, that sets up the conditions for an arms race: countries will need to acquire and develop AGI merely to maintain parity, which in turn encourages further development by other countries to maintain their relative level of military power. What part of this do you disagree with? An ‘arms race’ is a common and well-understood pattern; it would be helpful if you explained your disagreement (which you still haven’t done so far) rather than demanding I explicate something fairly obvious.
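The parity feedback loop described above is the dynamic formalized in Richardson’s classic arms-race model. A minimal sketch of that loop follows; all parameter values are illustrative assumptions, not estimates for AGI or any real arsenal:

```python
# Minimal sketch of Richardson's arms-race model: each side arms in
# proportion to its rival's level (parity pressure), minus a drag from
# the cost of its own stockpile. Parameter values are illustrative only.

def richardson_step(x, y, reaction, fatigue, grievance, dt=0.1):
    """One Euler step of dx/dt = a*y - m*x + g (and symmetrically for y)."""
    dx = reaction * y - fatigue * x + grievance
    dy = reaction * x - fatigue * y + grievance
    return x + dt * dx, y + dt * dy

x, y = 1.0, 0.5  # hypothetical starting armament levels
for _ in range(50):
    x, y = richardson_step(x, y, reaction=0.9, fatigue=0.3, grievance=0.1)

# With reaction outweighing fatigue, both levels grow without bound:
# each side arms to keep parity, which pushes the other to arm further.
print(f"after 50 steps: x={x:.1f}, y={y:.1f}")
```

The sketch only shows why parity pressure, once present, is self-reinforcing; whether AGI development would actually exhibit that dynamic is the point under dispute here.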
It’s only obvious to you, apparently.
I don’t believe AGI will be militarily useful, at least not any more so than other technologies.
Nor do I believe that AGI will be developed on a long enough time scale for an “arms race”.
Nor do I think politicians will be involved, at all.
Other technologies have sparked arms races, so that seems like an odd position to take.
If you’re a ‘fast takeoff’ proponent, I suppose the parallels to nukes aren’t of much value and you don’t care whether politicians would handle a slow takeoff well or poorly. I don’t find fast takeoffs all that plausible, so these are relevant matters to me and to many other people interested in AI safety.
Eh, timescales are relative here. Typically when someone around here says “fast takeoff” I assume they mean something along the lines of That Alien Message: a hard takeoff on the order of a literal blink of an eye, which is pure sci-fi bunk. But I find the other extreme, parroted by Luke Muehlhauser, Stuart Armstrong, and others (50 to 100 years), equally bogus. From the weak inside view, my best predictions put the entire project on the order of 1-2 decades, with the critical “takeoff” period measured in months or a few years, depending on the underlying architecture. That’s not what most people around here mean by a “fast takeoff”, but it is still too fast for meaningful political reaction.
Chernobyl.
I’m asking about AGI technology…