Eliezer, sometimes in a conversation one needs a rapid back and forth
Yeah, unfortunately I’m sort of in the middle of resetting my sleep cycle at the moment so I’m out of sync with you for purposes of conducting rapidfire comments. Should be fixed in a few days.
Carl and Roko, I really wasn’t trying to lay out a moral position, though I was expressing mild horror at encouraging total war, a horror I expected (incorrectly, it seems) would be widely shared...
Suppose that a pack of socialists is having a discussion, and a libertarian (who happens to be a friend of theirs) wanders by. After listening for a few moments, the libertarian says, “I’m shocked! You want babies to starve! Doesn’t even discussing that make it more socially acceptable, and thereby increase the probability of it happening?”
“Eh?” say the socialists. “No one here said a single word about starving babies except you.”
Now I’ve set up this example so that the libertarian occupies the sympathetic position. Nonetheless, it seems to me that if these parties wish to conduct a Disagreement, then they should have some sort of “Aha!” moment at this point where they realize that they’re working from rather different assumptions, and that it is important to bring out and explicate those assumptions.
Robin, I did not say anything about total war, you did. I think you should realize at this point that on my worldview I am not doing anything to encourage war, or setting humanity on a course for total war. You can say that I am wrong, of course; the libertarian can try to explain to the socialists why capitalism is the basic force that feeds babies. But the libertarian should also have the empathy to realize that the socialists do not believe this particular fact at the start of the conversation.
There are clear differences of worldview clashing here, which have nothing to do with the speed of an AI takeoff per se, but rather have something to do with what kind of technological progress parameters imply what sort of consequences. I was talking about large localized jumps in capability; you made a leap to total war. I can guess at some of your beliefs behind this, but it would only be a guess.
The libertarian and the socialists are unlikely to have much luck conducting their Disagreement about whether babies should starve, but they might find it fruitful to try and figure out why they think that babies will starve or not starve under different circumstances. I think they would be unwise to ignore this minor-seeming point and pass directly to the main part of their discussion about whether an economy needs regulation.
Oh, to answer Eliezer’s direct question directly: if I know that I am in a total war, I fight. I fight to make myself, or if that is impossible, those who most share my values, win.
That’s not much of a Line of Retreat. It would be like my saying, “Well, if a hard takeoff is impossible, I guess I’ll try to make sure we have as much fun as we can in our short lives.” If I actually believed an AI hard takeoff were impossible, I wouldn’t pass directly to the worst-case scenario and give up on all other hopes. I would pursue the path of human intelligence enhancement, or uploading, or non-takeoff AI, and promote cryonics more heavily.
If you actually came to believe in large localized capability jumps, I do not think you would say, “Oh, well, guess I’m inevitably in a total war, now I need to fight a zero-sum game and damage all who are not my allies as much as possible.” I think you would say, “Okay, so, how do we avoid a total war in this kind of situation?” If you can work out in advance what you would do then, that’s your line of retreat.
I’m sorry for this metaphor, but it just seems like a very useful and standard one if one can strip away the connotations: suppose I asked a theist to set up a Line of Retreat if there is no God, and they replied, “Then I’ll just go through my existence trying to ignore the gaping existential void in my heart”. That’s not a line of retreat—that’s a reinvocation of the same forces holding the original belief in place. I have the same problem with my asking “Can you set up a line of retreat for yourself if there is a large localized capability jump?” and your replying “Then I guess I would do my best to win the total war.”
If you can make the implication explicit, and really look for loopholes, and fail to find them, then there is no line of retreat; but to me, at least, it looks like a line of retreat really should exist here.