Interesting, thanks. Yeah, I currently think the range of possible outcomes in warfare is more smeared out, across a variety of different results, than the range of possible outcomes for humanity with respect to AGI. The bulk of the probability mass in the AGI case, IMO, is concentrated in “total victory of unaligned, not-near-miss AGIs.” Then there are smaller chunks concentrated in “total victory of unaligned, near-miss AGIs” (where “near-miss” means what they care about is similar enough to what we care about that the outcome is either noticeably better, or noticeably worse, than human extinction), and of course “human victory,” which can itself be subdivided depending on the details of how that goes.
Whereas with warfare, there’s almost a continuous spectrum of outcomes, from “total annihilation and/or enslavement of our people” to “total victory,” with pretty much everything in between a live possibility, and indeed some sort of negotiated settlement more likely than not.
I do agree that there are a variety of different outcomes with AGI, but I think if people think seriously about the spread of outcomes (instead of being daunted and deciding not to think about it because it’s so speculative), they’ll conclude that the outcomes fall into the buckets I described.
Separately, I think that even if it were less binary than warfare, it would still be good to talk about p(doom). I think it’s pretty helpful for orienting people, and I also think a lot of harm comes from people having insufficiently high p(doom). Like, a lot of people are basically feeling/thinking: “Yeah, it looks like things could go wrong, but probably things will be fine, probably we’ll figure it out. So I’m going to keep working on capabilities at the AGI lab, and/or keep building status and prestige and influence, and not rock the boat too much, because who knows what the future might bring, but anyhow we don’t want to do anything drastic that would get us ridiculed and excluded now.” If they are actually correct that there’s, say, a 5% chance of AI doom, coming from worlds in which things are harder than we expect, an unfortunate chain of events occurs, and people make a bunch of mistakes or bad people seize power, then maybe something in this vicinity is justified. But if instead we are in a situation where doom is the default, and we need a bunch of unlikely things to happen and/or a bunch of people to wake up, work very hard and very smart, and coordinate well, in order to NOT suffer unaligned AI takeover...