A lot of this depends on your definition of doomsday/apocalypse. I took it to mean the end of humanity, plus an end state of the world we consider worse than our continued existence. If we valued the resulting state of the world more than continuing to exist, it would be easy to argue it was a good thing, and not a doom at all. (I don't think the second condition is likely to come up as a reason for something to not count as doomsday for a very long time.) For instance, if each person created sapient progeny that weren't human, but whom they valued as their own children, and who had good lives and civilizations, then the fact that humanity ceased to exist due to a simple lack of biological children would not be that bad. This could in some cases be caused by AGI, but it wouldn't be a problem. (It would also be in the far future.)
AI doomsday: never (though it is far from impossible). Not doomsday never, it's just unlikely to be caused by AGI. I believe both that we aren't that close, and that 'takeoff' would be best described as glacial, so we'll have plenty of time to get it right. I am unsure of the risk level of unaligned, moderately superhuman AI, but I believe (very confidently) that the tech level for minimal AGI is much lower than the tech level for doomsday AGI. If I were wrong about that, I would obviously change my mind about the likelihood of AGI doomsday. (I think I put something like 1 in 10 million in the next fifty years, though expressed as a percentage. Everything else was 0, though in the case of 25 years I just didn't know how many 0s to give it.)
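For concreteness, a minimal conversion of that figure (assuming it's meant as a plain probability over the fifty-year window):

$\frac{1}{10{,}000{,}000} = 10^{-7} = 0.00001\%$

i.e. five zeros after the decimal point before the 1 when written as a percentage.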
'Tragic AGI disasters' are fairly likely, though. For example, an AGI that alters traffic light timing in a way that causes crashes, or that intentionally sabotages things it is supposed to repair. Or even an AGI that is well aligned to the wrong people or the wrong moral framework, doing things like refusing to allow necessary medical procedures due to expense even when people are willing to pay with their own money (perhaps because it judges the person to be worth less than the cost of the procedure, and thus the procedure to have negative utility). Alternately, it could predict that the people wanting the procedure were being incoherent, and would actually value their kids getting the money more, but feel like they have to try. Whether that prediction is correct or not, it would still be AGI killing people.
I would actually rate the risk from Tool AI as higher, because humans will be using those tools to try to defeat other humans, and they could very well be strong enough to notably enhance the things humans are bad at. (Most of the things a moderately superhuman AGI could do would be doable sooner with tool AI and an unaligned human.) An AI could help humans design a better virus, something like 'Simian Hemorrhagic Fever' but one that affects humans, and that doesn't affect people with certain genetic markers (markers that denote the ethnicity or other traits of the people making it). Humans would then test, manufacture, distribute, and use it to destroy their enemies. Then, oops, it mutates and hits everyone. This is still a very unlikely doom, though.