It seems that @Scott Alexander believes that there’s a 50%+ chance we all die in the next 100 years if we don’t get AGI (EDIT: how he places his probability mass on existential risk vs catastrophe/social collapse is now unclear to me). This seems like a wild claim to me, but here’s what he said about it in his AI Pause debate post:
Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality. I don’t spend much time worrying about any of these, because I think they’ll take a few generations to reach crisis level, and I expect technology to flip the gameboard well before then. But if we ban all gameboard-flipping technologies (the only other one I know is genetic enhancement, which is even more bannable), then we do end up with bioweapon catastrophe or social collapse. I’ve said before I think there’s a ~20% chance of AI destroying the world. But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela. That doesn’t mean I have to support AI accelerationism because 20% is smaller than 50%. Short, carefully-tailored pauses could improve the chance of AI going well by a lot, without increasing the risk of social collapse too much. But it’s something on my mind.
I’m curious to know if anyone here agrees or disagrees. What arguments convince you to be on either side? I can see some probability of existential risk, but 50%+? That seems way higher than I would expect.
a 50%+ chance we all die in the next 100 years if we don’t get AGI
I don’t think that’s what he claimed. He said (emphasis added):
if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela
Which fits with his earlier sentence about various factors that will “impoverish the world and accelerate its decaying institutional quality”.
(On the other hand, he did say “I expect the future to be short and grim”, not short or grim. So I’m not sure exactly what he was predicting. Perhaps decline → complete vulnerability to whatever existential risk comes along next.)
It seems that @Scott Alexander believes that there’s a 50%+ chance we all die
It’s “we end up dead or careening towards Venezuela” in the original, which is not the same thing. Venezuela has survivors. Existence of survivors is the crucial distinction between extinction and global catastrophe. AGI would be a much more reasonable issue if it was merely risking global catastrophe.
In the first couple sentences he says “if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology.” So it seems he’s putting most of his probability mass on everyone dying.
But then, right after, he says: “But if we ban all gameboard-flipping technologies, then we do end up with bioweapon catastrophe or social collapse.”
I think the people responding are only reading the Venezuela part and assuming that most of the probability mass he puts in the 50% is just a ‘catastrophe’ like Venezuela. But then why would he say he expects the future to be short conditional on no AI?
It’s a bit ambiguous, but “bioweapon catastrophe or social collapse” is not literal extinction, and I’m reading “I expect the future to be short and grim” as plausibly referring to the end of our current uninterrupted run of global civilization, which might well recover after 3000 years. The text doesn’t seem to rule out this interpretation.
A sufficiently serious synthetic biology catastrophe would itself prevent further, more serious catastrophes, including by destroying civilization, and it’s not very likely that this would involve literal extinction. As a casual reader of his blogs over the years, I’m not aware of any statements from Scott to the effect that his position is different from this, either clearly stated or in aggregate from many vague claims.
It seems like a really surprising take to me, and I disagree. None of the things listed seem like candidates for actual extinction. It seems approximately impossible for fertility collapse to cause extinction, given the extremely strong selection effects against it. I don’t see how totalitarianism or illiberalism or mobocracy leads to extinction either.
Maybe the story is that all of these will very likely happen in concert and halt human progress very reliably. I would find this quite surprising.
I don’t see how totalitarianism or illiberalism or mobocracy leads to extinction either.
That’s not what Scott says, as I understand it. The 50%+ chance is for “death or Venezuela”.
Most likely we kill ourselves (...) If not, some combination of (...) will impoverish the world and accelerate its decaying institutional quality.
I am just guessing here, but I think the threat model is that authoritarian regimes become more difficult to overthrow in a technologically advanced society. The most powerful technology will all be controlled by the government (the rebels cannot build their nukes while hiding in a forest). Technology makes mass surveillance much easier (heck, just make it illegal to go anywhere without your smartphone, and you can already track literally everyone today). Something like GPT-4 could already censor social networks and report suspicious behavior (if the government controls their equivalent of Facebook, and other social networks are illegal, you have control over most online communication). An army of drones will be able to suppress any uprising. In short, once an authoritarian regime has sufficiently good technology, it becomes almost impossible to overthrow. On the other hand, democracies occasionally evolve into authoritarianism, so the long-term trend seems to point one way.
And the next assumption, I guess, is that authoritarianism leads to stagnation or dystopia.