Sorry, I think the logic of my 60% figure was imprecise.
If we’re just talking about one person attempting to deliberately induce societal collapse, the chances of any meaningful impact, either way, would be relatively low (depending on the person), so the 60% figure would be fairly meaningless there.
If we’re talking about whether it’s worth developing a more serious movement to initiate societal collapse (potentially in an optimal way) or to plan for it as an option, I think there are arguments both ways, but I’d lean against it being a good idea because of the risks laid out in the post. This is what my 60% figure was aiming at.
If we’re talking about a world where we have successfully induced collapse (in a way that still allows society to rebuild), would this be a better or worse world, in expectation? This is the question I was really hinting at with this post, and I would definitely dispute your 5 in 1000 claim if this was the question you had in mind.
If we’re Eliezer-level pessimistic about TAI timelines, serious about the horrors of factory farming (and perhaps antinatalism), and optimistic about moral progress in the absence of technological progress, I think this question gets very interesting.