If we try to answer the question now, it seems very likely we’ll get the answer wrong (given my state of uncertainty about the inputs that go into the question). I want to keep civilization going until we know better how to answer these types of questions. For example, if we succeed in building a correctly designed and implemented Singleton FAI, it ought to be able to consider this question at leisure, and if it becomes clear that the existence of mature suffering-hating civilizations actually causes more suffering to be created, it can decide not to make us into a mature suffering-hating civilization, or take whatever other action is appropriate.
Are you worried that by the time such an FAI (or whatever will control our civilization) figures out the answer, it will be too late? (Why? If we can decide that x-risk reduction is bad, then so can it. If it’s too late to alter or end civilization at that point, why isn’t it already too late for us?) Or are you worried more that the question won’t be answered correctly by whatever will control our civilization?
If you are concerned exclusively with suffering, then increasing the number of mature civilizations is obviously bad and you’d prefer that the average civilization not exist. You might think that our descendants are particularly good to keep around, since we hate suffering so much. But in fact almost all s-risks occur precisely because of civilizations that hate suffering, so it’s not at all clear that creating “the civilization that we will become on reflection” is better than creating “a random civilization” (which is bad).
To be clear, even if we have modest amounts of moral uncertainty I think it could easily justify a “wait and see” style approach. But if we were committed to a suffering-focused view then I don’t think your argument works.
But in fact almost all s-risks occur precisely because of civilizations that hate suffering
It seems just as plausible to me that suffering-hating civilizations reduce the overall amount of suffering in the multiverse, so I think I’d wait until it becomes clear which is the case, even if I were concerned exclusively with suffering. But I haven’t thought about this question much, since I haven’t had a reason to assume an exclusive concern with suffering until you started asking me to.
To be clear, even if we have modest amounts of moral uncertainty I think it could easily justify a “wait and see” style approach. But if we were committed to a suffering-focused view then I don’t think your argument works.
Earlier in this thread I’d been speaking from the perspective of my own moral uncertainty, not from a purely suffering-focused view, since we were discussing the linked article, and Kaj had written:
The article isn’t specifically negative utilitarian, though—even classical utilitarians would agree that having astronomical amounts of suffering is a bad thing. Nor do you have to be a utilitarian in the first place to think it would be bad: as the article itself notes, pretty much all major value systems probably agree on s-risks being a major Bad Thing
What’s your reason for considering a purely suffering-focused view? Intellectual curiosity? Being nice to or cooperating with people like Brian Tomasik by helping to analyze one of their problems?
Or are you worried more that the question won’t be answered correctly by whatever will control our civilization?
Perhaps this, in case it turns out to be highly important but difficult to get certain ingredients – e.g. priors or decision theory – exactly right. (But I have no idea; it’s also plausible that suboptimal designs could patch themselves well, get rescued somehow, or just have their goals changed without much fuss.)
What’s your reason for considering a purely suffering-focused view? Intellectual curiosity? Being nice to or cooperating with people like Brian Tomasik by helping to analyze one of their problems?
Understanding the recommendations of each plausible theory seems like a useful first step in decision-making under moral uncertainty.