These are great points, thank you!
Remember that what SCEV does is not directly what the individuals included in it want, but what they would want after an extrapolation/reflection process that converges in the most coherent way possible. This means the result is almost certainly not the same as it would be without an extrapolation process. If there were no extrapolation process, one real possibility is that something like what you suggest, such as sentient dust mites or ants taking over the utility function, would indeed occur. But with extrapolation it is much less clear: the models of the ants’ extrapolated volition may want to uplift the actual ants to a super-human level, much as our models of human extrapolated volition might want to do with us humans. Furthermore, given that SCEV would try to maximize coherence between satisfying the various volitions of the included beings, the superintelligence would cause human extinction or something similar only if it were physically impossible for it, no matter how much it was able to self-improve, to produce a more coherent result that better respected human volitions. This seems unlikely, but it is not impossible, so it is something to worry about if this proposal were implemented.
However, importantly, in the paper I DO NOT argue that we should implement SCEV instead of CEV. I only argue that we have some strong (pro-tanto) reasons to do so, even if we ultimately should not, because there are other, even stronger (pro-tanto) reasons against doing so. This is why I say this in the conclusion: “In this paper, I have shown why we have some very strong pro-tanto reasons in favour of implementing SCEV instead of CEV. This is the case even if, all things considered, it is still ultimately unclear whether what is best is to try to implement SCEV or another proposal more similar to CEV.”
This is truly what I believe, and it is why I put this conclusion in the paper rather than one stating that we SHOULD implement SCEV. I believe that stronger claim is wrong, and so I did not make it, even though doing so would have made the paper less complex and more well-rounded.
I completely agree with you, and with the quote, that getting this right is a matter of both vital importance and urgency. I take this, along with the possibility of human extinction and s-risks, very seriously when conducting my research; it is precisely because of this that I shifted from standard practical/animal ethics to this kind of research. It is great that we can agree on this. Thanks again for your thought-provoking comments; they have lowered my credence in favour of implementing SCEV all things considered (even if we do have the pro-tanto reasons I present in the paper).