This seems to me like a very key omission. I’m puzzled that you didn’t restore it, at least on Less Wrong, even if you had to omit it from your academic publication for some unexplained reason (involving reviewers, I would assume). I urge you to do so.
However, suppose that, in the near future, biologists established beyond all reasonable doubt that dust mites (for example) did, in fact, sense pain, experience physiological symptoms of distress, and otherwise have senses, and were thus definitely sentient under the standard definition (a relatively simple program of neurological and biochemical experiments, apart from the particularly fine positioning of electrodes required). Once that uncertainty had been eliminated (and doing so is of course a rather urgent matter under your proposed ethical system), would their moral value then deserve equal consideration to that of humans? You say “SCEV would apply an equal consideration of interests principle”, so I assume that means yes?
Obviously the same limited resources that could support a single human could support many millions of ants. So under your proposed SCEV using equal moral weight, AIs would clearly be strongly morally obligated to drive the human species extinct (as soon as they could do without us, and one would hope humanely). Or, if you added ethical rights for a species as a separate entity, or a prohibition on extinction, to drive us down to a minimal safe breeding population. Allowing for genetically-engineered insemination from digital genetic data, that would be a small number of individuals, perhaps O(100), certainly no more than O(1000). (While a human is a good source of vegetarian skin flakes for feeding dust mites, these could more efficiently be vat cultured.)
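To make the arithmetic concrete, here is a toy back-of-the-envelope sketch. The cost and population figures are my own rough assumptions, not anything from your paper; the point is only that under strict equal consideration of interests, a resource-allocating optimizer simply favours whichever sentient beings are cheapest to support.

```python
# Toy sketch with invented figures: under equal consideration of interests,
# a fixed resource budget is best spent on the cheapest beings to support.

HUMAN_COST = 1.0                      # resource units to support one human (assumed)
ANTS_PER_HUMAN_BUDGET = 5_000_000     # ants supportable on those same resources (assumed)
ANT_COST = HUMAN_COST / ANTS_PER_HUMAN_BUDGET

def equal_weight_value(humans: int, ants: int) -> float:
    """Total interest satisfaction if every sentient individual counts equally."""
    return humans + ants

def allocate(budget: float, human_floor: int = 0) -> tuple[int, int]:
    """Keep `human_floor` humans (e.g. a minimal breeding population of O(100))
    and spend everything else on ants, which maximizes equal-weight value
    because ants are far cheaper per individual."""
    remaining = budget - human_floor * HUMAN_COST
    return human_floor, int(remaining / ANT_COST)

budget = 8_000_000_000 * HUMAN_COST   # roughly the resources now supporting humanity

for floor in (0, 100):
    humans, ants = allocate(budget, human_floor=floor)
    print(floor, humans, ants, equal_weight_value(humans, ants))
```

Whatever the exact numbers, the equal-weight optimum keeps only as many humans as some side constraint forces it to keep.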
The screed at your link https://www.abolitionist.com/ makes emotionally appealing reading. But it spends only two short paragraphs near the end on this issue, and simply does not attempt to address the technical problems of setting up vegetarian food distribution, healthcare, birth control, retirement facilities, legal representation, and so on and so forth for ~20 quadrillion ants, or indeed a possibly much larger number of dust mites, let alone all the rest of the sentient species (many of them still undiscovered) in every ecosystem on Earth. It merely observes that this will be easier with AI than without it. Nor does it even begin to address how to construct stable ecologies without predation, or diseases, or parasitism. The practical conundrums of supporting the ethical rights of both parasitical species and their natural hosts, for example, are even more intractable than those of predators and their prey that you briefly alluded to in your paper. (I for one fully support President Carter’s effort to drive the guinea worm extinct, even if, as seems very likely to me, guinea worms are sentient: their lifecycle inherently requires them to both injure and cause agonizing pain to humans.) I look forward with interest to reading your future proposals for implementing these inevitable practical consequences of your ethical philosophical arguments.
Please bear in mind, during your career as an AI-aware academic moral philosopher, that we may well have superintelligent AI and need to give it at least a basic outline of an ethical system within the next decade or so, quite possibly without the ability to ever later significantly change our minds once we see the full consequences of this decision, so getting this right is a matter of both vital importance and urgency. As Eliezer Yudkowsky has observed, we are now doing Moral Philosophy on a tight deadline. Please try not to come up with a system that will drive us all extinct — this is not merely a philosophical debate.
These are great points, thank you!
Remember that what SCEV does is not directly what the individuals included in it want now, but what they would want after an extrapolation/reflection process that converged in the most coherent way possible. This means that the result is almost certainly not the same as if there were no extrapolation process. If there were no extrapolation process, one real possibility is that something like what you suggest, such as sentient dust mites or ants taking over the utility function, would indeed occur. But with extrapolation it is much less clear: the models of the ants’ extrapolated volition may want to uplift the actual ants to a super-human level, just as our models of human extrapolated volition might want to do with us humans. Furthermore, given that SCEV would try to maximize coherence between satisfying the various volitions of the included beings, the superintelligence would cause human extinction or something similar only if it were physically impossible for it, no matter how much it was able to self-improve, to bring about a more coherent result that respected more humans’ volitions. This seems unlikely, but it is not impossible, so it is something to worry about if this proposal were implemented.
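To make the contrast concrete, here is a minimal toy sketch. It is my own construction for this comment, with invented satisfaction numbers and a deliberately crude operationalization of the aggregation (population-weighted satisfaction under equal consideration of interests), not the formalism of the paper; it only illustrates that aggregating extrapolated volitions can select a different outcome than aggregating what the included beings directly want now.

```python
# Toy sketch, not the paper's formalism: invented numbers, crude aggregation.

OUTCOMES = ["ants_dominate", "humans_dominate", "uplift_and_coexist"]
POPULATIONS = {"ants": 2e16, "humans": 8e9}

# What each group directly wants now (hypothetical satisfaction per outcome).
direct = {
    "ants":   {"ants_dominate": 1.0, "humans_dominate": 0.0, "uplift_and_coexist": 0.4},
    "humans": {"ants_dominate": 0.0, "humans_dominate": 1.0, "uplift_and_coexist": 0.8},
}

# What their extrapolated volitions might want (assumed, for this toy,
# to favour uplift and coexistence over raw dominance).
extrapolated = {
    "ants":   {"ants_dominate": 0.5, "humans_dominate": 0.0, "uplift_and_coexist": 0.9},
    "humans": {"ants_dominate": 0.0, "humans_dominate": 0.6, "uplift_and_coexist": 0.9},
}

def best_outcome(prefs: dict) -> str:
    """Outcome with the highest population-weighted satisfaction."""
    scores = {o: sum(POPULATIONS[g] * prefs[g][o] for g in prefs) for o in OUTCOMES}
    return max(scores, key=scores.get)

print(best_outcome(direct))        # -> 'ants_dominate'
print(best_outcome(extrapolated))  # -> 'uplift_and_coexist'
```

Whether the real extrapolated volitions would look anything like these numbers is, of course, exactly the open question.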
However, importantly, in the paper I DO NOT argue that we should implement SCEV instead of CEV. I only argue that we have some strong (pro-tanto) reasons to do so, even if, all things considered, we should not do so because there are other, even stronger (pro-tanto) reasons against it. This is why I say this in the conclusion: “In this paper, I have shown why we have some very strong pro-tanto reasons in favour of implementing SCEV instead of CEV. This is the case even if, all things considered, it is still ultimately unclear whether what is best is to try to implement SCEV or another proposal more similar to CEV.”
This is truly what I believe, and it is why I put this conclusion in the paper rather than one stating that we SHOULD implement SCEV. I believe that stronger claim is wrong, and so I did not make it, even though it would have made the paper less complex and more well-rounded.
I completely agree with you and with the quote that getting this right is a matter of both vital importance and urgency, and I take this, along with the possibility of human extinction and s-risks, very seriously when conducting my research; it is precisely because of this that I shifted from standard practical/animal ethics to this kind of research. It is great that we can agree on this. Thanks again for your thought-provoking comments; they have lowered my credence in favour of implementing SCEV all things considered (even if we do have the pro-tanto reasons I present in the paper).