Given the importance of the word ‘sentient’ in your Sentientist Coherent Extrapolated Volition proposal, it would have been helpful if you had clearly defined this term. You make it clear that your definition includes non-human animals, so evidently you don’t mean the same thing as “sapient”. In a context including animals, ‘sentient’ is most often used to mean something like “capable of feeling pain, having sensory impressions, etc.” That doesn’t have a very clear lower cutoff (is an amoeba sentient?), but it would presumably include, for example, ants, which clearly have senses and experience pain.

Would an ant get the same amount of value/moral worth as a human in SCEV (as only seems fair)? It’s estimated that the world population of ants is roughly 20 quadrillion, so if so, ants alone would morally outweigh all humans by a factor of well over a million. So basically, all our human volitions are just a rounding error to the AI that we’re building for the ants and other insects. Even just domestic chickens (clearly sentient) outnumber humans roughly three-fold. This seems even more problematic than what one might dub the “votes for man-eating tigers” concern around predators that you do mention above.

In general, prey species significantly outnumber their predators, and will all object strongly to being eaten, so I assume the predators would all have to go extinct under your proposal, unless the AIs could arrange to provide them all with nutritious textured vegetable protein substitutes? If so, how would one persuade the predators to eat these rather than their usual prey? How could AIs create an ecosystem with minimized suffering-per-individual without causing species-diversity loss? Presumably they should also provide miniature healthcare for insects. Then what about population control, to avoid famine, now that predation and disease have been eliminated? You are proposing abolishing “nature, red in tooth and claw” and replacing it with what’s basically a planetary-scale zoo. Have you considered the practicalities of this?
Or, if the moral weight is not equal per individual, how would you determine an appropriate ratio? Body mass? Synapse count? A definition and a decision on this seem rather vital parts of your proposal.
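As a rough sanity check of the head-count arithmetic above, here is a minimal sketch; the population figures are approximate published estimates rather than anything from the paper, and equal weight per individual is exactly the assumption being questioned:

```python
# Rough head-count arithmetic under an equal-weight-per-individual assumption.
# Population figures are approximate public estimates, not values from the paper.

ants = 20e15      # ~20 quadrillion ants (the estimate cited above)
humans = 8e9      # ~8 billion humans
chickens = 25e9   # ~25 billion domestic chickens alive at any time (rough figure)

print(f"ants per human:     {ants / humans:,.0f}")    # ~2,500,000
print(f"chickens per human: {chickens / humans:.1f}")  # ~3.1
```

Under equal per-individual weighting, ants alone outweigh human volitions by more than six orders of magnitude, which is the “rounding error” worry stated above.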
Regarding how to take into account the interests of insects and other animals/digital minds, see this passage I had to exclude from publication: [SCEV would apply an equal consideration of interests principle] “However, this does not entail that, for instance, if there is a non-negligible chance that dust mites or future large language models are sentient, the strength of their interests should be weighted the same as the strength of the interests of entities that we have good reasons to believe that it is very likely that they are sentient. The degree of consideration given to the interests or the desires of each being included in the extrapolation base should plausibly be determined by how likely it is that they have such morally relevant interests as a result of being sentient. We should apply something along the lines of Jeff Sebo’s Expected value principle, which is meant to determine the moral value of a given entity in cases of uncertainty about whether or not it is sentient (Sebo, 2018). In determining to what extent the interests, preferences and goals of a given entity (whose capacity for sentience we are uncertain about) should be included in the extrapolation base of SCEV, we should first come up with the best and most reliable credence available about whether the entity in question has morally relevant interests as a result of being sentient. And then we should multiply this credence by the strength (i.e. how bad it would be that those interests were frustrated/how good it would be that they were satisfied) that those interests would have if they were morally relevant as a result of the entity being sentient. The product of this equation should be the extent to which these interests are included in the extrapolation base. When determining our credence about whether the entity in question has morally relevant interests as a result of being sentient, we should also take into account the degree to which we have denied the existence of morally relevant interests to sentient beings different from us in the past. And we should acknowledge the biases present in us against reasonably believing in the extent to which different beings possess capacities that we would deem morally relevant.”
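To make the weighting in that passage concrete, here is a minimal sketch, assuming we could assign numerical credences and interest strengths; all of the names and numbers below are hypothetical illustrations, not values from the paper or from Sebo (2018):

```python
# Hypothetical illustration of the expected-value weighting described above:
# weight in the extrapolation base = credence that the entity is sentient
#   times the strength its interests would have if it were sentient.
# None of these numbers come from the paper or from Sebo; they only show the arithmetic.

entities = {
    # name: (credence of sentience, interest strength if sentient)
    "human":          (0.99, 1.00),
    "chicken":        (0.95, 0.60),
    "ant":            (0.40, 0.05),
    "dust mite":      (0.10, 0.02),
    "language model": (0.05, 0.50),
}

def extrapolation_base_weight(credence: float, strength: float) -> float:
    """Expected-value weighting: credence of sentience times interest strength."""
    return credence * strength

for name, (credence, strength) in entities.items():
    print(f"{name:>14}: weight = {extrapolation_base_weight(credence, strength):.3f}")
```

The point is only that an entity with a low but non-negligible credence of sentience still receives some proportionally discounted weight, rather than either full or zero consideration.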
Regarding intervening in ecosystems, and how to balance the interests/preferences of different animals: unless the extrapolated volitions of non-human animals choose/prefer that the actual animals be uplifted, I expect that something like this is what they would prefer: https://www.abolitionist.com/?_gl=1*1iqpkhm*_ga*NzU0NDU1ODY0LjE3MDI5MjUzNDY.*_ga_1MVBX8ZRJ9*MTcwMjkyNTM0NS4xLjEuMTcwMjkyNTUwOS4wLjAuMA.. It does not seem morally problematic to intervene in nature, etc., and I believe there are good arguments to defend this view.
This seems to me like a very key omission. I’m puzzled that you didn’t restore it, at least on Less Wrong, even if you had to omit it from your academic publication for some unexplained reason (involving reviewers, I would assume). I urge you to do so.
However, suppose that, in the near future, biologists established beyond all reasonable doubt that dust mites (for example) did, in fact, sense pain, experience physiological symptoms of distress, and otherwise have senses, and were thus definitely sentient under the standard definition (a relatively simple program of neurological and biochemical experiments, apart from the particularly fine positioning of electrodes required). Once that uncertainty had been eliminated (and eliminating it is of course a rather urgent matter under your proposed ethical system), would their moral value then deserve equal consideration to that of humans? You say “SCEV would apply an equal consideration of interests principle”, so I assume that means yes?
Obviously the same limited resources that could support a single human could support many millions of ants. So under your proposed SCEV with equal moral weights, AIs would clearly be strongly morally obligated to drive the human species extinct (as soon as they could do without us, and one would hope humanely). Or, if you added ethical rights for a species as a separate entity, or a prohibition on extinctions, to drive us down to a minimal safe breeding population. Allowing for genetically-engineered insemination from digital genetic data, that would be a small number of individuals, perhaps O(100), certainly no more than O(1000). (While a human is a good source of vegetarian skin flakes for feeding dust mites, these could more efficiently be vat-cultured.)
The screed at your link https://www.abolitionist.com/?_gl=1*1iqpkhm*_ga*NzU0NDU1ODY0LjE3MDI5MjUzNDY.*_ga_1MVBX8ZRJ9*MTcwMjkyNTM0NS4xLjEuMTcwMjkyNTUwOS4wLjAuMA.. makes emotionally appealing reading. But it spends only two short paragraphs near the end on this issue, and simply does not attempt to address the technical problems of setting up vegetarian food distribution, healthcare, birth control, retirement facilities, legal representation, and so on and so forth for ~20 quadrillion ants, or indeed a possibly much larger number of dust mites, let alone all the rest of the sentient species (many of them still undiscovered) in every ecosystem on Earth. It merely observes that this will be easier with AI than without it. Nor does it even begin to address how to construct stable ecologies without predation, disease, or parasitism. The practical conundrums of supporting the ethical rights of both parasitical species and their natural hosts, for example, are even more intractable than those of predators and their prey that you briefly alluded to in your paper. (I for one fully support President Carter’s effort to drive the guinea worm extinct, even if, as seems very likely to me, guinea worms are sentient: their lifecycle inherently requires both injuring humans and causing them agonizing pain.) I look forward with interest to reading your future proposals for implementing these inevitable practical consequences of your ethical philosophical arguments.
Please bear in mind, during your career as an AI-aware academic moral philosopher, that we may well have superintelligent AI, and need to give it at least a basic outline of an ethical system, within the next decade or so, quite possibly without the ability to ever significantly change our minds later once we see the full consequences of this decision, so getting this right is a matter of both vital importance and urgency. As Eliezer Yudkowsky has observed, we are now doing Moral Philosophy on a tight deadline. Please try not to come up with a system that will drive us all extinct: this is not merely a philosophical debate.
These are great points, thank you!

Remember that what SCEV does is not what the individuals included in it directly want, but what they would want after an extrapolation/reflection process that converged in the most coherent way possible. This means that the result is almost certainly not the same as if there were no extrapolation process. If there were no extrapolation process, one real possibility is that something like what you suggest, such as sentient dust mites or ants taking over the utility function, would indeed occur. But with extrapolation it is much less clear: the models of the ants’ extrapolated volition may want to uplift the actual ants to a super-human level, just as our models of human extrapolated volition might want to do with us humans. Furthermore, given that SCEV would try to maximize coherence between satisfying the various volitions of the included beings, the superintelligence would cause human extinction or something similar only if it were physically impossible for it, no matter how much it was able to self-improve, to produce a more coherent result that respected more human volitions. This seems unlikely, but it is not impossible, so it is something to worry about if this proposal were implemented.
However, importantly, in the paper I DO NOT argue that we should implement SCEV instead of CEV. I only argue that we have some strong (pro-tanto) reasons to do so, even if we should not ultimately do so because there are other, even stronger (pro-tanto) reasons against doing so. This is why I say the following in the conclusion: “In this paper, I have shown why we have some very strong pro-tanto reasons in favour of implementing SCEV instead of CEV. This is the case even if, all things considered, it is still ultimately unclear whether what is best is to try to implement SCEV or another proposal more similar to CEV.”
This is truly what I believe, and it is why I put this conclusion in the paper rather than one stating that we SHOULD implement SCEV. I believe that stronger conclusion would be wrong, and thus I did not include it, even though it would have made the paper less complex and more well-rounded.
I completely agree with you and with the quote that getting this right is a matter of both vital importance and urgency, and I take this, along with the possibility of human extinction and s-risks, very seriously when conducting my research. It is precisely because of this that I have shifted from doing standard practical/animal ethics to this kind of research. It is great that we can agree on this. Thanks again for your thought-provoking comments; they have lowered my credence in favour of implementing SCEV all things considered (even if we do have the pro-tanto reasons I present in the paper).
Rereading this, I’m sorry for dumping all of these objections on you at once (and especially if I sounded as though they were obvious). I did actually think about an ethical system along the lines of the one you propose for O(6 months), and tried a variety of different ways to fix it, before regretfully abandoning it as unworkable.
On the non-equal moral weight version, see if you can find one that doesn’t give the AIs perverse incentives to mess with ecosystems. I couldn’t, but the closest I found involved species average adult mass (because biomass is roughly conserved), probability of reaching adulthood (r-strategy species are a nightmare), and average adult synapse count. My advice is that making anything logarithmic feels appealing but never seems to work.
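Purely to illustrate the kind of weighting function meant here (the comment above names the three factors but not how they were combined, so the combination below is a made-up placeholder, and the species numbers are rough order-of-magnitude guesses rather than sourced figures):

```python
# Illustration only: a placeholder per-individual moral-weight function using the
# three factors mentioned above. The combination and the numbers are hypothetical.

species = {
    # name: (avg adult mass in kg, probability of reaching adulthood, avg adult synapse count)
    "human":   (60.0,  0.95, 1e14),
    "chicken": (2.0,   0.80, 1e11),
    "ant":     (3e-6,  0.05, 1e8),
}

def per_individual_weight(p_adult: float, synapses: float) -> float:
    """Placeholder: expected adult synapse count (probability of adulthood times synapses)."""
    return p_adult * synapses

for name, (mass, p_adult, synapses) in species.items():
    w = per_individual_weight(p_adult, synapses)
    # Because total biomass is roughly conserved, weight per kilogram of adult biomass
    # is what an optimizer would effectively trade off when reshaping ecosystems.
    print(f"{name:>8}: per individual {w:.2e}, per kg of adult biomass {w / mass:.2e}")
```

Whichever species ends up with the highest weight per kilogram of biomass is the one a sufficiently capable optimizer is incentivized to multiply, which is where the perverse incentives kept creeping back in.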