You argue that CEV should be expanded to SCEV in order to avoid “astronomical suffering (s-risks)”. This seems to be a circular argument to me. We are deciding upon a set of beings to assign moral value to. By declaring that pain in animals is suffering that we have a moral duty to take into account, and that we therefore have to include them in the set of beings we design our AIs to assign moral value to, you are presupposing that animals in fact have moral value: logically, your argument is circular. One could equally consistently declare that, say, non-human animals have no moral worth, so we are morally free to disregard their pain and not include it in our definition of “suffering” (or, if you want to define the word ‘suffer’ as a biological rather than a moral term, that we have no moral responsibility to care about their suffering because they’re not in the set of beings we have assigned moral worth to). Their pain carries moral weight in our decision if and only if they have moral value, and that doesn’t help us decide which set of beings to assign moral value to. This would clearly be a cold, heartless position, but it’s just as logically consistent as the one you propose. (Similarly, one could just as consistently do the same for only males, or only people whose surname is Rockefeller.) So what you are actually giving is an emotional argument, “seeing animals suffer makes me feel bad, so we should do this” (which I have some sympathy for; it does the same thing to me), rather than the logical argument you present it as.
This is a specific instance of a general phenomenon. Logical ethical arguments only make sense in the context of a specific ethical system, just as mathematical logical arguments only make sense in the context of a specific set of mathematical axioms. Every ethical system prefers itself over all alternatives (by definition, in its opinion the others all get at least some things wrong). So any time anyone makes what sounds like a logical ethical argument for preferring one ethical system over another, there are only three possibilities: their argument is a tautology, there’s a flaw in their logic, or it’s not in fact a logical ethical argument, it just sounds that way (normally it’s actually an emotional ethical argument). (The only exception to this is pointing out that an ethical system is not even internally logically consistent, i.e. doesn’t make sense even on its own terms: that is a valid logical ethical argument.)
If that didn’t make sense to you, try the first four paragraphs of the first post in my sequence on Ethics, or for a lot more detail see Roko’s The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It. You cannot use logical ethical arguments to choose between ethical systems: you’re just pulling on your own bootstraps if you try. If you don’t want to pick an ethical system arbitrarily, you have to invoke something that boils down to “I do/don’t feel good about this rule, or its results”, or “I’m going to pick an ethical system that seems fit-for-purpose for a particular society”. So basically the only way to make a decision about something like SCEV is based on feelings: does it offend the moral instincts that most humans have, and how would most humans feel about the consequences if a society used this ethical system (which generally depends a lot on what society we’re talking about)? So you do need to think through consequences, like providing vegetarian meals for predators and healthcare and birth control for insects, before picking an ethical system.
I am arguing that, given that
1. non-human animals deserve moral consideration and s-risks are bad (I assume this),
we have reasons to believe that
2. we have some pro tanto reasons to include them in the process of value learning of an artificial superintelligence, instead of only including humans.
There are people (whose objections I address in the paper) who accept 1 but do not accept 2. 1 is not justified for the same reasons as 2: 2 is justified by the reasons I present in the paper, while 1 is justified by other arguments about animal ethics and the badness of suffering that are intentionally not present in the paper. I cite the places/papers where 1 is argued instead of arguing for it myself, which is standard practice in academic philosophy.
The people who believe 1 but not 2 do not merely have different feelings from mine; their objections to my view are (very likely) wrong, as I show when responding to those objections in the objections section.