Most vertebrates have at least some moral worth; even most of the ones that lack self-concepts sufficiently strong to have any real preference to exist (beyond any instinctive non-conceptualized self-preservation) nevertheless are capable of experiencing something enough like suffering that they impinge upon moral calculations at least a little bit. (85%)
Objection: Why is the line drawn between vertebrates and invertebrates? True, the nature of spinal cords means vertebrates are generally capable of higher mental processing and therefore have a greater ability to formulate suffering, but you’re counting “ones that lack self-concepts sufficiently strong to have any real preference to exist”. Are you saying the presence of a notochord gives a fish higher moral worth than a crab?
That’s a good point; there are almost certainly invertebrate species on the same side of the line. Squid, for example.
“At least a little bit” is too unclear. Even tiny changes in the positions of atoms are probably morally relevant (and certainly, some of them), albeit to a very small degree.
Even tiny changes in the positions of atoms are probably morally relevant (and certainly, some of them), albeit to a very small degree.
How so? You mean to the extent that any tiny change has some remote chance of affecting something that someone cares about, or anything more direct than that?
Change, to the extent the notion makes sense (in the map, not the territory), already comes with all of its consequences (and causes).
Given any mapping Worlds->Utilities, you get a partition of Worlds into equivalence classes of equal utility. Presumably, exactly equal utility is not easy to arrange, so these classes will be small in some sense. But whatever the case, these classes have boundaries, so that an arbitrarily small change in one direction or the other (from a point on a boundary) determines a higher or lower resulting utility. Just make it so that one atom is at a different location.
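(A minimal formalization of the boundary argument above, assuming worlds can be treated as points of a metric space on which the utility function is defined; the symbols $\mathcal{W}$, $U$, $d$, $w$, and $w'$ are illustrative and not from the comment.)
$$U \colon \mathcal{W} \to \mathbb{R}, \qquad [w] = \{\, w' \in \mathcal{W} \mid U(w') = U(w) \,\}.$$
If $w$ lies on the boundary of its class $[w]$, then for every $\varepsilon > 0$ there is some $w'$ with $d(w, w') < \varepsilon$ (for example, a world in which a single atom is displaced) such that $U(w') \neq U(w)$; that is, an arbitrarily small change carries a nonzero, if tiny, utility difference.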
Okay. I thought that was pretty clearly not what I was talking about; I was claiming that most vertebrate animals have minds structured such that they are capable of experience that matters to moral considerations, in the same way that human suffering matters but the program “print ‘I am experiencing pain’” doesn’t.
(That’s assuming that moral questions have correct answers, and are about something other than the mind of the person asking the question. I’m not too confident about that one way or the other, but my original post should be taken as conditional on that being true, because “My subjective emotivist intuition says that x is valuable, 85%” would not be an interesting claim.)
Okay. I thought that was pretty clearly not what I was talking about; I was claiming that most vertebrate animals have minds structured such that they are capable of experience that matters to moral considerations, in the same way that human suffering matters but the program “print ‘I am experiencing pain’” doesn’t.
If your claim is about the moral worth of animals, then you must accept any argument about the validity of that claim, and not demand a particular kind of proof (in this case, one involving “experience of pain”, which is only one way to see the territory that simultaneously consists of atoms).
If your claim is about “experience of pain”, then talking about the resulting moral worth is either a detail of the narrative that adds nothing to the argument (i.e. a property of “experience of pain” that naturally comes to mind and is nice to mention in context), or a lever that is dangerously positioned to be used for rationalizing some conclusion about that claim (e.g. moral worth is important, which by association suggests that “experience of pain” is real).
Now, the claim that pain experienced by animals is at least as morally relevant as a speck in the eye would be one way to rectify things, as that would put a lower bar on the amount of moral worth in question, so that presumably only experience of pain or similar reasons would qualify as arguments about said moral worth.
I don’t really understand this comment, and I don’t think you were understanding me. Experience of pain in particular is not what I was talking about, nor was I assuming that it is inextricably linked to moral worth. “print ‘I am experiencing pain’” was only an example of something that is clearly not a mind with morally-valuable preferences or experience; I used that as a stand-in for more complicated programs/entities that might engage people’s moral intuitions but which, under reflection, will almost certainly not turn out to have any of their own moral worth (robot dogs, fictional characters, teddy bears, one-day-old human embryos, etc.), as distinguished from more complicated programs that may or may not engage people’s moral intuitions but do have moral worth (biological human minds, human uploads, some subset of possible artificial minds, etc.).
If your claim is about the moral worth of animals, then you must accept any argument about the validity of that claim, and not demand a particular kind of proof
My claim is about the moral worth of animals, and I will accept any argument about the validity of that claim.
Now, the claim that pain experienced by animals is at least as morally relevant as a speck in the eye would be one way to rectify things, as that would put a lower bar on the amount of moral worth in question, so that presumably only experience of pain or similar reasons would qualify as arguments about said moral worth.
I would accept that. I definitely think that a world in which a random person gets a dust speck in their eye is better than a world in which a random mammal gets tortured to death (all other things being equal, e.g. it’s not part of any useful medical experiment). But I suspect I may have to set the bar a bit higher than that (a random person getting slapped in the face, maybe) in order for it to be disagreeable enough for the Irrationality Game while still being something I actually agree with.