Okay. I thought that was pretty clearly not what I was talking about; I was claiming that most vertebrate animals have minds structured such that they are capable of experience that matters morally, in the same way that human suffering matters but the program “print ‘I am experiencing pain’” doesn’t.
(That’s assuming that moral questions have correct answers, and are about something other than the mind of the person asking the question. I’m not too confident about that one way or the other, but my original post should be taken as conditional on that being true, because “My subjective emotivist intuition says that x is valuable, 85%” would not be an interesting claim.)
Okay. I thought that was pretty clearly not what I was talking about; I was claiming that most vertebrate animals have minds structured such that they are capable of experience that matters morally, in the same way that human suffering matters but the program “print ‘I am experiencing pain’” doesn’t.
If your claim is about the moral worth of animals, then you must accept any argument about the validity of that claim, and not demand a particular kind of proof (in this case, one involving “experience of pain”, which is only one way of seeing a territory that simultaneously consists of atoms).
If your claim is about “experience of pain”, then talking about the resulting moral worth is either a detail of the narrative that adds nothing to the argument (i.e. a property of “experience of pain” that naturally comes to mind and is nice to mention in context), or a lever dangerously positioned to be used for rationalizing some conclusion about that claim (e.g. moral worth is important, which by association suggests that “experience of pain” is real).
Now, claiming that pain experienced by animals is at least as morally relevant as a speck in the eye would be one way to rectify things, as that would set a lower bound on the amount of moral worth in question, so that presumably only experience of pain or similar reasons would qualify as arguments about said moral worth.
I don’t really understand this comment, and I don’t think you understood me. Experience of pain in particular is not what I was talking about, nor was I assuming that it is inextricably linked to moral worth. “print ‘I am experiencing pain’” was only an example of something that is clearly not a mind with morally valuable preferences or experiences. I used it as a stand-in for more complicated programs/entities that might engage people’s moral intuitions but which, on reflection, will almost certainly turn out to have no moral worth of their own (robot dogs, fictional characters, teddy bears, one-day-old human embryos, etc.), as distinguished from more complicated programs that may or may not engage people’s moral intuitions but do have moral worth (biological human minds, human uploads, some subset of possible artificial minds, etc.).
If your claim is about the moral worth of animals, then you must accept any argument about the validity of that claim, and not demand a particular kind of proof
My claim is about the moral worth of animals, and I will accept any argument about the validity of that claim.
Now, claiming that pain experienced by animals is at least as morally relevant as a speck in the eye would be one way to rectify things, as that would set a lower bound on the amount of moral worth in question, so that presumably only experience of pain or similar reasons would qualify as arguments about said moral worth.
I would accept that. I definitely think that a world in which a random person gets a dust speck in their eye is better than a world in which a random mammal gets tortured to death (all other things being equal, e.g. it’s not part of any useful medical experiment). But I suspect I may have to set the bar a bit higher than that (a random person getting slapped in the face, maybe) in order for it to be disagreeable enough for the Irrationality Game while still being something I actually agree with.