Absent any reason to do so, disliking all murders simply because they are murders makes no more sense than disliking all elephants simply because they are elephants. You can choose to do so without being logically inconsistent, but it seems like a weird choice to make for no reason. Did you just arbitrarily choose “murder” as a category worthy of dislike, whether or not it causes harm?
At the risk of committing the genetic fallacy, I would be very surprised if their choice of murder as a thing they dislike for its own sake (rather than, say, elephants) had nothing to do with murder being harmful. And although right now I am simply asserting this rather than arguing it, I think it’s likely that even if they think they have a deductive proof for why murder is wrong regardless of harm, they started by unconsciously making the WAITW and then rationalizing it.
But I agree that if they do think they have this deductive proof, screaming “Worst argument in the world!” at them is useless and counterproductive; at that point you address the proof.
Absent any reason to do so, disliking instances of harm simply because they are instances of harm makes no more sense than disliking all elephants simply because they are elephants.
I don’t want to assume any metaethical baggage here, but I’m not sure why “because it is an instance of harm” is an acceptable answer but “because it is an instance of theft” is not.
Keeping your principle of ignoring metaethical baggage, dis-valuing harm only requires one first principle, whereas dis-valuing murder, theft, elephants, etc. requires an independent (and apparently arbitrary) decision at each concept. Further, it’s very suspicious that this supposedly arbitrary decision almost always picks out actions that are often harmful when there are so very many things one could arbitrarily decide to dislike.
This sounds like the debate about ethical pluralism: maybe values are sufficiently complex that no single principle can capture them. If ethical pluralism is wrong, then they can’t make use of this argument. But then they have a very major problem with their metaethics, independent of the WAITW. What is more, once they solve that problem (finding a single basis for their ethics), they can avoid your accusation by saying that avoiding theft actually is the sole criterion, and that they’re not trying to sneak in irrelevant connotations. After all, if theft were all that mattered, why would you try to sneak in connotations about harm?
Also, I think you’re sneaking in connotations when you use “arbitrary”. Yes, such a person would argue that our aversion to theft isn’t based on any of our other values; but your utilitarian would probably claim the same about their aversion to harm. This doesn’t seem like a harmful (pun not intended) case of arbitrariness.
Contrariwise, they might find it very suspicious that your supposedly arbitrary decision as to what is harmful so often picks out actions that constitute theft to a libertarian (e.g. murder, slavery, breach of contract, pollution, trespass, wrongful dismissal...) when there are so very many things one could arbitrarily decide to dislike.
This line of argument seems to stray from the principle that you can’t unwind yourself into an ideal philosopher of perfect emptiness. You’re running on hardware that will, through perfectly ordinary physical principles that apply to everything in the universe, react aversely to certain stimuli to which we could assign the category label “harm”. This is commonly divided into “pain”, “boredom”, etc.
It is much less likely (and much harder to truly explain) that a person would, on such hardware, somehow end up with the terminal value that some abstract, extremely Solomonoff-complex interpretation of conjoined mental and physical behaviors is bad, as opposed to a reflective negative valuation of harm-potentials both in oneself and in others (the “in others” part being reflected as “harm to self when other members of the tribe are harmed”).
Then again, I feel like I’m diving in too deep here. My instinct is to profess and worship my ignorance of this topic.
Why should a preference have to “make sense”?
A preference that does not make sense at all is empirically unlikely to exist, given the process of natural selection. We should thus, if for whatever reason we prefer correspondence between map and territory, assign a reasonable probability that most preferences will “make sense”.
As for why it should, well… I’m not able to conceive of an acceptable answer to that without first tabooing “should” and applying generous amounts of reductionism, recursively, to sub-meanings and subspaces within semantic space.
Ok, so replace “abortion is murder” with “abortion harms the fetus”.