I don’t understand how you get from “policy debates should not appear one-sided” to “there should be no shortage of weak arguments ‘on your side’”. Especially if you replace the latter with “there should be no shortage of weak arguments of this sort on your side”—which is necessary for the challenge to be appropriate—since there could be correlations between a person’s political position and which sorts of fallacies are most likely to infect their thinking.
In particular, I predict that WAITW use is correlated with explicit endorsement of sanctity-based rather than harm-based moral values, and we’ve recently been talking about how that might differ between political groups.
I think this is because of the way you’re deconstructing the arguments. In each case, the features you identify as supposedly making us dislike the archetypal cases are harm-based features. Someone who believed in sanctity instead might identify the category as a value in itself. Attempting to ascribe utilitarian-style values to them (values whose local inapplicability they supposedly miss) risks ignoring what they actually value.
If people genuinely do think murder is wrong simply because it is murder, rather than because it causes harm, then this is not a bad argument.
Absent any reason to do so, disliking all murders simply because they are murders makes no more sense than disliking all elephants simply because they are elephants. You can choose to do so without being logically inconsistent, but it seems like a weird choice to make for no reason. Did you just arbitrarily choose “murder” as a category worthy of dislike, whether or not it causes harm?
At the risk of committing the genetic fallacy, I would be very surprised if their choice of murder as a thing they dislike for its own sake (rather than, say, elephants) had nothing to do with murder being harmful. And although right now I am simply asserting this rather than arguing it, I think it’s likely that even if they think they have a deductive proof for why murder is wrong regardless of harm, they started by unconsciously making the WAITW and then rationalizing it.
But I agree that if they do think they have this deductive proof, screaming “Worst argument in the world!” at them is useless and counterproductive; at that point you address the proof.
Absent any reason to do so, disliking instances of harm simply because they are instances of harm makes no more sense than disliking all elephants simply because they are elephants.
I don’t want to assume any metaethical baggage here, but I’m not sure why “because it is an instance of harm” is an acceptable answer but “because it is an instance of theft” is not.
Keeping your principle of ignoring meta-ethical baggage, dis-valuing harm only requires one first principle, whereas dis-valuing murder, theft, elephants, etc. requires an independent (and apparently arbitrary) decision for each concept. Further, it’s very suspicious that this supposedly arbitrary decision almost always picks out actions that are often harmful, when there are so very many things one could arbitrarily decide to dislike.
This sounds like the debate about ethical pluralism—maybe values are sufficiently complex that no single principle can capture them. If ethical pluralism is wrong, then they can’t make use of this argument. But then they have a very major problem with their metaethics, independent of the WAitW. And what is more, once they solve that problem—settling on a single basis for their ethics—they can avoid your accusation by saying that avoiding theft is actually the sole criterion, and that they’re not trying to sneak in irrelevant connotations. After all, if theft were all that mattered, why would you try to sneak in connotations about harm?
Also, I think you’re sneaking in connotations when you use “arbitrary”. Yes, such a person would argue that our aversion to theft isn’t based on any of our other values; but your utilitarian would probably claim the same about their aversion to harm. This doesn’t seem a harmful (pun not intended) case of arbitrariness.
Contrariwise, they might find it very suspicious that your supposedly arbitrary decision as to what is harmful so often picks out actions that constitute theft to a libertarian (e.g. murder, slavery, breach of contract, pollution, trespass, wrongful dismissal...) when there are so very many things one could arbitrarily decide to dislike.
This line of argument seems to stray from the principle that you can’t unwind yourself into an ideal philosopher of perfect emptiness. You’re running on hardware that is physically, through very real principles that apply to everything in the universe, going to react aversely to certain stimuli, to which we could assign the category label “harm”. This is commonly divided into “pain”, “boredom”, etc.
It is much more unlikely (and much more difficult to truly explain) that a person would, based on such hardware, somehow end up with the terminal value that some abstract, extremely Solomonoff-complex interpretation of conjoined mental and physical behaviors is bad—in contrast with a reflective negative valuation of harm-potentials both in self and in others (the “in others” part being reflected as “harm to self when harm comes to other members of the tribe”).
Then again, I feel like I’m diving in too deep here. My instinct is to profess and worship my ignorance of this topic.
Why should a preference have to “make sense”?
A particular preference that does not make sense at all is empirically unlikely to exist due to the natural selection process. We should thus, if for whatever reason we prefer correspondence between map and territory, assign reasonable probability that most preferences will “make sense”.
As for why it should, well… I’m not able to conceive of an acceptable answer to that without first tabooing “should” and applying generous amounts of reductionism, recursively, within sub-meanings and subspaces of semantic space.
Ok, so replace “abortion is murder” with “abortion harms the fetus”.
“Guns are weapons!”
“Burning fossil fuels is environmentally irresponsible!”
EDIT: Are these not Worst Arguments in the World? I have heard arguments for gun control that don’t specify why being in the class “weapons” makes guns subject to additional restrictions. I have also seen the environment or nature treated as a sanctified moral value.
An important part of the WAitW is that it attaches to an atypical member of a category the rejection we feel toward a typical member of that category. Both abortion and the death penalty are atypical members of the “murder” category (if they belong to it at all), and associating them with “murder” is an attempt to associate them with the connotations of a “typical” murder.
Guns are quite typical weapons. They are not borderline weapons like a kitchen knife or a hammer, nor are they military-grade weapons. Saying “guns are weapons!” doesn’t try to associate guns with something different; it doesn’t add much to the debate, but it doesn’t carry the same attempt to sneak in connotations that “abortion is murder!” does.
“Burning fossil fuels is environmentally irresponsible!” is also quite different. “Environmentally irresponsible” is more a description, a feature that “burning fossil fuels” has (or doesn’t have, but that’s another issue), than a broad category to which we are trying to tie “burning fossil fuels”.
Not every “A is B” satisfies the definition of “the worst argument in the world” (truly a horrible name for a fallacy, which should be replaced by something shorter, more descriptive and less exaggerated). “A is B, therefore A is C” qualifies as the discussed fallacy if:
1. A belongs to the category B as far as the technical/denotational meaning of B is considered,
2. using the technical meaning, not all B are C,
3. most (or most typical / available) members of B are C, and therefore B has C in its connotations, and
4. all C-relevant information about A is known, screening off potential C-relevant information about A coming from its membership in B.
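To make the checklist concrete, here is a minimal sketch in Python; the function and parameter names are hypothetical labels for the four conditions above, not anything from the original post. Each boolean is a judgment the reader supplies, and the pattern counts as the fallacy only when all four hold at once:

# Minimal sketch: each parameter says whether the corresponding condition holds.
def qualifies_as_waitw(a_is_denotationally_b: bool,   # condition 1
                       not_all_b_are_c: bool,          # condition 2
                       typical_b_is_c: bool,           # condition 3
                       c_facts_about_a_known: bool     # condition 4
                       ) -> bool:
    """True if "A is B, therefore A is C" fits all four conditions."""
    return (a_is_denotationally_b
            and not_all_b_are_c
            and typical_b_is_c
            and c_facts_about_a_known)

# All four conditions hold: the move fits the described fallacy.
print(qualifies_as_waitw(True, True, True, True))   # True

# Condition 4 fails (membership in B still tells the listener something
# C-relevant about A), so this particular fallacy is not committed.
print(qualifies_as_waitw(True, True, True, False))  # False

The two evaluations that follow (fossil fuels and guns) are just this kind of condition-by-condition check done in prose.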
In “burning fossil fuels (A) is environmentally irresponsible (B) [and therefore is bad] (C)”:
1. holds,
2. is subjective, but for many audiences fails (i.e. “irresponsible” means “likely to cause bad outcomes”, which makes the whole category tautologically bad),
3. is problematic, since the denotational and connotational meanings of B aren’t different (badness-wise),
4. fails, since presumably the listener doesn’t know about A’s environmental irresponsibility.
The argument may be fallacious if the listener doesn’t care about the environment but is tricked into accepting the badness of A based on connotations of “irresponsible”, but that isn’t exactly the fallacy described in the OP.
In “guns (A) are weapons (B) [and therefore should be banned] (C)”:
1. holds,
2. holds if the listener agrees that all weapons should be banned, else fails,
3. depends on the listener’s idea of a typical weapon: if it is a hydrogen bomb, then (3) holds; if it is a knife, (3) fails; if it is a gun, we are building a circular argument,
4. probably holds.
So this argument may qualify, but it is so obviously tautological that I have trouble imagining someone actually using it.
“firearms with magazines that hold more than 10 rounds are assault weapons (and therefore should be banned)” seems to be more along the lines of arguments I’ve actually seen. I probably oversimplified in my head when I wrote the first post. Of course, having a Federal statute that happened to define firearms in that way might have directly led to such arguments after the ban expired, but it’s probably appropriate to label some laws as having “The Worst Legal Categorization in the World” as well. What if banning firearms with magazines holding more than 9 rounds would have saved even one extra life?
Shouldn’t there never be a shortage of weak arguments for anything? Strong arguments can always be weakened.
/
Isn’t there enough chance of finding a weak argument to at least make it worth trying? You never know, you might find a weak argument somewhere.
Obviously one can find any number of weak arguments for anything, but surely the point here was to find weak arguments that have a particular sort of problem but are otherwise at least reasonably credible-sounding.
/
I’m having trouble understanding what part of what I wrote looked like “there’s no chance of finding a suitable argument, so it’s not worth trying”. For the avoidance of doubt, that wasn’t at all what I meant.
Would any of the (at least four) people who have upvoted Eliezer’s comment but not my response—or Eliezer, if he happens still to be reading—like to explain to me in what way Eliezer is right and I’m wrong here? Thanks!
There’s not necessarily even one of those, let alone four. Four people could have upvoted both of you and then four other people could have downvoted just you.
D’oh! Of course you’re right. I should have said: either upvoted Eliezer’s comment but not mine, or downvoted mine but not Eliezer’s.
Generally speaking, there are fewer upvotes later in a thread, since fewer people read that far. If the children of your comment have more karma than your comment, it’s reasonable to assume that people saw both comments and chose to upvote theirs; but if a parent of your comment has more karma, you can’t really draw any inference from that at all.
Except that when I made my comment, Eliezer’s was at zero. Er, it might have been +1, but it certainly wasn’t +4.