I’m sorry, but I believe you keep misinterpreting my replies, perhaps due to a failure to understand the dialectical structure of this little exchange. So let me try to make that structure clear. Eliezer first claimed that comments of a certain form would be automatically downvoted here at LW, and that my original comment was downvoted because it had that form. I offered in reply a hypothetical situation that involved overt discrimination against certain groups of people, and argued that a comment analogous to mine in reaction to that situation would not and should not be automatically downvoted. You then accused me of morally equating discrimination against such groups of people with the use of non-human animals for human consumption. I then replied in turn that the function of that counterfactual situation had nothing to do with arguing for why meat-eating is wrong; it was instead meant to provide a counterexample to Eliezer’s generalization. Finally, in your most recent comment, you defend what you said in your previous post, stressing that what you said was given as the explanation that Eliezer did not feel inclined to offer. But this is completely beside the point. The explanation you gave took as evidence that I was equating meat consumption with racism or sexism the fact that I had used a situation involving racism and sexism as a reply to Eliezer, when, as I have already said, my having used that situation constitutes no such evidence.
The “Wrong” in Less Wrong (LW) refers to objective (“mind-independent,” or intersubjectively verifiable) failure: error, inconsistency, illogical argumentation, irrational behavior. You are wrong if you take the wrong, or simply less effective, path to reach your goal. LW is trying to improve your map so that you’ll be able to find a better and more effective path. If you were perfectly aware of what you want, then relative to your epistemic state there would be a right thing to do. But LW does not claim to be right, merely less wrong. More importantly, LW does not tell you what you ought to want, but rather how to figure out what you actually want and how to obtain it. Therefore, to ask how a group that claims to be less wrong can be doing X, which is wrong, implies not only that they claimed to be right rather than just less wrong, but also that you know their objectives and that X is the wrong way to reach them. It would be less wrong to argue that doing X is wrong given certain objectives, but not that doing X is intrinsically wrong, wrong in and of itself. After all, people might simply want to do X, or want to reach Z, with X being the path leading to Z.
I might call wedrifid morally bankrupt for eating meat simply because he likes bacon. But since I expect him to be aware of the consequences of eating meat, I do not call him wrong. I am only proclaiming that, subjectively, from my point of view, he has poor taste. On the other hand, if I believed that he not only wanted to minimize suffering but also assigned more utility to reducing the deaths of beings than to culinary considerations, I would call him wrong for eating bacon just because it tastes good. I would call him wrong for failing to act on his true objectives. Yet I would not declare the activity of eating meat wrong in itself, only wrong under certain circumstances, as a means to an end relative to his volition.
Eliezer first claimed that comments of a certain form would be automatically downvoted here at LW,
First you claimed there was bias involved in eating animals. It is eminently reasonable to interpret your responses as being connected to that claim.
If I was in error, and you have no care at all about eating animals, and you merely wish to discuss Eliezer’s claim, you are still wrong. Claims of that form should get automatically downvoted into oblivion because 99% of the time they are bullshit. The cost of rationally engaging with 99 bullshit claims of that form is higher than the loss of missing out on 1 correct claim.
That you can easily generate past examples of the 1% where they are not bullshit is not an argument for not downvoting such claims—any more than you easily generating examples of past lottery winners is an argument to play the lottery.
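The cost-benefit claim behind “downvote them all” can be made explicit with a toy expected-value calculation. The specific numbers below are illustrative assumptions, not figures from the discussion; only the 99-to-1 ratio comes from the comment above.

```python
# Toy model of the "downvote all such claims" policy, with made-up numbers.
# Assumption: engaging rationally with any claim of this form costs a fixed
# amount of effort, and only 1 claim in 100 turns out to be correct.
cost_per_engagement = 1.0      # effort spent engaging with one claim (arbitrary units)
value_of_correct_claim = 50.0  # benefit of finding the rare correct claim (arbitrary units)
n_claims = 100
n_correct = 1

# Policy A: engage seriously with every claim.
payoff_engage = n_correct * value_of_correct_claim - n_claims * cost_per_engagement

# Policy B: downvote every such claim unread; no engagement cost,
# but the one correct claim is lost.
payoff_downvote = 0.0

# With these assumed numbers, blanket downvoting wins: 0 > 50 - 100.
print(payoff_engage, payoff_downvote)
```

The point of the sketch is only that the policy question turns on the ratio of engagement cost to the value of the rare correct claim; with a sufficiently valuable correct claim the inequality flips.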
First you claimed there was bias involved in eating animals. It is eminently reasonable to interpret your responses as being connected to that claim.
No, it isn’t at all reasonable. My comment was a direct reply to Eliezer’s and was explicitly addressed as an answer to his rationale for downvoting my comment. You need to keep the separate strands of the debate apart; otherwise you’ll misinterpret the structure of the different arguments.
If I was in error, and you have no care at all about eating animals
Wait, why does it follow from my saying you were in error that I had “no care at all about eating animals”?
That you can easily generate past examples of the 1% where they are not bullshit is not an argument for not downvoting such claims—any more than you easily generating examples of past lottery winners is an argument to play the lottery.
I didn’t claim in this context that my ability to generate such examples was an argument for not downvoting such claims. I claimed that my ability to generate such examples was an argument for concluding that Eliezer’s claim was false.
For my actual views on the appropriateness of downvoting such comments, see the other subthreads in this debate.
There is a prescriptive/descriptive divide here, and it does us no good to dance either side of it.
Descriptively:
Eliezer’s claim that comments of that form will get downvoted may be factually incorrect, given that it is possible, as you showed, to create comments of that form that express sentiments most people would upvote.
The qualifier “almost” placed in front of his comment would suffice to cover these situations.
Prescriptively:
I don’t think that any comment that fits that form should be “automatically [...] downvoted to oblivion”
is a prescriptive statement, and one which I attempted to explain was wrong.
We are tripping over this divide, and over several different meanings of “wrong.” Basically, we are drawing different distinctions from each other, and probably ascribing incorrect intentions to each other. Are there any other possible mismatches I haven’t noticed?
That is a brilliant explanation; it’s a shame that it is buried so deeply in a neg-filtered branch.
It’s possible to rescue a buried comment by quoting it in a new branch.