I have sometimes seen arguments that fit this pattern, including on Less Wrong —
Your disagreement with me on a point of meta-level theory or ideology implies that you intend harm to me personally, or can’t be trusted not to harm me if the whim strikes you to do so.
It seems to me that something is deficient or abusive about many arguments of this form in the general case, but I’m not sure that it’s always wrong. What are some examples of legitimate arguments of this form?
(A point of clarification: The “meta-level theory or ideology” part is important. That should match propositions such as “consequentialism is true and deontology is false” or “natural-rights theory doesn’t usefully explain why we shouldn’t hurt others”. It should not match propositions such as “other people don’t really suffer when I punch them in the head” or “injury to wiggins has no moral significance”.)
One mistake is overestimating the probability that the other person will act on their ideology.
People compartmentalize. For example, in theory, religious people should kill me for being an unbeliever, but in real life I don’t expect this from my neighbors. They will find an excuse not to act according to the logical consequences of their faith; and most likely they will not even realize they did this.
(And it’s probably safest if I stop trying to teach them to decompartmentalize. Ethics first, effectiveness can wait. I don’t really need semi-rational Bible maximizers in my universe.)
I don’t think the problem with such arguments is so much that they are wrong on a factual basis; rather, they prevent the discussion of some important ideas.
A feminist can argue that ze can measure how biased people are with an implicit bias test, and that the argument you are making is going to make the average reader more biased. Ze might be completely right, but that doesn’t mean your argument is wrong on a factual level.
Once you move to the political consideration that certain things are not allowed to be said because they support harmful memes, you are in danger of getting mind-killed and being left with a worldview that doesn’t allow you to make good predictions about reality.