That the discussion was originally about whether the unity of values is true; that you moved from this to whether we should believe in it without clearly marking the change; that this is very surprising to me, since you seem elsewhere to favor epistemic over instrumental rationality.
That the discussion was originally about whether the unity of values is true; that you moved from this to whether we should believe in it without clearly marking the change
I’m uncertain how to parse this; a little redundancy, please! My best guess is that you are saying that I moved the discussion from the question of the fact of ethical unity of humankind, to the question of whether we should adopt a belief in the ethical unity of humankind.
Let’s review the structure of the argument. First, there is the psychological unity of humankind, an inborn similarity of preferences. Second, there is behavioral diversity, with people apparently caring about very different things. I state that the ethical diversity is less than the currently observed behavioral diversity. Next, I anticipate the common conviction that one cannot be morally wrong; simplifying:
If a person likes watching TV and spends much time watching TV, he must really care about TV, and saying that he’s wrong and that watching TV is actually a mistake is just meaningless.
To this I reply with “If you can’t be wrong, you can never get better.” This is not an endorsement of self-deceptively “believing” that you can be wrong, but an argument that it is a mistake to believe that you can never be morally wrong, if it’s possible to get better.
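In propositional shorthand (with W standing for “one can be morally wrong” and B for “one can get morally better”, labels used here only as shorthand), the inference runs:

\[
\begin{aligned}
&\neg W \rightarrow \neg B &&\text{(if you can't be wrong, you can never get better)}\\
&B &&\text{(presumed to be accepted by all involved: it is possible to get better)}\\
&\therefore\ W &&\text{(by modus tollens: it is possible to be morally wrong)}
\end{aligned}
\]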
My best guess is that you are saying that I moved the discussion from the question of the fact of ethical unity of humankind, to the question of whether we should adopt a belief in the ethical unity of humankind.
Correct.
I state that the ethical diversity is less than the currently observed behavioral diversity.
I agree, and agree that the argument form you paraphrase is fallacious.
To this I reply with “If you can’t be wrong, you can never get better.” This is not an endorsement of self-deceptively “believing” that you can be wrong, but an argument that it is a mistake to believe that you can never be morally wrong, if it’s possible to get better.
Are you saying you were using modus tollens – you can get better (presumed to be accepted by all involved), therefore you can be wrong? This wasn’t clear, especially since you agreed that it’s an appeal to consequences.
Are you saying you were using modus tollens – you can get better (presumed to be accepted by all involved), therefore you can be wrong? This wasn’t clear, especially since you agreed that it’s an appeal to consequences.
Right. Since I consider epistemic rationality, like any other tool, to be an arrangement that brings about what I prefer, in itself or instrumentally, I didn’t see an “appeal to consequences” of a belief as sufficiently distinct from the desire to ensure the truth of that belief.
It seems to me that it’s an appeal to the good consequences of believing that you can be wrong.
Well, obviously. So I’m now curious: what do you read in the discussion that makes this remark seem worth making?