Yes, but why expect unity? Clearly there is psychological variation amongst humans, and I should think it a vastly improbable coincidence that none of it has anything to do with real values.
Well, of course I don’t mean literal unity, but the examples that immediately jump to mind of different things about which people care (what Tim said) are not representative of their real values.
As for the thesis above, its motivation can be stated thus: If you can’t be wrong, you can never get better.
the examples that immediately jump to mind of different things about which people care (what Tim said) are not representative of their real values.
How do you know what their real values are? Even after everyone’s professed values get destroyed by the truth, it’s not at all clear to me that we end up in roughly the same place. Intellectuals like you or me might aspire to growing up to be a superintelligence, while others seem to care more about pleasure. By what standard are we right and they wrong? Configuration space is vast: however much humans might agree with each other on questions of value compared to an arbitrary mind (clustered as we are into a tiny dot of the space of all possible minds), we still disagree widely on all sorts of narrower questions (if you zoom in on the tiny dot, it becomes a vast globe, throughout which we are widely dispersed). And this applies on multiple scales: I might agree with you or Eliezer far more than I would with an arbitrary human (clustered as we are into a tiny dot of the space of human beliefs and values), but ask a still narrower question, and you’ll see disagreement again. I just don’t see how the granting of veridical knowledge is going to wipe away all this difference into triviality. Some might argue that while we can want all sorts of different things for ourselves, we might be able to agree on some meta-level principles about what we want to do: we could agree to have a diverse society. But this doesn’t seem likely to me either; that kind of type distinction doesn’t seem to be built into human values. What could possibly force that kind of convergence?
Okay, I’m writing this one down.
Even after everyone’s professed values get destroyed by the truth, it’s not at all clear to me that we end up in roughly the same place. Intellectuals like you or me might aspire to growing up to be a superintelligence, while others seem to care more about pleasure.
Your conclusion may be right, but the HedWeb isn’t strong evidence—as far as I recall, David Pearce holds a philosophically flawed belief called “psychological hedonism,” which says that pleasure and pain are all that motivate humans, and that therefore nothing else matters, or some such. So I would say that his moral system has not yet had to withstand a razing attempt from all the truth hordes that are out there roaming the Steppes of Fact.
If “the thesis above” is the unity of values, this is not an argument. (I agree with ZM.)
It’s an argument for its being possible that behavior isn’t representative of the actual values. That actual values are more unified than behaviors is a separate issue.
It seems to me that it’s an appeal to the good consequences of believing that you can be wrong.
Well, obviously. So I’m now curious about what you read in the discussion, such that you see this remark as worth making?
That the discussion was originally about whether the unity of values is true; that you moved from this to whether we should believe in it without clearly marking the change; that this is very surprising to me, since you seem elsewhere to favor epistemic over instrumental rationality.
That the discussion was originally about whether the unity of values is true; that you moved from this to whether we should believe in it without clearly marking the change
I’m uncertain as to how to parse this; a little redundancy, please! My best guess is that you are saying that I moved the discussion from the question of the fact of ethical unity of humankind, to the question of whether we should adopt a belief in the ethical unity of humankind.
Let’s review the structure of the argument. First, there is the psychological unity of humankind: inborn similarity of preferences. Second, there is behavioral diversity, with people apparently caring about very different things. I state that the ethical diversity is less than the currently observed behavioral diversity. Next, I anticipate a common objection from people who don’t trust in the possibility of being morally wrong; simplifying:
If a person likes watching TV, and spends much time watching TV, he must really care about TV, and saying that he’s wrong, and that watching TV is actually a mistake, is just meaningless.
To this I reply with “If you can’t be wrong, you can never get better.” This is not an endorsement to self-deceivingly “believe” that you can be wrong, but an argument for it being a mistake to believe that you can never be morally wrong, if it’s possible to get better.
My best guess is that you are saying that I moved the discussion from the question of the fact of ethical unity of humankind, to the question of whether we should adopt a belief in the ethical unity of humankind.
Correct.
I state that the ethical diversity is less than the currently observed behavioral diversity.
I agree, and agree that the argument form you paraphrase is fallacious.
To this I reply with “If you can’t be wrong, you can never get better.” This is not an endorsement to self-deceivingly “believe” that you can be wrong, but an argument for it being a mistake to believe that you can never be morally wrong, if it’s possible to get better.
Are you saying you were using modus tollens – you can get better (presumed to be accepted by all involved), therefore you can be wrong? This wasn’t clear, especially since you agreed that it’s an appeal to consequences.
Are you saying you were using modus tollens – you can get better (presumed to be accepted by all involved), therefore you can be wrong? This wasn’t clear, especially since you agreed that it’s an appeal to consequences.
Right. Since I consider epistemic rationality, like any other tool, an arrangement that brings about what I prefer, in itself or instrumentally, I didn’t see an “appeal to consequences” of a belief as sufficiently distinct from the desire to ensure the truth of that belief.
Human values are frequently in conflict with each other—which is the main explanation for all the fighting and wars in human history.
The explanation for this is pretty obvious: humans are close relatives of animals whose main role in life has typically been ensuring the survival and reproduction of their genes.
Unfortunately, everyone behaves as though they want to maximise the representation of their own genome—and such values conflict with the values of practically every other human on the planet, except perhaps for a few close relatives—which explains cooperation within families.
This doesn’t seem particularly complicated to me. What exactly is the problem?