The existence of moral disagreement is not an argument against CEV, unless all disagreeing parties know everything there is to know about their own desires and are perfect Bayesians. People can be mistaken about what they really want, or about what the facts prescribe (given their values).
I linked to this above, but I don’t know if you’ve read it. Essentially, you’re explaining moral disagreement by positing massively improbable mutations, but it’s far more likely to be a combination of poor introspection and non-Bayesian updating.
Essentially, you’re explaining moral disagreement by positing massively improbable mutations [...]
Um, different organisms of the same species typically have conflicting interests due to standard genetic diversity—not “massively improbable mutations”.
Typically, organism A acts as though it wants to populate the world with its offspring, and organism B acts as though it wants to populate the world with its offspring, and these goals often conflict—because A and B have non-identical genomes. Clearly, no “massively improbable mutations” are required in this explanation. This is pretty much biology 101.
Typically, organism A acts as though it wants to populate the world with its offspring, and organism B acts as though it wants to populate the world with its offspring, and these goals often conflict—because A and B have non-identical genomes.
It’s very hard for A and B to know how much their genomes differ, because they can only observe each other’s phenotypes, and they can’t invest too much time in that either. So they will mostly compete even if their genomes happen to be identical.
The kin recognition that you mention may be tricky, but kin selection is much more widespread, because there are heuristics that allow organisms to favour their kin without examining them closely, such as: “be nice to your nestmates”.
Simple limited dispersal often results in organisms being surrounded by their close kin—and this is a pretty common state of affairs for plants and fungi.
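For what it’s worth, here is a toy simulation of the limited-dispersal point (just a sketch of mine; the ring size, dispersal radius, and generation count are arbitrary): when offspring settle near their parents, neighbouring sites quickly come to share recent lineages, with no kin recognition needed.

```python
# Illustrative toy model (my own sketch, arbitrary parameters): each generation,
# every site on a ring is re-colonised by the offspring of a parent drawn from
# within a small dispersal radius. With limited dispersal, neighbouring sites
# soon tend to share a recent lineage -- organisms end up surrounded by close
# kin -- without any kin recognition being involved.
import random

SITES = 200        # number of sites on the ring
RADIUS = 1         # offspring settle within this many sites of the parent
GENERATIONS = 100

# Start with every individual carrying its own unique lineage label.
population = list(range(SITES))

for _ in range(GENERATIONS):
    # The new occupant of each site is the offspring of a parent from a nearby site.
    population = [
        population[(site + random.randint(-RADIUS, RADIUS)) % SITES]
        for site in range(SITES)
    ]

# Fraction of individuals whose immediate neighbour belongs to the same lineage:
same_lineage = sum(
    population[i] == population[(i + 1) % SITES] for i in range(SITES)
) / SITES
print(f"neighbours sharing a lineage after {GENERATIONS} generations: {same_lineage:.2f}")
```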
Well, for humans, we’ve evolved desires that work interpersonally (fairness, desires for others’ happiness, etc.). I think that an AI which had our values written in would have no problem figuring out what’s best for us. It would say ‘well, there is this complex set of values that sums up to everyone being treated well (or something), and so each party involved should be treated well.’
You’re right though, I hadn’t formed a clear idea of how this bit worked. Maybe this helps?
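As a toy illustration of the kind of aggregation I have in mind (a sketch only; the parties, candidate actions, welfare numbers, and the particular scoring rule are all invented for the example): an agent with everyone’s ‘treated well’ values written in scores each candidate action by how every affected party fares, and picks whichever does best for the group rather than for any single party.

```python
# A toy sketch (my own illustration; the parties, actions, and welfare numbers
# are invented for the example): an agent with everyone's values "written in"
# scores each candidate action by how well every affected party is treated,
# and picks the action that is best for the group rather than for one side.
welfare = {
    # welfare[action][party]: how well each party fares under that action
    "favour_A":   {"A": 10, "B": 1},
    "favour_B":   {"A": 1,  "B": 10},
    "fair_split": {"A": 7,  "B": 7},
}

def group_score(outcomes):
    # One stand-in for "everyone being treated well": reward total welfare,
    # but also care about how the worst-off party does.
    return sum(outcomes.values()) + min(outcomes.values())

best_action = max(welfare, key=lambda action: group_score(welfare[action]))
print(best_action)  # -> fair_split
```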
Not similar enough to prevent massive conflicts—historically.
Basically, small differences in optimisation targets can result in large conflicts.
And even more simply, if everyone has exactly the same optimization target “benefit myself at the expense of others”, then there’s a big conflict.
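A minimal sketch of that last point (illustrative only; the resource size and agent names are made up): even when two agents share exactly the same optimisation target, “maximise my own share”, their preferred outcomes are incompatible, because the target is indexical: each agent plugs itself in.

```python
# Minimal illustration (made-up resource size and names): both agents share
# exactly the same optimisation target -- "maximise my own share" -- yet their
# preferred outcomes are incompatible, because each agent plugs *itself* in.
RESOURCE = 100  # a fixed amount to divide between A and B

def preferred_split(me):
    # The shared, selfish target: claim the whole resource for `me`.
    other = "B" if me == "A" else "A"
    return {me: RESOURCE, other: 0}

print(preferred_split("A"))  # {'A': 100, 'B': 0}
print(preferred_split("B"))  # {'B': 100, 'A': 0}
# Identical target, incompatible preferred outcomes: the conflict is over who
# gets what, not over what the target says.
```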
Oops.
Yup, I missed the kin-selection point there.