Aumann’s agreement theorem seems to imply that individual extrapolated volition assignments must agree on statements of fact: the setup implies that they’re simulated as perfect reasoners and share a knowledge pool, and the extrapolation process provides for an unbounded number of Bayesian updates. So we can expect extrapolated volition to cohere exactly to the extent that it’s based on common fundamental goals: not immediate desires and not possibly-fallible philosophical results, but the low-level affective assignments that lead us to think of those higher-level results as desirable or undesirable.
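The dynamic behind that claim can be made concrete. Below is a minimal sketch, in Python, of the iterated announce-and-update process that Geanakoplos and Polemarchakis showed must terminate in agreement: two Bayesian agents with a common prior but different private information repeatedly announce their posteriors for an event and condition on each other's announcements. The state space, prior, event, and partitions are toy values invented for illustration, not anything from the CEV proposal itself.

```python
from fractions import Fraction

# Toy finite state space with a common (uniform) prior.
STATES = range(6)
PRIOR = {s: Fraction(1, 6) for s in STATES}

# The factual proposition the agents initially disagree about.
EVENT = {1, 2}

def prob(event, given):
    """Prior probability of `event` conditional on the set `given` of states."""
    total = sum(PRIOR[s] for s in given)
    hit = sum(PRIOR[s] for s in given if s in event)
    return hit / total

def cell(partition, state):
    """The block of `partition` containing `state`."""
    return next(b for b in partition if state in b)

def refine(partition, announcement):
    """Split each block by announcement value: states that would have
    produced different public announcements can now be told apart."""
    refined = []
    for block in partition:
        groups = {}
        for s in block:
            groups.setdefault(announcement[s], set()).add(s)
        refined.extend(groups.values())
    return refined

# Each agent privately observes only which block of their partition obtains.
p1 = [{0, 1}, {2, 3}, {4, 5}]
p2 = [{0, 1, 2}, {3, 4, 5}]
true_state = 0

q1 = q2 = None
while True:
    # The announcement *rules* are common knowledge, so each agent can
    # compute what the other would have said in every state and update.
    a1 = {s: prob(EVENT, cell(p1, s)) for s in STATES}
    p2 = refine(p2, a1)
    a2 = {s: prob(EVENT, cell(p2, s)) for s in STATES}
    p1 = refine(p1, a2)
    if (a1[true_state], a2[true_state]) == (q1, q2):
        break  # posteriors stable: the agents have reached agreement
    q1, q2 = a1[true_state], a2[true_state]
    print(f"agent 1 announces {q1}, agent 2 announces {q2}")

print(f"agreement: {q1} == {q2} is {q1 == q2}")
```

With these toy numbers the agents open at 1/2 versus 2/3 and agree on 1/2 one exchange later; the point relevant to extrapolated volition is only the qualitative one, that factual disagreement can't survive an unbounded exchange of Bayesian updates from shared information.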
To what extent do common fundamental goals drive our moral reasoning? I don’t know, but individual differences do exist (the existence of masochistic people should prove that), and if they’re large enough then CEV may end up looking incomplete or unpleasantly compromise-driven.
> I don’t know, but individual differences do exist (the existence of masochistic people should prove that)
But is that relevant to the question that CEV tries to answer? As far as I know, most masochists don’t also believe that everybody should be masochistic.
Even if individual differences in fundamental goals aren’t extended to other people as imperatives, they still limit how well a coherent extrapolated volition scheme can satisfy individual preferences.
Depending on the size of those differences, this may or may not be a big deal. And we’re very likely to have fundamental social goals that do include external imperatives, although masochism isn’t one.
That would make them thoroughly non-human in psychology. It’s a possibly useful take on CEV, but I’m not sure it’s a standard one.
Mea culpa; I seem to have overgeneralized the extrapolation process.
But unless all its flaws are context-independent and uniformly distributed across humanity, I suspect they’d make human volition less likely to cohere, not more.