Not “anything goes, do what you will”, so much as “all X go, X is such that we want X before we do it, we value doing X while we are doing it, and we retrospectively approve of X after doing it”.
We humans have future-focused, hypothetical-focused, present-focused, and past-focused motivations that don’t always agree. CEV (and, to a great extent, moral rationality as a broader field) is about finding moral reasoning strategies and taking actions such that all those motivational systems will agree that we Did a Good Job.
That said, being able to demonstrate that the set of Coherently Extrapolated Volitions exists is not a construction showing how to find members of that set.
As with a number of previous responses, that is ambiguous between the individual and the collective. If I could get some utility by killing you, then should I kill you? If the “we” above is interpreted individually, I should; if it is interpreted collectively, I shouldn’t.
Yes, that is generally considered the core open problem of ethics, once you get past things like “how do we define value” and blah blah blah like that. How do I weigh one person’s utility against another person’s? Unless it’s been solved and nobody told me, that’s a Big Question.
So... what’s the point of CEV, then?
It’s a hell of a lot better than nothing, and it’s entirely possible to solve those individual-weighting problems, possibly by looking at the social graph and at how humans affect each other. There ought to be some treatment of the issue that yields a reasonable collective outcome without totally suppressing or overriding individual volitions.
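As a purely hypothetical toy sketch (not anything from the CEV literature), one way to make “weight people by the social graph and how humans affect each other” concrete is to weight each person’s utility by their normalized share of social ties before aggregating:

```python
# Toy sketch of social-graph-weighted utility aggregation.
# The graph, the weighting rule, and the utility values are all
# hypothetical illustrations, not a proposed solution.

def aggregate_utility(utilities, graph):
    """Weight each person's utility by their number of social ties
    (normalized over the whole graph), then sum.
    `graph` maps person -> set of neighbors; `utilities` maps person -> float."""
    total_ties = sum(len(neighbors) for neighbors in graph.values())
    return sum(
        utilities[person] * (len(graph[person]) / total_ties)
        for person in graph
    )

# "a" affects the most people, so "a"'s utility counts for more here.
graph = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}
utilities = {"a": 1.0, "b": -0.5, "c": 0.25}
score = aggregate_utility(utilities, graph)  # 1.0*0.5 + (-0.5)*0.25 + 0.25*0.25
```

This only illustrates that *some* principled weighting is expressible; which weighting actually yields a “reasonable collective outcome” is exactly the open question.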
Certainly, the first thing that comes to mind is that some human interactions are positive sum, some negative sum, some zero-sum. If you configure collective volition to always prefer mutually positive-sum outcomes over zero-sum over negative, then it’s possible to start looking for (or creating) situations where sinister choices don’t have to be made.
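The preference ordering above can be sketched as code (a toy model with hypothetical payoff values, not an implementation of collective volition): classify each joint outcome by the sign of its total payoff, then rank positive-sum above zero-sum above negative-sum.

```python
# Toy sketch: rank candidate joint outcomes so that positive-sum ones
# are preferred over zero-sum, and zero-sum over negative-sum.
# Each outcome is a tuple of per-person utilities for a hypothetical interaction.

def sum_class(payoffs):
    """Classify an outcome by the sign of its total payoff:
    2 = positive-sum, 1 = zero-sum, 0 = negative-sum."""
    total = sum(payoffs)
    if total > 0:
        return 2
    if total == 0:
        return 1
    return 0

def rank_outcomes(outcomes):
    """Sort outcomes best-first: positive-sum before zero-sum before
    negative-sum, breaking ties by total payoff within each class."""
    return sorted(outcomes, key=lambda p: (sum_class(p), sum(p)), reverse=True)

# Hypothetical payoff pairs (my utility, your utility):
candidates = [(3, -3), (2, 2), (-1, -2), (1, 0)]
print(rank_outcomes(candidates))  # -> [(2, 2), (1, 0), (3, -3), (-1, -2)]
```

Note that under this ordering the zero-sum “I gain by your loss” outcome `(3, -3)` is dominated by even a modest mutual gain, which is the point of steering toward situations where sinister choices don’t have to be made.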
Who said the alternative is nothing? There are any number of theories of morality, and a further number of theories of friendly AI.