But the “strong convergence of values” thesis hasn’t really been argued, so I remain unclear as to why Eliezer finds it plausible.
Hasn’t been argued and seems quite implausible to me.
I find moral realism meaningful for each individual (you can evaluate choices according to my values applied with infinite information and infinite resources to think), but I don’t find it meaningful when applied to groups of people, all with their own values.
EY finesses the point by talking about an abstract algorithm without clearly specifying what that algorithm actually implements: my values, yours, or some unspecified amalgamation of the values of different people. The question of moral subjectivism vs. moral universalism is thus left unspecified, to be filled in by the imagination of the reader. To my ear, sometimes it seems one way, and sometimes the other. My guess was that this was intentional, as clarifying the point wouldn’t take much effort. The discussions of EY’s metaethics always strike me as peculiar: he’s wandering about here somewhere while people discuss how unclear they are about just what conclusion he has drawn.
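To make the distinction concrete, here is a toy sketch (the people, the utility numbers, and the aggregation rules are all invented for illustration; none of this is EY’s actual proposal). Evaluating a choice against one person’s values is a straightforward maximization, but evaluating it for a group forces some amalgamation rule to be chosen, and different rules can disagree:

    # Toy illustration only: one person's (idealized) values give a
    # well-defined evaluation; a group evaluation needs an extra,
    # unspecified choice of aggregation rule.

    # Hypothetical utility functions for two people over two options.
    def alice_value(option):
        return {"keep_promise": 10, "break_promise": -5}[option]

    def bob_value(option):
        return {"keep_promise": -6, "break_promise": -1}[option]

    OPTIONS = ["keep_promise", "break_promise"]

    # Individual case: just maximize that one person's values.
    print(max(OPTIONS, key=alice_value))    # keep_promise
    print(max(OPTIONS, key=bob_value))      # break_promise

    # Group case: some amalgamation rule must be picked, and the answer
    # depends on which one -- this is the step left unspecified.
    def total_utility(option):
        return alice_value(option) + bob_value(option)

    def worst_off(option):                  # maximin aggregation
        return min(alice_value(option), bob_value(option))

    print(max(OPTIONS, key=total_utility))  # keep_promise  (4 vs -6)
    print(max(OPTIONS, key=worst_off))      # break_promise (-5 vs -6)

The point is just that the second pair of answers depends on an aggregation rule that the abstract-algorithm framing leaves open.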
I can see how that could be implemented. However, I don’t see how that would count as morality. It amounts to Anything Goes, or Do What Thou Wilt. I don’t see how a world in which that kind of “moral realism” holds would differ from one where moral subjectivism holds, or nihilism for that matter.
Where “meaningful” means “implementable”? Moral realism is not many things, and one of the things it is not is the claim that everyone gets to keep all their values and behaviour unaltered.
See my previous comment on “Real Magic”: http://lesswrong.com/lw/tv/excluding_the_supernatural/79ng
If you choose not to count the actual moralities that people have as morality, that’s up to you.
Not “anything goes, do what you will”, so much as “all X go, X is such that we want X before we do it, we value doing X while we are doing it, and we retrospectively approve of X after doing it”.
We humans have future-focused, hypothetical-focused, present-focused, and past-focused motivations that don’t always agree. CEV (and, to a great extent, moral rationality as a broader field) is about finding moral reasoning strategies and taking actions such that all those motivational systems will agree that we Did a Good Job.
That said, being able to demonstrate that the set of Coherently Extrapolated Volitions exists is not a construction showing how to find members of that set.
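As a very rough sketch of that temporal-agreement test (the actions and the three predicates below are placeholders, not a model of real human motivation, and certainly not the CEV construction itself):

    # Toy sketch: an action "goes" only if the prospective, concurrent,
    # and retrospective motivational systems all endorse it.

    def wanted_beforehand(action):
        return action in {"exercise", "eat_cake"}

    def valued_while_doing(action):
        return action in {"exercise", "eat_cake"}

    def approved_afterwards(action):
        return action in {"exercise"}       # the cake gets regretted

    def coherently_endorsed(action):
        return (wanted_beforehand(action)
                and valued_while_doing(action)
                and approved_afterwards(action))

    print(coherently_endorsed("exercise"))  # True: all three systems agree
    print(coherently_endorsed("eat_cake"))  # False: fails the retrospective check

Which, again, only spells out what membership in the set would mean; it says nothing about how to construct the members.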
As with a number of previous responses, that is ambiguous between the individual and the collective. If I could get some utility by killing you, then should I kill you? If the “we” above is interpreted individually, I should; if it is interpreted collectively, I shouldn’t.
Yes, that is generally considered the core open problem of ethics, once you get past things like “how do we define value” and blah blah blah like that. How do I weigh one person’s utility against another person’s? Unless it’s been solved and nobody told me, that’s a Big Question.
So... what’s the point of CEV, then?
It’s a hell of a lot better than nothing, and it’s entirely possible to solve those individual-weighting problems, possibly by looking at the social graph and at how humans affect each other. There ought to be some treatment of the issue that yields a reasonable collective outcome without totally suppressing or overriding individual volitions.
Certainly, the first thing that comes to mind is that some human interactions are positive-sum, some negative-sum, some zero-sum. If you configure collective volition to always prefer mutually positive-sum outcomes over zero-sum, and zero-sum over negative-sum, then it’s possible to start looking for (or creating) situations where sinister choices don’t have to be made.
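For instance, a crude version of that preference ordering (outcomes and payoffs invented for the example, and only the sign of the total is looked at) might rank candidate outcomes like this:

    # Toy sketch: prefer positive-sum outcomes to zero-sum, and zero-sum
    # to negative-sum, breaking ties by the total change in utility.

    OUTCOMES = {
        "trade": {"alice": +3, "bob": +2},   # positive-sum
        "theft": {"alice": +5, "bob": -5},   # zero-sum
        "feud":  {"alice": -2, "bob": -4},   # negative-sum
    }

    def sum_sign(payoffs):
        total = sum(payoffs.values())
        return (total > 0) - (total < 0)     # +1, 0, or -1

    def preference_key(name):
        payoffs = OUTCOMES[name]
        return (sum_sign(payoffs), sum(payoffs.values()))

    print(sorted(OUTCOMES, key=preference_key, reverse=True))
    # ['trade', 'theft', 'feud']

A less crude version would presumably insist on “mutually” positive, i.e. that nobody is made worse off, rather than merely a positive total.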
Who said the alternative is nothing? There’s any number of theories of morality, and a further number of theories of friendly AI.