I’ve never had that conversation with explicitly religious people, and moral realism at the “some things are just wrong and any sufficiently intelligent system will know it” level is hardly unheard of among atheists.
Really? I mean, sorry for blathering, but I find this extremely surprising. I always considered it a simple fact that if you don’t have some kind of religious/faith-based metaphysics operating, you can’t be a moral realist. What experiment could you possibly perform to test moral-realist hypotheses, particularly when dealing with nonhumans? It simply doesn’t make any sense.
Oh well.
Moral realism makes no more sense with religion. As CS Lewis said: “Nonsense does not cease to be nonsense when we put the words ‘God can’ before it.”
Disagreed, depending on your definition of “morality”. A sufficiently totalitarian God can easily not only decide what is moral but force us to find the proper morality morally compelling.
(There is at least one religion that actually believes something along these lines, though I don’t follow it.)
Ok, that definition is not nonsense. But in that case, it could happen without God too. Maybe the universe’s laws cause people to converge on some morality, either due to the logic of evolutionary cooperation or another principle. It could even be an extra feature of physics that forces this convergence.
Perhaps Eli and you are talking past each other a bit. A certain kind of god would be strong evidence for moral realism, but moral realism wouldn’t be strong evidence for a god of any kind.
Well sure, but if you’re claiming physics enforces a moral order, you’ve reinvented non-theistic religion.
Why? Beliefs that make no sense are very common. Atheists are no exception.
Actually, if anything, I’d call it the reverse. Religious people know where we’re making unevidenced assumptions.
You talk as though religion were something that appeared in people’s minds fully formed and without causes, and that the logical fallacies associated with it were then caused by religion.
Hmm. Fair point. “We imagine the universe as we are.”
Might I suggest you take a look at the metaethics sequence? This position is explained very well.
Well no, not really. The meta-ethics sequence takes a cognitivist position: there is some cognitive algorithm called ethics, which actual people implement imperfectly but which you could somehow generalize to obtain a “perfect” reification.
That’s not moral realism (“morality is a part of the universe itself, external to human beings”), that’s objective moral-cognitivism (“morality is a measurable part of us but has no other grounding in external reality”).
Can you rule out a form of objective moral-cognitivism that applies to any sufficiently rational and intelligent being?
Unless morality consists of game theory, I can rule out any objective moral cognitivism that applies to any intelligent and/or rational being.
Why shouldn’t it consist of optimal rules for achieving certain goals?
Well if you knew what the goals were and could prove that such goals appeal to all intelligent, rational beings, including but not limited to humans, UFAI, Great Cthulhu, and business corporations...
I don’t need to do that. We are used to the idea that some people don’t find morality appealing, and we have mechanisms such as social disapproval and prisons to get the recalcitrant to play along.
That depends: what are you talking about? I seem to recall you defined the term as something that Eliezer might agree with. If you’ve risen to the level of clear disagreement, I haven’t seen it.
A good refinement of the question is how you think AI could go wrong (that being Eliezer’s field) if we reject whatever you’re asking about.
You would have the exact failure mode you are already envisaging... clippies and so on. OMC (objective moral-cognitivism) is a way AI would not go wrong. MIRI needs to argue it is infeasible or unlikely in order to show that UFAI is likely.
Which position? The metaethics sequence isn’t clearly realist, or clearly anything else.
That would be epistemology...
There are rationally acceptable subjects that don’t use empiricism, such as maths, and there are subjects such as economics which have a mixed epistemology.
However, if this epistemological-sounding complaint is actually about metaphysics, i.e. “what experiment could you perform to detect a non-natural moral property?”, the answer is that moral realists have to suppose the existence of a special psychological faculty.
You seem to be confusing atheism with positivism. In particular, the kind of positivism that’s self-refuting.
In what fashion is positivism self-refuting?
The proposition “only propositions that can be empirically tested are meaningful” cannot be empirically tested.
It’s meaningless by its own epistemology.
Eppur si muove! (“And yet it moves.”) It still works.
Works at what? Note that it’s not a synonym for science, empiricism, or the scientific method.