I’ve talked to a number of folks who conclude that AIs will be superintelligent and therefore will naturally derive and follow the true morality (you know, the same one we do), and dismiss all that Robot Wars stuff as television crap (not unreasonably, as far as it goes).
Which one’s that, eh ;-)?
Are these religious people? I mean, come on, where do you get moral realism if not from some kind of moral metaphysics?
Certainly it’s not unreasonable. One UFAI versus humans with no FAI to fight back; I wouldn’t call anything so one-sided a war.
(And I’m sooo not making the Dalek reference that I really want to. Someone else should do it.)
I’ve never had that conversation with explicitly religious people, and moral realism at the “some things are just wrong and any sufficiently intelligent system will know it” level is hardly unheard of among atheists.
Really? I mean, sorry for blathering, but I find this extremely surprising. I always considered it a simple fact that if you don’t have some kind of religious/faith-based metaphysics operating, you can’t be a moral realist. What experiment could you possibly perform to test moral-realist hypotheses, particularly when dealing with nonhumans? It simply doesn’t make any sense.
Oh well.
Moral realism makes no more sense with religion. As CS Lewis said: “Nonsense does not cease to be nonsense when we put the words ‘God can’ before it.”
Disagreed, depending on your definition of “morality”. A sufficiently totalitarian God can easily not only decide what is moral but force us to find the proper morality morally compelling.
(There is at least one religion that actually believes something along these lines, though I don’t follow it.)
Ok, that definition is not nonsense. But in that case, it could happen without God too. Maybe the universe’s laws cause people to converge on some morality, either due to the logic of evolutionary cooperation or another principle. It could even be an extra feature of physics that forces this convergence.
Perhaps Eli and you are talking past each other a bit. A certain kind of god would be strong evidence for moral realism, but moral realism wouldn’t be strong evidence for a god of any kind.
Well sure, but if you’re claiming physics enforces a moral order, you’ve reinvented non-theistic religion.
Why? Beliefs that make no sense are very common. Atheists are no exception.
Actually, if anything, I’d call it the reverse. Religious people know where we’re making unevidenced assumptions.
You talk as though religion were something that appeared in people’s minds fully formed and without causes, and that the logical fallacies associated with it were then caused by religion.
Hmm. Fair point. “We imagine the universe as we are.”
Might I suggest you take a look at the metaethics sequence? This position is explained very well.
Well no, not really. The meta-ethics sequence takes a cognitivist position: there is some cognitive algorithm called ethics, which actual people implement imperfectly but which you could somehow generalize to obtain a “perfect” reification.
That’s not moral realism (“morality is a part of the universe itself, external to human beings”), that’s objective moral-cognitivism (“morality is a measurable part of us but has no other grounding in external reality”).
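To make that picture concrete, here is a minimal toy sketch (Python; every name and number in it is my own invention, not anything from the sequence itself): one underlying evaluation function, many noisy human implementations of it, and a “reification” obtained by idealizing away the noise.

```python
import random

def idealized_ethics(action):
    """Stand-in for the hypothesized shared cognitive algorithm.
    The scoring rule is arbitrary; only its role in the sketch matters."""
    return float(len(action))  # placeholder "goodness" score

def human_judgment(action, noise=2.0):
    """Each actual person computes the same function, imperfectly."""
    return idealized_ethics(action) + random.gauss(0, noise)

def reified_judgment(action, population=1000):
    """Generalizing over many imperfect implementations to approximate
    the idealized function (a crude proxy for the 'perfect' reification)."""
    return sum(human_judgment(action) for _ in range(population)) / population

if __name__ == "__main__":
    for act in ("tell the truth", "break a promise"):
        print(act, idealized_ethics(act), round(reified_judgment(act), 2))
```

The only point of the sketch is where the fact of the matter lives: in the shared algorithm inside the agents, not in some external feature of the universe. That is why I’d file it under objective moral-cognitivism rather than realism.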
Can you rule out a form of objective moral-cognitivism that applies to any sufficiently rational and intelligent being?
Unless morality consists of game theory, I can rule out any objective moral cognitivism that applies to any intelligent and/or rational being.
Why shouldn’t it consist of optimal rules for achieving certain goals?
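For a concrete version of “optimal rules for achieving certain goals”, here is the usual game-theoretic toy example (a minimal Python sketch; the payoffs and strategy names are just the conventional textbook ones, nothing specific to this thread): in a repeated prisoner’s dilemma, a cooperative rule like tit-for-tat does better against itself than mutual defection does, with no non-natural properties anywhere.

```python
# One round of the prisoner's dilemma: payoff to the first player.
# 'C' = cooperate, 'D' = defect; the numbers are the conventional ones.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=100):
    """Total payoffs when two strategies play each other repeatedly."""
    seen_by_a, seen_by_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat))    # (300, 300)
    print("defect vs defect:", play(always_defect, always_defect))          # (100, 100)
    print("tit-for-tat vs defect:", play(tit_for_tat, always_defect))       # (99, 104)
```

Whether payoffs like these are goals that every intelligent agent actually cares about is, of course, the next question.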
Well if you knew what the goals were and could prove that such goals appeal to all intelligent, rational beings, including but not limited to humans, UFAI, Great Cthulhu, and business corporations...
I don’t need to do that. We are used to the idea that some people don’t find morality appealing, and we have mechanisms such as social disapproval and prisons to get the recalcitrant to play along.
That depends: what are you talking about? I seem to recall you defined the term as something that Eliezer might agree with. If you’ve risen to the level of clear disagreement, I haven’t seen it.
A good refinement of the question is how you think AI could go wrong (that being Eliezer’s field) if we reject whatever you’re asking about.
You would have the exact failure mode you are already envisaging... clippies and so on. OMC (objective moral cognitivism) is one way AI would not go wrong. MIRI needs to argue that it is unfeasible or unlikely in order to show that uFAI is likely.
Which position? The metaethics sequence isn’t clearly realist, or anything else.
That would be epistemology...
There are rationally acceptable subjects that don’t use empiricism, such as maths, and there are subjects such as economics which have a mixed epistemology.
However, if this epistemological-sounding complaint is actually about metaphysics, i.e. “what experiment could you perform to detect a non-natural moral property”, the answer is that moral realists have to suppose the existence of a special psychological faculty.
You seem to be confusing atheism with positivism. In particular, the kind of positivism that’s self-refuting.
In what fashion is positivism self-refuting?
The proposition “only propositions that can be empirically tested are meaningful” cannot be empirically tested.
It’s meaningless by its own epistemology.
Eppur si muove! (“And yet it moves.”) It still works.
Works at what? Note that it’s not a synonym for science, or empiricism, or the scientific method.
Pedantic complaint about language: moral realism simply says that moral claims do state facts, and at least some of them are true. It takes further assumptions (“internalism”) to claim that these moral facts are universally compelling in the sense of moving any intelligent being to action. (I personally believe the latter assumption to be nonsense, hence AGI is a really bad idea.)
Granted, I don’t know of any nice precise term for the position that all intelligent beings must necessarily do the right thing, possibly because it’s so ridiculous no philosopher would profess it publicly in such words. On the other hand, motivational internalism would seem to be very intuitive, judging by the pervasiveness of the view that AI doesn’t pose any risk.
Isn’t it called Convergence?
Are you under the impression that CEV advocates around here believe that all intelligent beings must necessarily do the right thing?
On the whole, confusion reigns, but there is a fairly consistent tendency to reject Intrinsic Motivation without argument.
What’s “Intrinsic Motivation”? The only hits for it on LW are about akrasia.
As in intrinsically motivating states and concepts.
So, moral motivational internalism. Then I agree that we tend to reject it. For example, here. You can make it work by having “this motivates the person considering it” be incorporated into the definition of “right”, but that results in a relativist definition, and I don’t see any need for it anyway.
Motivational internalism may not be an obvious truth, but that doesn’t mean its falsehood is the default. I don’t see the relevance of the link.
So, basically, what we call “terminal values”?
No, the idea of motivational internalism is that you can’t judge something as right or wrong without being motivated to pursue or avoid it. Like if the word “right” was short for “this thing matches my terminal values”.
The alternative is externalism, where “right” means {X, Y, Z} and we (some/most/all humans) are motivated to pursue it just because we like {X, Y, Z}.
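If a caricature in code helps, this is roughly how I picture the difference (Python; the example acts and values are made up for illustration): the externalist keeps “is it right?” and “do I care?” as separate questions, while the internalist’s “judgment” just reads off the judge’s own terminal values.

```python
RIGHT_THINGS = {"keep promises", "help the injured"}   # stand-in for {X, Y, Z}

# Externalism: judging rightness is one operation, being moved by it is another.
def judged_right(act):
    return act in RIGHT_THINGS

def motivated(agent_likes, act):
    return judged_right(act) and act in agent_likes

# Internalism (caricatured): no judgment without motivation, so the "judgment"
# is just a check against the agent's own terminal values.
def judged_right_internalist(agent_terminal_values, act):
    return act in agent_terminal_values

if __name__ == "__main__":
    cthulhu_values = {"devour worlds"}
    print(judged_right("keep promises"))                              # True, whoever asks
    print(motivated(cthulhu_values, "keep promises"))                 # False: judged right, not moved
    print(judged_right_internalist(cthulhu_values, "keep promises"))  # False: not even judged right
```

On the internalist caricature there is nothing left over for Cthulhu to be wrong about; on the externalist one there is, it just may not move him.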
Ah, OK. Thanks for explaining.
Does “Intrinsic Motivation” in this context entail that all intelligent beings must necessarily do the right thing?
If so, then I agree that we tend to reject it. As for “without argument”… do you mean you’ve read the local discussions of the topic and find them unconvincing? Or do you mean you believe it hasn’t been discussed at all?
If not, then I don’t know what you’re saying.
If you prefer to continue expressing yourself in gnomic utterances, that’s of course your choice, but I find it an unhelpful way to communicate and will tap out here if so.
Little argument and none convincing.
Eh, maybe? I’ve seen “convergence thesis” thrown about on LW, but it’s hardly established terminology. Not sure it would be fair to use a phrase so easily confused with Bostrom’s much more reasonable Instrumental Convergence Thesis either. (Also, it has nothing to do with CEV so I don’t see the point of that link.)
From abstract reason or psychological facts, or physical facts, or a mixture.
There is a subject called economics. It tells you how to achieve certain goals, such as maximising GDP. It doesn’t do that by corresponding to a metaphysical Economics Object, it does that with a mixture of theoretical reasoning and examination of evidence.
There is a subject called ethics. It tells you how to achieve certain goals, such as maximising happiness....
Well there’s the problem: ethics does not automatically start out with a happiness-utilitarian goal. Lots of extant ethical systems use other terminal goals. For instance...
“Such as”
Sufficient rationality will tell you how to maximize any goal, once you can clearly define the goal.
Rationality is quite helpful for clarifying goals too.
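To make the “any goal” part concrete, a minimal sketch (Python; the two toy objectives are deliberately silly and entirely my own): the same search procedure maximizes whichever objective you hand it, which is why the choice of goal does all the moral work and the optimizer does none.

```python
import random

def hill_climb(score, start, neighbours, steps=1000):
    """Generic optimizer: it knows nothing about the goal beyond its score."""
    best = start
    for _ in range(steps):
        candidate = random.choice(neighbours(best))
        if score(candidate) > score(best):
            best = candidate
    return best

def happiness(x):
    return -(x - 7) ** 2    # peaks at x = 7

def paperclips(x):
    return -(x - 100) ** 2  # peaks at x = 100

def step(x):
    return [x - 1, x + 1]

if __name__ == "__main__":
    print(hill_climb(happiness, 0, step))   # converges to 7
    print(hill_climb(paperclips, 0, step))  # converges to 100 (with overwhelming probability)
```

Which is just the clippy worry from upthread in miniature: a clearly defined goal plus sufficient rationality, and nothing in the machinery cares which goal it was.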
Problem is, economics is not a science:
http://www.theatlantic.com/business/archive/2013/04/the-laws-of-economics-dont-exist/274901/
Of course economics doesn’t have the well-established laws of physical science: it wouldn’t be much of an analogy for ethics if it did. But having an epistemology that doesn’t work very well is not the same as having an epistemology that requires non-natural entities.
The main problem with economics is not its descriptive power but its predictive power. Too many of economics’ calculations need to suppose that everyone will behave rationally, which regular people can’t be trusted to do. Same problem with politics.