Whatever the case, this language looks confusing/misleading enough to avoid. It conflates the actual search for intended meaning with all those irrelevant stipulations, and attaches misleading connotations to the words referring to these things. In Eliezer’s sequences, the term was “fake utility function”. The presence of “fake” in the term is important: it reminds us that the view is incorrect.
So far, you’ve managed to confuse both me and Wei with this terminology alone, and probably many others as well.
Perhaps, though I’ve gotten comments from others that it was highly clarifying for them. Maybe they’re more used to the meaning of ‘meaning’ from linguistics. Does this new paragraph at the end of this section in PMR help?
But one must not fall into the trap of thinking that a definition you’ve stipulated (aloud or in your head) for ‘ought’ must match up to your intuitive concept of ‘ought’. In fact, I suspect it never does, which is why the conceptual analysis of ‘ought’ language can go in circles for thousands of years, and why any stipulated meaning of ‘ought’ is a fake utility function. To see clearly to our intuitive concept of ought, we’ll have to try empathic metaethics (see below).
It’s not clear from this paragraph whether “intuitive concept” refers to the oafish tools in the human brain (which have the same problems as stipulated definitions, including irrelevance) or to the intended meaning that those tools seek. Conceptual analysis, as I understand it, is concerned with analyzing the imperfect intuitive tools, so it’s also unclear in what capacity you mention conceptual analysis here.
(I do think this and other changes will probably make new readers less confused.)
Here’s the way I’m thinking about it.
Roger has an intuitive concept of ‘morally good’, the intended meaning of which he doesn’t fully have access to (but it could be discovered by something like CEV). Roger is confused enough to think that his intuitive concept of ‘morally good’ is ‘that which produces the greatest pleasure for the greatest number’.
The conceptual analyst comes along and says: “Suppose that an advanced team of neuroscientists and computer scientists could hook everyone’s brains up to a machine that gave each of them maximal, beyond-orgasmic pleasure for the rest of their abnormally long lives. Then they will blast each person and their pleasure machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?”
ROGER: Huh. I guess that’s not quite what I mean by ‘morally good’. I think what I mean by ‘morally good’ is ‘that which produces the greatest subjective satisfaction of wants in the greatest number’.
CONCEPTUAL ANALYST: Okay, then. Suppose that an advanced team of neuroscientists and computer scientists could hook everyone’s brains up to ‘The Matrix’ and make them believe and feel that all their wants were being satisfied, for the rest of their abnormally long lives. Then they will blast each person and their machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?
ROGER: No, I guess that’s not what I mean, either. What I really mean is...
And around and around we go, for centuries.
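If it helps, here is a toy sketch of that loop in Python. It is purely illustrative: the scenario fields and the ‘intuition’ function are stand-ins I made up for this example, and nothing here pretends to model the intended meaning itself, which by hypothesis nobody can call directly.

```python
# Toy sketch of the conceptual-analysis loop: stipulate a definition,
# test it against intuition on a thought experiment, revise, repeat.
# All names below are illustrative stand-ins, not real models.

def intuition(case):
    """Roger's intuitive verdict on a concrete case -- the 'oafish tool'.
    It imperfectly tracks an intended meaning Roger cannot inspect directly."""
    return (not case["wireheaded"]) and (not case["people_interfered_with"])

def find_counterexample(stipulated_def, thought_experiments):
    """Conceptual analysis: look for a case where the stipulated definition
    and intuition disagree. Finding none is not the same as finding a match."""
    for case in thought_experiments:
        if stipulated_def(case) != intuition(case):
            return case          # intuition rejects the definition here
    return None                  # survived these cases -- for now

# Roger's first stipulation: 'greatest pleasure for the greatest number'.
greatest_pleasure = lambda case: case["pleasure"] == "maximal"

pleasure_machines = {
    "pleasure": "maximal",
    "wireheaded": True,               # beyond-orgasmic pleasure machines
    "people_interfered_with": False,  # blasted into deep space, uninterfered
}

print(find_counterexample(greatest_pleasure, [pleasure_machines]))
# -> prints the pleasure-machine case: intuition says "not morally good",
#    the stipulated definition says "morally good", so Roger revises and loops.
```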
The problem with trying to access our intended meaning for ‘morally good’ by this intuitive process is that it brings into play, as you say, all the ‘oafish tools’ in the human brain. And philosophers have historically not paid much attention to the science of how intuitions work.
Does that make sense?
Roger is confused enough to think that his intuitive concept of ‘morally good’ is ‘that which produces the greatest pleasure for the greatest number’.
Do you mean that the intuition says the same thing as “pleasure-maximization”, or that the intended meaning can be captured as “pleasure-maximization”? Even if the intuition is saying exactly “pleasure-maximization”, that’s not necessarily the intended meaning, so it’s unclear why one would try to replicate the intuitive tool rather than search for a characterization of the intended meaning that is better than the intuitive tool. This is the distinction I was complaining about.
(This is an isolated point unrelated to the rest of your comment.)
Understood. I think I’m trying to figure out if there’s a better way to talk about this ‘intended meaning’ (that we don’t yet have access to) than to say ‘intended meaning’ or ‘intuitive meaning’. But maybe I’ll just have to say ‘intended meaning (that we don’t yet have access to)’.
New paragraph version:
But one must not fall into the trap of thinking that a definition you’ve stipulated (aloud or in your head) for ‘ought’ must match up to your intended meaning of ‘ought’ (to which you don’t have introspective access). In fact, I suspect it never does, which is why the conceptual analysis of ‘ought’ language can go in circles for centuries, and why any stipulated meaning of ‘ought’ is a fake utility function. To see clearly to our intuitive concept of ought, we’ll have to try empathic metaethics (see below).