But one must not fall into the trap of thinking that a definition you’ve stipulated (aloud or in your head) for ‘ought’ must match up to your intended meaning of ‘ought’ (to which you don’t have introspective access). In fact, I suspect it never does, which is why the conceptual analysis of ‘ought’ language can go in circles for centuries, and why any stipulated meaning of ‘ought’ is a fake utility function. To see clearly into our intuitive concept of ‘ought’, we’ll have to try empathic metaethics (see below).
Here’s the way I’m thinking about it.
Roger has an intuitive concept of ‘morally good’, the intended meaning of which he doesn’t fully have access to (but it could be discovered by something like CEV). Roger is confused enough to think that his intuitive concept of ‘morally good’ is ‘that which produces the greatest pleasure for the greatest number’.
The conceptual analyst comes along and says: “Suppose that an advanced team of neuroscientists and computer scientists could hook everyone’s brains up to a machine that gave each of them maximal, beyond-orgasmic pleasure for the rest of their abnormally long lives. They would then blast each person and their pleasure machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?”
ROGER: Huh. I guess that’s not quite what I mean by ‘morally good’. I think what I mean by ‘morally good’ is ‘that which produces the greatest subjective satisfaction of wants in the greatest number’.
CONCEPTUAL ANALYST: “Okay, then. Suppose that an advanced team of neuroscientists and computer scientists could hook everyone’s brains up to ‘The Matrix’ and make them believe and feel that all their wants were being satisfied, for the rest of their abnormally long lives. They would then blast each person and their machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?”
ROGER: No, I guess that’s not what I mean, either. What I really mean is...
And around and around we go, for centuries.
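The back-and-forth above has the shape of a non-terminating loop: stipulate a definition, receive an intuitive counterexample, revise, repeat. As a purely illustrative sketch (the function names and the hand-coded revision table are hypothetical, invented here to mirror Roger’s two revisions, not anything from the discussion itself):

```python
# Illustrative sketch only: conceptual analysis as a loop that never converges,
# because each stipulated definition is tested against intuitions rather than
# against the (introspectively inaccessible) intended meaning.

# Hypothetical, hand-coded revision history mirroring Roger's dialogue:
# definition -> (intuitive counterexample, revised definition)
REVISIONS = {
    "greatest pleasure for the greatest number":
        ("pleasure-machine thought experiment",
         "greatest subjective satisfaction of wants"),
    "greatest subjective satisfaction of wants":
        ("Matrix thought experiment",
         "...what I really mean is..."),
}

def analyze(definition, max_rounds=10):
    """Run the stipulate/counterexample/revise loop.

    Returns (final definition, rounds taken). The cap on rounds stands in
    for the 'centuries' of real conceptual analysis.
    """
    rounds = 0
    while definition in REVISIONS and rounds < max_rounds:
        counterexample, definition = REVISIONS[definition]
        rounds += 1
    return definition, rounds

final, rounds = analyze("greatest pleasure for the greatest number")
```

After two rounds Roger is left at “…what I really mean is…” — the loop halts here only because the hypothetical table runs out, not because a definition survived.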
The problem with trying to access our intended meaning for ‘morally good’ by this intuitive process is that it brings into play, as you say, all the ‘oafish tools’ in the human brain. And philosophers have historically not paid much attention to the science of how intuitions work.
Does that make sense?
Do you mean that the intuition says the same thing as “pleasure-maximization”, or that the intended meaning can be captured as “pleasure-maximization”? Even if the intuition is saying exactly “pleasure-maximization”, that’s not necessarily the intended meaning, so it’s unclear why one would try to replicate the intuitive tool rather than search for a characterization of the intended meaning that is better than the intuitive tool. This is the distinction I was complaining about.
(This is an isolated point unrelated to the rest of your comment.)
Understood. I think I’m trying to figure out if there’s a better way to talk about this ‘intended meaning’ (that we don’t yet have access to) than to say ‘intended meaning’ or ‘intuitive meaning’. But maybe I’ll just have to say ‘intended meaning (that we don’t yet have access to)’.
New paragraph version: