The problem is not that there is no way to identify ‘good’ or ‘right’ (as used intuitively, without tabooing) with a certain X. The problem is that X is huge and complicated and we don’t (yet) have access to its structure.
Strictly speaking, we can exhibit any definition of “good”, even one that doesn’t make any of the errors you pointed out, and still ask “Is it good?”. The criteria for exhibiting a particular definition are ultimately non-rigorous, even if the selected definition is, so we can always examine them further.
Moore’s argument might fail in the unintended use case of post-FAI morality not because at some point there might be no more potential for asking the question, but because, as with “Does 2+2 equal 4?”, there is a point at which we are certain enough to turn to other projects, even if in principle some uncertainty and lack of clarity in the intended meaning remains. It’s not at all clear this will ever happen to morality.
Moore was correct that no alternate concrete meaning is identical to ‘good’; his mistake was thinking that ‘good’ had its own concrete meaning. As Paul Ziff put it, ‘good’ means ‘answers to an interest’, where the interest is semantically variable.
In math terms, the open question argument would be like asking for the value of f(z) and then, when someone answers f(3), pointing out that f(z) is not the same thing as f(3).
I think the ‘huge and complicated’ X that Luke mentions is supposed to be the set of all inputs to f(z) that a given person is disposed to use, or perhaps the aggregate of all such sets across people.
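As a purely illustrative sketch (the function name and the example interests below are my own hypothetical choices, not anything from the thread), the Ziff reading of ‘good’ as a function of a semantically variable interest might be rendered like this:

```python
# Hypothetical sketch of the f(z) / f(3) analogy for Ziff's reading of 'good'.
# The function and the example interests are invented for illustration only.

def good(interest: str) -> str:
    """'Good' as 'answers to an interest': the verdict depends on which
    interest is bound to the free variable, just as f(z) depends on z."""
    return f"whatever best answers to the interest {interest!r}"

# A concrete definition of 'good' corresponds to evaluating at one point,
# i.e. giving f(3) rather than f(z):
print(good("human flourishing"))

# The open question "but is THAT good?" re-asks f(z) with z left free, so it
# can never be settled by any particular f(3) -- which is the sense in which
# Moore's observation points to a free variable, not to 'good' having a
# further concrete meaning of its own.
print(good("aesthetic appreciation"))  # a different binding, a different verdict
```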