So, let us assume there exists some structure S3 in my head that implements my terminal values.
Maybe that’s eudaimonia, maybe that’s Hungarian goulash, I don’t really know what it is, and am not convinced that it’s anything internally coherent (that is, I’m perfectly prepared to believe that my S3 includes mutually exclusive states of the world).
I agree that when I label S3 “morality” I’m doing just what I do when I label S2 “eudaimonia” or label some other structure “prime”. There’s nothing special about the label “morality” in this sense. And if it turns out that you and I have close-enough S3s and we both label our S3s “morality,” then we mean the same thing by “morality.” Awesome.
If, OTOH, I have S3 implementing my terminal values and you have some different structure, S4, which you also label “morality”, then we might mean different things by “morality”.
Some day I might come to understand S3 and S4 well enough that I have a clear sense of the difference between them. At that point I have a lexical choice.
I can keep associating S3 with the label “morality” or “right” and apply some other label to S4 (e.g., “pseudo-morality” or “nsheppard’s right” or whatever). You might do the same thing. In that case, if (as you say) the only thing that might be remotely special about the label “morality” or “right” is that it might happen to refer to human terminal value, then it follows that there’s nothing special about that label here, since it doesn’t refer to a common terminal value. It’s just another word.
Conversely, I can choose to associate the label “morality” or “right” with some new S5, a synthesis of S3 and S4… perhaps their intersection, perhaps something else. You might do the same thing. At that point we agree that “morality” means S5, even though S5 does not implement either of our terminal values.