If what is stated above is your meaning, then I think yes. However, if that is the case, then this:
Not every set of claims is reducible to every other set of claims. There is nothing special about the set “claims about the state of the world, including one’s place in it and ability to affect it.” If you add, however, ought-claims, then you will get a very special set—the set of all information you need to make correct decisions.
Doesn’t make as much sense to me. Maybe you could clarify it for me?
In particular, it is unclear to me why ought-claims in general, as opposed to some strict subset of ought-claims like “Action X affords me maximum expected utility relative to my utility function” ⇔ “I ought to do X”, are relevant to making decisions. If that is the case, why not dispense with “ought” altogether? Or is that what you’re actually aiming at?
Maybe because the information they signal is useful? But then there are other utterances that fall into this category too, some of which are not, strictly speaking, words. Under that reading, then, the set would be incomplete. So I assume that probably isn’t what you mean either.
Also judging by this:
In this essay I talk about what I believe about rather than what I care about. What I care about seems like an entirely emotional question to me. I cannot Shut Up and Multiply about what I care about. If I do, in fact, Shut Up and Multiply, then it is because I believe that doing so is right. Suppose I believe that my future emotions will follow multiplication. I would have to, then, believe that I am going to self-modify into someone who multiplies. I would only do this because of a belief that doing so is right.
Would it be safe to say that your stance is essentially an emotivist one? Or is there a distinction I am missing here?
In particular, it is unclear to me why ought-claims in general, as opposed to some strict subset of ought-claims like “Action X affords me maximum expected utility relative to my utility function” ⇔ “I ought to do X”, are relevant to making decisions. If that is the case, why not dispense with “ought” altogether? Or is that what you’re actually aiming at?
Well, I grant that, strictly speaking, not all “ought” claims are relevant to decision-making. So I suppose the argument that they form a natural category is more subtle.
I mean, technically, you don’t have to describe all aspects of the correct utility function. But the boundary around “the correct utility function” is simpler than the boundary around “the relevant parts of the correct utility function”.
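For what it’s worth, the narrow reading of that subset (“I ought to do X” ⇔ “X maximizes expected utility under my utility function”) can be sketched as a simple argmax. Everything here is an invented illustration, not anything from the essay: the actions, outcome probabilities, and utility values are all hypothetical.

```python
# Toy sketch of the decision rule "I ought to do X" iff
# X maximizes expected utility under my utility function.
# All actions, probabilities, and utilities below are hypothetical.

def expected_utility(action, outcome_probs, utility):
    """Expected utility: sum over outcomes of P(outcome | action) * U(outcome)."""
    return sum(p * utility[outcome] for outcome, p in outcome_probs[action].items())

# Hypothetical model P(outcome | action):
outcome_probs = {
    "take_umbrella": {"dry": 0.95, "wet": 0.05},
    "no_umbrella":   {"dry": 0.60, "wet": 0.40},
}
# Hypothetical utility function over outcomes:
utility = {"dry": 1.0, "wet": -2.0}

# On this narrow reading, the "ought" action is just the argmax:
ought = max(outcome_probs, key=lambda a: expected_utility(a, outcome_probs, utility))
```

The point of the sketch is only that this subset of ought-claims reduces without remainder to is-claims about probabilities and a utility function, which is what makes the question “why ought-claims in general?” a live one.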
Would it be safe to say that your stance is essentially an emotivist one? Or is there a distinction I am missing here?
No. I think it’s propositional, not emotional. I’m arguing against an emotivist stance on the grounds that it doesn’t justify certain kinds of moral reasoning.