If there’s a standard alternative term in moral philosophy then do please let me know.
As far as I know, there is not. In moral philosophy, when deontologists talk about morality, they are typically talking about things that are for the benefit of others. Indeed, they even have conversations about how to balance self-interest against the demands of morality. In contrast, consequentialists have a theory that already accounts for the benefit of the agent doing the decision making: it counts just as much as anyone else’s. Thus for consequentialists, there is typically no separate conflict between self-interest and morality: their morality already takes this into account. So in summary, many moral philosophers are aware of the distinction, but I don’t know of any pre-existing term for it.
By the way, don’t worry too much about explaining all the pre-requisites before making a post. Explaining some of them afterwards in response to comments can be a more engaging way to do it. In particular, it means that we readers can see which parts we are skeptical of and then focus our attention on the posts that defend those aspects, skimming the ones we already agree with. Even when it comes to the book, it will probably be worth giving a sketch of where you want to end up early on, with forward references to the appropriate later chapters as needed. This will let readers approach the pre-requisite chapters in a more focused way.
Eliezer,
I’ve just reread your article and was wondering whether this is a good quick summary of your position (leaving aside how you arrived at it):
‘I should X’ means that I would attempt to X were I fully informed.
Here ‘fully informed’ is supposed to include complete relevant empirical information as well as access to all the best relevant philosophical arguments.