The contents of utility functions are arational. There is nothing contradictory about a rational paperclip maximizer. If it acts in ways that prevent it from maximizing paperclips, it would be an irrational paperclip maximizer. Rationality is about how you pursue your utility function (among other things), not about what that utility function seeks to maximize.
If you have a strictly selfish utility function, then, yes, acting to maximize it would be rational. Not everyone has a strictly selfish utility function. In fact, I would go so far as to say that the vast majority of people do not have strictly selfish utility functions. I have seen nothing on this site that would suggest a strictly selfish utility function is any more rational than any other utility function.
Thus, this conclusion really is trivial. You’ve used “rational” to imply a highly specific (and, I’m pretty sure, uncommon) utility function, when the use of the term on LW generally has no implication about the contents of a utility function. If you do not force “selfish utility function” into rationality, your conclusion does not follow from your premises.
I can, using the same method, prove that all rationalists can breathe underwater, so long as “rationalist” means “fish.” That’s what I mean by trivial.
By “selfish utility function” I mean exactly the same as “private utility function”. I mean that it is that agent’s utility function.
The problem and confusion with this term is that you call the utility function “selfish” even when the agent cares about nothing except helping others. I think that is about the only reason people complain about this terminology or misinterpret you: they assume that whatever concept you mean by “selfish” must somehow exclude helping others from the agent’s terminal values.