Rationality, as generally used on this site, refers to a method of understanding the world rather than a specific utility function.
Most people here, when they say an agent is rational, mean that agent maximizes the expected value of its utility function. That is the definition I was using. That means it is a selfish utility maximizer—selfish because it maximizes its own utility.
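To make that definition concrete, here is a minimal sketch of what "maximizes the expected value of its utility function" amounts to. The actions, outcome probabilities, and utility numbers are invented purely for illustration; only the structure matters.

```python
# A rational agent, in this sense, just picks the action whose
# expected utility is highest under its own utility function.
# All numbers below are made up for illustration.

def expected_utility(outcomes, utility):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(p * utility(o) for o, p in outcomes.items())

def choose_action(actions, utility):
    """Return the action that maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

# Possible outcomes of each action: {outcome: probability}
actions = {
    "invest": {"rich": 0.3, "broke": 0.7},
    "save":   {"comfortable": 1.0},
}

# The agent's own utility function; nothing about its contents is specified
# by the definition of rationality itself.
utility = {"rich": 100, "broke": -50, "comfortable": 30}.get

print(choose_action(actions, utility))  # -> "save" (EU 30 vs. 0.3*100 - 0.7*50 = -5)
```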
I used the term because I wanted to contrast agents that maximize their own utility functions with agents that rewrite their utility functions to incorporate someone else's, or that have an extra bias towards altruism (and thus are not maximizing their own utility functions).
The contents of utility functions are arational. There is nothing contradictory about a rational paperclip maximizer. If it acts in ways that prevent it from maximizing paperclips, it would be an irrational paperclip maximizer. Rationality is about how you pursue your utility function (among other things), not what that utility function seeks to maximize.
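As a sketch of that point, with an invented scenario and made-up numbers: the same maximization procedure is indifferent to what the utility function rewards, so a paperclip maximizer and a purely altruistic agent can be equally "rational."

```python
# The same expected-utility procedure applied to two different utility
# functions. Rationality lives in the procedure; the values are arational.
# Scenario and numbers are invented for illustration.

def choose_action(actions, utility):
    """Pick the action with the highest expected utility."""
    return max(
        actions,
        key=lambda a: sum(p * utility(o) for o, p in actions[a].items()),
    )

# Each outcome is (paperclips made, people helped); {outcome: probability}.
actions = {
    "run_factory": {(1000, 0): 0.9, (0, 0): 0.1},
    "volunteer":   {(0, 50): 1.0},
}

paperclip_utility  = lambda o: o[0]   # cares only about paperclips
altruistic_utility = lambda o: o[1]   # cares only about helping others

print(choose_action(actions, paperclip_utility))   # -> "run_factory"
print(choose_action(actions, altruistic_utility))  # -> "volunteer"
```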
If you have a strictly selfish utility function, then, yes, acting to maximize it would be rational. Not everyone has a strictly selfish utility function. In fact, I would go so far as to say that the vast majority of people do not have strictly selfish utility functions. I have seen nothing on this site that would suggest a strictly selfish utility function is any more rational than any other utility function.
Thus, this conclusion really is trivial. You’ve used “rational” to imply a highly specific (and, I’m pretty sure, uncommon) utility function, when the use of the term on LW generally has no implication about the contents of a utility function. If you do not force “selfish utility function” into rationality, your conclusion does not follow from your premises.
I can, using the same method, prove that all rationalists can breathe underwater, so long as “rationalist” means “fish.” That’s what I mean by trivial.
The problem and confusion with this term is that you call the utility function “selfish” even when the agent cares about nothing except helping others. I think this is about the only reason people complain about this terminology or misinterpret you, thinking that whatever concept you mean by this term should somehow exclude helping others from terminal values.
The most obvious interpretation of “selfish utility maximizer” is someone who has a selfish utility function, and if you meant something else, you should have clarified. The context suggests that “selfish utility function” is exactly what you meant. Moreover, your conclusions require that “selfish utility function” is what you meant. Under this reading, being a selfish utility maximizer has no relationship to being rational; the contents of utility functions are arational. Because rationality does not imply anything about your utility function, your conclusions simply don’t follow. Your argument seems to center on this:
Rationality means acting to achieve your goals. There will almost always be other people who are more powerful than you and who have resources that you need, and they don’t want you to achieve your goals. They want you to achieve their goals.
“They don’t want you to achieve your goals” is probably false in almost all cases where you apply it. My lover probably does want me to achieve my goals. My employer is, at worst, indifferent as to whether I achieve my goals or not. Except, of course, where my goals coincide with (or oppose) their goals, in which case they want me to succeed (or fail). But “your” and “their” in this context are not inherently oppositional, and your entire argument revolves around assuming that they are. As it is, there is simply no reason for them to prefer an irrational actor to a rational one. They prefer someone who achieves their goals. Being rational does not by itself make you better or worse from their point of view; how useful someone is to them is a combination of that person’s utility function and how efficiently that person pursues it. Rationality is only half of that and, in many ways, the less important half.
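A toy model of that last point, with entirely made-up numbers: how much someone else benefits from an agent depends on both how aligned the agent's goals are with theirs and how effectively the agent pursues those goals. Effectiveness, the part rationality buys you, is only one of the two factors.

```python
# Toy model: another party's benefit from an agent depends on both how
# aligned the agent's goals are with theirs and how effectively the agent
# pursues those goals. All numbers are invented for illustration.

def benefit_to_other(alignment, effectiveness):
    """alignment in [-1, 1]: -1 opposed, 0 indifferent, 1 shared goals.
    effectiveness in [0, 1]: how well the agent achieves its own goals."""
    return alignment * effectiveness

agents = {
    "rational but opposed":   (-0.5, 0.9),
    "irrational but aligned": (0.8, 0.3),
    "rational and aligned":   (0.8, 0.9),
}

for name, (alignment, effectiveness) in agents.items():
    print(f"{name}: {benefit_to_other(alignment, effectiveness):+.2f}")
# The opposed-but-rational agent is the worst outcome for the other party;
# rationality only amplifies whatever the utility function already points at.
```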
By “selfish utility function” I mean exactly the same as “private utility function”. I mean that it is that agent’s utility function.