Rationality means achieving your goals and values efficiently and effectively.
The model of rationality presented on LessWrong usually treats goals and values that are of negative utility to the agent as biases or errors rather than as goals evolved to benefit the group or the genes. That leads to a view of rationality as strictly optimizing selfish goals.
This is a false dichotomy. Just because a value is not of negative utility to the agent doesn't mean it is optimized to benefit the genes. Scott Alexander, for example, is asexual, and there are plenty of gay people.
As to old Utilitarianism 1.0, where somebody just declares by fiat that we are all interested in the greatest good for the greatest number of people—that isn’t on the table anymore. People don’t do that.
GiveWell exists, Peter Singer exists, and the Effective Altruism movement exists. They may not be perfect utilitarians, but most rationalists aren't perfect either, nor are most Christians, and both still exist.
This ended up giving him worse-than-human morality, because he assumes that humans are not actually moral, that humans don't derive utility from helping others. He ended up convincing himself to do the selfish things that he thinks are "in his own best interests" in order to be a good rationalist, even in cases where he didn't really want to be selfish.
I finally remembered the LessWrong meta-ethics sequence, which you should read. This in particular.