While I don’t think that there’s anything wrong with preferring to be consistent about one’s selfishness, I think it’s just that: a preference.
The common argument seems to be that you should be consistent about your preferences because that way you’ll maximize your expected utility. But that’s tautological: expected utility maximization only makes sense if you have preferences that obey the von Neumann-Morgenstern axioms, and you furthermore have a meta-preference for maximizing the satisfaction of your preferences in the sense defined by the math of the axioms. (I’ve written a partial post about this, which I can try to finish if people are interested.)
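For readers who haven't seen the formalism, here is a minimal sketch of what "maximizing expected utility" amounts to under the von Neumann-Morgenstern framework: each option is a lottery over outcomes, and the agent prefers whichever lottery has the highest probability-weighted utility. The lotteries and utility numbers below are invented purely for illustration.

```python
def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs whose probabilities sum to 1."""
    return sum(p * u for p, u in lottery)

# Two made-up lotteries: a sure payoff versus a 50/50 gamble.
sure_thing = [(1.0, 50.0)]
gamble = [(0.5, 0.0), (0.5, 120.0)]

# A VNM expected-utility maximizer simply picks the lottery with the
# higher expected utility -- here, the gamble (60.0 > 50.0).
best = max([sure_thing, gamble], key=expected_utility)
```

The point of the comment above is that ranking options this way is only compelling if you already care about your preferences being aggregated by exactly this probability-weighted sum, which is itself a (meta-)preference.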
For some cases, I do have such meta-preferences: I am interested in the maximization of my altruistic preferences. But I'm not that interested in the maximization of my other preferences. Another way of saying this: it is the altruistic faction in my brain that controls verbal/explicit long-term planning and tends to have goals that would ordinarily be termed "preferences", while the egoist faction is motivated more by doing whatever feels good at the moment and isn't much interested in the long-term consequences.
Another way of putting this: If you divide the things you do into "selfish" and "altruistic" things, then it seems to make sense to sign up for cryonics as an efficient part of the "selfish" component. But this division does not carve reality at the joints; a slicing more faithful to how the brain works is between "Near mode decisions" and "Far mode decisions". Under that slicing, effective altruism wins over cryonics under Far considerations, and neither is on the radar under Near ones.
A huge number of people save money for a retirement that won’t start for over a decade. For them, both retirement planning and cryonics fall under the selfish, far mode.
That is true. On the other hand, saving for retirement is a common, even default, thing to do in our society. If it weren't, I suspect many of those who currently save wouldn't, for reasons similar to those that keep people from signing up for cryonics.
I suspect most people's reasons for not signing up for cryonics amount to "I don't think it has a big enough chance of working, and paying money for a small chance of working amounts to Pascal's Mugging." I don't see how that would apply to retirement: would people in such a society seriously think they have only a very small chance of surviving until retirement age?