I’m horribly confused by this thread.
Eliezer: That I still have to hold open doors for old ladies, even at the cost of seconds, because if I breeze right past them, I lose more than seconds. I have to strike whatever blows against Death I can.
Why? What is wrong with taking an Expected Utility view of your actions? We’re all working with limited resources. If we don’t choose our battles to maximum effect, we’re unlikely to achieve very much.
I understand your primary reason (it’s easier to argue for cryonics if you’re signed up yourself), but that reason only applies to people who are actually trying to argue for cryonics, and for whom the financial obligation costs less than the time and persuasiveness lost in those arguments.
I don’t understand the secondary reasons at all.
Transform into centerless altruists, and we would have destroyed a part of what we fought to preserve.
Agreed, but 1/(6.6×10^9) isn’t a very large part, and that’s not even counting future lives. An Expected Utility calculation still suggests that if you can exert any non-negligible effect on the probability or timing of a Friendly Intelligence Explosion, that effect will vastly outweigh whatever happens to you personally (according to most common non-egoistic value systems).
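To make the arithmetic explicit (illustrative figures of my own, not anything Eliezer stated): counting only the roughly N = 6.6×10^9 people currently alive and valuing each life equally, shifting the probability of a good outcome by Δp is worth about Δp × N lives in expectation. So any Δp larger than about 1/(6.6×10^9) ≈ 1.5×10^-10 already outweighs the expected value of one’s own survival, and including future lives only strengthens the comparison.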