It’s not at all obvious to me that the marginal utility of $120/year (at a time when I’m extremely healthy, as part of a demographic that’s exceptionally long-lived) is greater than that of, e.g., 20 malaria nets (which is an absolute lower bound for any decision; there are ways I think I can leverage my donations significantly further). Can somebody clarify this intuition for me?
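For concreteness, the arithmetic behind the “20 nets” figure looks roughly like the sketch below. The $120/year comes from the comment above; the per-net cost is an assumed round number (insecticide-treated nets are commonly quoted at a few dollars each), so the output is illustrative only.

```python
# Back-of-envelope comparison: annual cryonics funding cost vs. bed nets.
# The $120/year figure comes from the comment above; the per-net cost is an
# assumed round number, so treat the result as illustrative only.

annual_cryonics_cost = 120.0  # USD per year, from the comment
assumed_cost_per_net = 6.0    # USD per net (assumption, not a quoted figure)

nets_forgone_per_year = annual_cryonics_cost / assumed_cost_per_net
print(f"~{nets_forgone_per_year:.0f} nets forgone per year")  # -> ~20
```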
By the same reasoning, the marginal utility of any amount used to improve your health is greater than the marginal utility of using it on malaria nets (except insofar as improving your health lets you survive to produce more money for malaria nets). In fact, the same could be said about any expenditure on yourself whatsoever, whether health-related or not.
I continue to believe that EA is absurd, and it’s absurd for reasons like this. No individual alieves in EA; everyone says at some point that they could do more good by buying malaria nets but they’re going to spend some money on themselves anyway. Cryonics is not special in this regard compared to all the other ways of spending money on yourself, which you do do.
Though it amuses me to see one LW weird idea collide head-on with another LW weird idea.
“By the same reasoning, the marginal utility of any amount used to improve your health is greater than the marginal utility of using it on malaria nets (except insofar as improving your health lets you survive to produce more money for malaria nets). In fact, the same could be said about any expenditure on yourself whatsoever, whether health-related or not.”
Your point being?
“Cryonics is not special in this regard compared to all the other ways of spending money on yourself, which you do do.”
I spend money on myself (less than you think, probably) because of a) inertia bias and b) signaling/minimizing weirdness points. For a), perhaps you and your peers are different, but I have substantially more self-control against starting new bad habits than against quitting old ones. Thus, it makes sense to apply greater scrutiny to new actions I might take than to giving up things I find emotionally difficult to lose. If you are more rational than me on this point, I congratulate you.
For b), outside of this tiny microcosm and some affiliated places, cryonics is highly unlikely to bring me greater status or minimize my perceived weirdness. Indeed, my strong prior is that it has the opposite effect.
Thus, while a selective demand for rigor toward newer ideas may initially seem off-putting to rationalists, I think it makes a lot of sense in practice.
“Though it amuses me to see one LW weird idea collide head-on with another LW weird idea.” I doubt Singer is a LW’er, and there were plenty of ethical people of an optimizing variety in the world before Singer or LW. EA as a term is closely tied to the LW-sphere, yes, but it’s really just a collection of obvious-in-retrospect ideas put together. I doubt more than a third of the current population of EA identifies as LW/rationalist (I certainly don’t), and I also strongly suspect that EA will outgrow or outlive LW, but I admit to some perhaps unjustifiable optimism on that front.
If you are more rational than me on this point, I congratulate you.
I am more rational than you on this point in that I conclude that, because EA makes such absurd demands, I should refuse to accept the premises that go into EA. If I don’t think it’s bad to spend money on myself, then rationality doesn’t demand that I stop spending money on myself in order to be good.
If you consider spending money on yourself to be a problem in the way you’ve described, you’ve ended up considering normal human behavior to be bad and you have a standard which no person can meet (including yourself). This means you have bitten too many bullets.
I doubt Singer is a LW’er
LW tries to get people to support MIRI based on rationality, multiplying utility, and ignoring warm fuzzies. Someone who believes all of that, but doesn’t believe the part about the AI being a danger, would end up in EA, so in practice LW is associated with EA.
I doubt more than a third of the current population of EA identifies as LW/rationalist (I certainly don’t)
You may not identify as LW-rationalist, but you’re acting like a LW-rationalist.
“then rationality doesn’t demand that I stop spending money on myself in order to be good.” Well, yes, because whether you’re “being” good is somewhat irrelevant. Objective conditions of the world don’t change based on what you’re “being” ontologically; reality is affected by what you do.
My terminal goals involve the alleviation of suffering, with the minimization of bad habits being an instrumental goal. It so happens that spending money on cryonics is unlikely to be the best way to achieve this goal (or so it appears; no strong arguments have been made in its favor as of today, which is what I initially asked for).
“you’ve ended up considering normal human behavior to be bad and you have a standard which no person can meet (including yourself).” Normality is not a terminal value of mine, and I doubt it is for you. Having an impossible goal would be absurd if success/failure were binary. But it really isn’t. There is so much suffering in the world that being halfway, or even a tenth of the way, successful still means a large reduction of suffering.
“LW tries to get people to support MIRI based on rationality, multiplying utility, and ignoring warm fuzzies. Someone who believes all of that, but doesn’t believe the part about the AI being a danger, would end up in EA, so in practice LW is associated with EA.” Your argument is of the form: A, B, and C result in X, but A, B, and not-C result in Y, so “in practice” X and Y are associated. But this is bizarre when a lot of different things can result in Y, at best tangentially related to A and B, and completely independent of the truth of C. Plain ol’ egalitarianism comes to mind, as do Rawls and libertarian theology.
I will ignore the ad hominem.
You still have not addressed the point that adopting new behaviors is qualitatively different, psychologically, from getting rid of old ones. And from an ethical, non-egotistical perspective, this difference is quite significant.
Well, yes, because whether you’re “being” good is somewhat irrelevant.
That’s a semantics objection. Pretend that I said a more appropriate phrase instead of “being good”, such as “maximizing utility” or “doing what you should do”.
Normality is not a terminal value of mine,
Normality serves as a sanity check against taking ideas seriously. Sanity checks aren’t terminal values.
Your argument is of the form: A, B, and C result in X, but A, B, and not-C result in Y, so “in practice” X and Y are associated. But this is bizarre when a lot of different things can result in Y
You just said that you doubt that “more than a third” of EAs identify as LW-rationalist. Even aside from the fact that you can be one without identifying as one, one third shows a huge influence. I wouldn’t expect to find that one third of vegetarians are LW-rationalists, or one third of atheists, for instance, even though those are popular positions here.
You still have not addressed the point that adopting new behaviors is qualitatively different, psychologically, from getting rid of old ones
The very fact that you’re asking how to reconcile cryonics with EA shows that cryonics is not in the category of things that are psychologically easy to give up. Otherwise you’d just avoid cryonics immediately.
“You just said that you doubt that “more than a third” of EAs identify as LW-rationalist. Even aside from the fact that you can be one without identifying as one, one third shows a huge influence. I wouldn’t expect to find that one third of vegetarians are LW-rationalists, or one third of atheists, for instance, even though those are popular positions here.”
I feel like you’re making a pretty elementary subset error there…
“The very fact that you’re asking how to reconcile cryonics with EA shows that cryonics is not in the category of things that are psychologically easy to give up. Otherwise you’d just avoid cryonics immediately.” No, I currently see no inside-view need to go for cryonics, emotionally or otherwise. There were enough people I respect who went for cryonics that my outside view was that they knew something I did not. This does not appear to be the case, and I see no reason to consider this further, at least until I grow substantially older or sicker. Nor do I see a need to continue this conversation. Happy New Year.
I feel like you’re making a pretty elementary subset error there...
I meant what I said. If one third of X are Y, but Y doesn’t have anywhere near a one-third prevalence in the general population or in other subgroups that are disproportionately Y for separate reasons, then it’s fair to say that Y has a huge influence on X.
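To make that base-rate point concrete, here is a rough sketch with an invented prevalence figure: the one-third share is the number conceded upthread, while the general-population rate of LW-rationalists is a pure placeholder, not survey data.

```python
# Illustration of the over-representation argument. The base rate below is an
# invented placeholder, not survey data; the point is only that a 1/3 share
# inside a subgroup vs. a small base rate implies a large influence.

share_of_eas_identifying_as_lw = 1 / 3   # the figure conceded upthread
assumed_general_population_rate = 0.001  # placeholder: 0.1% of people

overrepresentation = share_of_eas_identifying_as_lw / assumed_general_population_rate
print(f"~{overrepresentation:.0f}x over-represented under these assumptions")  # -> ~333x
```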
Nor do I see a need to continue this conversation.
The proper way to end a conversation is to just end it, not to say “this is why I am right, now that I am done saying that, I’ll end it”.
The implication of this is that you should just look at cryonics as one possible way to benefit yourself, but realize that there is no reason to criticize someone who doesn’t do it, just as you don’t criticize someone who doesn’t feel like buying himself ice cream.
Your double negatives are confusing me. :) Can you clarify?