If you are more rational than me on this point, I congratulate you.
I am more rational than you on this point in that I conclude that, because EA makes such absurd demands, I should refuse to accept the premises that go into EA. If I don’t think it’s bad to spend money on myself, then rationality doesn’t demand that I stop spending money on myself in order to be good.
If you consider spending money on yourself to be a problem in the way you’ve described, you’ve ended up considering normal human behavior to be bad and you have a standard which no person can meet (including yourself). This means you have bitten too many bullets.
I doubt Singer is a LW’er
LW tries to get people to support MIRI based on rationality, multiplying utility, and ignoring warm fuzzies. Someone who believes all of that, but doesn’t believe the part about the AI being a danger, would end up in EA, so in practice LW is associated with EA.
I doubt more than a third of the current population of EA identifies as LW/rationalist (I certainly don’t)
You may not identify as LW-rationalist, but you’re acting like a LW-rationalist.
“then rationality doesn’t demand that I stop spending money on myself in order to be good.” Well, yes, because whether you’re “being” good is somewhat irrelevant. Objective conditions of the world don’t change based on what you’re “being” ontologically; reality is affected by what you do.
My terminal goals involve the alleviation of suffering, with the minimization of bad habits being an instrumental goal. It so happens that spending money on cryonics is unlikely to be the best way to achieve this goal (or so it appears; no strong arguments have been made in its favor as of today, which is what I initially asked for).
“you’ve ended up considering normal human behavior to be bad and you have a standard which no person can meet (including yourself). ” Normality is not a terminal value of mine, and I doubt it is for you. Having an impossible goal would be absurd if success/failure were binary. But it really isn’t. There is so much suffering in the world that being halfway, or even a tenth of the way, successful still means a large reduction in suffering.
“LW tries to get people to support MIRI based on rationality, multiplying utility, and ignoring warm fuzzies. Someone who believes all of that, but doesn’t believe the part about the AI being a danger, would end up in EA, so in practice LW is associated with EA.” Your argument is of the form A, B, C results in X, but A, B and not C results in Y, so “in practice” X and Y are associated. But this is bizarre when a lot of different things can result in Y, at best tangentially related to A and B, and completely independent of the truth value of C. Plain ol’ egalitarianism comes to mind, as do Rawls and libertarian theology.
I will ignore the ad hominem.
You still have not addressed the point that adopting new behaviors is qualitatively different psychologically than getting rid of old ones. And from an ethical, non-egotistical perspective, this difference is quite significant.
Well, yes, because whether you’re “being” good is somewhat irrelevant.
That’s a semantics objection. Pretend that I said a more appropriate phrase instead of “being good”, such as “maximizing utility” or “doing what you should do”.
Normality is not a terminal value of mine,
Normality serves as a sanity check against taking ideas seriously. Sanity checks aren’t terminal values.
Your argument is of the form A, B, C results in X, but A, B and not C results in Y, so “in practice” X and Y are associated. But this is bizarre when a lot of different things can result in Y
You just said that you doubt that “more than a third” of EAs identify as LW-rationalist. Even aside from the fact that you can be one without identifying as one, one third shows a huge influence. I wouldn’t find that one third of vegetarians are LW-rationalists, or 1⁄3 of atheists, for instance, even though those are popular positions here.
You still have not addressed the point that adopting new behaviors is qualitatively different psychologically than getting rid of old ones
The very fact that you’re asking how to reconcile cryonics with EA shows that cryonics is not in the category of things that are psychologically easy to give up. Otherwise you’d just avoid cryonics immediately.
“You just said that you doubt that “more than a third” of EAs identify as LW-rationalist. Even aside from the fact that you can be one without identifying as one, one third shows a huge influence. I wouldn’t find that one third of vegetarians are LW-rationalists, or 1⁄3 of atheists, for instance, even though those are popular positions here.”
I feel like you’re making a pretty elementary subset error there…
“The very fact that you’re asking how to reconcile cryonics with EA shows that cryonics is not in the category of things that are psychologically easy to give up. Otherwise you’d just avoid cryonics immediately.” No, I currently see no inside view need to go for cryonics, emotionally or otherwise. There were enough people I respect who went for cryonics that my outside view was that they knew something I did not. This does not appear to be the case, and I see no reason to consider this further, at least until I grow substantially older or sicker. Nor do I see a need to continue this conversation.
I feel like you’re making a pretty elementary subset error there...
I meant what I said. If 1⁄3 of X are Y, but Y doesn’t have anywhere near a 1⁄3 prevalence in the general population or in other subgroups that are disproportionately Y for separate reasons, then it’s fair to say that Y has a huge influence on X.
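The base-rate reasoning here can be made concrete with a small calculation. The numbers below are purely hypothetical stand-ins (the 1⁄3 figure comes from the thread; the baseline prevalence is an assumed illustrative value, not survey data):

```python
# Hypothetical numbers for illustration only -- not actual survey data.
share_in_subgroup = 1 / 3   # claimed share of EAs who are LW-rationalists
share_baseline = 0.001      # assumed share of LW-rationalists in the general population

# Overrepresentation ratio: how much more common LW-rationalists are
# among EAs than among people in general.
representation_ratio = share_in_subgroup / share_baseline

# A ratio far above 1 is the sense in which "one third shows a huge
# influence": membership is hundreds of times more common than chance
# under these assumed numbers.
print(round(representation_ratio))  # 333 with these inputs
```

The argument then turns on whether comparison groups (vegetarians, atheists) show a similar ratio; if they don’t, the overrepresentation among EAs specifically is the evidence of influence.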
Nor do I see a need to continue this conversation.
The proper way to end a conversation is to just end it, not to say “this is why I am right, now that I am done saying that, I’ll end it”.
Happy New Year.